Compare commits

...

941 Commits

Author SHA1 Message Date
Yusuke Kuoka
6762c5c096 Fix excessive runnerreplicaset update issue since 0.25.0 (#1651)
Fixes #1643
2022-07-15 06:41:57 +09:00
everpcpc
fea1457f12 fix: annotate pod instead of label on UnregistrationFailureMessage (#1615)
The error message should go to an annotation instead of a pod label, since we only check annotations on autoscale:

https://github.com/actions-runner-controller/actions-runner-controller/blob/master/controllers/autoscaling.go#L338-L342

Then there is no need to truncate or normalize the error message.
2022-07-09 11:45:05 +09:00
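
A minimal sketch of the idea in the commit above, using plain corev1 types and a hypothetical annotation key (the real key and surrounding controller code live in ARC): the failure message is stored in `pod.Annotations` rather than `pod.Labels`, so it needs no truncation or normalization and stays visible to the autoscaler, which only reads annotations.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// annotationUnregistrationFailureMessage is a hypothetical key used for illustration only.
const annotationUnregistrationFailureMessage = "actions-runner-controller/unregistration-failure-message"

// setUnregistrationFailureMessage records the raw error on the pod as an annotation.
// Annotation values have no label-style character/length restrictions, so the message
// needs no truncation or normalization.
func setUnregistrationFailureMessage(pod *corev1.Pod, err error) {
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations[annotationUnregistrationFailureMessage] = err.Error()
}

func main() {
	pod := &corev1.Pod{}
	setUnregistrationFailureMessage(pod, fmt.Errorf("runner is busy"))
	fmt.Println(pod.Annotations)
}
```
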
Yusuke Kuoka
473295e3fc Enhance the E2E test to be runnable against remote clusters on e.g. AWS EKS (#1610)
This contains apparently enough changes to the current E2E test code to make it runnable against remote Kubernetes clusters. I was actually able to make the test pass against my AWS EKS-based test clusters with these changes. You still need to trigger it manually from a local checkout of the ARC repo today, but this might be the foundation for automated E2E tests against major cloud providers.
2022-07-07 20:48:07 +09:00
Yusuke Kuoka
9f6f962fc7 Add troubleshooting for cert-manager ca error (#1598)
I encountered this once while E2E testing ARC with K8s 1.22 and cert-manager 1.1.1. Either the K8s version is too new or the cert-manager version is too old, so you generally need to fix one or the other. In a standard scenario, it should be more feasible and meaningful to upgrade cert-manager to a recent enough version that supports the new Kubernetes version.
2022-07-07 11:27:49 +09:00
Yusuke Kuoka
2a475f25c7 Use Argo Tunnel for exposing the autoscaler's webhook server (#1595)
I've been manually setting up Argo Tunnel to expose the webhook server while running E2E tests so that I can cover webhook-based autoscaling. This automates the setup so that we can automatically bring cloudflared up and down before/after the test run, making it part of our upcoming automated E2E test.
2022-07-07 11:27:27 +09:00
Viktor Lindgren
dd9f25ea78 Update README.md (#1606) 2022-07-06 08:57:54 +09:00
Yusuke Kuoka
b8e4eee904 Make it easier to E2E test on various K8s versions (#1599) 2022-07-06 08:57:21 +09:00
Yusuke Kuoka
edbdef8d20 Bump chart version to 0.20.0 for ARC 0.25.0 (#1600)
We'll be merging this immediately after ARC 0.25.0 gets released.
2022-07-05 11:19:24 +09:00
Nguyễn Đức Chiến
a190fa97bb Fix helm charts (#1603) 2022-07-05 10:35:57 +09:00
Yusuke Kuoka
bfc5ea4727 Fix a regression in webhook-based autoscaler (#1596)
The regression resulted in the webhook-based autoscaler being unable to find visible runner groups, and therefore unable to scale the target RunnerDeployment/RunnerSet up and down at all, when the webhook-based autoscaler was provided GitHub API credentials to enable runner group support. This fixes that.

The regression was introduced via #1578 which is not released yet. Users of existing ARC releases are therefore not affected.
2022-07-04 20:17:09 +09:00
renovate[bot]
5a9e8545aa fix(deps): update golang.org/x/oauth2 digest to 2104d58 (#1593)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-02 14:06:21 +09:00
Yusuke Kuoka
4446ba57e1 Cover ARC upgrade in E2E test (#1592)
* Cover ARC upgrade in E2E test

so that we can make it extra sure that you can upgrade an existing installation of ARC to the next version and also (hopefully) that it is backward-compatible, or at least does not break immediately after upgrading.

* Consolidate E2E tests for RS and RD

* Fix E2E for RD to pass

* Add a comment in E2E on how to release the disk space consumed after dozens of test runs
2022-07-01 21:32:05 +09:00
Martin Moon (문성주)
d62c8a4697 fix typo. (#1594) 2022-07-01 10:24:41 +09:00
Yusuke Kuoka
946d5b1fa7 Add release note for v0.25.0 (#1591) 2022-06-30 22:11:22 +09:00
renovate[bot]
da6b07660e fix(deps): update golang.org/x/oauth2 digest to 02e64fa (#1480)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 11:32:26 +09:00
Callum Tait
e3deb0d752 chore: move runner docker check (#1548) 2022-06-30 11:31:50 +09:00
Callum Tait
82641e5036 chore: move HOME to more logical place (#1460)
* chore: move HOME to more logical place

* chore: don't break the PATH

* chore: don't break the PATH

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-30 11:21:05 +09:00
Vladyslav Miletskyi
2fe6adf5b7 Runner Entrypoint: fix daemon.json (#1409)
* Runner Entrypoint: fix daemon.json

Do not overwrite daemon.json if it already exists.
Use case: custom images that use the public image as their base.

* Update runner/startup.sh

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-06-30 11:03:12 +09:00
renovate[bot]
736126b793 chore(deps): update helm values quay.io/brancz/kube-rbac-proxy to v0.13.0 (#1589)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 09:51:38 +09:00
renovate[bot]
6abf5bbac8 fix(deps): update module github.com/stretchr/testify to v1.8.0 (#1584)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 09:50:55 +09:00
Yusuke Kuoka
dc4f116bda Reflect manual test scenario for containerMode=kubernetes to E2E (#1588)
With this my semi-automatic E2E manual testing becomes even easier :)
2022-06-30 09:09:58 +09:00
Callum Tait
cda10fd243 docs: remove once feature flag env var (#1590) 2022-06-30 09:09:37 +09:00
Yusuke Kuoka
b5d1a63bdf Enhance the acceptance runnerset yaml template for manual E2E (#1587)
The primary goal of this change is to let the tester know about the config difference between the explicitly configured ephemeral work volume and the work volume configured automatically via workVolumeClaimTemplate+containerMode=kubernetes.
2022-06-29 22:15:50 +09:00
Yusuke Kuoka
6f3e23973d Bump E2E runner version to 2.294.0 (#1586)
so that runners do not auto-update themselves on startup during E2E, which makes E2E take longer to complete.
2022-06-29 22:05:50 +09:00
Yusuke Kuoka
a517c1ff66 Fix old runner pods stuck in Terminating since #1579 (#1585)
Ref #1579
2022-06-29 22:02:42 +09:00
Yusuke Kuoka
9b28e633c1 Drop support for --once (#1580)
Ref #1196
2022-06-29 21:49:52 +09:00
Yusuke Kuoka
8161136cbd Fix PercentageRunnersBusy scaling delay (#1579)
* Use a dedicated pod label to say it is a runner pod

Follow-up for #1546

* Fix PercentageRunnersBusy scaling delay

Ref #1374
2022-06-29 20:49:21 +09:00
Nikola Jokic
a9ac5a1cbf extracted validations to a single point (#1582) 2022-06-29 20:32:00 +09:00
Callum Tait
d4f35cff4f ci: add paths to push trigger (#1583) 2022-06-29 20:30:07 +09:00
Yusuke Kuoka
f661249f07 Use the go-github impl of ListRunnerGroups with visible_to_repository (#1578)
Ref #1402
2022-06-29 09:53:03 +01:00
Mike
73e430ce54 Add a solution to InternalError webhook context timedout (#1558)
* added troubleshooting solution

* added error example

* added entry to the pages index

* sorted

Co-authored-by: Mike Joseph <mike@Mikes-MacBook-Pro-5618.local>
Co-authored-by: Mike Joseph <mike@efrontier.com>
2022-06-29 09:40:23 +09:00
renovate[bot]
858ef8979d chore(deps): update helm/kind-action action to v1.3.0 (#1532)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 09:05:26 +09:00
renovate[bot]
1ce0a183a6 chore(deps): update azure/setup-helm action to v3 (#1571)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 09:04:33 +09:00
renovate[bot]
63935d2053 fix(deps): update module github.com/stretchr/testify to v1.7.5 (#1510)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 08:39:09 +09:00
John Delivuk
fc63d6d26e Fix: Match Ingress API Version correctly. (#1541)
* Updating conditional to match the api version and kind

mend

* Updating conditional to match the api version and kind

mend
2022-06-29 08:30:11 +09:00
renovate[bot]
5ea08411e6 chore(deps): update dependency golang to v1.18.3 (#1509)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 08:29:14 +09:00
Giuseppe Crinò
067ed2e5ec docs: fix logic explanation for scale down delay (#1562)
Signed-off: Giuseppe Crinò <giuscri@gmail.com>
2022-06-29 08:26:28 +09:00
renovate[bot]
d86bd2bcd7 fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.2 (#1449)
* fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.2

* Regenerate manfiests with the updated k8s and controller-runtime deps

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-29 06:42:17 +09:00
Yusuke Kuoka
ddd417f756 Bump go-github to v45 (#1573)
* Bump go-github to v45

Ref #1402

* fixup! Bump go-github to v45
2022-06-29 06:34:58 +09:00
Thomas Boop
0386c0734c containerMode option to allow running jobs in k8s instead of docker (#1546)
* added containerMode=kubernetes env variables to the runner

* removed unused logging

* restored configs and charts

* restored makefile cert version and acceptance/run

* added workVolumeClaimTemplate in pod definition, including logic

* added claim template name based on the runner

* Apply suggestions from code review

update errors

* added concurrent cleanup before runner pod is deleted

* update manifests

* added retry after 30s if pod cleanup contains err

* added admission webhook check, made workVolumeClaimTemplate mandatory for k8s

* style changes and added comments

* added isZero timestamp check for deleting runner-linked pods

* changed order of local variable to avoid copy if p is deleted

* removed docker from container mode k8s

* restored charts, config, makefile

* restored forked files back and not the ARC ones

* created PersistentVolume on containerMode k8s

* create pv only if storage class name is local-storage

* removed actions if storage class name is local-storage

* added service account validation if container mode kubernetes

* changed the coding style to match rest of the ARC

* added validation to the runnerdeployment webhook

* specified fields more precisely, added webhook validation to the replicaset as well

* remake manifests

* wrapped delete runner-linked-pods in kube mode

* fixed empty line

* fixed import

* makefile changes for hooks

* added cleanup secrets

* create manifests

* docs

* update access modes

* update dockerfile

* nit changes

* fixed dockerfile

* rewrite allowing reuse for runners and runnersets

* deepcopy forgot to stage

* changed privileged

* make manifests

* partly moved to finalizer, still need to apply finalizer first

* finalizer added if env variable used in container mode exists

* bump runner version

* error message moved from Error to Info on cleanup pods/secrets

* removed useless dereferencing, added transformation tests of workVolumeClaimTemplate

* Apply suggestions from code review

* Update controllers/utils_test.go

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update controllers/utils_test.go

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* add hook version to cli, update to 0.1.2

* Apply suggestions from code review

* Update controllers/utils_test.go

* Update runner/Makefile

* Fix missing secret permission and the error handling

* Fix a runnerpod reconciler finalizer to not trigger unnecessary retry

Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
Co-authored-by: Nikola Jokic <97525037+nikola-jokic@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-28 14:12:40 +09:00
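
One item in the long list above is the admission webhook check that makes workVolumeClaimTemplate mandatory for containerMode=kubernetes. A minimal sketch of that kind of validation, using simplified, hypothetical types rather than ARC's actual API structs:

```go
package main

import (
	"errors"
	"fmt"
)

// RunnerSpecSketch is a simplified stand-in for the runner spec; the real ARC types differ.
type RunnerSpecSketch struct {
	ContainerMode           string
	WorkVolumeClaimTemplate *struct{ StorageClassName string }
}

// validateContainerMode mirrors the idea of the admission webhook check:
// containerMode=kubernetes is only valid when a workVolumeClaimTemplate is provided.
func validateContainerMode(spec RunnerSpecSketch) error {
	if spec.ContainerMode == "kubernetes" && spec.WorkVolumeClaimTemplate == nil {
		return errors.New("workVolumeClaimTemplate is required when containerMode is kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateContainerMode(RunnerSpecSketch{ContainerMode: "kubernetes"}))
}
```
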
Yusuke Kuoka
af96de6184 Fix completed runner pod recreation not to be blocked after max out (#1568)
Ref https://github.com/actions-runner-controller/actions-runner-controller/pull/1477#issuecomment-1164154496
2022-06-28 13:50:07 +09:00
Arnaud
abb8615796 Webhook server configuration with kustomize (#1312)
* webhook server configuration with kustomize

* Update README.md

* Update README.md

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-28 09:08:25 +09:00
Sam Weston
bc7a3cab1b Add priorityClassName to CRDs (#1513)
* Add pod priorityClassName to controller and crds

* Add missing bits in bases directory

* Regenerate crds
2022-06-28 08:45:19 +09:00
Yusuke Kuoka
e2c8163b8c Make webhook-based scale race-free (#1477)
* Make webhook-based scale operation asynchronous

This prevents a race condition in the webhook-based autoscaler where, while processing one webhook event, it receives another, and both end up scaling up the same horizontal runner autoscaler.

Ref #1321

* Fix typos

* Update rather than Patch HRA to avoid race among webhook-based autoscaler servers

* Batch capacity reservation updates for efficient use of apiserver

* Fix potential never-ending HRA update conflicts in batch update

* Extract batchScaler out of webhook-based autoscaler for testability

* Fix log levels and batch scaler hang on start

* Correlate webhook event with scale trigger amount in logs

* Fix log message
2022-06-27 18:31:48 +09:00
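
A rough sketch of the batching idea described in the entry above; the types, the `apply` callback, and the flush interval are all illustrative stand-ins, not ARC's actual batchScaler. Scale triggers from webhook events are queued on a channel and periodically flushed as one aggregated update per HorizontalRunnerAutoscaler, so concurrent events do not race over the same object.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// scaleOp is a single capacity-reservation request derived from one webhook event.
type scaleOp struct {
	hraName string
	amount  int
}

// batchScaler buffers scale operations and flushes them as one aggregated update,
// so that two webhook events received back-to-back result in a single apiserver write.
type batchScaler struct {
	ops   chan scaleOp
	apply func(hraName string, total int) // hypothetical callback that performs the HRA update
}

func newBatchScaler(apply func(string, int)) *batchScaler {
	return &batchScaler{ops: make(chan scaleOp, 1024), apply: apply}
}

func (b *batchScaler) Add(op scaleOp) { b.ops <- op }

// Run flushes queued operations every interval, summing amounts per HRA.
func (b *batchScaler) Run(ctx context.Context, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	pending := map[string]int{}
	for {
		select {
		case <-ctx.Done():
			return
		case op := <-b.ops:
			pending[op.hraName] += op.amount
		case <-ticker.C:
			for name, total := range pending {
				b.apply(name, total)
			}
			pending = map[string]int{}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
	defer cancel()
	bs := newBatchScaler(func(name string, total int) { fmt.Println("update", name, "by", total) })
	go bs.Run(ctx, 100*time.Millisecond)
	bs.Add(scaleOp{"org-runners", 1})
	bs.Add(scaleOp{"org-runners", 1})
	<-ctx.Done()
	time.Sleep(50 * time.Millisecond)
}
```
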
Callum Tait
84d16c1c12 revert: "Overhauled startup.sh Script (#1454)" (#1561)
This reverts commit 071898c96b.
2022-06-23 12:39:32 +01:00
Richard Fussenegger
071898c96b Overhauled startup.sh Script (#1454)
This overhaul turns it into a shellcheck valid script with explicit error handling for all possible situations I could think of. This change takes https://github.com/actions-runner-controller/actions-runner-controller/pull/1409 into account and things can be merged in any order. There are a few important changes here to the logic:

- The wait logic for checking if docker comes up was fundamentally flawed because it checks for the PID. Docker will always come up and thus become visible in the process list, just to immediately die when it encounters an issue, after which supervisor starts it again. This means that our check so far is flaky: depending on the `sleep 1` it might encounter a PID or it might not, and the existence of the PID does not mean anything. The `docker ps` check we have in the `entrypoint.sh` script does not suffer from this as it checks for a feature of docker and not a PID. I thus entirely removed the PID check, and instead I am handing things over to our `entrypoint.sh` script by setting the environment variables correctly.
- This change has an influence on the `docker0` interface MTU configuration, because the interface might or might not exist after we started docker. Hence, I changed this to a time boxed loop that tries for one minute to set up the interface's MTU. In case the command fails we log an error and continue with the run.
- I changed the entire MTU handling by validating its value before configuring it, logging an error and continuing without if it is set incorrectly. This ensures that we are not going to send our users on a bug hunt.
- The way we started supervisord did not make much sense to me. It sends itself into the background automatically; there is no need for us to do so with Bash.

The decision to not fail on errors but continue is a deliberate choice, because I believe that running a build is more important than having a perfectly configured system. However, this strategy might also hide issues for all users who are not properly checking their logs. It also makes testing harder. Hence, we could change all error conditions from graceful to panicking. We should then align the exit codes across `startup.sh` and `entrypoint.sh` to ensure that every possible error condition has its own unique error code for easy debugging.
2022-06-23 09:37:01 +09:00
renovate[bot]
f24e2fa44e chore(deps): update dependency actions/runner to v2.294.0 2022-06-22 21:45:32 +00:00
Callum Tait
3c7d3d6b57 ci: hardcode dockerhub username (#1555) 2022-06-22 16:15:50 +01:00
Callum Tait
23f091d7fa ci: don't login on a pr (#1554)
* ci: don't login on a pr

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-22 16:03:36 +01:00
Callum Tait
667764e027 chore: suggest gist first (#1539) 2022-06-18 17:38:37 +09:00
Callum Tait
de693c4191 ci: runners trigger on push (#1549)
* ci: runners trigger on push

* ci: comments

* ci: comments
2022-06-18 17:34:40 +09:00
Callum Tait
510fc9c834 ci: add GitHub packages to arc release (#1525)
* ci: add GitHub packages to arc release

* ci: use restrictive permissions
2022-06-15 11:37:19 +09:00
Callum Tait
7fd5e24961 chore: bump chart to app 0.24.1 (#1531) 2022-06-15 11:34:55 +09:00
Yusuke Kuoka
9974b1a2b7 e2e: Enable buildx in more images (#1530) 2022-06-14 09:29:30 +01:00
Yusuke Kuoka
bd91b73fd9 chore: update bug_report.yml (#1529) 2022-06-14 09:29:06 +01:00
Callum Tait
a7ae910ee4 docs: add CRD disclaimer to bug report (#1516) 2022-06-14 09:42:31 +09:00
Callum Tait
2733c36d0e ci: publish controller canary to github packages (#1524)
* ci: publish controller canary to github packages

* ci: include image name
2022-06-14 09:10:13 +09:00
Yusuke Kuoka
0ef9a22cd4 Fix confusing PV controller log (#1526)
Ref #1511
2022-06-14 08:35:04 +09:00
Renovate Bot
933b0c7888 chore(deps): update dependency actions/runner to v2.293.0 2022-06-13 17:09:29 +00:00
renovate[bot]
1b7ec33135 chore(deps): update actions/setup-python action to v4 (#1514)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-06-13 14:07:52 +01:00
Callum Tait
a62882d243 ci: fix permisions (#1512)
* ci: fix permisions

* chore: change to trigger build

* ci: add write permission to packages

* ci: remove conditionals for docker logins

* Update controllers/utils_test.go

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-09 10:25:56 +09:00
Callum Tait
0cd13fe51d ci: align pipeline files and setups (#1484)
* ci: align pipeline files and setups

* ci: more changes

* ci: various changes

* ci: fix setup-helm action ref

* ci: better pipeline name

* ci: more format aligning

* ci: more format aligning

* ci: better job name

* ci: supports multiple languages

* ci: better pipeline and job names

* ci: do a verb-noun thing for consistency

* ci: use 'arc' when talking holistically

* ci: add caching scope

* ci:  put canary in a scope

* ci: fix syntax error

* ci: better pipeline and job names

* ci: better job name

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-08 10:04:14 +09:00
Vinícius Garcia
01c8dc237e Fix example manifests for webhooks-based scaling (#1354)
* Fix example manifests for webhook based scaling

I tried running these on my k8s cluster and got some easy-to-fix errors, so I am committing the fixes here.

* Fix example manifests for webhook autoscaling with workflow_jobs

* Fix the explanation of how to set up webhooks on your cluster

* Replace unclear comment with actual code examples

There was a comment instructing users to add minReplicas and
maxReplicas to all the HRA yamls, so I just removed it and added
these attributes to the yamls themselves for clarity.

* Make clear that using the ingress example is just a suggestion

* Apply some text improvements suggested by @mumoshu

* Update examples so the webhook server is exposed on a NodePort

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Remove an unnecessary field from one of the examples

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Apply suggestion from @mumoshu

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Remove namespace fields from webhook autoscaler examples

This change was suggested by @mumoshu

* Apply final suggestion from @mumoshu

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-07 08:33:09 +09:00
Yusuke Kuoka
7c4db63718 chart: Bump appVersion to 0.24.0 (#1505) 2022-06-03 22:01:35 +09:00
Yusuke Kuoka
3d88b9630a doc: Add "people" section (#1498)
Ref #1497
2022-05-31 09:29:15 +09:00
Yusuke Kuoka
1152e6b31d Add release note for v0.24.0 (#1493) 2022-05-30 09:10:36 +09:00
renovate[bot]
ac27df8301 chore(deps): update dependency actions/runner to v2.292.0 (#1475)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-27 09:49:46 +09:00
Yusuke Kuoka
9dd26168d6 Fix confusing logs from pv and pvc controllers (#1487)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1321#issuecomment-1137431212
2022-05-26 18:21:23 +09:00
Yusuke Kuoka
18bfb28c0b e2e: ARC_E2E_NO_CLEANUP to prevent cleanup (#1470)
A small improvement to our E2E test suite which allows you to set `ARC_E2E_NO_CLEANUP=whatever` to prevent the kind cluster cleanup on a successful test run, so that you can rerun it without waiting for a new kind cluster to come up.
2022-05-26 10:59:50 +09:00
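
A sketch of how such an opt-out might look in Go test code, purely illustrative (the actual E2E helper in ARC may be structured differently); the cluster name is a placeholder:

```go
package e2e

import (
	"os"
	"os/exec"
	"testing"
)

// maybeCleanupKindCluster deletes the kind cluster after the test unless
// ARC_E2E_NO_CLEANUP is set to any non-empty value, so the cluster can be reused
// by the next run. The cluster name here is a placeholder.
func maybeCleanupKindCluster(t *testing.T, clusterName string) {
	if os.Getenv("ARC_E2E_NO_CLEANUP") != "" {
		t.Logf("ARC_E2E_NO_CLEANUP is set; keeping kind cluster %q for reuse", clusterName)
		return
	}
	t.Cleanup(func() {
		if err := exec.Command("kind", "delete", "cluster", "--name", clusterName).Run(); err != nil {
			t.Logf("failed to delete kind cluster: %v", err)
		}
	})
}
```
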
renovate[bot]
84210e900b chore(deps): update actions/setup-python digest to fff15a2 (#1458)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-25 12:12:22 +09:00
Yusuke Kuoka
ef3313d147 doc: Use RunnerSet to retain various cache by leveraging PV (#1464)
* doc: Use RunnerSet to retain various cache

In relation to #1286 and as a follow-up for #1340

* docs: clarify client vs daemon

* docs: better wording

* Separate RunnerSet examples for docker image layer caching

* Revert changes on testdata as it is going to be added via #1471 instead

* Update README.md

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>

* fixup! Update README.md

* Remove the outdated RunnerSet limitation

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-05-25 11:09:36 +09:00
Yusuke Kuoka
c7eea169ad test: Add test for runner with generic ephemeral volume as "work" (#1472)
This adds a test to verify the runner pod generation logic for the case where you use a generic ephemeral volume as "work".
It is largely an adaptation of the test cases written for RunnerSet in #1471 to RunnerDeployment and Runner.
2022-05-25 10:37:23 +09:00
Yusuke Kuoka
63be0223ad fix: Avoid duplicate volume and mount name error for generic ephemeral volume as "work" (#1471)
* fix: Avoid duplicate volume and mount name error for generic ephemeral volume as "work"

While manually testing the configurations being documented in #1464, I discovered that the use of a dynamic ephemeral volume for the "work" directory was not working correctly due to a validation error.

This fixes the runner pod generation logic to not add the default volume and volume mount for the "work" dir, so that the error disappears.

Ref #1464

* e2e: Ensure work generic ephemeral volume to work as expected
2022-05-22 10:25:50 +09:00
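
A minimal sketch of the fix described above, with simplified logic and hypothetical helper names: the default `work` volume and its mount are added only when the spec does not already define a volume named `work` (for example, a generic ephemeral volume), which avoids the duplicate-name validation error.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hasVolumeNamed reports whether a volume with the given name is already defined.
func hasVolumeNamed(volumes []corev1.Volume, name string) bool {
	for _, v := range volumes {
		if v.Name == name {
			return true
		}
	}
	return false
}

// addDefaultWorkVolume appends the default emptyDir-backed "work" volume and its mount
// only when the user has not supplied one (e.g. a generic ephemeral volume named "work").
func addDefaultWorkVolume(pod *corev1.Pod, container *corev1.Container) {
	const workVolumeName = "work"
	if hasVolumeNamed(pod.Spec.Volumes, workVolumeName) {
		return // user-provided volume wins; adding another would fail validation
	}
	pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
		Name:         workVolumeName,
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	})
	container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{
		Name:      workVolumeName,
		MountPath: "/runner/_work",
	})
}

func main() {
	pod := &corev1.Pod{}
	c := &corev1.Container{Name: "runner"}
	addDefaultWorkVolume(pod, c)
	fmt.Println(len(pod.Spec.Volumes), len(c.VolumeMounts))
}
```
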
Yusuke Kuoka
5bbea772f7 doc: enhance troubleshooting guide with the scale-to-zero issue (#1469)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1057#issuecomment-1133439061
2022-05-21 19:00:40 +01:00
Callum Tait
2aa3f1e142 chore: remove stale app config (#1465) 2022-05-21 14:19:41 +09:00
Yusuke Kuoka
3e988afc09 test: add fuzzing to the test suite (#1463)
As a part of #1298, we add fuzzing based on Go test's fuzzing support to the test suite
2022-05-19 13:34:23 +01:00
Yusuke Kuoka
84210f3d2b Bump Go to 1.18.2 (#1462)
As a part of #1298, I'm going to use Go fuzzing, which is available since Go 1.18.

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-05-19 10:33:31 +01:00
Yusuke Kuoka
536692181b docs: Add CII Best Practices badge to README (#1461)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1298
2022-05-19 10:16:39 +01:00
renovate[bot]
23403172cb chore(deps): update dependency golang to v1.18.2 (#1229)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-19 17:36:31 +09:00
renovate[bot]
8a8ec43364 chore(deps): update github/codeql-action action to v2.1.11 (#1455)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-18 09:02:26 +09:00
Felipe Galindo Sanchez
78c01fd31d cleanup some unused code and minor refactors (#1274)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-05-16 18:38:32 +09:00
Bernardo Meurer
bf45aa9f6b refactor(runner/entrypoint): don't mv externalstmp if it's not there (#1315) 2022-05-16 18:37:37 +09:00
Callum Tait
b5aa1750bb ci: match renovate with new dockerfile names (#1453) 2022-05-16 18:15:44 +09:00
Richard Fussenegger
cdc9d20e7a Renamed Runner Dockerfiles (#1248)
Renamed the runner dockerfiles so that we have proper syntax highlighting for them, as well as a consistent way to map from the image name to the dockerfile. Added a `.dockerignore` file to avoid uploading things to the daemon that we never use.
2022-05-16 11:41:28 +09:00
Hyeonmin Park
8035d6d9f8 chart: Add extraPaths to Ingress of GitHub Webhook Server (#1129)
* chart: Add extraPaths to Ingress of GitHub Webhook Server

* Update charts/actions-runner-controller/templates/githubwebhook.ingress.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Prefix the toYaml expression to remove the extra newline before extra paths

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-05-16 11:34:56 +09:00
Callum Tait
65f7ee92a6 refactor: remove registration runner dead code (#1260)
We had some dead code left over from the removal of registration runners. Registration runners were removed in #859 and #1207.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-05-16 11:23:39 +09:00
Matéo Mévollon
fca8a538db docs: document the Docker MTU problem in troubleshooting guide (#1257)
* docs: document the Docker MTU problem

* Update TROUBLESHOOTING.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-05-16 11:13:05 +09:00
Nicholas Farley
95ddc77245 Allow customizing the controller webhook port (#1410)
Closes #1314

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-05-16 10:33:13 +09:00
Yusuke Kuoka
b5194fd75a Enhance RunnerSet to optionally retain PVs across restarts (#1340)
* Enhance RunnerSet to optionally retain PVs across restarts

This is our initial attempt to bring back the ability to retain PVs across runner pod restarts when using RunnerSet.
The implementation is composed of two new controllers, `runnerpersistentvolumeclaim-controller` and `runnerpersistentvolume-controller`.
It all starts from our existing `runnerset-controller`. The controller now tries to mark any PVCs created by StatefulSets created for the RunnerSet.
Once the controller terminates StatefulSets, their corresponding PVCs are cleaned up by `runnerpersistentvolumeclaim-controller`, then PVs are unbound from their corresponding PVCs by `runnerpersistentvolume-controller` so that they can be reused by future PVCs created for future StatefulSets that share the same StorageClass.

Ref #1286

* Update E2E test suite to cover runner, docker, and go caching with RunnerSet + PVs

Ref #1286
2022-05-16 09:26:48 +09:00
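
A minimal sketch of the PV-unbinding step described above, assuming a controller-runtime client and omitting ARC's ownership marking: clearing `spec.claimRef` on a Released PersistentVolume makes it bindable again by a future PVC of the same StorageClass.

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// unbindReleasedPV clears the claimRef of a Released PersistentVolume so that it can be
// re-bound by a future PVC (e.g. one created for the next StatefulSet of a RunnerSet).
// This is a simplified illustration; the real controllers also check ownership markers.
func unbindReleasedPV(ctx context.Context, c client.Client, pv *corev1.PersistentVolume) error {
	if pv.Status.Phase != corev1.VolumeReleased {
		return nil // only Released volumes are safe to unbind
	}
	pv.Spec.ClaimRef = nil
	return c.Update(ctx, pv)
}
```
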
renovate[bot]
adf69bbea0 fix(deps): update module github.com/prometheus/client_golang to v1.12.2 (#1448)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-16 09:19:55 +09:00
renovate[bot]
b43ef70ac6 fix(deps): update module github.com/bradleyfalzon/ghinstallation/v2 to v2.0.4 (#1452)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-16 08:59:53 +09:00
Yusuke Kuoka
f1caebbaf0 Update codeql.yml (#1451)
Give up pinning deps with commit IDs, because the PRs were unreviewable due to missing changelogs and it sends PRs for every commit to the master/main branch of the deps, which is undesired. We only need updates for tagged releases!
2022-05-16 08:59:29 +09:00
renovate[bot]
ede28f5046 chore(deps): update helm values quay.io/brancz/kube-rbac-proxy to v0.12.0 (#1323)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-16 08:50:12 +09:00
shettarvinay
f08ab1490d Update Dockerfile, github/github.go, go.mod and go.sum for fixing CVE-2020-2616 and CVE-2022-24921 (#1230)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-05-16 08:45:44 +09:00
renovate[bot]
772ca57056 fix(deps): update module github.com/stretchr/testify to v1.7.1 (#1228)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-16 08:43:41 +09:00
renovate[bot]
51b13e3bab fix(deps): update module github.com/onsi/gomega to v1.19.0 (#1069)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-16 08:41:59 +09:00
Michael Kuhnt
81017b130f fix(chart): add missing namespace to webhook.ingress (#1417)
The ingress needs to be deployed in the very same namespace
as the service it is forwarding to.
2022-05-16 08:41:35 +09:00
Yusuke Kuoka
bdbcf66569 chore: Add signrel command for signing arc release assets (#1426)
* chore: Add signrel command for signing arc release assets

I used this command to sign assets for the recent releases to comply with the recommendation of 5758364c82/docs/checks.md (signed-releases)

Ref #1298

* Implement signrel subcommands for listing tags and signing assets, with docs
2022-05-16 08:40:41 +09:00
Yusuke Kuoka
0e15a78541 Create SECURITY.md (#1424)
* Create SECURITY.md

According to 5758364c82/docs/checks.md (security-policy)

Ref #1298

* Update SECURITY.md
2022-05-16 08:40:16 +09:00
renovate[bot]
f85c3d06d9 chore(deps): update docker/setup-qemu-action action to v2 (#1450)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-14 16:07:23 +01:00
Callum Tait
51ba7d7160 chore: more initialisation info to help debug (#1276)
* chore: more initialisation info to help debug

* chore: clearer flag description

* chore: use actual english

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-05-14 17:11:20 +09:00
Yusuke Kuoka
759349de11 fix: force restartPolicy "Never" to prevent runner pods from getting stuck in Terminating when the container disappeared (#1395)
Ref #1369
2022-05-14 09:07:17 +01:00
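
The gist of the fix, shown on a bare pod spec rather than ARC's actual pod-generation code: with restartPolicy forced to Never, a runner container that disappears ends the pod instead of being restarted, so the pod does not get stuck in Terminating.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{}
	// Force Never so a crashed/removed runner container ends the pod instead of restarting,
	// letting the controller finish the delete instead of waiting on a pod stuck in Terminating.
	pod.Spec.RestartPolicy = corev1.RestartPolicyNever
	fmt.Println(pod.Spec.RestartPolicy)
}
```
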
renovate[bot]
3014e98681 chore(deps): update helm/chart-releaser-action digest to a3454e4 (#1441)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-13 07:47:19 +09:00
renovate[bot]
5f4be6a883 fix(deps): update module github.com/go-logr/logr to v1.2.3 (#1241)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-13 07:42:58 +09:00
Yusuke Kuoka
b98f470a70 ci: enable CodeQL Alerts following the OpenSSF Security Scorecards recommendation (#1421)
Ref #1298
2022-05-12 10:55:11 +01:00
Yusuke Kuoka
e46b90f758 fix: runner pods managed by RunnerSet to not stuck in Terminating (#1420)
This is intended to fix #1369, mostly for RunnerSet-managed runner pods. It is "mostly" because this fix might work well for RunnerDeployment in cases where #1395 does not, such as when the user explicitly sets the runner pod restart policy to anything other than "Never".

Ref #1369
2022-05-12 09:34:27 +01:00
Yusuke Kuoka
3a7e8c844b feat: Support arbitrarily setting privileged: true for runner container (#1383)
Resolves #1282
2022-05-12 09:25:51 +01:00
renovate[bot]
65a67ee61c chore(deps): update docker/setup-qemu-action digest to 0522dcd (#1440)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 11:09:43 +09:00
renovate[bot]
215ba36fd1 chore(deps): update docker/setup-buildx-action digest to 91cb32d (#1439)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 11:08:57 +09:00
renovate[bot]
27774b47bd fix(deps): update golang.org/x/oauth2 digest to 9780585 (#1329)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 10:50:29 +09:00
renovate[bot]
fbde2b9a41 chore(deps): update docker/login-action digest to d398f07 (#1438)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 10:37:34 +09:00
renovate[bot]
212098183a chore(deps): update docker/build-push-action digest to c5e6528 (#1437)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 10:37:09 +09:00
renovate[bot]
4a5097d8cf chore(deps): update actions/setup-go digest to 193b404 (#1431)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 10:36:53 +09:00
renovate[bot]
9c57d085f8 chore(deps): update actions/stale digest to 65d24b7 (#1433)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 09:21:08 +09:00
renovate[bot]
d6622f9369 chore(deps): update actions/setup-python digest to c57f793 (#1432)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 09:20:57 +09:00
Yusuke Kuoka
3b67ee727f e2e: Fix wrong scale trigger configuration used in test (#1434) 2022-05-12 09:19:58 +09:00
Yusuke Kuoka
e6bddcd238 Fix certain runnerset name in E2E and acceptance (#1435) 2022-05-12 09:19:47 +09:00
Callum Tait
f60e57d789 docs: improve troubleshooting (#1428)
* docs: runner cleanup instructions

* docs: add delay in job allocation

* docs: fix broken link

* docs: reorganise into categories

* docs: align the format across the doc

* docs: remove code comment

* docs: add a tools section

* docs: add a short description to each section

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-05-12 08:45:10 +09:00
renovate[bot]
3ca1152420 chore(deps): update actions/checkout digest to 2541b12 (#1430)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 08:43:35 +09:00
renovate[bot]
e94fa19843 chore(deps): update actions/cache digest to 95f200e (#1429)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-12 08:43:27 +09:00
renovate[bot]
99832d7104 chore(deps): update docker/setup-buildx-action action to v2 (#1416)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-11 16:34:53 +01:00
renovate[bot]
289bcd8b64 chore(deps): update docker/login-action action to v2 (#1415)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-05-11 16:34:40 +01:00
Jacob Gadikian
5e8cba82c2 docs: simplify wording (#1427)
clarify docs
2022-05-11 11:44:07 +01:00
Yusuke Kuoka
dabbc99c78 refactor(controller): stop auto-setting RUNNER_FEATURE_FLAG_EPHEMERAL (#1385)
This feature flag was automatically provided by ARC to the runner container to make it use `--ephemeral` instead of `--once` by default. As support for `--once` is being dropped from the runner image via #1384, we no longer need it.

Ref #1196
2022-05-11 11:42:55 +01:00
Yusuke Kuoka
d01595cfbc ci: pin GitHub Actions workflow actions by hash (#1422)
as recommended in 5758364c82/docs/checks.md (pinned-dependencies)

Ref #1298
2022-05-11 11:41:30 +01:00
Yusuke Kuoka
c1e5829b03 refactor(runner): ability to opt-out of using --ephemeral / opt-in to legacy --once for GHES older than 3.3 (#1384)
* runner: Remove the ability to use the deprecated `--once` flag

Ref #1196

* runner: Ability to opt-out of using --ephemeral

Although we are going to eventually remove the ability to use the legacy --once flag as proposed in #1196, there might be folks still using legacy GHES versions 3.2 or earlier.

This commit removes the existing feature flag to opt in to --ephemeral, while adding another feature flag, RUNNER_FEATURE_FLAG_ONCE, to opt in to --once so that folks stuck on legacy GHES versions
can still use ARC.

Since this change, every user starts using --ephemeral by default. If they see any issues on a legacy GHES instance, RUNNER_FEATURE_FLAG_ONCE=true can be set to keep using --once, which gives them one more ARC release before they have to upgrade their GHES instance.

But beware, we won't support legacy GHES instances forever as it's going to be a maintenance nightmare. Please upgrade!

Ref #1196
2022-05-11 09:55:33 +01:00
Renovate Bot
800d6bd586 chore(deps): update dependency actions/runner to v2.291.1 2022-04-29 19:05:31 +00:00
Callum Tait
d3b7f0bf7d chore: release chart targeting v0.23.0 (#1404) 2022-04-29 13:54:22 +01:00
Yusuke Kuoka
dbcb67967f Turn the bug report template into a form with more context (#1401)
I believe this helps us focus on relatively more important issues like critical bug reports and highly-requested feature requests.

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-04-29 21:09:59 +09:00
Callum Tait
55369bf846 fix: forgot to do the chart (#1388)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

> chart test is failing due to `flag provided but not defined: -default-scale-down-delay` which seems to come from the fact that we still use ARC 0.22.3 for chart testing.
> 
> Probably we'd better figure out how to test it against both the latest release version of ARC and the canary version of ARC?
> 
> Or just test it against the canary version so that it won't fail when the chart depends on features that are available only in the canary version of ARC? 🤔

yup, lets get this merged though so we can do a release today
2022-04-29 09:15:27 +01:00
Yusuke Kuoka
1f6303daed Merge pull request #1396 from actions-runner-controller/docs/pre-release
docs: final doc changes + v0.23.0 release notes
2022-04-29 12:35:42 +09:00
Yusuke Kuoka
0fd1a681af Update bug_report.md (#1400)
so that we can hopefully get enough information to diagnose the issue in case it's really a bug report, or it goes to Discussions in case it's a question.
2022-04-29 12:32:08 +09:00
toast-gear
58416db8c8 docs: add new runner group API enhancement 2022-04-28 16:17:53 +01:00
toast-gear
78a0817c2c docs: align release doc format 2022-04-28 16:06:59 +01:00
toast-gear
9ed429513d docs: bump the helm upgrade chart docs version 2022-04-28 16:04:58 +01:00
toast-gear
46291c1823 docs: highlight the new scale down delay flag 2022-04-28 16:04:16 +01:00
toast-gear
832e59338e docs: clarification of the release log 2022-04-28 16:00:03 +01:00
toast-gear
70ae5aef1f docs: add migration steps 2022-04-28 15:57:03 +01:00
toast-gear
6d10dd8e1d docs: breaking changes in v0.23.0 2022-04-28 10:51:19 +01:00
toast-gear
61c5a112db docs: remove reference to cleared limitation 2022-04-28 10:39:11 +01:00
toast-gear
7bc08fbe7c docs: remove TotalNumberOfQueuedAndInProgressWorkflowRuns limitation 2022-04-28 10:36:12 +01:00
Yusuke Kuoka
4053ab3e11 Fix label support for TotalNumberOfQueuedAndInProgressWorkflowRuns metric (#1390)
In #1373 we made two mistakes:

- We mistakenly checked whether all the runner labels are included in the job labels, and only then marked the target as eligible for scaling. It should definitely be the opposite!
- We mistakenly checked for the existence of the `self-hosted` label in the job. [Although it is good practice to explicitly say `runs-on: ["self-hosted", "custom-label"]`](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#example-using-labels-for-runner-selection), that's not a requirement, so we should code accordingly.

The consequence of those two mistakes was that, for example, jobs with `self-hosted` + `custom` labels did not result in scaling runners with `self-hosted` + `custom` + `custom2`. This should fix that.

Ref #1056
Ref #1373
2022-04-27 16:24:21 +01:00
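
A minimal sketch of the corrected matching direction, with hypothetical function names: a job matches a scale target when every label the job asks for is present among the runner's labels, with `self-hosted` treated as implicitly satisfied since workflows are not required to list it.

```go
package main

import "fmt"

// jobMatchesRunnerLabels reports whether a workflow job's runs-on labels are a subset of
// the runner's labels. "self-hosted" is treated as implicitly satisfied because workflows
// are not required to list it explicitly.
func jobMatchesRunnerLabels(jobLabels, runnerLabels []string) bool {
	have := map[string]bool{"self-hosted": true}
	for _, l := range runnerLabels {
		have[l] = true
	}
	for _, l := range jobLabels {
		if !have[l] {
			return false
		}
	}
	return true
}

func main() {
	runner := []string{"self-hosted", "custom", "custom2"}
	fmt.Println(jobMatchesRunnerLabels([]string{"self-hosted", "custom"}, runner)) // true
	fmt.Println(jobMatchesRunnerLabels([]string{"gpu"}, runner))                   // false
}
```
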
Callum Tait
059481b610 refactor: remove legacy controller Docker build (#1360) [skip ci]
* refactor: remove legacy build and use buildkit

* refactor: add runner version to root makefile

* refactor: enable buildkit for runner make build

* refactor: ignore runner makefile in ci

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-27 08:21:02 +01:00
renovate[bot]
9fdb2c009d fix(deps): update module github.com/google/go-cmp to v0.5.8 (#1394)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-04-27 10:11:33 +09:00
Callum Tait
9f7ea0c014 docs: highlight breaking changes are possible (#1310)
It's probably worth highlighting that it's version 0.X.X by design and that breaking changes are possible until we move it to 1.0.0

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-26 11:20:11 +09:00
Callum Tait
0caa0315c6 feat: set default in chart (#1389)
Ref #963
Ref #899

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-26 10:25:01 +09:00
Yusuke Kuoka
1c726ae20c chore: Add unit test for RunnerReconciler.newPod (#1382)
Adds some unit tests for the runner pod generation logic that is used internally by runner controller as preparation for #1282
2022-04-25 09:59:21 +09:00
Yusuke Kuoka
d6cdd5964c chore: Add unit test for newRunnerPod (#1381)
Adds some unit tests for the runner pod generation logic that is used internally by runner deployment and runner set controllers as preparation for #1282
2022-04-25 08:52:58 +09:00
Yusuke Kuoka
a622968ff2 feat: Add label support to TotalNumberOfQueuedAndInProgressWorkflowRuns metric (#1373)
This is an implementation of my interpretation of the "bronze" case proposed in #1056

Ref #1056
2022-04-24 14:41:34 +09:00
Soham Banerjee
e8ef84ab76 Removed the default githubEvent: {} (#1361)
Ref #1358
See also #1379
2022-04-24 13:39:59 +09:00
Yusuke Kuoka
1551f3b5fc Remove the default githubEvent: {} requiring an event to be defined (#1379)
Ref #1358
2022-04-24 13:37:26 +09:00
Yusuke Kuoka
3ba7179995 Do not enable TotalNumberOfQueuedAndInProgressWorkflowRuns by default (#1372)
Previously, omitting hra.spec.metrics entirely resulted in ARC enabling TotalNumberOfQueuedAndInProgressWorkflowRuns.
That turned out not to be a good idea, so since this change it is no longer enabled by default.

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/728
2022-04-24 13:36:42 +09:00
Mário Uhrík
e7c6c26266 Runner CRD: Add required conversionReviewVersions field (#1259)
Without that field, GKE 1.21 refuses to create the CRD
with an error message that conversionReviewVersions is mandatory.

conversionReviewVersions is a required field when creating apiextensions.k8s.io/v1 custom resource definitions.
Webhooks are required to support at least one ConversionReview version understood by the current and previous API server.

See https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/_print/#webhook-request-and-response
2022-04-24 11:04:15 +09:00
Tingluo Huang
ebe7d060cb Find runner groups that visible to repository using a single API call. (#1324)
The [ListRunnerGroup API](https://docs.github.com/en/rest/reference/actions#list-self-hosted-runner-groups-for-an-organization) now adds a new query parameter `visible_to_repository`.

We were doing an `N+1` lookup when trying to find which runner group can be used for a job from a certain repository.
- List all runner groups
- Loop through all groups to check repository access for each of them via [API](https://docs.github.com/en/rest/reference/actions#list-repository-access-to-a-self-hosted-runner-group-in-an-organization)

The new query parameter `visible_to_repository` should allow us to get the runner groups with access in one call.

Limitation:
- The new query parameter is only supported on GitHub.com, which means anyone who uses ARC on GitHub Enterprise Server won't get this.
- I am working on a PR to update `go-github` library to support the new parameter, but it will take a few weeks for a newer `go-github` to be released, so in the meantime, I am duplicating the implementation in ARC as well to support the new query parameter.
2022-04-24 10:54:40 +09:00
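
A minimal sketch of the single-call lookup against the REST endpoint documented above, using plain net/http instead of ARC's client code; the org, repository value, and token handling are placeholders, and the exact format expected by `visible_to_repository` should be taken from the linked GitHub docs.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

type runnerGroup struct {
	ID   int64  `json:"id"`
	Name string `json:"name"`
}

type runnerGroupList struct {
	RunnerGroups []runnerGroup `json:"runner_groups"`
}

// listVisibleRunnerGroups fetches only the runner groups visible to the given repository
// in one call, instead of listing all groups and checking repository access for each
// (the old N+1 pattern).
func listVisibleRunnerGroups(org, repo, token string) ([]runnerGroup, error) {
	u := fmt.Sprintf("https://api.github.com/orgs/%s/actions/runner-groups?visible_to_repository=%s",
		url.PathEscape(org), url.QueryEscape(repo))
	req, err := http.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Accept", "application/vnd.github+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	var list runnerGroupList
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		return nil, err
	}
	return list.RunnerGroups, nil
}

func main() {
	groups, err := listVisibleRunnerGroups("my-org", "my-repo", os.Getenv("GITHUB_TOKEN"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, g := range groups {
		fmt.Println(g.ID, g.Name)
	}
}
```
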
Callum Tait
c3e280eadb refactor: set sync period default to 1m (#1308)
Fixes: #1294

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-04-24 10:47:00 +09:00
Vinícius Garcia
9f254a2393 docs: run README files through Grammarly (#1353)
* Update README.md

* Run charts/actions-runner-controller/README.md through Grammarly

* Fix broken link as suggested by @toast-gear

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-04-22 16:58:10 +01:00
renovate[bot]
e5cf3b95cf fix(deps): update module github.com/teambition/rrule-go to v1.8.0 (#1048)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-04-20 11:11:38 +09:00
Callum Tait
24aae58dbc feat: default scale down flag (#963)
Resolves #899

Co-authored-by: Callum <callum@domain.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-04-20 11:09:09 +09:00
Jeff Billimek
13bfa2da4e Fix runner pod dnsConfig (#1227)
Fixes #1226
Fixes #1224

Signed-off-by: Jeff Billimek <jeff@billimek.com>
2022-04-20 10:55:20 +09:00
Chris Bui
cb4e1fa8f2 breaking: Pluralize topologySpreadConstraint to match docs (#1089)
Original PR:
https://github.com/actions-runner-controller/actions-runner-controller/pull/814/files#diff-25283fab3c6d5fa726652c8741a122c1ba14d8486fe092774617a385e4bc1a92R145

If you're already using this feature, follow the process explained in https://github.com/actions-runner-controller/actions-runner-controller/pull/1089#issuecomment-1103354025 when upgrading.

Fixes #984
2022-04-20 10:47:18 +09:00
Patrick Ellis
7a5a6381c3 Add WorkflowJob to GitHubEventScaleUpTriggerSpec types (#922) 2022-04-20 09:59:08 +09:00
Renovate Bot
81951780b1 chore(deps): update dependency actions/runner to v2.290.1 2022-04-14 18:36:24 +00:00
renovate[bot]
3b48db0d26 chore(deps): update actions/stale action to v5 (#1338)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-04-13 09:42:27 +01:00
Callum Tait
352e206148 refactor: use apt-get instead of apt (#1342)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-13 09:40:15 +01:00
Richard Fussenegger
6288036ed4 Removed modprobe Script (#1247) [skip ci]
* Removed `modprobe` Script

I was able to find out that this script originates from https://github.com/docker-library/docker/blob/master/modprobe.sh, but our image has neither `lsmod` nor `modprobe` installed. Hence, if it were in use, it would fail every time. 🤔

* fix: correct command order

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-13 09:39:55 +01:00
Siyuan Zhang
a37b4dfbe3 Fix scale down condition to exclude skipped (#1330)
* Fix scale down condition to exclude skipped
* Use fallthrough and break to let default handle the skipped case

Fixes #1326
2022-04-13 08:53:07 +09:00
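
A minimal sketch of the corrected scale-down condition, with illustrative field and value names rather than ARC's actual webhook handler: a completed workflow_job only shrinks the pool when its conclusion indicates it actually occupied a runner, so skipped jobs fall through to a no-op.

```go
package main

import "fmt"

// scaleTargetDelta returns how the desired replica count should change for a
// workflow_job event. Skipped jobs never ran on a runner, so they must not shrink
// the pool. Field and value names here are illustrative.
func scaleTargetDelta(action, conclusion string) int {
	switch action {
	case "queued":
		return +1
	case "completed":
		switch conclusion {
		case "skipped": // job never started; nothing to scale down
			return 0
		default:
			return -1
		}
	default:
		return 0
	}
}

func main() {
	fmt.Println(scaleTargetDelta("completed", "success")) // -1
	fmt.Println(scaleTargetDelta("completed", "skipped")) // 0
}
```
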
Callum Tait
c4ff1a588f chore: migrate to actions stale bot (#1334)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-13 08:29:49 +09:00
Callum Tait
4a3b7bc8d5 refactor: location of some runner cmds (#1337)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-12 22:18:34 +01:00
Richard Fussenegger
8db071c4ba Improved Bash Logger (#1246)
* Improved Bash Logger

This is a first step towards having robust Bash scripts in the runner images. The changes _could_ be considered breaking, depending on our backwards compatibility definition.

* Fixed Log Formatting Issues

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-04-12 22:02:06 +01:00
Renovate Bot
7b8057e417 chore(deps): update dependency actions/runner to v2.290.0 2022-04-12 20:46:19 +00:00
renovate[bot]
960a704246 chore(deps): update azure/setup-helm action to v2.1 (#1328) [skip ci]
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-04-11 18:23:48 +01:00
Daniel Moran
f907f82275 Add more usages of RUNNER_VERSION to Renovate config. (#1313)
* Add more usages of RUNNER_VERSION to Renovate config.

* Double-escape `?` in pattern
2022-04-11 11:28:00 +01:00
Rolf Ahrenberg
7124451cea chore: fix typo (#1316) [skip ci] 2022-04-08 17:32:01 +01:00
Yusuke Kuoka
c8f1acd92c chore: bump chart to latest (#1319)
Bumps the chart version along with the controller version.
We bump the patch number for the chart as the release for the controller is a patch release.
That's the same handling as we've done in the previous version ecc8b4472a and #1300

As always, be sure to upgrade CRDs before updating the controller version!
Otherwise it can break in interesting ways.
2022-04-08 10:59:07 +09:00
Yusuke Kuoka
b0fd7a75ea Fix release workflow 2022-04-08 01:36:14 +00:00
Yusuke Kuoka
b09c54045a Prevent runners from stuck in Terminating when pod disappeared without standard termination process (#1318)
This fixes the said issue by additionally treating any runner pod whose phase is Failed, or whose runner container exited with a non-zero code, as "complete", so that ARC gives up unregistering the runner from Actions and deletes the runner pod anyway.

Note that there are plenty of causes for that. If you are deploying runner pods on AWS spot instances or GCE preemptible instances and a job assigned to a runner takes more time than the shutdown grace period provided by your cloud provider (2 minutes for AWS spot instances), the runner pod will be terminated prematurely without letting actions/runner unregister itself from Actions. If your VM or hypervisor fails, runner pods that were running on the node become PodFailed without unregistering their runners from Actions.

Please beware that it is currently the user's responsibility to clean up any dangling runner resources on GitHub Actions.

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1307
Might also relate to https://github.com/actions-runner-controller/actions-runner-controller/issues/1273
2022-04-08 10:17:33 +09:00
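
A minimal sketch of the "treat as complete" check described above, assuming the runner container is named `runner` and using plain corev1 types; the real controller does considerably more bookkeeping around unregistration.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// runnerPodLooksComplete reports whether the pod should be treated as finished even though
// the runner never unregistered itself: either the whole pod failed, or the runner
// container terminated with a non-zero exit code.
func runnerPodLooksComplete(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodFailed {
		return true
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name != "runner" {
			continue
		}
		if t := cs.State.Terminated; t != nil && t.ExitCode != 0 {
			return true
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodFailed}}
	fmt.Println(runnerPodLooksComplete(pod)) // true: give up unregistering and delete the pod
}
```
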
Yusuke Kuoka
96f2da1c2e Merge pull request #1262 from fgalind1/patch-4
Fix deleting a runner when pod was deleted
2022-04-08 10:17:13 +09:00
Yusuke Kuoka
cac8b76c68 Merge pull request #1292 from actions-runner-controller/renovate/sigs.k8s.io-controller-runtime-0.x
fix(deps): update module sigs.k8s.io/controller-runtime to v0.11.2
2022-04-08 10:14:47 +09:00
Felipe Galindo Sanchez
e24d942d63 Merge remote-tracking branch 'upstream/master' into patch-4 2022-04-06 06:43:01 -07:00
Felipe Galindo Sanchez
b855991373 ci: pin go version to the known working version (#1303) 2022-04-06 09:34:48 +01:00
Felipe Galindo Sanchez
e7e48a77e4 Merge remote-tracking branch 'upstream/master' into patch-4 2022-04-04 09:54:08 -07:00
Yusuke Kuoka
85dea9b67c Merge pull request #1285 from actions-runner-controller/docs/runnersets
docs: add limitations to runnersets + reorder
2022-04-03 18:18:54 +09:00
Yusuke Kuoka
1d9347f418 chore: bump chart to latest (#1300)
* chore: bump chart to latest

Bumps the chart version along with the controller version.
We bump the patch number for the chart as the release for the controller is a patch release.
That's the same handling as we've done in the previous version ecc8b4472a

As always, be sure to upgrade CRDs before updating the controller version!
Otherwise it can break in interesting ways.

* docs: expand on CRD upgrade requirement

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-04-03 10:15:39 +01:00
Yusuke Kuoka
631a70a35f Fix runner pod to be cleaned up earlier regardless of the sync period (#1299)
Ref #1291
2022-04-03 11:12:44 +09:00
Yusuke Kuoka
b614dcf54b Make the hard-coded runner startup timeout longer to avoid a race on token expiration (#1296)
Ref #1295
2022-04-03 09:59:35 +09:00
Callum Tait
14f9e7229e docs: highlight why persistent runners are not ideal 2022-04-01 15:49:15 +01:00
Renovate Bot
82770e145b fix(deps): update module sigs.k8s.io/controller-runtime to v0.11.2 2022-03-30 21:38:12 +00:00
Renovate Bot
971c54bf5c chore(deps): update dependency actions/runner to v2.289.2 2022-03-30 18:18:17 +00:00
renovate[bot]
b80d9b0cdc chore(deps): update helm/chart-releaser-action action to v1.4.0 (#1287)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-03-30 13:24:26 +01:00
Bernardo Meurer
e46df413a1 refactor(runner/entrypoint): check for externalstmp (#1277)
* refactor(runner/entrypoint): check for externalstmp [skip ci]

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-03-30 12:18:18 +01:00
toast-gear
eb02f6f26e docs: redundant words 2022-03-30 10:36:34 +01:00
toast-gear
7a750b9285 docs: wording 2022-03-30 10:35:32 +01:00
toast-gear
d26c8d6529 docs: add autoscaling also causes problems 2022-03-30 10:26:08 +01:00
toast-gear
fd0092d13f chore: new line for consistency 2022-03-30 10:02:33 +01:00
toast-gear
88d17c7988 docs: use the right font 2022-03-30 10:00:34 +01:00
toast-gear
98567dadc9 docs: fix index 2022-03-30 09:59:32 +01:00
toast-gear
7e8d80689b docs: add limitations to runnersets + reorder 2022-03-30 09:53:59 +01:00
Callum Tait
d72c396ff1 docs: slight correction for a multi controller env 2022-03-29 16:57:58 +01:00
Milan Aleks
13e7b440a8 chore: typo fix in runner Dockerfile [skip ci] (#1270) 2022-03-29 11:05:24 +01:00
Michael Goodness
a95983fb98 feat(kustomize): add github-webhook-server overlay (#1198)
* feat(kustomize): add github-webhook-server overlay

* chore(kustomize): add image to github-webhook-server overlay

* feat(kustomize): drop sync-period
2022-03-29 11:00:55 +01:00
Callum Tait
ecc8b4472a chore: bump chart to latest (#1280) 2022-03-29 07:46:44 +01:00
Callum Tait
459beeafb9 docs: remove the nonsense 2022-03-27 14:15:42 +01:00
Rolf Ahrenberg
1b327a0721 refactor: use const envvars (#1251) 2022-03-27 12:14:56 +01:00
Jérôme Foray
1f8a23c129 fix(chart): add namespace selector to webhooks when in singleNamespace mode (#1237)
* fix(chart): add namespace selector to webhooks when in singleNamespace mode

* docs: expand multi controller setup

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-03-27 11:52:39 +01:00
Naka Masato
af8d8f7e1d Update runnerdeployment_webhook.go (#1271) 2022-03-25 09:24:13 +09:00
Yusuke Kuoka
e7ef21fdf9 Merge pull request #1264 from ekarlso/env-var-detection-fix
Use container name to detect runner container in Pod
2022-03-25 09:23:48 +09:00
Endre Karlson
ee7484ac91 Use container name to detect runner container in Pod 2022-03-23 12:39:58 +01:00
Yusuke Kuoka
debf53c640 Fix missing pip bin path (/home/runner/.local/bin) (#1263)
Fixes #1261
2022-03-23 10:28:12 +09:00
Felipe Galindo Sanchez
9657d3e5b3 Fix deleting a runner when pod was deleted
With the current implementation, if a pod is deleted, the controller fails to delete the runner because it tries to annotate a pod that doesn't exist: we were passing a new pod object that is not an existing resource.
2022-03-22 14:44:50 -07:00
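
A minimal sketch of the fix's intent, assuming a controller-runtime client: fetch the pod that actually exists (tolerating NotFound) before annotating it, instead of calling the API with a freshly constructed pod object that was never persisted.

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// annotateExistingRunnerPod looks up the pod that actually exists before annotating it.
// If the pod is already gone, the runner can simply be deleted without the annotation step.
func annotateExistingRunnerPod(ctx context.Context, c client.Client, key types.NamespacedName, k, v string) error {
	var pod corev1.Pod
	if err := c.Get(ctx, key, &pod); err != nil {
		if apierrors.IsNotFound(err) {
			return nil // pod was deleted externally; nothing to annotate
		}
		return err
	}
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations[k] = v
	return c.Update(ctx, &pod)
}
```
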
Callum Tait
2cb04ddde7 * feat: move to new run.sh container friendly file (#1244)
* fix: unit tests were very broken

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-03-22 19:02:51 +00:00
renovate[bot]
366f8927d8 chore(deps): update actions/cache action to v3 (#1252)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-03-22 18:48:23 +00:00
Richard Fussenegger
532a2bb2a9 feat: remove registration-only runner logic from entrypoint (#1249)
Closes #1207
2022-03-22 18:33:14 +00:00
Callum Tait
f28cecffe9 docs: various minor changes (#1250)
* docs: various minor changes

* docs: format fixes
2022-03-20 16:05:03 +00:00
Renovate Bot
4cbbcd64ce chore(deps): update dependency actions/runner to v2.289.1 2022-03-18 22:36:38 +00:00
Richard Fussenegger
a68eede616 feat: copy dotfiles from asset to service dir (#1136)
* feat: copy dotfiles from asset to service dir

* Fixed `UNITTEST` Condition

* Load `/etc/environment`

See https://github.com/actions/runner/issues/1703 for context on this change.
2022-03-18 07:40:52 +00:00
Julien Tanay
c06a806d75 Add note about having 100+ replicas (#1103) 2022-03-16 21:03:05 +00:00
Callum Tait
857c1700ba docs: add repo update to upgrade notes (#1233) 2022-03-16 10:37:37 +00:00
Callum Tait
a40793bb60 chore: bump chart to app 0.22.0 (#1232)
* chore: bump chart to app 0.22.0
2022-03-16 07:57:30 +00:00
Callum Tait
48a7b78bf3 docs: remove runnerset limitation (#1225)
This works great in testing now; it is no longer a limitation because ARC now creates a StatefulSet per runner
2022-03-16 09:08:41 +09:00
renovate[bot]
6ff93eae95 chore(deps): update helm/chart-testing-action action to v2.2.1 (#1216)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-03-15 18:51:54 +00:00
Yusuke Kuoka
b25a0fd606 Merge pull request #1217 from actions-runner-controller/docs/re-order
docs: various changes in preparation for 0.22.0 release

- Move RunnerSets to a more prominent location in the docs
- Clean up a few bits
- Highlight the deprecation and removal timeline for the `--once` flag
- Renamed the ephemeral runners section to something more logical (persistent runners). Static runners were an option; however, the word static is awkward as it's somewhat tied up with autoscaling and the `Runner` kind, so Persistent was chosen instead.
- Update upgrade docs to use `replace` instead of `apply`
2022-03-15 09:01:32 +09:00
toast-gear
3beef84f30 docs: better sentences 2022-03-14 12:43:07 +00:00
toast-gear
76cc758d12 docs: minor consistency change 2022-03-14 12:37:57 +00:00
toast-gear
c4c6e833a7 chore: add deprecation warning 2022-03-14 12:35:07 +00:00
toast-gear
ecf74e615e docs: bump versions and upgrade instructions 2022-03-14 10:23:36 +00:00
toast-gear
bb19e85037 docs: various cleanups and re-orderings 2022-03-14 09:52:22 +00:00
Yusuke Kuoka
e7200f274d Merge pull request #1214 from actions-runner-controller/fix-static-runners
Fix runner{set,deployment} rollouts and static runner scaling

I was testing static runners as preparation to cut the next release of ARC, v0.22.0, and found several problems that I thought worth fixing.

In particular, this PR fixes static runner reliability issues in two ways.

c4b24f8366 fixes the issue that ARC gives up retrying RemoveRunner calls too early, especially for static runners, which often resulted in static runners being terminated prematurely while running jobs.

791634fb12 fixes the issue that ARC was unable to scale up any static runners when the corresponding desired replicas number in e.g. RunnerDeployment gets updated. It was caused by a bug in the mechanism intended to prevent ephemeral runners from being recreated in unwanted circumstances, which was mistakenly triggered for static runners as well.

Since #1179, RunnerDeployment was not able to gracefully terminate old RunnerReplicaSets on update. c612e87 fixes that by changing RunnerDeployment to first scale old RunnerReplicaSet(s) down to zero, wait for sync, and set the deletion timestamp only after that. That way, RunnerDeployment can ensure that all the old RunnerReplicaSets being deleted are already scaled to zero, following the standard unregister-and-then-delete runner termination process.

It revealed a hidden bug in #1179: sometimes the scale-to-zero-before-runnerreplicaset-termination did not work as intended. 4551309 fixes that, so that RunnerDeployment can actually terminate old RunnerReplicaSets gracefully.
2022-03-13 22:19:26 +09:00
Yusuke Kuoka
1cc06e7408 e2e: Make enterprise runners optional for testing GitHub App
As GitHub Apps do not allow ARC to access enterprise-runner-related API endpoints, like the create-registration-token API.
2022-03-13 13:11:26 +00:00
Yusuke Kuoka
4551309e30 Fix runners to not terminate before unregistration when scaling down
#1179 was not working, particularly for scale-down of static (and perhaps long-running ephemeral) runners, which resulted in some runner pods being terminated before the requested unregistration process completed, causing some in-progress workflow jobs to hang forever. This fixes an edge case in which a decreased desired replica count triggered the failure, so that every runner is unregistered and then terminated, as originally designed.
2022-03-13 13:09:46 +00:00
Yusuke Kuoka
7123b18a47 chore: Log more variables when log level is -2 2022-03-13 13:04:28 +00:00
Yusuke Kuoka
cc55d0bd7d Let runnerdeployment controller log runnerreplicaset creation 2022-03-13 12:25:53 +00:00
Yusuke Kuoka
c612e87d85 fix: Let RunnerDeployment scale RunnerReplicaSet to zero before terminating it
so that hopefully RunnerDeployment can gracefully terminate older RunnerReplicaSets on update.
2022-03-13 12:18:22 +00:00
Yusuke Kuoka
326d6a1fe8 Fix the timing of Marking owner for unregistration completion log 2022-03-13 12:16:55 +00:00
Yusuke Kuoka
fa8ff70aa2 Add log when deletion timestamp is being set on owner object 2022-03-13 12:16:29 +00:00
Yusuke Kuoka
efb7fca308 Fix externally deleted runner pod to not block unregistration process 2022-03-13 12:15:49 +00:00
Yusuke Kuoka
e4280dcb0d Fix patch MergeFrom target 2022-03-13 12:14:14 +00:00
Yusuke Kuoka
f153870f5f fix: Do not block indefinitely on runner that cannot be deleted due to 403 2022-03-13 12:12:01 +00:00
Yusuke Kuoka
8ca39caff5 Fix log message on runner deletion 2022-03-13 12:11:11 +00:00
Yusuke Kuoka
791634fb12 Fix static runners not scaling up
It turned out that #1179 broke static runners in a way that they could no longer scale up at all when the desired replica count was updated.
This fixes that by correcting a short-circuit, intended only for ephemeral runners, so that it is no longer mistakenly triggered for static runners.
2022-03-13 07:26:43 +00:00
Yusuke Kuoka
c4b24f8366 Prevent static runners from terminating due to unregister timeout
The unregister timeout of 1 minute (no matter how long it is) can negatively impact the availability of a static runner that constantly runs workflow jobs, and of an ephemeral runner that runs a long-running job.
We deal with that by completely removing the unregistration timeout, so that regardless of the runner type (static or ephemeral), the runner waits forever until it is successfully unregistered before being terminated.
2022-03-13 07:26:36 +00:00
Yusuke Kuoka
a1c6d1d11a doc: Add release note for 0.22.0 (#1199)
As it turned out to be the biggest release ever, I was afraid I might not be able to write a summary of changes that communicates well. Here is my attempt. Please review and leave any comments so that we can be more confident in this release. Thank you!
2022-03-13 16:25:24 +09:00
Yusuke Kuoka
adc889ce8a Fix RunnerDeployment to be able to finish rollout (#1213)
I found that #1179 was unable to finish the rollout of a RunnerDeployment update (like a runner env update). It was able to create a new RunnerReplicaSet with the desired spec, but unable to tear down the older ones. This fixes that.
2022-03-13 10:10:24 +09:00
Yusuke Kuoka
b83db7be8f Merge pull request #1212 from actions-runner-controller/fix-runnerdeploy-duplicate-envvars
Fix RunnerDeployment-managed runner pods to not get RUNNER_NAME and RUNNER_TOKEN injected twice
2022-03-12 23:27:45 +09:00
Yusuke Kuoka
da2adc0cc5 e2e: Omit RUNNER_FEATURE_FLAG_EPHEMERAL when TEST_FEATURE_FLAG_EPHEMERAL is not set 2022-03-12 14:08:23 +00:00
Yusuke Kuoka
fa287c4395 Fix RunnerDeployment-managed runner pods to not get RUNNER_NAME and RUNNER_TOKEN injected twice
Since #1179, runner pods managed by RunnerDeployment had two duplicate environment variables for RUNNER_NAME and RUNNER_TOKEN. This fixes that.
2022-03-12 13:49:50 +00:00
Yusuke Kuoka
7c0340dea0 Merge pull request #1211 from actions-runner-controller/use-ephemeral-by-default
Use --ephemeral by default

Every runner is now --ephemeral by default.

Note that this works by ARC setting the RUNNER_FEATURE_FLAG_EPHEMERAL envvar to true by default. Previously you had to explicitly set it to true, otherwise the runner was passed --once, which is known to be subject to various race conditions.

It's worth noting that the very confusing and related configuration, ephemeral: true, which creates --once runners instead of static (or persistent) runners, had already been the default for many months. So, this should be the only change needed to make every runner ephemeral without any explicit configuration.

You can still fall back to static (persistent) runners by setting ephemeral: false, and to --once runners by setting RUNNER_FEATURE_FLAG_EPHEMERAL to "false". But I don't think there are many reasons to do so.

Ref #1189
2022-03-12 22:47:38 +09:00
Yusuke Kuoka
c3dd1c5c05 e2e: Make TEST_FEATURE_FLAG_EPHEMERAL optional 2022-03-12 13:32:42 +00:00
Yusuke Kuoka
051089733b Use --ephemeral by default
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1189
2022-03-12 13:20:07 +00:00
Yusuke Kuoka
757e0a82a2 Merge pull request #1210 from actions-runner-controller/fix-github-api-cache-for-github-app-mode
Fix GitHub API cache to work with GitHub App authentication
2022-03-12 21:17:25 +09:00
Yusuke Kuoka
83e550cde5 Experimental log level "-4" for logging every HTTP round-trip for GitHub API calls 2022-03-12 12:11:16 +00:00
Yusuke Kuoka
22ef7b3a71 acceptance,e2e: Fix deploy.sh and e2e_test.go for testing with GitHub App 2022-03-12 12:10:04 +00:00
Yusuke Kuoka
28fccbcecd Fix GitHub API cache to work with GitHub App authentication
The version of `bradleyfalzon/ghinstallation` which is used to enable GitHub App authentication turned out to add an extra header `application/vnd.github.machine-man-preview+json` to every HTTP request. That revealed an edge case in our HTTP cache layer `gregjones/httpcache` that caused it to not serve responses from the cache when it should.

There were two problems. One was that it did not support multi-valued headers and only looked at the first value of each header, and the other was that it did not support http.RoundTripper implementations that modify HTTP request headers within a RoundTrip call.

I fixed it in my fork of httpcache, which is hosted at https://github.com/actions-runner-controller/httpcache.

The relevant commits are:

- 70d975e77d
- 197a8a3546

This can be considered a follow-up to #1127, which turned out to have enabled the cache only when ARC uses a PAT for authentication.
With this fix, the cache is also enabled when ARC authenticates as a GitHub App.
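
For illustration, here is a minimal sketch of how a cache transport can sit underneath the GitHub App installation transport so that every authenticated request passes through the cache. It uses the upstream `gregjones/httpcache` API rather than our fork, and is not ARC's exact wiring:

```
package example

import (
	"fmt"
	"net/http"

	"github.com/bradleyfalzon/ghinstallation"
	"github.com/google/go-github/v39/github"
	"github.com/gregjones/httpcache"
)

// newCachedAppClient builds a go-github client that authenticates as a GitHub App
// installation while answering conditional requests from an in-memory cache.
func newCachedAppClient(appID, installationID int64, privateKeyFile string) (*github.Client, error) {
	// The cache transport sits at the bottom of the stack.
	cache := httpcache.NewMemoryCacheTransport()

	// The installation transport wraps the cache and adds the Authorization and
	// extra Accept headers inside RoundTrip, which is exactly the case the
	// httpcache fork needs to handle correctly.
	itr, err := ghinstallation.NewKeyFromFile(cache, appID, installationID, privateKeyFile)
	if err != nil {
		return nil, fmt.Errorf("creating installation transport: %w", err)
	}

	return github.NewClient(&http.Client{Transport: itr}), nil
}
```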
2022-03-12 11:14:16 +00:00
Yusuke Kuoka
9628bb2937 Prevent RemoveRunner spam on busy ephemeral runner scale down (#1204)
Since #1127 and #1167, we had been retrying the `RemoveRunner` API call on each graceful runner stop attempt when the runner was still busy.
There was no reliable way to throttle the retry attempts. The combination of these resulted in ARC spamming RemoveRunner calls (one call per reconciliation loop, and the loop runs quite often due to how the controller works) whenever it failed because the runner was in the middle of running a workflow job.

This fixes that by adding a few short-circuit conditions that work for ephemeral runners. An ephemeral runner can unregister itself on completion, so in most cases ARC can just wait for the runner to stop if it's already running a job. As a RemoveRunner response with status 422 implies that the runner is running a job, we can use that as a trigger to start the runner stop waiter.

The end result is that 422 errors will be observed at most once per graceful termination of an ephemeral runner pod. RemoveRunner API calls are never retried for ephemeral runners. ARC consumes less GitHub API rate limit budget and the logs are much cleaner than before.

Ref https://github.com/actions-runner-controller/actions-runner-controller/pull/1167#issuecomment-1064213271
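
A hedged sketch of the 422 short-circuit, using go-github's repository-level RemoveRunner; ARC's actual controller code differs:

```
package example

import (
	"context"
	"errors"
	"net/http"

	"github.com/google/go-github/v39/github"
)

// tryRemoveRunner attempts a single RemoveRunner call and reports whether the
// runner was busy (HTTP 422), in which case the caller should stop retrying and
// simply wait for the ephemeral runner to finish its job and unregister itself.
func tryRemoveRunner(ctx context.Context, gh *github.Client, owner, repo string, runnerID int64) (bool, error) {
	_, err := gh.Actions.RemoveRunner(ctx, owner, repo, runnerID)
	if err == nil {
		return false, nil // unregistered; the pod can now be deleted safely
	}
	var ghErr *github.ErrorResponse
	if errors.As(err, &ghErr) && ghErr.Response.StatusCode == http.StatusUnprocessableEntity {
		return true, nil // 422: runner is busy running a job; do not retry RemoveRunner
	}
	return false, err
}
```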
2022-03-11 19:03:17 +09:00
Renovate Bot
736a53fed6 fix(deps): update golang.org/x/oauth2 commit hash to 6242fa9 2022-03-10 08:39:51 +09:00
yourmoonlight
132faa13a1 docs: fix the helm command for webhook installation (#1188)
* fix doc for install the webhook server

* modify cmd with single set && add double quote for zsh users
2022-03-08 17:59:01 +00:00
Callum Tait
66e070f798 docs: remove githubAPICacheDuration from docs (#1194) 2022-03-08 13:27:30 +00:00
Yusuke Kuoka
55ff4de79a Remove legacy GitHub API cache of HRA.Status.CachedEntries (#1192)
* Remove legacy GitHub API cache of HRA.Status.CachedEntries

We migrated to the transport-level cache introduced in #1127 so not only this is useless, it is harder to deduce which cache resulted in the desired replicas number calculated by HRA.
Just remove the legacy cache to keep it simple and easy to understand.

* Deprecate githubAPICacheDuration helm chart value and the --github-api-cache-duration as well

* Fix integration test
2022-03-08 19:05:43 +09:00
Yusuke Kuoka
301439b06a chore: Change log ts format to RFC3339 (#1191)
The TimeEncoder for zap seems to have been set to EpochTimeEncoder, which is the default and was not very readable. This changes it to TimeEncoderOfLayout(time.RFC3339) for readability.

Another benefit of doing this is that the ts format is now consistent with the various timestamps ARC puts into pod and other custom resource annotations.
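
A minimal sketch of the encoder change with plain zap (ARC configures its logger via controller-runtime's zap helpers, so this is illustrative only):

```
package main

import (
	"os"
	"time"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	encCfg := zap.NewProductionEncoderConfig()
	// The default is EpochTimeEncoder; RFC3339 is far easier to read and matches
	// the timestamps ARC writes into pod and custom resource annotations.
	encCfg.EncodeTime = zapcore.TimeEncoderOfLayout(time.RFC3339)

	logger := zap.New(zapcore.NewCore(
		zapcore.NewJSONEncoder(encCfg),
		zapcore.Lock(os.Stdout),
		zap.InfoLevel,
	))
	defer logger.Sync()

	logger.Info("timestamps now look like 2022-03-08T10:34:52+09:00")
}
```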
2022-03-08 10:34:52 +09:00
Yusuke Kuoka
15ee6d6360 chore: Reorganize "Calculated desired replicas" log fields (#1190)
So that `max` is emitted immediately after `min`, which is the counterpart of it.
2022-03-08 10:29:53 +09:00
Felipe Galindo Sanchez
5b899f578b fix(chart): allow to use basic auth when authSecret.create is false (#1149)
* fix(chart): allow to use basic auth when authSecret.create is false

When the secret is created outside of the ARC chart using authSecret.create=false and basicAuth,
the controller fails because we are not including the basic-auth password as an environment variable,
as the password value won't be inside the Helm values.

This PR includes both environment variables for consistency, regardless of whether they are set,
similar to the rest of the auth options (e.g. app_id, private key, etc.)

* chart: Add back the conditional block for .Values.authSecret.github_basicauth_username

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-03-07 10:07:24 +09:00
Yusuke Kuoka
d8c9eb7ba7 Fix arm64 image (#1185)
Fixes #1184
2022-03-07 10:00:20 +09:00
Yusuke Kuoka
cbbc383a80 Auto-correct replicas number on missing webhook_job completion event (#1180)
While testing #1179, I discovered that ARC sometimes stops resyncing a RunnerReplicaSet when the desired replica count is greater than the actual number of runner pods.
This seems to happen when ARC misses a workflow_job completion event, but it has no way to decide whether (1) something went wrong in ARC or (2) a load balancer in the middle, GitHub, or anything other than ARC went wrong. It needs a criterion to decide that, or if that's impossible, a way to deal with it.

In this change, I added a hard-coded 10-minute timeout (which can be made customizable later) to prevent runner pod recreation.
Now, a RunnerReplicaSet/RunnerSet restarts runner pod recreation 10 minutes after the last scale-up. If the workflow completion event arrives after the timeout, it decreases the desired replica count, which results in the removal of a runner pod. The removed runner pod might be deleted without ever being used, but I think that's better than leaving the desired and actual replica counts diverged forever.
2022-03-07 09:35:13 +09:00
seplak
b57e885a73 Fix service account typo in Helm README (#1183)
Just fixing a typo I discovered while reading through the README.
2022-03-07 08:39:01 +09:00
Yusuke Kuoka
bed927052d Merge pull request #1179 from actions-runner-controller/refactor-runner-and-runnerset
Refactor Runner and RunnerReplicaSet so that they use the same library code that powers RunnerSet.

RunnerSet is StatefulSet-based while RunnerReplicaSet/Runner is Pod-based, so it had been hard to unify the implementations although they look very similar in many aspects.

This change finally resolves that issue, by first introducing a library that implements the generic logic used to reconcile RunnerSet, and then adding an adapter that lets the generic logic manage runner pods via Runner instead of via StatefulSet.

Follow-up for #1127, #1167, and #1178
2022-03-06 15:56:51 +09:00
Yusuke Kuoka
14a878bfae refactor: Make RunnerReplicaSet and Runner backed by the same logic that backs RunnerSet 2022-03-06 05:53:26 +00:00
Yusuke Kuoka
c95e84a528 refactor: Extract runner pod owner management out of runnerset controller
so that it can potentially be reusable from runnerreplicaset controller
2022-03-05 12:18:02 +00:00
Yusuke Kuoka
95a5770d55 Fix regression that registration-timeout check was not working for runnerset (#1178)
Follow-up for #1167
2022-03-05 19:31:05 +09:00
Yusuke Kuoka
9cc9f8c182 chore: Add a few comments to runnerset and runnerpod controllers to help potential contributors 2022-03-05 05:41:56 +00:00
Yusuke Kuoka
b7c5611516 dockerfile: Fix unintended removal of CGO_ENABLED=0 2022-03-05 05:41:56 +00:00
Yusuke Kuoka
138e326705 chore: Add comment on lastSyncTime in runnerset controller 2022-03-05 05:41:56 +00:00
Renovate Bot
c21fa75afa fix(deps): update kubernetes packages to v0.23.4 2022-03-04 08:39:18 +09:00
Yusuke Kuoka
34483e268f ci: Enable actions/cache for Go modules 2022-03-03 18:47:54 +09:00
Yusuke Kuoka
5f2b5327f7 integration: Reduce error logs to ease debugging 2022-03-03 18:47:54 +09:00
renovate[bot]
a93b2fdad4 fix(deps): update golang.org/x/oauth2 commit hash to ee48083 (#1150)
fix(deps): update golang.org/x/oauth2 commit hash to ee48083

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-03-03 18:00:43 +09:00
Yusuke Kuoka
25570a0c6d Fix docker build 2022-03-03 02:05:38 +00:00
Felipe Galindo Sanchez
d20ad71071 Fix minor log in runner controller (#1175)
The log mentions a registration-only runner, but this is about the standard runner pod
2022-03-03 09:51:30 +09:00
Daniel
8a379ac94b Add custom volume mount documentation (#1045)
one example for in-memory
and one example for NVME backed storage, also pointing out all the
current flaws/risks for that configuration
2022-03-03 09:13:42 +09:00
Felipe Galindo Sanchez
27563c4378 Remove unused function (#1173) 2022-03-03 09:02:47 +09:00
Felipe Galindo Sanchez
4a0f68bfe3 Cleanup extra block in runner controller (#1174) 2022-03-03 09:01:34 +09:00
Yusuke Kuoka
1917cf90c4 chore: Tweak runner-id annotation name and the annotation prefix to be more consistent 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
0ba3cad6c2 fix: Prefix runner pod related annotation keys with actions/ to make them distinguishable from other annotations 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
7f0e65cb73 refactor: Extract definitions of various annotation keys and other defaults to their own source 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
12a04b7f38 Fix typo in comment 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
a3072c110d Prevent runnerset pod unregistration until it gets runner ID
This eliminates the race condition that resulted in the runner being terminated prematurely when RunnerSet triggered unregistration of a StatefulSet that had been added just a few seconds earlier.
2022-03-02 19:03:20 +09:00
Yusuke Kuoka
15b402bb32 Make RunnerSet much more reliable with or without webhook 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
11be6c1fb6 Prevent runner pod deletion delay when pod disappeared before unregistration 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
59c3288e87 acceptance,e2e: Automate restarts of ARC pods in case image tag is not changed 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
5030e075a9 dockerfile,e2e: Use buildx and cache mounts for faster rebuilds in E2E 2022-03-02 19:03:20 +09:00
Yusuke Kuoka
3115d71471 acceptance,e2e: Enhance deploy.sh to support more types of runnersets 2022-03-02 19:03:20 +09:00
Renovate Bot
c221b6e278 chore(deps): update actions/checkout action to v3 2022-03-02 11:05:16 +09:00
Renovate Bot
a8dbc8a501 fix(deps): update module github.com/prometheus/client_golang to v1.12.1 2022-03-02 10:56:53 +09:00
Renovate Bot
b1ac63683f fix(deps): update module go.uber.org/zap to v1.21.0 2022-03-02 10:54:35 +09:00
Renovate Bot
10bc28af75 fix(deps): update module sigs.k8s.io/controller-runtime to v0.11.1 2022-03-02 10:52:43 +09:00
Renovate Bot
e23692b3bc chore(deps): update actions/setup-python action to v3 2022-03-02 10:51:22 +09:00
renovate[bot]
e7f4a0e200 chore(deps): update actions/setup-go action to v3 (#1163)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-03-02 10:51:01 +09:00
Yusuke Kuoka
828ddcd44e Merge pull request #1151 from fgalind1/improve-logs
logging: improve logs for scaling
2022-03-02 10:46:53 +09:00
Yusuke Kuoka
fc821fd473 Merge pull request #1168 from actions-runner-controller/docs/better-runner-group-description
docs: better runner group description
2022-03-02 10:31:22 +09:00
Callum Tait
4b0aa92286 docs: better wording 2022-03-01 08:56:30 +00:00
Callum Tait
c69c8dd84d docs: better runner group description 2022-03-01 08:54:24 +00:00
Renovate Bot
e42db00006 chore(deps): update dependency actions/runner to v2.288.1 2022-02-28 22:30:10 +00:00
Felipe Galindo Sanchez
eff0c7364f Merge branch 'master' into improve-logs 2022-02-28 09:25:30 -08:00
Tingluo Huang
516695b275 Set UserAgent to 'actions-runner-controller' for all Http Client. (#1140)
I can't find any requests made by the user agent `actions-runner-controller` in GitHub.com's telemetry in the past 7 days.

It turns out we only set the user agent `actions-runner-controller` if we are configured to use BasicAuth, which I think is not the case for most customers.

I updated the code a little bit to make sure it always sets `actions-runner-controller` as the UserAgent for the GitHub HTTP client in ARC.

A further step would be somehow baking the ARC release version into the UserAgent as well.
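
For reference, a minimal sketch of setting the client-level user agent with go-github; this is illustrative, not ARC's exact code:

```
package example

import (
	"net/http"

	"github.com/google/go-github/v39/github"
)

// newGitHubClient sets UserAgent once on the client, so the header is sent on
// every API request regardless of which auth transport backs httpClient
// (PAT, basic auth, or a GitHub App installation token).
func newGitHubClient(httpClient *http.Client) *github.Client {
	gh := github.NewClient(httpClient)
	gh.UserAgent = "actions-runner-controller"
	return gh
}
```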
2022-02-28 09:17:58 +09:00
Yusuke Kuoka
686d40c20d Merge pull request #1127 from actions-runner-controller/github-api-cache
Enhances ARC (both the controller-manager and github-webhook-server) to cache any GitHub API response to an HTTP GET that carries an appropriate Cache-Control header.

Ref #920

## Cache Implementation

`gregjones/httpcache` has been chosen as the library to implement this feature, as it is recommended in `go-github`'s documentation:

https://github.com/google/go-github#conditional-requests

`gregjones/httpcache` supports a number of cache backends like `diskcache`, `s3cache`, and so on:

https://github.com/gregjones/httpcache#cache-backends

We stick to the built-in in-memory cache as a starter. This will probably never become an issue as long as the various HTTP responses for all the GitHub API calls that ARC makes (list-runners, list-workflow-jobs, list-runner-groups, etc.) don't overflow the in-memory cache.

`httpcache` has a known unfixed issue where it doesn't update the cache on chunked responses. But we assume that the APIs we call don't use chunked responses. See #1503 for more information on that.

## Ephemeral runner pods are no longer recreated

The addition of the cache layer resulted in a slow-down of the scale-down process and a trade-off between making the runner pod termination process fragile to various race conditions (a shorter grace period before runner deletion) and delaying runner pod deletion (a longer grace period). A grace period needs to be longer than 60s (the cache duration of the ListRunners API) to avoid prematurely deleting a runner pod that was just created.

But once I disabled automatic recreation of ephemeral runner pods, it turned out to no longer be an issue when scaling via the workflow_job webhook.

Ephemeral runner resources are still automatically added on demand by RunnerDeployment via RunnerReplicaSet (I've added `EffectiveTime` fields to our CRDs, but that's an implementation detail, so let's omit it). A good side effect of disabling ephemeral runner pod recreation is that ARC will no longer create redundant ephemeral runners when used with the webhook-based autoscaler.

Basically, autoscaling still works as everyone might expect. It's just better than before overall.
2022-02-28 08:37:26 +09:00
Renovate Bot
f0fa99fc53 chore(deps): update dependency actions/runner to v2.288.0 2022-02-26 01:34:49 +00:00
Javier Sotelo
6b12413fdd Add optional hostNetwork (#1035)
Co-authored-by: jsotelo <javier.sotelo@viasat.com>
2022-02-23 20:11:40 +00:00
Felipe Galindo Sanchez
3abecd0f19 logging: improve logs for scaling 2022-02-23 08:29:13 -08:00
Callum Tait
7156ce040e chore: bump chart (#1138) 2022-02-21 09:24:14 +00:00
Yusuke Kuoka
1463d4927f acceptance,e2e: Let capacity reservations expire later 2022-02-21 00:07:49 +00:00
Yusuke Kuoka
5bc16f2619 Enhance HRA capacity reservation update log 2022-02-21 00:06:26 +00:00
Yusuke Kuoka
b8e65aa857 Prevent unnecessary ephemeral runner recreations 2022-02-20 13:45:42 +00:00
Yusuke Kuoka
d4a9750e20 acceptance,e2e: Enhance E2E test and deploy.sh to support scaleDownDelaySeconds~ and minReplicas for HRA 2022-02-20 13:45:42 +00:00
Yusuke Kuoka
a6f0e0008f Make unregistration timeout and retry delay configurable in integration tests 2022-02-20 12:05:34 +00:00
Yusuke Kuoka
79a31328a5 Stop recreating ephemeral runner pod
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/911#issuecomment-1046161384
2022-02-20 04:42:19 +00:00
Yusuke Kuoka
4e6bfd8114 e2e: Add ability to toggle dockerdWithinRunnerContainer 2022-02-20 04:37:15 +00:00
Yusuke Kuoka
3c16188371 Introduce consistent timeouts for runner unregistration and runner pod deletion
Enhances the runner controller and runner pod controller to have consistent timeouts for runner unregistration and runner pod deletion,
so that we are very unlikely to terminate pods that are running any jobs.
2022-02-20 04:36:35 +00:00
Yusuke Kuoka
9e356b419e chart: Add default-logs-container annotation to controller pods
so that you can run `kubectl logs` on controller pods without specifying the container name.

It is especially useful when you want to run kubectl logs on all ARC pods across controller-manager and github-webhook-server like:

```
kubectl -n actions-runner-system logs -l app.kubernetes.io/name=actions-runner-controller
```

That was previously impossible because the selector matches pods from both controller-manager and github-webhook-server, and kubectl does not provide a way to specify container names for the respective pods.
2022-02-19 12:22:53 +00:00
Yusuke Kuoka
f3ceccd904 acceptance: Improve deploy.sh to recreate ARC (not runner) pods on new test id
So that one does not need to manually recreate ARC pods frequently.
2022-02-19 12:22:53 +00:00
Yusuke Kuoka
4b557dc54c Add logging transport to log HTTP requests in log level -3
The log level -3 is the minimum log level that is supported today, smaller than debug (-1) and -2 (used to log some HRA-related logs).

This commit adds a logging HTTP transport to log HTTP requests and responses at that log level.

It implements http.RoundTripper so that it can log each HTTP request with useful metadata like `from_cache` and `ratelimit_remaining`.
The former is set to `true` only when the logged request's response was served from ARC's in-memory cache.
The latter is set to the X-RateLimit-Remaining response header value if and only if the response was served by GitHub, not by ARC's cache.
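
A hedged sketch of such a transport; field names and logger wiring are illustrative (logr's V(3) corresponds to zap level -3), not ARC's actual code:

```
package example

import (
	"net/http"

	"github.com/go-logr/logr"
	"github.com/gregjones/httpcache"
)

// loggingTransport wraps another RoundTripper and logs each request at V(3).
type loggingTransport struct {
	base http.RoundTripper
	log  logr.Logger
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	resp, err := t.base.RoundTrip(req)
	if err != nil {
		return nil, err
	}

	// httpcache marks responses served from its cache with the X-From-Cache header.
	fromCache := resp.Header.Get(httpcache.XFromCache) != ""
	kvs := []interface{}{
		"method", req.Method,
		"url", req.URL.String(),
		"status", resp.StatusCode,
		"from_cache", fromCache,
	}
	if !fromCache {
		// The rate-limit budget is only meaningful when GitHub itself answered.
		kvs = append(kvs, "ratelimit_remaining", resp.Header.Get("X-RateLimit-Remaining"))
	}
	t.log.V(3).Info("github api call", kvs...)

	return resp, nil
}
```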
2022-02-19 12:22:53 +00:00
Yusuke Kuoka
4c53e3aa75 Add GitHub API cache to avoid rate limit
This will cache any GitHub API response that carries a correct Cache-Control header.

`gregjones/httpcache` has been chosen as the library to implement this feature, as it is recommended in `go-github`'s documentation:

https://github.com/google/go-github#conditional-requests

`gregjones/httpcache` supports a number of cache backends like `diskcache`, `s3cache`, and so on:

https://github.com/gregjones/httpcache#cache-backends

We stick to the built-in in-memory cache as a starter. This will probably never become an issue as long as the various HTTP responses for all the GitHub API calls that ARC makes (list-runners, list-workflow-jobs, list-runner-groups, etc.) don't overflow the in-memory cache.

`httpcache` has a known unfixed issue where it doesn't update the cache on chunked responses. But we assume that the APIs we call don't use chunked responses. See #1503 for more information on that.

Ref #920
2022-02-19 12:22:53 +00:00
Tingluo Huang
0b9bef2c08 Try to unconfig runner before deleting the pod to recreate (#1125)
There is a race condition between ARC and the GitHub service around deleting runner pods.

- ARC uses the REST API to find a particular runner in a pod that is not running any jobs, so it decides to delete the pod.
- A job is queued on the GitHub service side, and it sends the job to this idle runner right before ARC deletes the pod.
- ARC deletes the runner pod, which causes the in-progress job to end up canceled.

To avoid this race condition, I am calling `r.unregisterRunner()` before deleting the pod.
- `r.unregisterRunner()` returns 204 to indicate the runner is deleted from the GitHub service, so we are safe to delete the pod.
- `r.unregisterRunner()` returns 400 to indicate the runner is still running a job, so we leave this runner pod as it is.

TODO: I need to do some E2E tests to force the race condition to happen.

Ref #911
2022-02-19 21:22:31 +09:00
Yusuke Kuoka
a5ed6bd263 Fix RunnerSet-managed runner pods to terminate more gracefully (#1126)
Make RunnerSet-managed runners as reliable as RunnerDeployment-managed runners.

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/911#issuecomment-1042404460
2022-02-19 21:19:37 +09:00
Yusuke Kuoka
921f547200 fix: Do recreate runner pod on registration token update (#1087)
Apparently, we've missed taking an updated registration token into account when generating the pod template hash, which is used to detect whether the runner pod needs to be recreated.

This shouldn't have been the end of the world, since the runner pod is recreated on the next reconciliation loop anyway, but this change makes the pod recreation happen one reconciliation loop earlier, so that you're less likely to get runner pods with outdated registration tokens.

Ref https://github.com/actions-runner-controller/actions-runner-controller/pull/1085#issuecomment-1027433365
2022-02-19 21:18:00 +09:00
Felipe Galindo Sanchez
9079c5d85f fix: configure logger before trying to log (#1128)
The log about the GitHub client not being initialized was never seen, because the logger was configured after the log call was added.
2022-02-19 20:56:58 +09:00
Yusuke Kuoka
a9aea0bd9c Fix issue that visible runner groups are printed as if empty in log 2022-02-19 14:43:41 +09:00
Yusuke Kuoka
fcf4778bac Fix regression that prevented default organizational runner group from being scale target
Fixes #1131
2022-02-19 14:43:41 +09:00
Yusuke Kuoka
eb0a4a9603 chart: Bump to 0.16.0 (with appVersion 0.21.0) 2022-02-18 01:57:37 +00:00
Yusuke Kuoka
b6151ebb8d Fix release.yml upload artifacts to not fail due to outdated go (1.15) 2022-02-18 10:27:39 +09:00
Yusuke Kuoka
ba4bd7c0db e2e,acceptance: Cover enterprise runners (#1124)
Adds various code and changes I have used while testing #1062
2022-02-17 09:16:28 +09:00
Yusuke Kuoka
5b92c412a4 chart: Allow using different secrets for controller-manager and gh-webhook-server (#1122)
* chart: Allow using different secrets for controller-manager and gh-webhook-server

It is entirely possible to do so because they are two different K8s deployments. It may also provide better scalability, because each component then gets its own GitHub API quota.
2022-02-17 09:16:16 +09:00
Yusuke Kuoka
e22d981d58 githubwebhookserver: Tweak log levels of various messages (#1123)
Some of the logs, like `HRA keys indexed for HRA`, were so excessive that they made testing and debugging the githubwebhookserver harder. This tries to fix that.
2022-02-17 09:15:26 +09:00
Yusuke Kuoka
a7b39cc247 acceptance: Avoid "metadata.annotations too long" errors on applying CRDs 2022-02-17 09:01:44 +09:00
Yusuke Kuoka
1e452358b4 acceptance: Do recreate the controller-manager secret on every deployment
We had to manually remove the secret first to update the GitHub credentials used by the controller, which was cumbersome.
Note that you still need to recreate the controller pods and the gh webhook server pods to let them remount the recreated secret.
2022-02-17 09:01:44 +09:00
Carlos Tadeu Panato Junior
92e133e007 ci: update helm to 3.8.0 and go to 1.17.7 (#1119)
Signed-off-by: Carlos Panato <ctadeu@gmail.com>
2022-02-16 20:40:27 +09:00
Felipe Galindo Sanchez
d0d316252e Option to consider runner group visibility on scale based on webhook (#1062)
This will work on GHES, but not on GitHub Enterprise Cloud, due to the excessive GitHub API calls required.
More work is needed, like adding a cache layer to the GitHub client, to make it usable on GitHub Enterprise Cloud.

Fixes additional cases from https://github.com/actions-runner-controller/actions-runner-controller/pull/1012

If GitHub auth is provided to the webhook controller, then runner groups with custom visibility are supported. Otherwise, all runner groups will be assumed to be visible to all repositories.

`getScaleUpTargetWithFunction()` will check if there is an HRA available with the following flow:

1. Search for **repository** HRAs - if so it ends here
2. Get available HRAs in k8s
3. Compute visible runner groups
  a. If GitHub auth is provided - get all the runner groups that are visible to the repository of the incoming webhook using GitHub API calls.  
  b. If GitHub auth is not provided - assume all runner groups are visible to all repositories
4. Search for **default organization** runners (a.k.a runners from organization's visible default runner group) with matching labels
5. Search for **default enterprise** runners (a.k.a runners from enterprise's visible default runner group) with matching labels
6. Search for **custom organization runner groups** with matching labels
7. Search for **custom enterprise runner groups** with matching labels

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-02-16 19:08:56 +09:00
Shu Ambat
b509eb4388 Update the helm chart app version (#1099) 2022-02-09 09:29:49 +09:00
Yusuke Kuoka
59437ef79f Update README.md
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1100#issuecomment-1032775144
2022-02-09 09:16:46 +09:00
Ryo Sakamoto
a51fb90cd2 modify chart ingress (#1098)
Signed-off-by: cw-sakamoto <sakamoto@chatwork.com>
2022-02-08 12:56:30 +09:00
Callum Tait
eb53d238d1 docs: move istio to troubleshooting (#1097)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-02-07 20:49:26 +00:00
renovate[bot]
7fdf9a6c67 chore(deps): update helm/chart-releaser-action action to v1.3.0 (#1091)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-02-07 20:40:12 +00:00
Callum Tait
6f591ee774 chore: bump docker version (#1094)
* chore: bump docker version

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-02-07 20:10:02 +00:00
Callum Tait
cc25dd7926 chore: change to trigger build (#1093)
* chore: change to trigger build

* ci: actually use variable

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-02-03 21:23:42 +00:00
Chris Bui
1b911749a6 feat: disable automatic runner updates (#1088)
* Add env variable to configure the `disableupdate` flag

* Write test for entrypoint disable update

* Rename flag, update docs for DISABLE_RUNNER_UPDATE

* chore: bump runner version in makefile

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-02-03 21:03:38 +00:00
maruware
b652a8f9ae Update chart README (#1083)
Fix the `services.port` and `services.type` descriptions, which were reversed.
2022-01-31 20:28:19 +00:00
sdubey-optum
069bf6a042 docs: fixing helm readme typo (#1064) 2022-01-28 22:26:17 +00:00
Callum Tait
f09a974ac2 chore: change to trigger build (#1079)
* chore: change to trigger build

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-01-28 21:57:53 +00:00
Callum Tait
1f7e440030 ci: add required token permissions 2022-01-28 21:37:16 +00:00
cspargo
9d5a562407 fix: use copy instead of move (#1066)
* fix: use copy instead of move

Co-authored-by: Colin Spargo <cspargo@users.noreply.github.com>
2022-01-28 21:24:52 +00:00
Renovate Bot
715e6a40f1 chore(deps): update dependency actions/runner to v2.287.1 2022-01-27 23:37:58 +00:00
Renovate Bot
81b2c5ada9 chore(deps): update dependency actions/runner to v2.287.0 2022-01-27 19:50:48 +00:00
renovate[bot]
9ae83dfff5 fix(deps): update module github.com/google/go-cmp to v0.5.7 (#1060)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-01-20 08:38:07 +09:00
Renovate Bot
5e86881c30 chore(deps): update dependency actions/runner to v2.286.1 2022-01-14 20:23:54 +00:00
renovate[bot]
1c75b20767 chore(deps): update helm/chart-testing-action action to v2.2.0 (#1038)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-01-07 13:56:48 +00:00
Daniel
8a73560dbc if a Volume is defined by the operator don't add another "work" volume. (#1015)
This allows providing a different `work` Volume.

This should be a cloud-agnostic way of allowing the operator to use (for example) NVMe-backed storage.

This is a working example where the workDir uses the provided volume; additionally, Docker is placed on the same NVMe here.
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: runner-2
spec:
  template:
    spec:
      dockerdContainerResources: {}
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      # this is to mount the docker in docker onto NVME disk
      dockerVolumeMounts:
      - mountPath: /var/lib/docker
        name: scratch
        subPathExpr: $(POD_NAME)-docker
      - mountPath: /runner/_work
        name: work
        subPathExpr: $(POD_NAME)-work
      volumeMounts:
      - mountPath: /runner/_work
        name: work
        subPathExpr: $(POD_NAME)-work
      dockerEnv:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      volumes:
      - hostPath:
          path: /mnt/disks/ssd0
        name: scratch
      - hostPath:
          path: /mnt/disks/ssd0
        name: work
      nodeSelector:
       cloud.google.com/gke-nodepool: runner-16-with-nvme
      ephemeral: false
      image: ""
      imagePullPolicy: Always
      labels:
        - runner-2
        - self-hosted
      organization: yourorganization

```
2022-01-07 10:01:40 +09:00
Yusuke Kuoka
01301d3ce8 Stop creating registration-only runners on scale-to-zero (#1028)
Resolves #859
2022-01-07 09:56:21 +09:00
renovate[bot]
02679ac1d8 fix(deps): update module go.uber.org/zap to v1.20.0 (#1027)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-01-07 08:51:57 +09:00
Hyeonmin Park
1a6e5719c3 test: Add tests with self-hosted label for #953 (#1030) 2022-01-07 08:50:26 +09:00
Callum Tait
f72d871c5b docs: move troubleshooting out of main readme (#1023)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2021-12-29 10:24:09 +09:00
Callum Tait
ad48851dc9 feat: expose if docker is enabled and wait for docker to be ready (#962)
Resolves #897
Resolves #915
2021-12-29 10:23:35 +09:00
Lars Haugan
c5950d75fa fix: pagination for ListWorkflowJobs in autoscaler (#990) (#992)
Adding handling of paginated results when calling `ListWorkflowJobs`. By default, `per_page` is 30, which would potentially return only 30 queued and 30 in_progress jobs.

This change should enable the autoscaler to scale workflows with more than 60 jobs to the exact number of runners needed.

Problem: I did not find any support for pagination in the GitHub fake client, and I have not been able to test this (as I have not been able to push an image to an environment where I can verify it).
If anyone is able to help verify this PR, I would really appreciate it.

Resolves #990
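
A minimal sketch of the pagination loop with go-github (not the autoscaler's exact code):

```
package example

import (
	"context"

	"github.com/google/go-github/v39/github"
)

// listAllWorkflowJobs follows Link-header pagination until every page of jobs
// for the given workflow run has been collected.
func listAllWorkflowJobs(ctx context.Context, gh *github.Client, owner, repo string, runID int64) ([]*github.WorkflowJob, error) {
	var all []*github.WorkflowJob
	opts := &github.ListWorkflowJobsOptions{
		ListOptions: github.ListOptions{PerPage: 100}, // the default per_page is 30
	}
	for {
		jobs, resp, err := gh.Actions.ListWorkflowJobs(ctx, owner, repo, runID, opts)
		if err != nil {
			return nil, err
		}
		all = append(all, jobs.Jobs...)
		if resp.NextPage == 0 {
			break // no more pages
		}
		opts.Page = resp.NextPage
	}
	return all, nil
}
```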
2021-12-24 09:12:36 +09:00
Felipe Galindo Sanchez
de1f48111a feat: support routing GitHub API calls to custom proxy API (#1017)
GitHub currently has some limitations w.r.t. permissions management on
runner groups, as they all require org admin. However, at our company
we're using runner groups to serve different internal teams (with
different permissions), so we needed to deploy a custom proxy API with
our internal authentication that controls who has access to certain APIs
depending on the repository/runner group of a given org/enterprise.

This change just allows optionally sending the GitHub API calls to an alternate custom
proxy URL, with basic authentication, instead of cloud GitHub (github.com) or an enterprise URL.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-23 09:24:10 +09:00
Renovate Bot
8a7720da77 chore(deps): update dependency actions/runner to v2.286.0 2021-12-21 19:47:32 +00:00
Felipe Galindo Sanchez
608c56936e Remove duplicate self-hosted condition (#1016)
Duplicate condition caused after merge of #953 and #1012
2021-12-21 09:08:21 +09:00
Felipe Galindo Sanchez
4ebec38208 Support runner groups with selected visibility in webhooks autoscaler (#1012)
The current implementation doesn't yet support runner groups with custom visibility (e.g. selected repositories only). If there are multiple runner groups with selected visibility, not all runner groups may be a potential target to be scaled up. Thus this PR introduces support for runner groups with selected visibility. This requires querying the GitHub API to find the potential runner groups that are linked to a specific repository (whether their visibility is all or selected).

This also improves resolving the `scaleTargetKey` that is used to match an HRA based on the inputs of the `RunnerSet`/`RunnerDeployment` spec, to better support runner groups.

This requires configuring GitHub auth in the webhook server. To keep backwards compatibility, if GitHub auth is not provided to the webhook server, it will assume all runner groups have no selected visibility and will target any available runner group as before.
2021-12-19 18:29:44 +09:00
clement-loiselet-talend
0c34196d87 fix(#951): add exception for self-hosted label in webhook search (#953)
The webhook "workflowJob" pass the labels the job needs to the controller, who in turns search for them in its RunnerDeployment / RunnerSet. The current implementation ignore the search for `self-hosted` if this is the only label, however if multiple labels are found the `self-hosted` label must be declared explicitely or the RD / RS will not be selected for the autoscaling.

This PR fixes the behavior by ignoring this label, and add documentation on this webhook for the other labels that will still require an explicit declaration (OS and architecture). 

The exception should be temporary, ideally the labels implicitely created (self-hosted, OS, architecture) should be searchable alongside the explicitly declared labels.

code tested, work with `["self-hosted"]` and `["self-hosted","anotherLabel"]`

Fixes #951
2021-12-19 10:55:23 +09:00
Jacob Lauritzen
83c8a9809e Add GKE firewall issues to common errors (#1010)
A lot of people have issues with private GKE clusters, and it seems they are all solved by setting up a firewall policy. I think it would be relevant to include this in a troubleshooting section, since so many people are searching through issues for it. I myself just spent most of my day trying to figure it out.

Issues where this is the solution:
* #293
* #335
* #68
2021-12-17 09:10:15 +09:00
renovate[bot]
c64000e11c fix(deps): update module sigs.k8s.io/controller-runtime to v0.11.0 (#740)
* fix(deps): update module sigs.k8s.io/controller-runtime to v0.11.0

* Fix dependencies and bump Go to 1.17 so that it builds after controller-runtime 0.11.0 upgrade

* Regenerate manifests with the latest K8s dependencies

Co-authored-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-17 09:06:55 +09:00
Felipe Galindo Sanchez
9bb21aef1f Add support for default image pull secret name (#921)
Resolves #896

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-15 09:29:31 +09:00
Gabriele Mambrini
7261d927fb fix: report busy status for offline workers (#1009)
ref #911 

The fix in #993 cannot work because the runner's busy status is not reported when it is offline
2021-12-15 08:57:13 +09:00
Pavel Smalenski
91102c8088 Add dockerEnv variable for RunnerDeployment (#912)
Resolves #878

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-14 17:13:24 +09:00
apr-1985
6f51f560ba fix: allow GH priv key from env in helm chart (#884)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-14 13:15:12 +09:00
Bryan Peterson
961f01baed allow providing webhook secret token via flag instead of environment variable (#876)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-12 17:00:32 +09:00
Skyler Mäntysaari
d0642eeff1 chart: ingress for k8s v1.22.x support (#988)
Also dropped the deprecated .Capabilities.KubeVersion.GitVersion usage in the ingress template.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-12 16:43:32 +09:00
kannappan senthilnathan
473fe7f736 update readme instruction for webhook scaling (#987)
added note about mandatory parameters for webhook driven scaling
2021-12-12 16:16:54 +09:00
Piaras Hoban
84b0c64d29 feat: add authSecret.enabled to Helm chart (#937)
When false the chart deployment template will not add GITHUB_*
environment variables to the manager container. In addition, the `volume`
and `volumeMount` for the secret will also be omitted from the
deployment manifest.

Signed-off-by: Piaras Hoban <phoban01@gmail.com>
2021-12-12 16:13:14 +09:00
Felipe Galindo Sanchez
f0fccc020b refactor: split Reconciler from Reconcile in a few methods (#926)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-12-12 14:22:55 +09:00
Yusuke Kuoka
2bd6d6342e Push packages to GHCR (#1004)
Ref #849
Ref #566

Co-authored-by: Pål Sollie <sollie@sparkz.no>
2021-12-12 11:41:33 +09:00
Patrick Ellis
ea2dbc2807 Update go-github from v37 -> v39 (#925) 2021-12-11 21:43:40 +09:00
Yusuke Kuoka
c718eaae4f Bump ginkgo and gomega (#1003)
Supersedes #880 and #746
2021-12-11 21:10:09 +09:00
renovate[bot]
67e39d719e fix(deps): update golang.org/x/oauth2 commit hash to d3ed0bb (#947)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-12-11 20:51:46 +09:00
Yusuke Kuoka
bbd328a7cc Bump controller-runtime to v0.10.3 (#1002)
Enhanced version of https://github.com/actions-runner-controller/actions-runner-controller/pull/740
2021-12-11 20:49:47 +09:00
renovate[bot]
8eb6c0f3f0 fix(deps): update module github.com/teambition/rrule-go to v1.7.2 (#747)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-12-11 18:58:09 +09:00
renovate[bot]
231c1f80e7 fix(deps): update module sigs.k8s.io/yaml to v1.3.0 (#841)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-12-11 18:41:52 +09:00
renovate[bot]
3c073c5e17 fix(deps): update module go.uber.org/zap to v1.19.1 (#799)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-12-11 18:41:22 +09:00
Callum Tait
a1cfe3be36 docs: re-order helm param order (#996)
* docs: re-order helm param order

* docs: re-order params in values
2021-12-09 10:20:51 +00:00
Yusuke Kuoka
898ad3c355 Work-around for offline+busy runners (#993)
Ref #911
2021-12-09 09:31:06 +09:00
renovate[bot]
164a91b18f chore(deps): update quay.io/brancz/kube-rbac-proxy docker tag to v0.11.0 (#745)
* chore(deps): update quay.io/brancz/kube-rbac-proxy docker tag to v0.11.0

* chore(deps): update quay.io/brancz/kube-rbac-proxy make tag to v0.11.0

Co-authored-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2021-12-08 22:53:50 +00:00
Callum Tait
acb004f291 docs: remove RunnerSet limitation (#991) 2021-12-08 22:03:42 +00:00
Jonathan Sokolowski
3de4e7e9c6 Support installing without cert-manager (#834)
* Support installing without cert-manager
2021-12-08 21:58:46 +00:00
Renovate Bot
4a55fe563c chore(deps): update dependency actions/runner to v2.285.1 2021-12-06 21:22:38 +00:00
Callum Tait
23841642df docs: bring ephemeral runners to current (#981)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2021-12-04 11:12:24 +00:00
toast-gear
7c4ac2ef44 ci: update workflow triggers 2021-12-04 10:52:38 +00:00
Callum Tait
550717020d ci: remove ubuntu 18.04 image (#980)
* ci: remove Ubuntu 18.04 runner

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2021-12-04 10:46:08 +00:00
Callum Tait
9a5ae93cb7 docs: various clarifications (#976)
* docs: various clarifications
2021-11-30 19:22:49 +00:00
Callum Tait
f5175256c6 chore(deps): bump 20.04 runner version (#975) 2021-11-30 14:34:04 +00:00
Callum Tait
031b1848e0 ci: separate ubuntu versions out in ci (#969)
* ci: separate ubuntu versions out in ci
2021-11-30 14:09:33 +00:00
Yusuke Kuoka
47a17754fd chore: bumping chart for new release 2021-11-27 10:54:54 +09:00
Greg Schofield
85ddd0d137 Update volumes and volumes mounts indent (#966)
Follow-up for #952

Signed-off-by: Gregory Schofield <gscho@github.com>
2021-11-27 10:54:01 +09:00
brunocous
eefb48ba3f add additionalVolumes and additionalVolumeMounts to helm chart (#952)
* added additional volumes and volumeMounts
2021-11-22 19:03:09 +00:00
Callum Tait
62995fec5b chore: bumping chart for new release (#948) 2021-11-15 19:58:49 +00:00
Roee Landesman
7ee1d6bcdb Add podDisruptionBudget resource for controller pods (#805)
* Add podDisruptionBudget resource for controller pods

* Add PDB to GithubWebhookServer

* Fix truncation on pdb naming

Co-authored-by: Roee Landesman <roee.landesman@gmail.com>
2021-11-15 19:07:23 +00:00
Yusuke Kuoka
b87e6e3966 doc: Add GitHub App Permission for Workflow Job Webhook (#932)
* doc: Add GitHub App Permission for Workflow Job Webhook

I think we've missed documenting the app permission needed for receiving workflow_job events via the app webhook

* docs: clarification on webhook events

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2021-11-09 09:12:47 +09:00
Max N. Boyarov
88b8871830 Reduce number of http superfluous messages (#894)
Writing to http.ResponseWriter creates an HTTP OK response, so set *ok* to disable the error code in the deferred function.
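
A minimal sketch of the pattern (not the actual webhook handler):

```
package example

import "net/http"

// handleWebhook shows why the ok flag matters: once the handler has written a
// response body (which implies a 200), the deferred error-status write must be
// skipped, otherwise net/http logs "superfluous response.WriteHeader call".
func handleWebhook(w http.ResponseWriter, r *http.Request) {
	ok := false
	defer func() {
		if !ok {
			// Only write an error status if we never produced a success response.
			w.WriteHeader(http.StatusInternalServerError)
		}
	}()

	// ... parse and process the webhook event here ...

	w.Write([]byte("accepted")) // implicitly sends HTTP 200 OK
	ok = true
}
```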
2021-11-09 09:07:07 +09:00
Yusuke Kuoka
2191617eb5 Remove unnecessary scale-target-not-found error on in_progress workflow_job event (#927)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/877#issuecomment-955614456
2021-11-09 09:05:50 +09:00
Yusuke Kuoka
b305e38b17 Add webhook-based autoscale for Enterprise runners (#906)
Fixes #892
2021-11-09 09:04:19 +09:00
Yusuke Kuoka
b6c33cee32 Fail with detailed message on envvar parse error (#907)
Ref #829
2021-11-03 10:24:24 +09:00
Renovate Bot
46da4a6b6e chore(deps): update dependency actions/runner to v2.284.0 2021-11-01 18:16:10 +00:00
Callum Tait
f7e14e06e8 docs: clarify runnerset limitation (#919)
* docs: clarify runnerset limitation

* docs: correct terminology

* docs: missing word
2021-10-28 11:19:56 +01:00
Callum Tait
09e6b1839b docs: correct anti-flapping example (#918)
* docs: correct anti-flapping example

* docs: provide complete example

* docs: use example/myrepo everywhere
2021-10-28 10:52:54 +01:00
Callum Tait
0416a9272f docs: minor correction to autoscaling description (#917)
* docs: minor correction to autoscaling description

* docs: make pull metrics plural

* docs: remove redundant words

* docs: make plural and expand anti-flapping
2021-10-28 10:33:20 +01:00
Huu Khiem (Mark)
c33578a041 Update examples of RunnerDeployment in README.md (#913)
The README had some examples of RunnerDeployment with wrong/outdated
spec.

This caused some pain when trying to create them based on the examples.

=> Fixed using the spec declared in: api/v1alpha1/runner_types.go
2021-10-28 10:12:48 +01:00
Callum Tait
49566aaebd docs: fix broken link (#916) 2021-10-28 09:57:46 +01:00
Callum Tait
f66e6a00fa docs: clean up auto scaling documentation (#909)
* docs: clean up of autoscaling section

* docs: clarifying anti-flapping

* docs: more improvements

* docs: more improvements

* docs: adding duration details and cavaets

* docs: smaller english and better structure

* docs: use consistent wording

* docs: adding limitation cavaet for RunnerSets

* docs: correct helm uprgade order

* docs: lines helm upgrade command with help switch

* docs: use existing limitations section

* docs: fix table of headers and contents

* docs: add link to runnersets on first mention

* docs: adding runnerset limitation

* chore: use new enterprise permission for PAT

* docs: bump example deploy to latest version

* docs: adding oauth apps link

* docs: adding cavaet to the oauth apps doc
2021-10-28 09:55:04 +01:00
Yusuke Kuoka
431c1ed04a Add how-to for testing controller built from PR (#908)
We occasionally ask you to help test actions-runner-controller built from a pull request branch, like in https://github.com/actions-runner-controller/actions-runner-controller/issues/829#issuecomment-950252645 and https://github.com/actions-runner-controller/actions-runner-controller/issues/892#issuecomment-950113578.

To make the process more streamlined, I'd like to add a guide for that, so that we can just point to the guide when asking for help and can improve the guide in a more sustainable manner.

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2021-10-26 12:53:19 +09:00
apr-1985
0d3de9ee2a chore: correct logging typo (#904) 2021-10-22 09:03:23 +09:00
Callum Tait
79d63acded chore: bumping chart for new release (#903) 2021-10-18 22:05:41 +01:00
apr-1985
271a4dcd9d Revert "chore: support app ids as int or strings (#869)" (#883)
* Revert "chore: support app ids as int or strings (#869)"

This reverts commit 0a3d2b686e.

* docs: adding some comments to the code

* docs: adding comment to the chart values
2021-10-17 23:23:31 +01:00
Arun Anandhan
0401b2d786 Create optional serviceAnnotations value in helm chart (#867)
* Create optional serviceAnnotations value in helm chart

* update annotation key

* update annotation key - webhook service

* fix README.md

* docs: using consistent tense

* docs: making the code comments more generic
2021-10-17 22:37:43 +01:00
Maxim Tacu
43141cb751 feat: Added option for secret annotation (#824)
* feat: Added option for secret annotation

* bump the chart version

* chore: aligning values attributes with standard

* fixed template for manager_secrets

* docs: update annotations and fix layout

Co-authored-by: Maxim Tacu <maxim.tacu@mercedes-benz.io>
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2021-10-17 22:18:35 +01:00
KeisukeYamashita
b805cfada7 Fix maxReplicas typo in HorizontalRunnerAutoscaler spec comment (#895)
* Fix maxreplicas in spec comment

Signed-off-by: KeisukeYamashita <19yamashita15@gmail.com>

* Generate manifests

Signed-off-by: KeisukeYamashita <19yamashita15@gmail.com>
2021-10-17 22:01:08 +01:00
Sebastien Le Digabel
c4e97d600d feat: Have githubwebhook monitor a single namespace (#828)
* feat: Have githubwebhook monitor a single namespace

When using `scope.singleNamespace: true` in Helm, expected behaviour is
that the github webhook server behaves the same way as the controller.

The current behaviour is that the webhook server monitors all the
namespaces.

* Changing the chart README.md to reflect the scope

The documentation now mentions that both the controller and the github
webhook server will make use of the `scope.watchNamespace` field if
`scope.singleNamespace` is set to `true`.

Co-authored-by: Sebastien Le Digabel <sebastien.ledigabel@skyscanner.net>
2021-10-17 21:54:32 +01:00
Maxim Pogozhiy
fce7d6d2a7 Add topologySpreadConstraints (#814) 2021-10-17 21:49:44 +01:00
Callum Tait
5805e39e1f Revert "feat: adding workflow_dispatch webhook event" (#879)
This reverts commit d36d47fe66.
2021-10-09 18:36:02 +01:00
Callum
d36d47fe66 feat: adding workflow_dispatch webhook event 2021-10-09 10:07:07 +01:00
Callum Tait
8657a34f32 ci: fix chart publish workflow (#874)
* ci: correcting cross job output syntax

* ci: adding job names
2021-10-05 22:35:32 +01:00
Callum Tait
b01e193aab ci: publish chart on chart.yaml changes only (#873)
* ci: clean up of workflows in general
2021-10-05 22:28:05 +01:00
Callum Tait
0a3d2b686e chore: support app ids as int or strings (#869)
Co-authored-by: Callum <callum@domain.com>
2021-10-05 09:01:27 +09:00
Renovate Bot
2bc050a62d chore(deps): update dependency actions/runner to v2.283.3 2021-10-04 21:17:21 +00:00
Callum Tait
0725e72ae0 ci: disable chart version bump check (#870) 2021-10-04 20:39:38 +01:00
Callum Tait
5f9fcaf016 ci: add label to renovate pull requests (#863)
* ci: add label to renovate pull requests

* ci: updating stale config to exempt new label
2021-10-02 22:48:28 +01:00
Callum Tait
e4e0b45933 chore: bump helm chart to latest app version (#862) 2021-10-02 10:05:06 +01:00
Renovate Bot
2937173101 chore(deps): update dependency actions/runner to v2.283.2 2021-09-30 15:01:11 +00:00
Aidan
fccf29970b Fix bug related to label matching. (#852)
* Fix bug related to label matching.

Add start of test framework for Workflow Job Events

Signed-off-by: Aidan Jensen <aidan@artificial.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-09-30 11:02:59 +09:00
Alex Kulikovskikh
ea06001819 fix: scaling issue based on workflow_job event (#850)
This PR fixes the scaling issue based on the `workflow_job` event discussed in #819
2021-09-30 10:36:59 +09:00
Yusuke Kuoka
1bc1712519 Do bump go-github in codebase to fix build error in CI builds (#853) 2021-09-30 10:35:24 +09:00
Callum Tait
24224613f3 docs: updating to reflect the latest release (#844)
* docs: updating to reflect the latest release

* docs: further updates

* docs: format updates

* docs: correcting title sizes

* docs: correcting links

* docs: correcting the grammar of multi controllers

* docs: slightly less awkward English
2021-09-28 09:44:15 +01:00
Rob Bos
3f331e9a39 Fixing capitalization and a typo (#838)
* Fixing capitalization and a typo

* typo

* Typo

* Update controllers/autoscaling.go

* Update controllers/autoscaling.go

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-09-26 14:34:55 +09:00
Yusuke Kuoka
67c7b7a228 Bump chart version to 0.13.1 with controller 0.20.1 2021-09-24 00:40:08 +00:00
Yusuke Kuoka
2e325fa176 Merge pull request #843 from tyrken/add-preserve-unknown-false-crds
Add preserveUnknown=false to crds
2021-09-24 09:25:02 +09:00
Tristan Keen
5e3f89bdc5 Correct test to append docker container (#837)
Fixes #835
2021-09-24 09:18:20 +09:00
Tristan Keen
9f4f5ec951 Added preserveUnknownFields:false to CRDs 2021-09-23 22:00:18 +01:00
Tristan Keen
1fafd0d139 Force CRDs to have preserveUnknownFields: false 2021-09-23 22:00:18 +01:00
Renovate Bot
24602ff3ee chore(deps): update dependency actions/runner to v2.283.1 2021-09-20 17:41:07 +00:00
Callum Tait
cf75d24def ci: updating triggers (#827)
* ci: updating triggers
2021-09-17 09:21:00 +09:00
Renovate Bot
ac3721d0d5 chore(deps): update dependency actions/runner to v2.282.1 2021-09-15 20:09:19 +00:00
Callum Tait
594b086674 docs: adding election details (#821)
* docs: adding election details

* use consistent case
2021-09-15 12:44:31 +01:00
Yusuke Kuoka
58d2591f09 Bump chart version to 0.13.0 for actions-runner-controller 0.20.0 2021-09-15 00:38:43 +00:00
Yusuke Kuoka
1a75b4558b Fix E2E test to actually pass
I have a dedicated GitHub organization and a private repository to run this E2E test. After a few fixes included in this change, it has successfully passed.
2021-09-15 09:34:48 +09:00
Callum Tait
40c88eb490 docs: slight update to the wording 2021-09-14 17:30:46 +09:00
Yusuke Kuoka
fe64850d3d Document and values.yaml updates for leader election customization
Follow-up for #806
2021-09-14 17:30:46 +09:00
Tristan Keen
4320e0e5e1 New generated CRDs 2021-09-14 17:12:09 +09:00
Tristan Keen
4a61c2f3aa Revert CRD workaround for K8s v1.18 2021-09-14 17:12:09 +09:00
Tristan Keen
1eb135cace Correct default image logic 2021-09-14 17:00:57 +09:00
Tristan Keen
d918c91bea Complete CRDs for acceptance testing 2021-09-14 17:00:39 +09:00
Sebastien Le Digabel
bf35c51440 Adding unit test for ephemeral feature flag
This was something that was missing in #707.
Adding a new test to make sure the ephemeral feature flag from upstream
is set up correctly by the script.
2021-09-14 16:37:25 +09:00
Yusuke Kuoka
b679a54196 Add missing //go:build tag on deepcopy source
But not sure why this is needed yet :)
2021-09-14 16:37:04 +09:00
Rolf Ahrenberg
5da808af96 Allow defining unique election leader id 2021-09-14 16:37:04 +09:00
Rolf Ahrenberg
e5b5ee6f1d Make target platform configurable for runner builds 2021-09-14 16:37:04 +09:00
Rolf Ahrenberg
cf3abcc7d6 Reorder docker build parameters 2021-09-14 16:37:04 +09:00
Rolf Ahrenberg
cffc2585f9 Use unique serving cert name
Based on the comments in https://github.com/actions-runner-controller/actions-runner-controller/issues/782
2021-09-14 16:37:04 +09:00
Renovate Bot
01928863b9 chore(deps): update dependency actions/runner to v2.282.0 2021-09-13 21:07:46 +00:00
Sebastien Le Digabel
a98729b08b Adding github action for entrypoint unit test
... and adding a safety mechanism in UNITTEST handling.
2021-09-06 08:51:28 +09:00
Sebastien Le Digabel
ec0915ce7c Adding some unit testing for entrypoint.sh
The unit tests simulate a run of entrypoint.sh. They create some
dummy config.sh and runsvc.sh scripts and make sure the logic behind
entrypoint.sh is correct.

Unfortunately the entrypoint.sh contains some sections that are not
mockable so I had to put some logic in there too.

Testing currently covers:
- the normal scenario
- the normal non-ephemeral scenario
- the configuration failure scenario

Also tested the entrypoint.sh on a real runner, still works as expected.
2021-09-06 08:51:28 +09:00
Sebastien Le Digabel
d355f05ac0 Adding retry after config and formatted logging
Adding a basic retry loop during configuration. If configuration fails,
the runner will just go straight into a retry loop and will continuously
fail until it dies after a while.

This change will retry 10 times and will exit if the configuration
wasn't successful.

Also, changed the logging format, adding a bit of color in the event of
success or failure.
2021-09-06 08:51:28 +09:00
toast-gear
6f27b4920e docs: watch namespace feature (#786)
Fixes #455
2021-09-06 08:46:01 +09:00
Renovate Bot
f8959f973f chore(deps): update dependency actions/runner to v2.281.1 2021-09-01 23:01:06 +00:00
renovate[bot]
37955fa267 fix(deps): update module go.uber.org/zap to v1.19.0 (#748)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-08-31 09:58:52 +09:00
renovate[bot]
63fe89b7aa fix(deps): update golang.org/x/oauth2 commit hash to 2bc19b1 (#739)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-08-31 09:50:34 +09:00
Tarasovych
3f801af72a Update README.md (#774) 2021-08-31 09:47:20 +09:00
Tarasovych
7008b0c257 feat: Organization RunnerDeployment with webhook-based autoscaling only for certain repositories (#766)
Resolves #765

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-08-31 09:46:36 +09:00
Tarasovych
d9df455781 Update README.md (#775)
Update organization App url query parameters
2021-08-31 09:44:29 +09:00
renovate[bot]
7e42d3fa7c chore(deps): update dependency actions/runner to v2.281.0 (#777)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-08-31 09:42:25 +09:00
Sam
0593125d96 Add dnsConfig to runner deployments (#764)
Resolves #761
2021-08-31 09:42:05 +09:00
Patrick Ellis
a815c37614 docs: fix a few small YAML typos (#763)
- Remove two extra colons that were making the yaml invalid 🕵️
- Add `yaml` tags to the markdown blocks 🧹
2021-08-25 09:13:20 +01:00
renovate[bot]
3539569fed chore(deps): update helm/chart-releaser-action action to v1.2.1 (#742)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-08-25 09:29:31 +09:00
renovate[bot]
fc131870aa chore(deps): update helm/chart-testing-action action to v2.1.0 (#743)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-08-25 09:27:52 +09:00
renovate[bot]
382afa4450 chore(deps): update helm/kind-action action to v1.2.0 (#744)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-08-25 09:27:26 +09:00
renovate[bot]
5125dd7e77 chore(deps): update golang docker tag to v1.17 (#741)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2021-08-25 09:26:51 +09:00
Yusuke Kuoka
2c711506ea Update documentation about ephemeral runners and RunnerSet (#727)
Follow-up for #721 and #629
2021-08-25 09:26:26 +09:00
Tarasovych
dfa0f2eef4 Update README.md (#735)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-08-25 09:25:51 +09:00
Yusuke Kuoka
180db37a9a Merge pull request #762 from actions-runner-controller/non-deprecate-apis-follow-up
chart,kustomize: Fix errors
2021-08-25 09:23:23 +09:00
Yusuke Kuoka
424c33b11f kustomize: Fix error while generating release manifests
This fixes the below error that occurs in `make release`:

```
kustomize build config/default > release/actions-runner-controller.yaml
Error: accumulating resources: accumulation err='accumulating resources from '../webhook': '/home/mumoshu/p/actions-runner-controller/config/webhook' must resolve to a file': recursed accumulation of path '/home/mumoshu/p/actions-runner-controller/config/webhook': accumulating resources: accumulation err='accumulating resources from 'manifests.v1beta1.yaml': evalsymlink failure on '/home/mumoshu/p/actions-runner-controller/config/webhook/manifests.v1beta1.yaml' : lstat /home/mumoshu/p/actions-runner-controller/config/webhook/manifests.v1beta1.yaml: no such file or directory': evalsymlink failure on '/home/mumoshu/p/actions-runner-controller/config/webhook/manifests.v1beta1.yaml' : lstat /home/mumoshu/p/actions-runner-controller/config/webhook/manifests.v1beta1.yaml: no such file or directory
make: *** [Makefile:156: release] Error 1
```

Ref #144
2021-08-25 00:11:43 +00:00
Yusuke Kuoka
34d9c6d4db chart: Fix webhook config installation error
This fixes the below error on installing the chart:

```
Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(MutatingWebhookConfiguration.webhooks[0]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook, ValidationError(MutatingWebhookConfiguration.webhooks[1]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook, ValidationError(MutatingWebhookConfiguration.webhooks[2]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook, ValidationError(MutatingWebhookConfiguration.webhooks[3]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook]
```

Ref #144
2021-08-25 00:07:49 +00:00
Yusuke Kuoka
167c5b4dc9 Use non-deprecated API versions in CRDs and Webhooks (#733)
Resolves #144
2021-08-24 10:31:36 +09:00
Patrick Ellis
91c22ef964 doc: GitHub calls it admin:enterprise, not enterprise:admin (#759) 2021-08-24 10:26:43 +09:00
Hidetake Iwata
5d292ee5ff Update actions/runner by Renovate (#734)
Resolves #416
2021-08-23 09:33:47 +09:00
toast-gear
5b4b65664c chore: bump actions runner version (#736) 2021-08-19 14:47:17 +01:00
toast-gear
b6465c5d09 chore: bump docker and runner version and add imageos env var (#730)
* chore: bump runner version

* chore: bump docker version

* feat: add in ImageOS env var

* chore: adding missing fail switches
2021-08-18 15:50:17 +01:00
Hiroki Matsumoto
dc9f9b0bfb fix: arch type with downloading dumb-init. (#723)
* fix: arch type with downloading dumb-init.

* fix: arch type with downloading dumb-init in Dockerfile.dindrunner

* fix: add -f option with curl
2021-08-11 16:43:25 +01:00
toast-gear
02e05bdafb ci: set username statically (#724)
Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-08-11 20:03:37 +09:00
callum-tait-pbx
a9421edd46 chore: bump dumb-init (#710)
* chore: bump dumb-init and align files

* ci: align make file with root make file
2021-08-11 09:55:09 +09:00
Rob Bos
fb66b28569 Change move command to copy to prevent issues (#716)
Prevents issues when /runner and /runnertmp are in different devices

Fixes #686
2021-08-11 09:53:42 +09:00
Yusuke Kuoka
fabead8c8e feat: Workflow job based ephemeral runner scaling (#721)
This adds support for two upcoming enhancements on the GitHub side of self-hosted runners: ephemeral runners and `workflow_job` events. You can't use these yet.

**These features are not yet generally available to all GitHub users**. Please take this pull request as a preparation to make it available to actions-runner-controller users as soon as possible after GitHub released the necessary features on their end.

**Ephemeral runners**:

The former, ephemeral runners, is basically the reliable alternative to `--once`, which we've been using when you enabled `ephemeral: true` (default in actions-runner-controller).

`--once` has been suffering from a race issue #466. `--ephemeral` fixes that.

To enable ephemeral runners with `actions/runner`, you give `--ephemeral` to `config.sh`. This updated version of `actions-runner-controller` does it for you, by using `--ephemeral` instead of `--once` when you set `RUNNER_FEATURE_FLAG_EPHEMERAL=true`.

Please read the section `Ephemeral Runners` in the updated version of our README for more information.

Note that ephemeral runners is not released on GitHub yet. And `RUNNER_FEATURE_FLAG_EPHEMERAL=true` won't work at all until the feature gets released on GitHub. Stay tuned for an announcement from GitHub!

**`workflow_job` events**:

`workflow_job` is the additional webhook event that corresponds to each GitHub Actions workflow job run. It provides `actions-runner-controller` a solid foundation to improve our webhook-based autoscale.

Formerly, we've been exploiting webhook events like `check_run` for autoscaling. However, as none of our supported events has included `labels`, you had to configure an HRA to only match relevant `check_run` events. It wasn't trivial.

In contrast, a `workflow_job` event payload contains the `labels` of the runners requested. `actions-runner-controller` is able to automatically decide which HRA to scale by filtering the corresponding RunnerDeployment by `labels` included in the webhook payload. So all you need to do to use webhook-based autoscaling is enable `workflow_job` on GitHub and expose actions-runner-controller's webhook server to the internet.

Note that the current implementation of `workflow_job` support works in two ways, increment and decrement. An increment happens when the webhook server receives a `workflow_job` event with `queued` status. A decrement happens when it receives a `workflow_job` event with `completed` status. The latter is used to make scaling-down faster so that you waste less money than before. You still don't suffer from flapping, as a scale-down is still subject to `scaleDownDelaySecondsAfterScaleOut`.

Please read the section `Example 3: Scale on each `workflow_job` event` in the updated version of our README for more information on its usage.
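A minimal HorizontalRunnerAutoscaler sketch of this setup, for orientation only; the resource names are illustrative and the exact trigger key under `githubEvent` (spelled `workflowJob: {}` here) may differ between releases:

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-autoscaler
spec:
  scaleTargetRef:
    name: example-runnerdeploy   # RunnerDeployment to scale up/down
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}            # react to workflow_job queued/completed events
    duration: "30m"              # assumed fallback scale-down duration
```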
2021-08-11 09:52:04 +09:00
Rolf Ahrenberg
d528d18211 Fix markdown header (#718) 2021-08-09 14:37:57 +01:00
toast-gear
7e593a80ff docs: more improvements to the english used 2021-08-06 17:36:11 +01:00
toast-gear
27bdc780a3 docs: better english 2021-08-06 17:34:53 +01:00
toast-gear
3948406374 docs: using better english 2021-08-06 17:32:58 +01:00
toast-gear
743e6d6202 feat: bump runner version (#705)
* feat: bump runner version

* feat: remove deprecated env var

* docs: updating the docs

Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-07-30 19:58:04 +09:00
Rolf Ahrenberg
29260549fa Document volumeStorageMedium and volumeSizeLimit (#700)
Related to #674
2021-07-21 07:50:25 +09:00
Roee Landesman
f17edd500b Use https connection when metrics enabled for githubwebhook server (#685)
Relates to #625 and adds necessary RBAC permissions to fix #401 first reported [here](https://github.com/actions-runner-controller/actions-runner-controller/issues/656).

Co-authored-by: Roee Landesman <roee.landesman@sonos.com>
2021-07-16 10:19:38 +09:00
Rolf Ahrenberg
14564c7b8e Allow disabling /runner emptydir mounts and setting storage volume (#674)
* Allow disabling /runner emptydir mounts

* Support defining storage medium for emptydirs

* Fix typos
2021-07-15 06:29:58 +09:00
Sebastien Le Digabel
7f2795b5d6 Adding a default docker registry mirror (#689)
* Adding a default docker registry mirror

This change allows the controller to start with a specified default
docker registry mirror and avoid having to specify it in all the runner*
objects.

The change is backward compatible: if a runner has a docker registry
mirror specified, it will supersede the default one.
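A rough sketch of the intent, assuming the controller-wide default is set via a chart value named `dockerRegistryMirror` (key name assumed; check values.yaml) and the per-runner `dockerRegistryMirror` field overrides it:

```
# Helm values sketch: controller-wide default mirror (key name assumed)
dockerRegistryMirror: "https://mirror.example.com/"
---
# A runner that overrides the default mirror for itself
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: mirror-override-runner
spec:
  template:
    spec:
      repository: example/repo                            # illustrative repository
      dockerRegistryMirror: "https://other.example.com/"  # supersedes the default
```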
2021-07-15 06:20:08 +09:00
Abhi Kapoor
b27b6ea2a8 Add shortNames to CRDs(#693)
Add `shortNames` to kube api-resource CRDs. Short names make it easier to interact with and troubleshoot api-resources with kubectl.

We have tried to follow the naming convention similar to what K8s uses which should help with avoiding any naming conflicts as well. For example:
* `Deployment` has a shortName of deploy, so added rdeploy for `runnerdeployment`
* `HorizontalPodAutoscaler` has a shortName of hpa, so added hra for `HorizontalRunnerAutoscaler`
*  `ReplicaSets` has a shortName of rs, so added rrs for `runnerreplicaset`

Co-authored-by: abhinav454 <43758739+abhinav454@users.noreply.github.com>
2021-07-15 06:17:09 +09:00
Yusuke Kuoka
f858e2e432 Add POC of GitHub Webhook Delivery Forwarder (#682)
* Add POC of GitHub Webhook Delivery Forwarder

* multi-forwarder and ctrl-c exiting and fix for non-working http post

* Rename source files

* Extract signal handling into a dedicated source file

* Faster ctrl-c handling

* Enable automatic creation of repo hook on startup

* Add support for forwarding org hook deliveries

* Set hook secret on hook creation via envvar (HOOK_SECRET)

* Fix org hook support

* Fix HOOK_SECRET for consistency

* Refactor to prepare for custom log position provider

* Refactor to extract inmemory log position provider

* Add configmap-based log position provider

* Rename githubwebhookdeliveryforwarder to hookdeliveryforwarder

* Refactor to rename LogPositionProvider to Checkpointer and extract ConfigMap checkpointer into a dedicated pkg

* Refactor to extract logger initialization

* Add hookdeliveryforwarder README and bump go-github to unreleased ver
2021-07-14 10:18:55 +09:00
Yusuke Kuoka
6f130c2db5 Fix dockerdWithinRunnerContainer for Runner(Deployment) not working in the main branch (#696)
Ref https://github.com/actions-runner-controller/actions-runner-controller/pull/674#issuecomment-878600993
2021-07-13 18:14:15 +09:00
lucas-pate
dcea0f7f79 Update README.md to fix scaleUp/Down examples (#684)
* Update README.md to fix scaleUp/Down examples

* fix comment formatting
2021-07-11 09:05:43 +09:00
Yusuke Kuoka
f19e7ea8a8 chore: Upgrade go-github to v36 (#681) 2021-07-04 17:43:52 +09:00
toast-gear
9437e164b4 docs: runner startup delay docs PR #678 (#679)
* docs: runner startup delay docs PR #678

* docs: adding in immutable tag into the docs
2021-07-03 12:02:37 +01:00
toast-gear
82d1be7791 chore: deprecate STARTUP_DELAY (#678)
* chore: deprecate STARTUP_DELAY

* chore: adding better comments

* chore: whitespace correction
2021-07-03 11:51:07 +01:00
Yusuke Kuoka
dbab1a5e92 chart: Bump version number to 0.12.7 2021-07-03 06:16:53 +00:00
Kirill Bilchenko
e5a9d50cb6 chart: Add additional labels to serviceMonitor (#670)
Add a way to add additional labels to the service monitor. This could be helpful if you are using unified labels to scrape metrics in k8s.
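A hedged values.yaml sketch; the `serviceMonitorLabels` key name is an assumption and should be verified against the chart:

```
metrics:
  serviceMonitor: true          # enable the ServiceMonitor resource
  serviceMonitorLabels:         # assumed key; extra labels added to the ServiceMonitor
    release: prometheus         # e.g. matched by a Prometheus Operator selector
```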
2021-07-03 15:14:59 +09:00
Roee Landesman
67031acdc4 Add annotations to githubWebhookServer Service in Helm Chart (#665)
Improves #664 by adding annotations to the server's service. Beyond general applications, we use these annotations within our own projects to configure various LB values.
2021-06-30 20:42:21 +09:00
Sebastien Le Digabel
b1bfa8787f Optional override of runner image in chart (#666)
* Optional override of runner image in chart

This commit adds the option to override the actions runner image. This
allows running the controller in environments where access to Dockerhub
is restricted.

It uses the parameter [--runner-image](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/main.go#L89) from the controller.
The default value is set as a constant
[here](acb906164b/main.go (L40)).

The default value for the chart is the same.

* Fixing actionsRunner name

... to actionsRunnerRepositoryAndTag for consistency.

* Bumping chart to v0.12.5
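A minimal values.yaml sketch of the override described above; the `actionsRunnerRepositoryAndTag` key comes from this commit, while its nesting under `image` and the registry host are assumptions:

```
image:
  actionsRunnerRepositoryAndTag: "registry.example.com/actions-runner:latest"  # default runner image used by the controller
```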
2021-06-30 09:53:45 +09:00
Yusuke Kuoka
c78116b0f9 e2e: Cover RunnerDeployment (#668)
Previously the E2E test suite covered only RunnerSet. This refactors the existing E2E test code to extract the common test structure into an `env` struct and its methods, and uses it to write two very similar tests, one for RunnerSet and another for RunnerDeployment.
2021-06-29 17:52:43 +09:00
toast-gear
4ec57d3e39 chore: update helm create secret defaults to false (#669)
There's no reason to create a non-working secret by default. If someone wants to deploy the secrets via the chart, they will need to do some config regardless, so they might as well also set the create flag.
2021-06-29 17:51:41 +09:00
John Stewart
79543add3f Instruct ServiceMonitor to connect using https for controller (#625)
The controller metrics endpoint serves over https using a self-signed cert by default in this chart, so correct the ServiceMonitor to reflect that.
2021-06-29 15:50:38 +09:00
Yusuke Kuoka
7722730dc0 e2e: Concurrent workflow jobs (#667)
Enhances our existing E2E test suite to additionally support triggering two or more concurrent workflow jobs and verifying all the results, so that you can ensure the runners managed by the controller are able to handle jobs reliably when loaded.
2021-06-29 14:34:27 +09:00
toast-gear
044f4ad4ea chore: updating to use non-deprecated env var (#660)
Fixes #659

Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-06-29 08:54:59 +09:00
Yusuke Kuoka
20394be04d Fix image repo name in chart (#663)
* Fix image repo name in chart

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/644#issuecomment-869200869
2021-06-29 08:53:39 +09:00
Yusuke Kuoka
7a305d2892 e2e: Install and run workflow and verify the result (#661)
This enhances the E2E test suite introduced in #658 to also include the following steps:

- Install GitHub Actions workflow
- Trigger a workflow run via a git commit
- Verify the workflow run result

In the workflow, we use `kubectl create cm --from-literal` to create a configmap that contains a unique test ID. In the last step we obtain the configmap from within the E2E test and check that the test ID matches the expected one.

To install a GitHub Actions workflow, we clone a GitHub repository denoted by the TEST_REPO envvar, programmatically generate a few files with some Go code, run `git-add`, `git-commit`, and then `git-push` to actually push the files to the repository. A single commit containing an updated workflow definition and an updated file seems to run a workflow derived from the definition introduced in the commit, which was a bit surprising but useful behaviour.

At this point, the E2E test fully covers all the steps for a GitHub token based installation. We need to add scenarios for more deployment options, like GitHub App, RunnerDeployment, HRA, and so on. But each of them would be worth another pull request.
2021-06-28 08:30:32 +09:00
Callum James Tait
927d6f03ce docs: fixing whitespace error 2021-06-27 11:51:05 +01:00
Chris Bui
127a9aa7c4 Add Self-hosted GitHub Enterprise Server URL to chart (#649)
Co-authored-by: Chris Bui <chrisbui@paypal.com>
2021-06-27 16:50:57 +09:00
Yusuke Kuoka
2703fa75d6 Add e2e test (#658)
This is the initial version of our E2E test suite which is currently a subset of the acceptance test suite reimplemented in Go.

To run it, pass `-run ^TestE2E$` to `go test`, without `-short`, like `go test -timeout 600s -run ^TestE2E$ github.com/actions-runner-controller/actions-runner-controller/test/e2e -v`.

`make test` is modified to pass `-short` to `go test` by default to skip E2E tests.

The biggest benefit of rewriting the acceptance test in Go turned out to be the fact that you can easily rerun each step (a go-test "subtest") individually from your IDE, for faster turnaround. Both VS Code and IntelliJ IDEA/GoLand are known to work.

In the near future, we will add more steps to the suite, like actually git-committing some Actions workflow and pushing some commit to trigger a workflow run, verifying the workflow and job run results, and finally running it in our `test` workflow for fully automated E2E testing. But that's another story.
2021-06-27 16:28:07 +09:00
toast-gear
605ec158f4 fix: make AGENT_TOOLSDIRECTORY an env var (#657)
Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-06-26 20:51:10 +09:00
Yusuke Kuoka
3b45d1b334 doc: Describe RunnerSet (#654)
Ref #629
Ref #613
Ref #612
2021-06-26 07:34:58 +09:00
Yusuke Kuoka
acb906164b RunnerSet: Automatic-recovery from registration timeout and deregistration on pod termination (#652)
Ref #629
Ref #613
Ref #612
2021-06-24 20:39:37 +09:00
Yusuke Kuoka
98da4c2adb Add HRA support for RunnerSet (#647)
`HRA.Spec.ScaleTargetRef.Kind` is added to denote that the scale-target is a RunnerSet.

It defaults to `RunnerDeployment` for backward compatibility.

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: myhra
spec:
  scaleTargetRef:
    kind: RunnerSet
    name: myrunnerset
```

Ref #629
Ref #613
Ref #612
2021-06-23 20:25:03 +09:00
Callum James Tait
9e1c28fcff chore: removing superfluous text 2021-06-23 08:48:43 +09:00
Callum James Tait
774db3fef4 docs: moving dev docs to contributing md 2021-06-23 08:48:43 +09:00
Yusuke Kuoka
8b90b0f0e3 Clean up import list (#645)
Resolves #644
2021-06-22 17:55:06 +09:00
Jonathan Gonzalez V
a277489003 Added support to enable and disable enableServiceLinks. (#628)
This option internally exposes some `KUBERNETES_*` environment variables
that don't allow the runner to use KinD (Kubernetes in Docker), since it will
try to connect to the Kubernetes cluster where the runner is running.

This option is set by default to `true` in any Kubernetes deployment.
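A short sketch of turning it off for runners that need KinD, assuming the field sits directly on the runner template spec:

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: kind-capable-runner
spec:
  template:
    spec:
      repository: example/repo        # illustrative repository
      enableServiceLinks: false       # hide KUBERNETES_* service env vars from the runner
```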

Signed-off-by: Jonathan Gonzalez V <jonathan.gonzalez@enterprisedb.com>
2021-06-22 17:27:26 +09:00
Shubham Gopale
1084a37174 We are exiting if it's a registration-only runner (#641) 2021-06-22 17:26:03 +09:00
Yusuke Kuoka
9e4dbf497c feat: RunnerSet backed by StatefulSet (#629)
* feat: RunnerSet backed by StatefulSet

Unlike a runner deployment, a runner set can manage a set of stateful runners by combining a statefulset and an admission webhook that mutates statefulset-managed pods with required envvars and registration tokens.

Resolves #613
Ref #612

* Upgrade controller-runtime to 0.9.0

* Bump Go to 1.16.x following controller-runtime 0.9.0

* Upgrade kubebuilder to 2.3.2 for updated etcd and apiserver following local setup

* Fix startup failure due to missing LeaderElectionID

* Fix the issue that any pods become unable to start once actions-runner-controller has failed after the mutating webhook has been registered

* Allow force-updating statefulset

* Fix runner container missing work and certs-client volume mounts and DOCKER_HOST and DOCKER_TLS_VERIFY envvars when dockerdWithinRunner=false

* Fix runnerset-controller not applying statefulset.spec.template.spec changes when there were no changes in runnerset spec

* Enable running acceptance tests against arbitrary kind cluster

* RunnerSet supports non-ephemeral runners only today

* fix: docker-build from root Makefile on intel mac

* fix: arch check fixes for mac and ARM

* ci: aligning test data format and patching checks

* fix: removing namespace in test data

* chore: adding more ignores

* chore: removing leading space in shebang

* Re-add metrics to org hra testdata

* Bump cert-manager to v1.1.1 and fix deploy.sh

Co-authored-by: toast-gear <15716903+toast-gear@users.noreply.github.com>
Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-06-22 17:10:09 +09:00
Yusuke Kuoka
af0ca03752 doc: Introduce summerwind/actions-runner images (#634)
I have noticed that this isn't documented anywhere while working on https://github.com/actions-runner-controller/actions-runner-controller/issues/631#issuecomment-862807900
2021-06-22 17:07:36 +09:00
Yusuke Kuoka
37d9599dca doc: Use with Istio (#635)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/591
2021-06-22 17:07:24 +09:00
Yusuke Kuoka
08a676cfd4 Add configuration for "Lock" app (#638)
To prevent people from writing related and unrelated things to already closed issues. As a maintainer, that kind of situation only makes it harder to effectively provide user support. Please create another issue with a concrete description of "your issue" and the reproduction steps, rather than commenting "me too" on unrelated issues!
2021-06-20 18:08:07 +09:00
Puneeth
f2e2060ff8 doc: Add caveat on volumeMounts (#632)
Update README.md to add caveat on volumeMounts

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-06-17 08:58:48 +09:00
Hidetake Iwata
dc5f90025c Add default value of githubWebhookServer.syncPeriod to chart (#622)
* Add default value of `githubWebhookServer.syncPeriod` to chart

* Bump chart version

* Update README.md
2021-06-11 09:21:05 +09:00
John Stewart
8566a4f453 Don't set default caBundle for webhooks (#617)
* Don't set default caBundle for webhooks

Fixes #614

* bump chart version
2021-06-10 08:30:37 +09:00
toast-gear
3366dc9a63 docs: adding in the caveat to upgrade docs 2021-06-09 10:15:09 +01:00
toast-gear
fa94799ec8 chore/bump-helm-chart (#615)
* chore: bumping chart version

* chore: updating chart details
2021-06-08 19:24:50 +01:00
toast-gear
c424d1afee ci: ignore .md file changes everywhere 2021-06-08 18:32:08 +01:00
toast-gear
99f83a9bf0 ci: ignore any .md file changes anywhere 2021-06-08 18:29:17 +01:00
toast-gear
aa7d4c5ecc docs: adding docs for the chart values (#608)
* docs: adding docs for the chart values

* docs: updating the main docs

* docs: grammar fixes

* docs: updating proxy default

Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-06-08 18:17:49 +01:00
Carus Kyle
552ee28072 chore: bump kube-rbac-proxy version (#609) 2021-06-08 18:16:30 +01:00
toast-gear
fa77facacd ci: adding negative paths for publish 2021-06-07 09:34:44 +01:00
callum-tait-pbx
5b28f3d964 ci: correcting negative paths (#606) 2021-06-07 09:31:55 +01:00
Yusuke Kuoka
c36748b8bc chart: Enhance the upgrade process to not require uninstalling (#605) 2021-06-07 09:00:40 +01:00
toast-gear
f16f5b0aa4 ci: ignore doc changes (#604) 2021-06-07 08:59:28 +09:00
toast-gear
c889b92f45 docs: adding in link to HIP (#603)
* docs: adding in link to HIP

* docs: improving wording
2021-06-07 08:59:05 +09:00
Rob Bos
46be20976a Fixing typos in documentation (#602) 2021-06-04 18:52:10 +01:00
Jonah Back
8c42f99d0b feat: avoid setting privileged flag if seLinuxOptions is not null (#599)
Sets the privileged flag to false if SELinuxOptions are present/defined. This is needed because containerd treats SELinux and Privileged controls as mutually exclusive. Also see https://github.com/containerd/cri/blob/aa2d5a97c/pkg/server/container_create.go#L164.

This allows users who use SELinux for managing privileged processes to use GH Actions - otherwise, based on the SELinux policy, the Docker in Docker container might not be privileged enough. 

Signed-off-by: Jonah Back <jonah@jonahback.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-06-04 08:59:11 +09:00
Tim Birkett
a93fd21f21 feat: add STARTUP_DELAY to entrypoint.sh (#592)
Ref #591 

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-06-04 08:57:59 +09:00
Ameer Ghani
7523ea44f1 feat: allow specifying runtime class in runner spec (#580)
This allows using the `runtimeClassName` directive in the runner's spec.

One of the use-cases for this is Kata Containers, which use `runtimeClassName` in a pod spec as an indicator that the pod should run inside a Kata container. This allows us a greater degree of pod isolation.
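A minimal sketch of a runner using a Kata runtime class; the names are illustrative and the RuntimeClass must already exist in the cluster:

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: kata-runner
spec:
  template:
    spec:
      repository: example/repo     # illustrative repository
      runtimeClassName: kata       # assumed RuntimeClass name
```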
2021-06-04 08:56:43 +09:00
Vladyslav Miletskyi
30ab0c0b71 Fix actions-runner-dind not to fail setting up MTU (#589)
Fixes #588
2021-06-04 08:54:46 +09:00
Pierre DEMAGNY
a72f190ef6 docs: add an annotation example in Additional Tweaks (#600) 2021-06-04 08:38:56 +09:00
toast-gear
cb60c1ec3b docs: add explicit permission list (#593)
Fixes https://github.com/actions-runner-controller/actions-runner-controller/issues/543

Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-06-02 08:52:14 +09:00
Christian Dobinsky
e108e04dda chart: add podLabels to helm chart (#583)
* Add pod labels to helm chart

* fix: make podLabels consistent to podAnnotations

* Update charts/actions-runner-controller/Chart.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-06-01 09:21:32 +09:00
toast-gear
2e083bca28 fix: fixing missing pip PATH (#585)
* fix: fixing missing pip PATH

* chore: removing User Site Directory

Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-06-01 09:21:14 +09:00
toast-gear
198b13324d ci: only run latest tag job on push / release (#586)
* ci: only run latest tag job on merge

* ci: update job conditional
2021-06-01 09:18:50 +09:00
toast-gear
605dae3995 docs: add docs for upgrading the project when using Helm (#582)
* docs: adding upgrade notes for Helm

* chore: adding new ignore

* docs: add in cmd to check for stuck runners

* docs: better format

* docs: removing superfluous steps

* docs: moved location of docs

Co-authored-by: Callum James Tait <callum.tait@photobox.com>
2021-05-29 10:37:07 +09:00
toast-gear
d2b0920454 chore: removing dead chart parameters (#577)
* chore: removing autoscale parameters

* chore: removing dead parameter

* chore: removing dead parameters
2021-05-28 08:57:25 +09:00
Yair Fried
2cbeca0e7c chart: Add service monitor and remove kube_rbac_proxy leftovers (#527)
* remove all authProxy refs

* Add serviceMonitor

* fix metrics port

* fix newline

* fix newline

* bump chart version

* fix indentation typo

* Rename metrics.proxy

* Make metrics.portNumber configurable

* fix metrics port

* revert: chart version change

Co-authored-by: toast-gear <15716903+toast-gear@users.noreply.github.com>
2021-05-26 12:10:25 +01:00
Callum James Tait
859e04a680 chore: moving python to alphabetical order 2021-05-26 09:32:01 +09:00
Callum James Tait
c0821d4ede chore: correcting lists removal path 2021-05-26 09:32:01 +09:00
Callum James Tait
c3a6e45920 chore: aligning package order 2021-05-26 09:32:01 +09:00
Callum James Tait
818dfd6515 chore: whitespace alignment 2021-05-26 09:32:01 +09:00
Callum James Tait
726b39aedd feat: adding pip to base image 2021-05-26 09:32:01 +09:00
toast-gear
7638c21e92 docs: adding caveat to scaling metric (#570)
* docs: adding caveat to scaling metric

* docs: better wording

Fixes #338
2021-05-25 10:23:32 +09:00
Viktor Anderling
c09d6075c6 Add topologySpreadConstraints to helm chart (#569)
This commit adds the ability to use topologySpreadConstraints in the
helm chart by populating either one or both of topologySpreadConstraints
and githubWebhookServer.topologySpreadConstraints values.

See the official docs:
https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/

Resolves #567
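A values.yaml sketch spreading controller pods across nodes; the label selector below is an assumption and should match the pod labels the chart actually renders:

```
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: actions-runner-controller   # assumed pod label
```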
2021-05-25 10:23:08 +09:00
callum-tait-pbx
39d37a7d28 docs: removing git version (#572)
The version of git bundled isn't pinned
2021-05-24 21:47:33 +01:00
toast-gear
de0315380d docs: better formating (#571) 2021-05-24 21:25:27 +01:00
toast-gear
906ddacbc6 chore: lowering daysUntilStale config (#568) 2021-05-24 09:41:24 +01:00
toast-gear
c388446668 docs: adding comment on permissions being included (#565)
* docs: adding comment on permissions being included

* docs: aligning text across readme
2021-05-22 20:05:19 +09:00
Yusuke Kuoka
d56971ca7c Fix typo (sucessfully -> successfully (#563)
Follow-up for #556
2021-05-22 08:36:18 +09:00
Yusuke Kuoka
cb14d7530b Add HRA printer column "SCHEDULE" (#561)
Adds a column to help the operator see if they configured HRA.Spec.ScheduledOverrides correctly, in the form of a "next override schedule recognized by the controller":

```
$ k get horizontalrunnerautoscaler
NAME                            MIN   MAX   DESIRED   SCHEDULE
actions-runner-aos-autoscaler   0     5     0
org                             0     5     0         min=0 time=2021-05-21 15:00:00 +0000 UTC
```

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/484
2021-05-22 08:29:53 +09:00
Yusuke Kuoka
fbb24c8c0a chore: update issue templates (#559)
* Update bug_report.md

* chore: removing default label for enhancement

Co-authored-by: toast-gear <15716903+toast-gear@users.noreply.github.com>
2021-05-21 16:51:07 +01:00
Yusuke Kuoka
0b88b246d3 Fix additionalPrinterColumns (#556)
This fixes human-readable output of `kubectl get` on `runnerdeployment`, `runnerreplicaset`, and `runner`.

Most notably, CURRENT and READY of runner replicasets are now computed and printed correctly. Runner deployments now have UP-TO-DATE and AVAILABLE instead of READY so that it is consistent with columns of K8s deployments.

A few fixes have also been made to the runner deployment and runner replicaset controllers so that the numbers stored in Status objects are reliably updated and in sync with actual values.

Finally, `AGE` columns are added to runnerdeployment, runnerreplicaset, and runner to make that more visible to users.

`kubectl get` outputs should now look like the below examples:

```
# Immediately after runnerdeployment updated/created
$ k get runnerdeployment
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-runnerdeploy   0         0         0            0           8d
org-runnerdeploy       5         5         5            0           8d

# A few dozen seconds after update/create, all the runners are registered and the "available" numbers increase
$ k get runnerdeployment
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-runnerdeploy   0         0         0            0           8d
org-runnerdeploy       5         5         5            5           8d
```

```
$ k get runnerreplicaset
NAME                         DESIRED   CURRENT   READY   AGE
example-runnerdeploy-wnpf6   0         0         0       61m
org-runnerdeploy-fsnmr       2         2         0       8m41s
```

```
$ k get runner
NAME                                           ENTERPRISE   ORGANIZATION                REPOSITORY                                       LABELS                      STATUS    AGE
example-runnerdeploy-wnpf6-registration-only                                            actions-runner-controller/mumoshu-actions-test                               Running   61m
org-runnerdeploy-fsnmr-n8kkx                                actions-runner-controller                                                    ["mylabel 1","mylabel 2"]             21s
org-runnerdeploy-fsnmr-sq6m8                                actions-runner-controller                                                    ["mylabel 1","mylabel 2"]             21s
```

Fixes #490
2021-05-21 09:10:47 +09:00
Yusuke Kuoka
a4631f345b Update issue templates (#552) 2021-05-18 18:15:00 +09:00
Yusuke Kuoka
7be31ce3e5 kubectl-diff / dry-run support (#549)
Resolves #266
2021-05-17 09:36:13 +09:00
toast-gear
57a7b8076f docs: correcting shell command (#548)
Fixes #546
2021-05-16 09:08:41 +09:00
ToMe25
5309b1c02c Fix acceptance test not working due to missing SYNC_PERIOD (#542)
Fixes #533
2021-05-11 20:30:34 +09:00
Yusuke Kuoka
ae09e6ebb7 Make log level configurable (#541)
Resolves #425
2021-05-11 20:23:06 +09:00
Yusuke Kuoka
3cd124dce3 chore: Add debug logs for scheduledOverrides (#540)
Follow-up for #515
Ref #484
2021-05-11 17:30:22 +09:00
Yusuke Kuoka
25f5817a5e Improve debug log in webhook-based autoscaling
Adds some helpful debug log messages I have used while verifying #534
2021-05-11 15:49:03 +09:00
Yusuke Kuoka
0510f19607 chore: Enhance acceptance test to cover webhook-based autoscaling for repo and org runners
Adds what I used while verifying #534
2021-05-11 15:36:02 +09:00
Yusuke Kuoka
9d961c58ff Log used settings on startup 2021-05-11 11:46:35 +09:00
Yusuke Kuoka
ab25907050 chart: Add githubAPICacheDuration
Ref #502
2021-05-11 11:46:35 +09:00
Yusuke Kuoka
6cbba80df1 Add --github-api-cache-duration
Resolves #502
2021-05-11 11:46:35 +09:00
Liam Gibson
082245c5db Fix typos in README.md (#528) 2021-05-08 21:29:11 +09:00
Yusuke Kuoka
a82e020daa Add notes for unreleased features (#526) 2021-05-05 14:59:36 +09:00
Yusuke Kuoka
c8c2d44a5c Add documentation for ScheduledOverrides (#525)
Ref #484
2021-05-05 14:54:50 +09:00
Yusuke Kuoka
4e7b8b57c0 edge: Enable scaling from zero with PercentageRunnersBusy (#524)
`PercentageRunnersBusy`, in combination with a secondary `TotalInProgressAndQueuedWorkflowRuns` metric, enables scale-from-zero for PercentageRunnersBusy.

Please see the new `Autoscaling to/from 0` section in the updated documentation about how it works.

Resolves #522
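A rough HRA sketch of the combination described above; thresholds and the repository name are illustrative, and the `Autoscaling to/from 0` section is the authoritative reference:

```
spec:
  scaleTargetRef:
    name: example-runnerdeploy
  minReplicas: 0
  maxReplicas: 5
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'
    scaleDownThreshold: '0.25'
    scaleUpFactor: '2'
    scaleDownFactor: '0.5'
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns   # secondary metric enabling scale from zero
    repositoryNames:
    - example/repo
```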
2021-05-05 14:27:17 +09:00
Yusuke Kuoka
e7020c7c0f Fix scale-from-zero to retain the reg-only runner until other pods come up (#523)
Fixes #516
2021-05-05 12:13:51 +09:00
Yair Fried
cb54864387 chart: Allow to disabling kube-rbac-proxy and expose metrics (#511)
Fixes #454
2021-05-03 23:36:01 +09:00
Yusuke Kuoka
0e0f385f72 Experimental support for ScheduledOverrides (#515)
This adds the initial version of ScheduledOverrides to HorizontalRunnerAutoscaler.
`MinReplicas` overriding should just work.
When there are two or more ScheduledOverrides, the earliest one that matched is activated. Each ScheduledOverride can be recurring or one-time. If you have two or more ScheduledOverrides, only one of them should be one-time. And the one-time override should be the earliest item in the list to make sense.

Tests will be added in another commit. Logging improvements and additional observability in HRA.Status will also be added in further commits.

Ref #484
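A hedged sketch of one one-time override followed by a recurring weekly override; field names follow the upstream examples and the dates are illustrative:

```
spec:
  minReplicas: 1
  scheduledOverrides:
  # one-time override: earliest item in the list
  - startTime: "2021-05-01T00:00:00+09:00"
    endTime: "2021-05-03T00:00:00+09:00"
    minReplicas: 0
  # recurring override: repeats weekly
  - startTime: "2021-05-08T00:00:00+09:00"
    endTime: "2021-05-10T00:00:00+09:00"
    recurrenceRule:
      frequency: Weekly
    minReplicas: 0
```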
2021-05-03 23:31:17 +09:00
Yusuke Kuoka
b3cae25741 Enhance HorizontalRunnerAutoscaler API for ScheduledOverrides (#514)
This adds types and CRD changes related to HorizontalRunnerAutoscaler for the upcoming ScheduledOverrides feature.

Ref #484
2021-05-03 22:31:54 +09:00
Yusuke Kuoka
469b117a09 Foundation for ScheduledOverrides (#513)
Adds two types `RecurrenceRule` and `Period` and one function `MatchSchedule` as the foundation for building the upcoming ScheduledOverrides feature.

Ref #484
2021-05-03 22:03:49 +09:00
Yusuke Kuoka
5f59734078 Fix docker-login failing since move to GitHub organization (#510)
Fixes #509
2021-05-03 14:56:58 +09:00
Yusuke Kuoka
e00b3b9714 Make development cycle faster (#508)
Improves Makefile, acceptance/deploy.sh, acceptance/testdata/runnerdeploy.yaml, and the documentation to help developers and contributors.
2021-05-03 13:03:17 +09:00
Thejas N
588872a316 feat: allow ephemeral runner to be optional (#498)
- Adds `ephemeral` option to `runner.spec` 
    
    ```
      ....
      template:
         spec:
             ephemeral: false
             repository: mumoshu/actions-runner-controller-ci
      ....
    ```
- `ephemeral` defaults to `true`
- `entrypoint.sh` in runner/Dockerfile modified to read `RUNNER_EPHEMERAL` flag
- Runner images are backward-compatible. `--once` is omitted only when the new envvar `RUNNER_EPHEMERAL` is explicitly set to `false`.

Resolves #457
2021-05-02 19:04:14 +09:00
Yusuke Kuoka
a0feee257f Add .dockerignore for controller to accelerate image rebuild in local dev env (#504)
Previously any non-go changes resulted in `make docker-build` rerunning the time-consuming `go build`. This fixes that by adding clearly unnecessary files to .dockerignore.
2021-05-02 16:47:07 +09:00
Christoph Brand
a18ac330bb feature(controller): allow autoscaler to scale down to 0 (#447) 2021-05-02 16:46:51 +09:00
Yusuke Kuoka
0901456320 Update README with more detailed test instructions (#503)
- You can now use `make acceptance/run` to run only a specific acceptance test case
- Add note about Ubuntu 20.04 users / snap-provided docker
- Add instruction to run Ginkgo tests
- Extract acceptance/load from acceptance/kind
- Make `acceptance/pull` not depend on `docker-build`, so that you can do `make docker-build acceptance/load` for faster image reload
2021-05-02 16:31:07 +09:00
Yusuke Kuoka
dbd7b486d2 feat: Support for scaling from/to zero (#465)
This is an attempt to support scaling from/to zero.

The basic idea is that we create a one-off "registration-only" runner pod when a RunnerReplicaSet is scaled to zero, so that there is one "offline" runner, which enables GitHub Actions to queue jobs instead of discarding them.

GitHub Actions seems to immediately throw away the new job when there are no runners at all. Generally, having runners of any status, `busy`, `idle`, or `offline`, would prevent GitHub Actions from failing jobs. But retaining `busy` or `idle` runners means that we need to keep runner pods running, which conflicts with our desire to scale to/from zero, hence we retain `offline` runners.

In this change, I enhanced the runnerreplicaset controller to create a registration-only runner at the very beginning of its reconciliation logic, only when a runnerreplicaset is scaled to zero. The runner controller creates the registration-only runner pod, waits for it to become "offline", and then removes the runner pod. The runner on GitHub stays `offline` until the runner resource on K8s is deleted. As we remove the registration-only runner pod as soon as it registers, this doesn't block cluster-autoscaler.

Related to #447
2021-05-02 16:11:36 +09:00
callum-tait-pbx
7e766282aa ci: updating paths-ignore (#496)
* chore: updating paths-ignore

* chore: adding more path-ignores
2021-05-01 21:36:45 +09:00
ToMe25
ba175148c8 Locally build runner image instead of pulling it (#473)
* Fix acceptance helm test not using newly built controller image

* Locally build runner image instead of pulling it

* Revert runner controller image pull policy to always

and add a line to the test deployment to use IfNotPresent

* Change runner repository from summerwind/action-runner to the owner of actions-runner-controller.

Also fix some Makefile formatting.

* Undo renaming acceptance/pull to docker-pull

* Some env var cleanup

Rename USERNAME to DOCKER_USER (it is still used for GitHub too, though)
Add RUNNER_NAME var (defaults to $DOCKER_USER/actions-runner)
Add TEST_REPO (defaults to $DOCKER_USER/actions-runner-controller)
2021-05-01 15:10:57 +09:00
callum-tait-pbx
358146ee54 docs: adding note on cloud tooling (#492)
* docs: adding note on cloud tooling

* docs: better grammar
2021-04-30 10:20:01 +09:00
callum-tait-pbx
e9dd16b023 chore: adding stale config (#487)
* chore: adding stale config

* chore: adding more labels

* chore: adding more exempt labels
2021-04-30 10:14:13 +09:00
callum-tait-pbx
1ba4098648 docs: updating to reflect new ownership (#491) 2021-04-30 10:11:58 +09:00
callum-tait-pbx
05fb8569b3 docs: updating helm install command (#485) 2021-04-27 09:12:30 +09:00
callum-tait-pbx
db45a375d0 chore: bump runner (#486)
* chore: bump runner

* chore: bumper runner in ci
2021-04-27 08:38:40 +09:00
Rolf Ahrenberg
81dd47a893 Document dockerMTU and dockerRegistryMirror (#482) 2021-04-26 09:52:09 +09:00
Rolf Ahrenberg
6b77a2a5a8 feat: Docker registry mirror (#478)
Changes:

- Switched to use `jq` in startup.sh
- Enable docker registry mirror configuration which is useful when e.g. avoiding the Docker Hub rate-limiting

Check #478 for how this feature is tested and supposed to be used.
2021-04-25 14:04:01 +09:00
callum-tait-pbx
dc4cf3f57b docs: better enterprise runner documentation (#477)
* docs: better Enterprise runner documentation

* docs: adding more detail
2021-04-25 13:33:47 +09:00
Yusuke Kuoka
d810b579a5 Update RELEASE_NOTE_TEMPLATE.md 2021-04-25 13:02:15 +09:00
Yusuke Kuoka
47c8de9dc3 Rename RELEASE_NOTE_TEMPLATE to RELEASE_NOTE_TEMPLATE.md 2021-04-25 13:01:20 +09:00
Yusuke Kuoka
74a53bde5e Add release note template (#481)
So that everyone can contribute enhancements and fixes to the release notes structure :)
2021-04-25 13:00:25 +09:00
callum-tait-pbx
aad2615487 docs: improved details on authentication (#472) 2021-04-23 09:42:29 +09:00
callum-tait-pbx
03d9b6a09f docs: slightly better wording about support (#471) 2021-04-23 09:41:08 +09:00
callum-tait-pbx
5d280cc8c8 docs: adding scaling configuration detail (#469) 2021-04-23 09:40:23 +09:00
callum-tait-pbx
133c4fb21e docs: clean up Enterprise and fsGroup docs (#460)
* docs: cleaning up Enterprise docs

* docs: better wording in various areas

* docs: improving enterprise runner docs

* docs: using American English

* docs: removing superfluous paragraph

* docs: improving grammar

* docs: better grammar

* docs: better wording

* docs: updated to reflect comments

* docs: spelling correction
2021-04-20 10:26:10 +09:00
callum-tait-pbx
3b2d2c052e chore: adding Helm app version back (#412)
* chore: adding Helm app version back

* chore: removing redundant values entry

* chore: bumping to newer version

* chore: bumping app version to latest

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-04-18 13:58:54 +09:00
Manuel Jurado
37c2a62fa8 Allow to configure runner volume size limit (#436)
Enable the user to set a size limit on the runner's volume to avoid a runner pod affecting other resources in the same cluster.
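A short sketch, assuming the limit is exposed as a `volumeSizeLimit` field on the runner template spec (documented later together with `volumeStorageMedium`):

```
spec:
  template:
    spec:
      repository: example/repo   # illustrative repository
      volumeSizeLimit: 4Gi       # cap the size of the runner's volume
```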

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-04-18 13:56:59 +09:00
callum-tait-pbx
2eeb56d1c8 docs: removing superfluous title reference (#459) 2021-04-18 09:45:28 +09:00
ToMe25
a612b38f9b Cache docker images in acceptance test (#463)
* Cache docker images locally

Cache the dind, runner, and kube-rbac-proxy docker images on the host and copy them onto the kind node instead of downloading them to the node directly.

* Also cache certmanager docker images
2021-04-18 09:44:59 +09:00
callum-tait-pbx
1c67ea65d9 ci: fix latest tag push logic (#462)
* ci: fix latest tag push logic

* ci: use better job names
2021-04-18 09:41:22 +09:00
ToMe25
c26fb5ad5f Make acceptance use local docker image (#448)
Load the local docker image into the kind cluster instead of pushing it to Docker Hub and pulling it from there.
2021-04-17 17:13:47 +09:00
callum-tait-pbx
325c2cc385 docs: correct and simplify example (#450)
* docs: correct and simplify example

* docs: removing alternatives

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-04-17 17:08:57 +09:00
Agoney Garcia-Deniz
2e551c9d0a Add hostAliases to the runner spec (#456) 2021-04-17 17:04:52 +09:00
asoldino
7b44454d01 Add documentation of dockerVolumeMount (#453) 2021-04-17 17:04:38 +09:00
callum-tait-pbx
f2680b2f2d Bumping runner to Ubuntu 20.04 (#438)
Images for `actions-runner:v${VERSION}` and `actions-runner:latest` tags are upgraded to Ubuntu 20.04.

If you would prefer not to upgrade Ubuntu in the runner image in the future, migrate to the new tags suffixed with `-ubuntu-20.04`, like `actions-runner:v${VERSION}-ubuntu-20.04`.

We also keep publishing the existing Ubuntu 18.04 images with new `actions-runner:v${VERSION}-ubuntu-18.04` tags. Please use them if it turns out that you have workflows dependent on Ubuntu 18.04.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-04-17 17:02:03 +09:00
asoldino
b42b8406a2 Add dockerVolumeMounts (#439)
Resolves #435
2021-04-06 10:10:10 +09:00
Javi Polo
3c125e2191 Fix helm webhook ingress error: spec.rules[0].http.paths[0].backend: Required value: port name or number is required (#437) 2021-04-02 06:34:45 +09:00
Christoph Brand
9ed245c85e feature(controller): remove dockerd executable (#432) 2021-04-01 08:50:48 +09:00
Florian Braun
5b7807d54b Quote vars in entrypoint.sh to prevent unwanted argument split (#420)
Prevents arguments from being split when e.g. the RUNNER_GROUP variable contains spaces (which is legit. One can create such groups in GitHub).

I've seen that all workers with group names that contain no spaces can register successfully, while all workers with groups that contain spaces will not register.

Furthermore, I suppose other chars could also be used here to inject arbitrary commands in an unsupported way, e.g. via the pipe symbol.

Quoting the vars correctly should prevent that and allow for e.g. group names and runner labels with spaces and other bash reserved characters.
2021-03-31 10:09:08 +09:00
Yusuke Kuoka
156e2c1987 Fix MTU configuration for dockerd (#421)
Resolves #393
2021-03-31 09:29:21 +09:00
Yusuke Kuoka
da4dfb3fdf Add make target test-with-deps to ease setting up dependent binaries (#426) 2021-03-31 09:23:16 +09:00
Gabriel Dantas Gomes
0783ffe989 some readme typos (#423) 2021-03-29 10:08:21 +09:00
Yusuke Kuoka
374105c1f3 Fix dindWithinRunnerContainer not to crash-loop runner pods (#419)
Apparently #253 broke dindWithinRunnerContainer completely due to the difference in how /runner volume is set up.
2021-03-25 10:23:36 +09:00
Yusuke Kuoka
bc6e499e4f Make logging more concise (#410)
This makes logging more concise by changing logger names from something like `controllers.Runner` to `actions-runner-controller.runner`, following the standard `controller-runtime.controller`, and by reducing redundant logs through removing unnecessary requeues. I have also tweaked log messages so that their style is more consistent, which will also help readability. Also, runnerreplicaset-controller lacked useful logs so I have enhanced it.
2021-03-20 07:34:25 +09:00
Yusuke Kuoka
07f822bb08 Do include Runner controller in integration test (#409)
So that we could catch bugs in runner controller like seen in #398, #404, and #407.

Ref #400
2021-03-19 16:14:15 +09:00
Hidetake Iwata
3a0332dfdc Add metrics of RunnerDeployment and HRA (#408)
* Add metrics of RunnerDeployment and HRA

* Use kube-state-metrics-style label names
2021-03-19 16:14:02 +09:00
Yusuke Kuoka
f6ab66c55b Do not delay min/maxReplicas propagation from HRA to RD due to caching (#406)
As part of #282, I introduced a caching mechanism to avoid excessive GitHub API calls, because the autoscaling calculation involving GitHub API calls is executed on each webhook event.

Apparently, it was saving the wrong value in the cache. The value was the one after applying `HRA.Spec.{Max,Min}Replicas`, so manual changes to {Max,Min}Replicas don't affect RunnerDeployment.Spec.Replicas until the cache expires. This isn't what I had wanted.

This patch fixes that, by changing the value being cached to one before applying {Min,Max}Replicas.

Additionally, I've also updated logging so that you can observe which number was fetched from the cache, what number was suggested by either TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy, and what was the final number used as the desired replicas (after applying {Min,Max}Replicas).

Follow-up for #282
2021-03-19 12:58:02 +09:00
Yusuke Kuoka
d874a5cfda Fix status.lastRegistrationCheckTime in body must be of type string: \"null\" errors (#407)
Follow-up for #398 and #404
2021-03-19 11:15:35 +09:00
Yusuke Kuoka
c424215044 Do recheck runner registration timely (#405)
Since #392, the runner controller could have taken an unexpectedly long time until it finally noticed that the runner had been registered to GitHub. This patch fixes the issue, so that the controller will notice the successful registration in approximately 1 minute (hard-coded).

More concretely, let's say you had configured a long sync-period like 10m; the runner controller could then have taken approx 10m to notice the successful registration. The original expectation was 1m, because it was intended to recheck every 1m as implemented in #392. It wasn't working as such due to my misunderstanding of how requeueing works.
2021-03-19 11:02:47 +09:00
Yusuke Kuoka
c5fdfd63db Merge pull request #404 from summerwind/fix-reg-update-runner-status-err
Fix `Failed to update runner status for Registration` errors
2021-03-19 08:56:20 +09:00
Yusuke Kuoka
23a45eaf87 Bump chart version 2021-03-19 08:37:17 +09:00
Yusuke Kuoka
dee997b44e Fix Failed to update runner status for Registration errors
Fixes #400
2021-03-19 07:02:00 +09:00
Yusuke Kuoka
2929a739e3 Merge pull request #398 from summerwind/fix-status-last-reg-check-time-type-err
Fix `status.lastRegistrationCheckTime in body must be of type string: \"null\"` error
2021-03-18 10:36:44 +09:00
Yusuke Kuoka
3cccca8d09 Do patch runner status instead of update to reduce conflicts and avoid future bugs
Ref https://github.com/summerwind/actions-runner-controller/pull/398#issuecomment-801548375
2021-03-18 10:31:17 +09:00
Yusuke Kuoka
7a7086e7aa Make error logs more helpful 2021-03-18 10:26:21 +09:00
Yusuke Kuoka
565b14a148 Fix status.lastRegistrationCheckTime in body must be of type string: \"null\" error
Follow-up for #392
2021-03-18 10:20:49 +09:00
Yusuke Kuoka
ecc441de3f Bump chart version 2021-03-18 07:36:22 +09:00
Manabu Sakai
25335bb3c3 Fix typo in certificate.yaml (#396) 2021-03-18 07:33:34 +09:00
Yusuke Kuoka
9b871567b1 Fix wildcard in middle of actionsglob/scaleUpTrigger.githubEvent.checkRun.names not working (#395)
actionsglob patterns like `foo-*-bar` were not working correctly. Tests and the implementation were enhanced to support them correctly.
2021-03-17 06:46:48 +09:00
Balazs Gyurak
264cf494e3 Fix "pole" typo in README (#394)
I think these should be "poll".
2021-03-17 06:34:01 +09:00
Yusuke Kuoka
3f23501b8e Reduce "No runner matching the specified labels was found" errors while runner replacement (#392)
We occasionally encountered those errors while the underlying RunnerReplicaSet was being recreated/replaced on a RunnerDeployment.Spec.Template update. It turned out to be because the RunnerDeployment controller was waiting for the runner pod to become `Running`, instead of waiting for the new replacement runner to have registered to GitHub. This fixes that by setting Runner.Status.Phase to `Running` only after the runner in the runner pod appears to be registered.

A side-effect of this change is that the runner controller would call the "ListRunners" GitHub Actions API more often. I've reviewed and improved the runner controller code and Runner CRD to keep the number of calls to a minimum. In most cases, ListRunners should be called only twice for each runner creation.
2021-03-16 10:52:30 +09:00
Yusuke Kuoka
5530030c67 Disable metrics-based autoscaling by default when scaleUpTriggers are enabled (#391)
Relates to https://github.com/summerwind/actions-runner-controller/pull/379#discussion_r592813661
Relates to https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793266609

When you define HRA.Spec.ScaleUpTriggers[] but not HRA.Spec.Metrics[], the HRA controller will now enable ScaleUpTriggers alone instead of automatically enabling TotalNumberOfQueuedAndInProgressWorkflowRuns. This allows you to use ScaleUpTriggers alone, so that the autoscaling is done without calling the GitHub API at all, which should greatly decrease the chance of GitHub API calls getting rate-limited.
2021-03-14 11:03:00 +09:00
Yusuke Kuoka
8d3a83b07a Add CheckRun.Names scale-up trigger configuration (#390)
This allows you to trigger autoscaling depending on check_run names (i.e. Actions job names). If you want to differentiate the scale amount only for a specific job, or want to scale only on a specific job, try this.
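A hedged scaleUpTriggers sketch targeting specific job names; `names` comes from this commit, while the surrounding check_run fields are assumptions based on the existing trigger:

```
scaleUpTriggers:
- githubEvent:
    checkRun:
      types: ["created"]         # assumed existing check_run trigger fields
      status: "queued"
      names: ["build", "test"]   # scale only for these check_run (job) names
  amount: 1
  duration: "5m"
```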
2021-03-14 10:21:42 +09:00
callum-tait-pbx
a6270b44d5 docs: fix typos and add PR link (#379)
* docs: fix typos and add PR link

* docs: changes based on feedback

* docs: fixing numbers in list

* docs: grammer

* docs: better wording
2021-03-12 08:52:34 +09:00
Brandon Kimbrough
2273b198a1 Add ability to set the MTU size of the docker in docker container (#385)
* adding ability to set docker in docker MTU size

* safeguards to only set MTU env var if it is set
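A minimal sketch, assuming the MTU is surfaced as a `dockerMTU` field on the runner template spec (documented later in #482):

```
spec:
  template:
    spec:
      repository: example/repo   # illustrative repository
      dockerMTU: 1400            # MTU passed to the docker-in-docker sidecar
```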
2021-03-12 08:44:49 +09:00
Yusuke Kuoka
3d62e73f8c Fix PercentageRunnersBusy scaling not working (#386)
PercentageRunnersBusy seems to have regressed since #355 because RunnerDeployment.Spec.Selector is empty by default and the HRA controller was using that empty selector to query runners, which somehow returned 0 runners. This fixes that by using the newly added automatic `runner-deployment-name` label for the default runner label and the selector, which avoids querying with an empty selector.

Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-795200205
2021-03-11 20:16:36 +09:00
Yusuke Kuoka
f5c639ae28 Make webhook-based autoscaler github event logs more operator-friendly (#384)
Adds fields like `pullRequest.base.ref` and `checkRun.status` that are useful for verifying the autoscaling behaviour without browsing GitHub.
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-794175312
2021-03-10 09:40:44 +09:00
Yusuke Kuoka
81016154c0 GITHUB_APP_PRIVATE_KEY can now be the content of the key (#383)
Resolves #382
2021-03-10 09:37:15 +09:00
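As a sketch, the controller container could now be given the key content directly via the environment variable, for example from a Secret (the Secret name and key below are illustrative, only the variable name comes from this change):

```
env:
- name: GITHUB_APP_PRIVATE_KEY
  # The value may now be the PEM content itself rather than a file path
  valueFrom:
    secretKeyRef:
      name: controller-manager
      key: github_app_private_key
```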
Yusuke Kuoka
728829be7b Fix panic on scaling organizational runners (#381)
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793287133
2021-03-09 15:03:47 +09:00
Yusuke Kuoka
c0b8f9d483 Merge pull request #380 from summerwind/ns-flag
Use --watch-namespace flag to restrict the namespace to watch
2021-03-09 15:03:32 +09:00
Yusuke Kuoka
ced1c2321a Fix chart-testing failing due to conflict between authSecret and dummySecret 2021-03-09 14:54:55 +09:00
Yusuke Kuoka
1b8a656051 Use --watch-namespace flag to restrict the namespace to watch
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793172995
2021-03-09 09:46:21 +09:00
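A minimal sketch of passing the new flag to the controller container (the namespace value is illustrative):

```
containers:
- name: manager
  args:
  # Restrict the controller to resources in a single namespace
  - "--watch-namespace=actions-runner-system"
```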
Rob Whitby
1753fa3530 handle GET requests in webhook hra (#378) 2021-03-09 08:46:27 +09:00
Johannes Nicolai
8c0f3dfc79 Set runner group for runners with enterprise scope (#376)
* so far, runner group parameter is only set for runners with org scope
* now set group for enterprise runners as well
* removed null check for org scope as either org or enterprise will be set
2021-03-08 09:18:23 +09:00
Rob Whitby
dbda292f54 fix typo in examples (#373) 2021-03-08 09:18:10 +09:00
callum-tait-pbx
550a864198 chore: bumping helm chart (#372)
PR 355 made changes to the CRDs but didn't bump the version
2021-03-05 20:27:52 +09:00
Yusuke Kuoka
4fa5315311 Fix possible flapping autoscale on runner update (#371)
Addresses https://github.com/summerwind/actions-runner-controller/pull/355#discussion_r587199428
2021-03-05 10:21:20 +09:00
Hiroshi Muraoka
11e58fcc41 Manage runner with label (#355)
* Update RunnerDeploymentSpec to have Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update RunnerReplicaSetSpec to have Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Add CloneSelectorAndAddLabel to add Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Fix tests

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Use label to find RunnerReplicaSet/Runner

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Update controller-gen versions in CRD

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update autoscaler to list Pods with labels

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Add debug log

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Modify RunnerDeployment tests

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Modify RunnerReplicaset test

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Modify integration test

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Use RunnerDeployment Template Labels as the default selector for backward compatibility

* Fix labeling

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update func in Eventually to return (int, error)

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update RunnerDeployment controller not to use label selector

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Fix potential replicaset controller breakage on replicaset created before v0.17.0

* Fix errors on existing runner replica sets

* Ensure RunnerReplicaSet Spec Selector addition does not break controller

* Ensure RunnerDeployment Template.Spec.Labels change does result in template hash change

* Fix comment

Co-authored-by: binoue <banji-inoue@cybozu.co.jp>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-03-05 10:15:39 +09:00
Mike Perry
f220fefe92 Update README.md (#370) 2021-03-05 09:17:32 +09:00
Hidetake Iwata
56b4598d1d Fix helm template error when webhook server is enabled (#365)
* Fix include block in githubwebhook.deployment.yaml

* Fix include block in githubwebhook.secrets.yaml
2021-03-03 09:21:58 +09:00
Taehyun Kim
8f977dbe48 Fix various bugs in helm chart (#364)
* Fix wrong trim

* add missing MutatingWebhookConfiguration.webhooks[*].sideEffects

* fix missing admissionReviewVersions

* admissionregistration.k8s.io/v1 for kustomization manifests

* revert webhook config
2021-03-03 09:21:20 +09:00
Yusuke Kuoka
9ae3551744 Remove unnecessary GitHub API calls (#363)
The controller made 2 extra, redundant calls to the List Workflow Runs API.

Ref #362
2021-03-02 10:55:30 +09:00
Rolf Ahrenberg
05ad3f5469 Set default python (#361) 2021-03-01 09:45:13 +09:00
callum-tait-pbx
9c7372a8e0 docs: styling fixes (#359)
* docs: styling fixes

* docs: grammar fixes
2021-03-01 09:44:35 +09:00
Yusuke Kuoka
584590e97c Use patch instead of update to alleviate HRA conflict on webhook (#358)
We sometimes see the integration test fail because the runner replicas do not reach the expected number in a timely manner. After investigating a bit, this turned out to be because HRA updates from the webhook-based autoscaler and the HRA controller were conflicting. This changes the controllers to use Patch instead of Update to make conflicts less likely to happen.

I have also updated the HRA controller to use Patch when updating RunnerDeployment.

Overall, these changes should make webhook-based autoscaling more reliable due to fewer conflicts.
2021-02-26 10:17:09 +09:00
Yusuke Kuoka
d18884a0b9 Fix HRA expired cache entries not cleaned up (#357)
Fixes #356
2021-02-26 09:54:24 +09:00
callum-tait-pbx
f987571b64 Improve docs (#303) 2021-02-26 09:32:18 +09:00
Taehyun Kim
450e384c4c Update helm chart (#343)
* add replicaCount

* Add authSecret.existingSecret

* set image.tag null by default

* implement ingress for githubwebhook server

* fix deprecated and secretName template

* backward compat .authSecret.enabled

* existingSecret for github webhook secret

* use secretName template

* set default secret names

* do not use app version based image tag

* create and name variable for secrets
2021-02-26 09:26:51 +09:00
Yusuke Kuoka
e9eef04993 Fix old HRA capacity reservations not cleaned up (#354)
Similar to #348 for #346, but for HRA.Spec.CapacityReservations usually modified by the webhook-based autoscaler controller.
This patch tries to fix that by improving the webhook-based autoscaler controller to omit expired reservations on updating HRA spec.
2021-02-25 11:08:00 +09:00
Yusuke Kuoka
598dd1d9fe Fix incorrect DESIRED on `kubectl get hra` (#353)
`kubectl get horizontalrunnerautoscalers.actions.summerwind.dev` shows HRA.status.desiredReplicas as the DESIRED count. However, the value had not been taking capacityReservations into account, which resulted in an incorrect count being shown when you used the webhook-based autoscaler or the capacityReservations API directly. This fixes that.
2021-02-25 10:32:09 +09:00
Yusuke Kuoka
9890a90e69 Improve webhook-based autoscaler log (#352)
The controller had been writing confusing messages like the below on missing scale target:

```
Found too many scale targets: It must be exactly one to avoid ambiguity. Either set WatchNamespace for the webhook-based autoscaler to let it only find HRAs in the namespace, or update Repository or Organization fields in your RunnerDeployment resources to fix the ambiguity.{"scaleTargets": ""}
```

This fixes that, while improving many kinds of messages written during reconciliation, so that the error messages are more actionable.
2021-02-25 10:07:41 +09:00
Yusuke Kuoka
9da123ae5e Fix integration test flakiness (#351)
Ref https://github.com/summerwind/actions-runner-controller/pull/345#issuecomment-785015406
2021-02-25 09:30:32 +09:00
Johannes Nicolai
4d4137aa28 Avoid zombie runners that missed token expiration by a bit (#345)
* if a new runner pod was scheduled to start up right before a
registration token expired, it would not get a new registration token and would go into
an infinite update loop (until #341 kicks in)
* if registration tokens get updated a little before they actually
expire, pods that are just starting up are much more likely to get a working token
2021-02-25 09:07:49 +09:00
Yusuke Kuoka
022007078e Compact excessive error message on runnerreplicaset status update conflict (#350)
We occasionally see logs like the below:

```
2021-02-24T02:48:26.769Z ERROR Failed to update runner status {"runnerreplicaset": "testns-244ol/example-runnerdeploy-j5wzf", "error": "Operation cannot be fulfilled on runnerreplicasets.actions.summerwind.dev \"example-runnerdeploy-j5wzf\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
github.com/summerwind/actions-runner-controller/controllers.(*RunnerReplicaSetReconciler).Reconcile
/home/runner/work/actions-runner-controller/actions-runner-controller/controllers/runnerreplicaset_controller.go:207
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:88
2021-02-24T02:48:26.769Z ERROR controller-runtime.controller Reconciler error {"controller": "testns-244olrunnerreplicaset", "request": "testns-244ol/example-runnerdeploy-j5wzf", "error": "Operation cannot be fulfilled on runnerreplicasets.actions.summerwind.dev \"example-runnerdeploy-j5wzf\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:88
```

which can be compacted into a one-liner, without the useless stack trace and without double-logging the same error from both the logger and the controller.
2021-02-25 09:01:02 +09:00
Johannes Nicolai
31e5e61155 Log correct runner that was deleted (#349) 2021-02-25 08:38:55 +09:00
Aditya Purandare
1d1453c5f2 Fix user used for dind runner group permissions (#337) 2021-02-24 19:06:52 +09:00
Yusuke Kuoka
e44e53b88e Fix failure while saving HRA status after running controller for a while (#348)
Fixes #346
2021-02-24 11:20:21 +09:00
Yusuke Kuoka
398791241e Fix runner release workflow to do docker-push (#347)
Apparently I mistakenly removed the `push` option from the workflow in #323, which resulted in the new runner build #323 not being pushed. This fixes that.
2021-02-24 11:08:33 +09:00
Yusuke Kuoka
991535e567 Fix panic on webhook for user-owned repository (#344)
* Fix panic on webhook for user-owned repository

Follow-up for #282 and #334
2021-02-23 08:05:25 +09:00
Johannes Nicolai
2d7fbbfb68 Handle offline runners gracefully (#341)
* if a runner pod starts up with an invalid token, it will go into an
infinite retry loop, appearing as RUNNING from the outside
* normally, this error situation is detected because no corresponding
runner object exists in GitHub and the pod will get removed after the
registration timeout
* if the GitHub runner object already existed before - e.g. because a
finalizer was not properly run as part of a partial Kubernetes crash -
the runner will always stay in running mode; even updating the
registration token will not kill the problematic pod
* introducing RunnerOffline exception that can be handled in runner 
controller and replicaset controller
* as runners are offline when a pod is completed and marked for restart, 
only do additional restart checks if no restart was already decided, 
making code a bit cleaner and saving GitHub API calls after each job 
completion
2021-02-22 10:08:04 +09:00
Yusuke Kuoka
dd0b9f3e95 Merge pull request #340 from int128/integration-test-check-run
Fix index key to find HRA in GitHub webhook handler
2021-02-22 09:49:54 +09:00
Yusuke Kuoka
7cb2bc84c8 Merge pull request #334 from summerwind/integration-test-check-run
Add integration test for autoscaling on check_run webhook event
2021-02-22 09:38:07 +09:00
Hidetake Iwata
b0e74bebab Fix index key to find HRA in GitHub webhook handler 2021-02-20 21:25:23 +09:00
Hidetake Iwata
dfbe53dcca Fix webhook payload in integration test 2021-02-20 21:08:23 +09:00
Yusuke Kuoka
ebc3970b84 Add integration test for autoscaling on check_run webhook event 2021-02-19 10:33:04 +09:00
Hidetake Iwata
1ddcf6946a Fix nil pointer error on received check_run event (#331)
* Reproduce nil pointer error on received check_run event

* Fix nil pointer error on received check_run event
2021-02-18 20:22:36 +09:00
Yusuke Kuoka
cfbaad38c8 Merge pull request #328 from int128/fix-port-name-length
Changes:

1. Fix length of github-webhook-server port name
2. Add a cluster role binding for github-webhook-server
3. Remove --enable-leader-election from github-webhook-server
2021-02-18 20:20:39 +09:00
Yusuke Kuoka
67f6de010b feat: Common runner labels configurable per controller (#327)
* feat: Common runner labels configurable per controller

Ref #321
2021-02-18 20:19:08 +09:00
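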
Hidetake Iwata
2db608879a Remove --enable-leader-election from github-webhook-server 2021-02-18 16:51:47 +09:00
Hidetake Iwata
2c4a6ca90b Add cluster role binding for github-webhook-server 2021-02-18 16:49:24 +09:00
Hidetake Iwata
829bf20449 Fix length of github-webhook-server port name 2021-02-18 16:42:15 +09:00
Reinier Timmer
be13322816 Update runner to 2.277.1 (#322)
* Update runner to 2.277.1

* Update build-and-release-runners.yml

* integration test condition

Don't run integration tests when only updating the runner image

* fixup! integration test condition

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-02-18 09:29:53 +09:00
Johannes Nicolai
7f4a76a39b Also log into DockerHub for release event (#326)
* so far, only push events would trigger the DockerHub login step
* hence, attempts to release would fail because of a permission problem (tested locally)
* adding OR condition to also login in case a release got published
2021-02-18 08:54:44 +09:00
callum-tait-pbx
0fce761686 fix: add truncate to ensure service kinds have valid names (#325)
* fix: adding truncate for service kinds

* chore : bumping chart version
2021-02-18 08:43:48 +09:00
Yusuke Kuoka
c88ff44518 Fix wip.yml workflow for building controller canary tags (#323)
In #306 we seem to have accidentally updated the wrong workflow, which was for runner builds. This updates the one for the controller.

Resolves #302
2021-02-18 08:42:24 +09:00
Yusuke Kuoka
2fdf35ac9d Refactor integration test to use helpers (#320)
This should make the test code a bit more DRY and readable.
2021-02-17 10:23:35 +09:00
Johannes Nicolai
6cce3fefc5 Add project to awesome-runners list (#319) 2021-02-17 09:14:42 +09:00
Yusuke Kuoka
eb2eaf8130 Fix TotalNumberOfQueuedAndInProgressWorkflowRuns to work with a lot of remaining completed jobs (#316)
I have heard from a user that they have hundreds of thousands of `status=completed` workflow runs in their repository, which effectively blocked TotalNumberOfQueuedAndInProgressWorkflowRuns from working because the excessive paginated requests hit the GitHub API rate limit.

This fixes that by separating the list-workflow-runs call into two - one for `queued` and one for `in_progress` - which raises the minimum number of API calls from 1 to 2, but allows it to work regardless of the number of remaining `completed` workflow runs.
2021-02-16 18:55:55 +09:00
callum-tait-pbx
7bf712d0d4 fix: duplicate name attribute (#318) 2021-02-16 18:52:08 +09:00
Yusuke Kuoka
7d024a6c05 Fix "duplicate metrics collector registration attempted" errors at startup (#317)
I have seen this error a lot in our integration tests. It turned out to be due to https://github.com/kubernetes-sigs/controller-runtime/issues/484 and is fixed with this change.
2021-02-16 18:51:33 +09:00
Yusuke Kuoka
434823bcb3 scale{Up,Down}Adjustment to add/remove constant number of replicas on scaling (#315)
* `scale{Up,Down}Adjustment` to add/remove constant number of replicas on scaling

Ref #305

* Bump chart version
2021-02-16 17:16:26 +09:00
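A sketch of the new fields on a PercentageRunnersBusy metric, assuming you prefer adding/removing a fixed number of replicas over multiplying by a factor (values are illustrative):

```
metrics:
- type: PercentageRunnersBusy
  scaleUpThreshold: '0.75'
  scaleDownThreshold: '0.25'
  # Add/remove a constant number of replicas instead of using scale{Up,Down}Factor
  scaleUpAdjustment: 2
  scaleDownAdjustment: 1
```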
Yusuke Kuoka
35d047db01 Fix enterprise runners misusing cached token (#314)
Follow-up for #290
2021-02-16 12:56:52 +09:00
Yusuke Kuoka
f1db6af1c5 Add repository runners support for PercentageRunnersBusy-based autoscaling (#313)
Resolves #258
2021-02-16 12:44:51 +09:00
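A minimal sketch of PercentageRunnersBusy-based autoscaling for a repository-scoped RunnerDeployment (names and thresholds are illustrative):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runnerdeploy-autoscaler
spec:
  scaleTargetRef:
    # A RunnerDeployment whose template.spec.repository is set
    name: example-runnerdeploy
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'
    scaleDownThreshold: '0.25'
    scaleUpFactor: '2'
    scaleDownFactor: '0.5'
```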
Hidetake Iwata
4f3f2fb60d Add metrics for GitHub API rate limit (#312) 2021-02-16 09:58:09 +09:00
Johannes Nicolai
2623140c9a Make log message less scary (#311)
* the reconciliation loop is often much faster than the runner startup,
so change the runner-not-found related messages to debug level and also mention the
possibility that the runner just needs more time
2021-02-16 09:55:55 +09:00
Johannes Nicolai
1db9d9d574 Use ARM64 compatible kube-rbac-proxy from upstream (#310)
* as pointed out in #281, the currently used image for the
kube-rbac-proxy - gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1 - does not
have an ARM64 image
* hence, trying to use the standard deployment manifest / helm chart will
fail on ARM64 systems
* replaced image with quay.io/brancz/kube-rbac-proxy:v0.8.0 which is the 
latest version from the upstream maintainer 
(https://github.com/brancz/kube-rbac-proxy/blob/master/Makefile#L13)
* successfully tested on both AMD64 and ARM64 clusters
* fixes #281
2021-02-16 09:55:03 +09:00
callum-tait-pbx
d046350240 chore: bumping helm chart semantically (#296)
* chore: bumping helm chart semantically

* chore: removing the app version config
2021-02-16 09:45:56 +09:00
callum-tait-pbx
cca4d249e9 feat: create workflow for runner releases (#306) 2021-02-16 09:42:28 +09:00
Johannes Nicolai
bc8bc70f69 Fix rate limit and runner registration logic (#309)
* errors.Is compares all members of a struct to return true, which never
happened
* switched to a type check instead of an exact value check
* notRegistered was using double negation in an if statement, which led to
unregistering runners after the registration timeout
2021-02-15 09:36:49 +09:00
Johannes Nicolai
34c6c3d9cd Pod eviction policy examples (crashed nodes) (#308)
* ... otherwise it will take 40 seconds (until a node is detected as unreachable) + 5 minutes (until pods are evicted from unreachable/crashed nodes)
* pods stuck in "Terminating" status on unreachable nodes will only be freed once #307 gets merged
2021-02-15 09:33:01 +09:00
Johannes Nicolai
9c8d7305f1 Introduce pod deletion timeout and forcefully delete stuck pods (#307)
* if a k8s node becomes unresponsive, the kube controller will soft
delete all pods after the eviction time (default 5 mins)
* as long as the node stays unresponsive, the pod will never leave the
last status and hence the runner controller will assume that everything
is fine with the pod and will not try to create new pods
* this can result in a situation where a horizontal autoscaler thinks
that none of its runners are currently busy and will not schedule any
further runners / pods, resulting in a broken  runner deployment until the
runnerreplicaset is deleted or the node comes back online
* introducing a pod deletion timeout (1 minute) after which the runner
controller will try to reboot the runner and create a pod on a working
node
* use forceful deletion and requeue for pods that have been stuck for
more than one minute in terminating state
* gracefully handling race conditions if pod gets finally forcefully deleted within
2021-02-15 09:32:28 +09:00
Yusuke Kuoka
addcbfa7ee Fix runner registration timeout (#301)
Fixes #300
2021-02-12 10:00:20 +09:00
Yusuke Kuoka
bbb036e732 feat: Prevent blocking on transient runner registration failure (#297)
This enhances the controller to recreate the runner pod if the corresponding runner has failed to register itself to GitHub within 10 minutes (currently hard-coded).

It should alleviate #288 in case the root cause is some kind of transient failure (network unreliability, GitHub being down, a temporary compute resource shortage, etc).

Formerly you had to manually detect and delete such pods, or even force-delete the corresponding runners, to unblock the controller.

With this enhancement, the controller deletes the pod automatically 10 minutes after pod creation, which results in the controller creating another pod that might work.

Ref #288
2021-02-09 10:17:52 +09:00
Yusuke Kuoka
9301409aec fix: Paginate ListRepositoryWorkflowRuns (#295)
When we used `QueuedAndInProgressWorkflowRuns`-based autoscaling, it fetched and considered only the first 30 workflow runs at reconciliation time. This may have resulted in unreliable scaling behaviour, like scale-in/out not happening when it was expected.
2021-02-09 10:13:53 +09:00
Yusuke Kuoka
ab1c39de57 feat: HorizontalRunnerAutoscaler Webhook server (#282)
* feat: HorizontalRunnerAutoscaler Webhook server

This introduces a webhook server that responds to GitHub `check_run`, `pull_request`, and `push` events by scaling up the matched HorizontalRunnerAutoscaler by 1 replica. This allows you to immediately add "resource slack" for future GitHub Actions job runs, without waiting for the next sync period to add the needed runners.

This feature is highly inspired by https://github.com/philips-labs/terraform-aws-github-runner. terraform-aws-github-runner can manage one set of runners per deployment, whereas actions-runner-controller with this feature can manage as many sets of runners as you declare with HorizontalRunnerAutoscaler and RunnerDeployment pairs.

On each GitHub event received, the webhook server queries repository-wide and organizational runners from the cluster and searches for the single target to scale up. The webhook server tries to match HorizontalRunnerAutoscaler.Spec.ScaleUpTriggers[].GitHubEvent.[CheckRun|Push|PullRequest] against the event, and if it finds exactly one HRA, that is the scale target. If no target, or two or more targets, are found among repository-wide runners, it does the same search on organizational runners.

Changes:

* Fix integration test
* Update manifests
* chart: Add support for github webhook server
* dockerfile: Include github-webhook-server binary
* Do not import unversioned go-github
* Update README
2021-02-07 17:37:27 +09:00
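A sketch of a HorizontalRunnerAutoscaler using the webhook-based scale-up trigger described above, here reacting to pull_request events (names, types, and branches are illustrative):

```
spec:
  scaleTargetRef:
    name: example-runner-deployment
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
  - githubEvent:
      pullRequest:
        types: ["synchronize"]
        branches: ["main"]
    # Add one replica per matched event, reverted after the duration elapses
    amount: 1
    duration: "5m"
```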
alex-mozejko
a4350d0fc2 bug-fix: patched dir owned by runner (#284)
* bug-fix: patched dir owned by runner

* always build with latest runner version

* Revert "always build with latest runner version"

This reverts commit e719724ae9fe92a12d4a087185cf2a2ff543a0dd.

* Also patch dindrunner.Dockerfile

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-02-07 17:21:10 +09:00
callum-tait-pbx
2146c62c9e chore: bumping Helm chart version (#294)
* chore: adding mac rubbish to gitignore

* chore: bumping chart version
2021-02-07 16:46:19 +09:00
Jesse Haka
28e80a2d28 Add support for enterprise runners (#290)
* Add support for enterprise runners

* update docs
2021-02-05 09:31:06 +09:00
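A minimal sketch of an enterprise-scoped runner as enabled by this change (the enterprise slug is illustrative):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-enterprise-runner
spec:
  replicas: 2
  template:
    spec:
      # Register the runners at the enterprise level instead of a repository or organization
      enterprise: my-enterprise
```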
Tom Bamford
831db9ee2a Added github.sha to DockerHub push (#286)
* Added GITHUB.RUN_NUMBER to DockerHub push

* switch run_number to sha on docker tag

* re-add mutable tags for backwards compatibility

* truncate to short SHA (7 chars)

* behaviour workaround

* use ENV to define sha_short

* use ::set-output to define sha_short

* bump action
2021-02-04 09:29:32 +09:00
Donovan Muller
4d69e0806e Update GitHub runner version (#280) 2021-02-02 14:06:08 +09:00
Donovan Muller
d37cd69e9b feat/helm: Bump appVersion to 0.6.1 release (#272)
* feat/helm: Bump appVersion to 0.6.1 release

* Also bump chart version to trigger a new chart release

Co-authored-by: Yusuke Kuoka <c-ykuoka@zlab.co.jp>
2021-01-29 09:29:43 +09:00
Yusuke Kuoka
a2690aa5cb Update README.md
Follow-up for #275
2021-01-29 09:29:26 +09:00
Clément
da020df0fd docs: fix install installation method (#275) 2021-01-29 09:28:34 +09:00
Jonas Lergell
6c64ae6a01 Actually use 'dockerdContainerResources' to set resources on the dind container (#273) 2021-01-29 09:18:28 +09:00
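A sketch of the setting this fixes, assuming you want to cap the dind sidecar's resources (values are illustrative):

```
spec:
  template:
    spec:
      repository: myorg/myrepo
      # Resource requests/limits applied to the docker-in-docker sidecar container
      dockerdContainerResources:
        limits:
          cpu: "1"
          memory: 2Gi
        requests:
          cpu: 500m
          memory: 1Gi
```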
Yusuke Kuoka
42c7d0489d chart: Bump to 0.2.0 2021-01-25 09:14:49 +09:00
Donovan Muller
b3bef6404c Add support for additional environment variables (#271) 2021-01-25 09:00:03 +09:00
David Young
1127c447c4 Add GitHub Actions to publish helm chart (#257)
* Add chart workflows (#1)

* Add chart workflows

* Fix publishing step in CI

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Update CI on push-to-master (#3)

* Put helm installation step in the correct CI job

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Put helm installation step in the correct CI job (#4)

* Update on-push-master-publish-chart.yml

* Remove references to certmanager dependency

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Add ability to customize kube-rbac-proxy image

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Only install cert-manager if we're going to spin up KinD

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-01-24 15:37:01 +09:00
Yusuke Kuoka
ace95d72ab Fix self-update failures due to /runner/externals mount (#253)
* Fix self-update failures due to /runner/externals mount

Fixes #252

* Tested Self-update Fixes (#269)

Adding fixes to #253 as confirmed and tested in https://github.com/summerwind/actions-runner-controller/issues/264#issuecomment-764549833 by @jolestar, @achedeuzot and @hfuss 🙇 🍻

Co-authored-by: Hayden Fuss <wifu1234@gmail.com>
2021-01-24 10:58:35 +09:00
Johannes Nicolai
42493d5e01 Adding --name-space parameter in example (#259)
* when setting a GitHub Enterprise server URL without a namespace, an error occurs: "error: the server doesn't have a resource type "controller-manager"
* setting default namespace "actions-runner-system" makes the example work out of the box
2021-01-22 10:12:04 +09:00
Johannes Nicolai
94e8c6ffbf minReplicas <= desiredReplicas <= maxReplicas (#267)
* ensure that minReplicas <= desiredReplicas <= maxReplicas no matter what
* before this change, if the number of runners was much larger than the max number, the applied scale down factor might still result in a desired value > maxReplicas
* if, due to resource constraints in the cluster, runners are permanently restarted, the number of runners could go up by more than the inverse of the scale-down factor before the next reconciliation round, resulting in a situation where the number of runners climbs even though it should actually go down
* by checking that desiredReplicas is always <= maxReplicas, infinite scale-up loops can be prevented
2021-01-22 10:11:21 +09:00
callum-tait-pbx
563c79c1b9 feat/helm: add manager secret to Helm chart (#254)
* feat: adding manager secret to Helm

* fix: correcting secret data format

* feat: adding in common labels

* fix: updating default values to have config

The auth config needs to be commented out by default as we don't want to deploy both configs empty. This may break stuff, so we want the user to actively uncomment the auth method they want instead

* chore: updating default format of cert

* chore: wording
2021-01-22 10:03:25 +09:00
Johannes Nicolai
cbb41cbd18 Updating custom container example (#260)
* use latest instead of outdated version
* use sudo for package install (required)
* use sudo for package meta data removal (required)
2021-01-22 09:57:42 +09:00
Johannes Nicolai
64a1a58acf GitHub runner groups have to be created first (#261)
* in contrast to runner labels, GitHub runner groups are not automatically created
2021-01-22 09:52:35 +09:00
Reinier Timmer
524cf1b379 Update runner to v2.275.1 (#239) 2020-12-18 08:38:39 +09:00
ZacharyBenamram
0dadddfc7d Update README for "PercentageRunnersBusy" HRA metric type (#237)
* adding readme for new hpa scheme

* callum's comments

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-17 10:21:27 +09:00
ZacharyBenamram
48923fec56 Autoscaling: Percentage runners busy - remove magic number used for round up (#235)
* remove magic number for autoscaling

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-15 14:38:01 +09:00
ZacharyBenamram
466b30728d Add "PercentageRunnersBusy" horizontal runner autoscaler metric type (#223)
* hpa scheme based off busy runners

* running make manifests

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-13 08:48:19 +09:00
callum-tait-pbx
c13704d7e2 feat: custom labels (#231)
Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-12-13 08:33:04 +09:00
callum-tait-pbx
fb49bbda75 feat: adding helm config for dind sidecar (#232)
Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-12-13 08:31:24 +09:00
Reinier Timmer
8d6f77e07c Remove beta GitHub client implementations (#228) 2020-12-10 09:08:51 +09:00
Yusuke Kuoka
dfffd3fb62 feat: EKS IAM Roles for Service Accounts for Runner Pods (#226)
One of the pod recreation conditions has been modified to use a hash of the runner spec, so that the controller does not keep restarting pods mutated by admission webhooks. This naturally allows us, for example, to use IRSA for EKS, which requires its admission webhook to mutate the runner pod with additional IRSA-related volumes, volume mounts, and env vars.

Resolves #200
2020-12-08 17:56:06 +09:00
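A sketch of how this enables IRSA on EKS: the runner pod only references a ServiceAccount, and the IRSA admission webhook mutates the pod with the extra volumes and env (role ARN and names are illustrative):

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: runner
  annotations:
    # IRSA: the EKS pod identity webhook injects credentials for this role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-runner-role
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  template:
    spec:
      repository: myorg/myrepo
      serviceAccountName: runner
```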
Juho Saarinen
f710a54110 Don't compare runner connection token at restart-need check (#227)
Fixes #143
2020-12-08 08:48:35 +09:00
Yusuke Kuoka
85c29a95f5 runner: Add support for ruby/setup-ruby (#224)
It turned out that previous versions of the runner images were unable to run actions that require `AGENT_TOOLSDIRECTORY` or `libyaml` to exist in the runner environment. One notable example of such actions is [`ruby/setup-ruby`](https://github.com/ruby/setup-ruby).

This change adds support for those actions by setting up AGENT_TOOLSDIRECTORY and installing libyaml-dev within the runner images.
2020-12-06 11:53:38 +09:00
Erik Nobel
a2b335ad6a Github pkg: Bump github package to version 33 (#222) 2020-12-06 10:01:47 +09:00
Tom Bamford
56c57cbf71 ci: Replace deprecated crazy-max buildx action to use alternative docker actions (#197)
Deprecated action `crazy-max/setup-buildx-action@v1` has been replaced with:
  `docker/setup-qemu-action@v1`
  `docker/setup-buildx-action@v1`
  `docker/login-action@v1`
  `docker/build-push-action@v2`

See: https://github.com/crazy-max/ghaction-docker-buildx
2020-12-06 10:00:10 +09:00
Ahmad Hamade
837563c976 Adding priorityClassName to helm chart (#215)
* Adding priorityClassName to helm chart and README file

* removed README and revert chart version
2020-11-30 09:04:25 +09:00
ZacharyBenamram
df99f394b4 Remove 10 minute buffer to token expiration (#214)
Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-11-30 09:03:27 +09:00
Shinnosuke Sawada
be25715e1e Use TLS for secure docker connection (#192) 2020-11-30 08:57:33 +09:00
Yusuke Kuoka
4ca825eef0 Publish runner images for v2.274.2
Ref #212
2020-11-27 08:49:58 +09:00
Yusuke Kuoka
e5101554b3 Fix release workflow to not use add-path
Fixes #208
2020-11-26 08:39:03 +09:00
Reinier Timmer
ee8fb5a388 parametrized working directory (#185)
* parametrized working directory

* manifests v3.0
2020-11-25 08:55:26 +09:00
Erik Nobel
4e93879b8f [BUG?]: Create mountpoint for /externals/ (#203)
* runner/controller: Add externals directory mount point

* Runner: Create hack for moving content of /runner/externals/ dir

* Externals dir Mount: mount examples for '__e/node12/bin/node' not found error
2020-11-25 08:53:47 +09:00
Shinnosuke Sawada
6ce6737f61 add dockerEnabled document (#193)
Follow-up for #191
2020-11-17 09:31:34 +09:00
Shinnosuke Sawada
4371de9733 add dockerEnabled option (#191)
Add the dockerEnabled option for users who do not need Docker and do not want to run a privileged container.
If `dockerEnabled == false`, the dind container does not run and there are no privileged containers.

Do the same as closed #96
2020-11-16 09:41:12 +09:00
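A minimal sketch of a runner with Docker disabled, so no privileged dind sidecar is created (resource names are illustrative):

```
spec:
  template:
    spec:
      repository: myorg/myrepo
      # Skip the docker-in-docker sidecar entirely
      dockerEnabled: false
```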
Yusuke Kuoka
1fd752fca2 Use tcp DOCKER_HOST instead of sharing docker.sock (#177)
The docker:dind container creates `/var/run/docker.sock` with root user and root group,
so the docker command in the runner container needs root privileges to use docker.sock, and docker actions fail because of the lack of permission.

Use a TCP connection between the runner and docker containers instead, so the runner container doesn't need root privileges to run docker and can run docker actions.

Fixes #174
2020-11-16 09:32:29 +09:00
Yusuke Kuoka
ccd86dce69 chart: Update CRDs to make Runner{Deployment,ReplicaSet} replicas optional (#189)
Ref #186
2020-11-14 22:09:06 +09:00
Yusuke Kuoka
3d531ffcdd refactor/sync-period (#188)
* refactor: adding explicit sync-period flag

* docs: adding more detail for sync period config

* docs: spelling 🙃

Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-11-14 22:07:22 +09:00
Yusuke Kuoka
1658f51fcb Make Runner{Deployment,ReplicaSet} replicas actually optional (#186)
If omitted, it properly defaults to 1.

Fixes #64
2020-11-14 22:06:33 +09:00
Yusuke Kuoka
9a22bb5086 Initial version of Helm chart (#187)
Acceptance tests are passing with the chart. In addition to standard chart values, syncPeriod is supported.
Please use it as a foundation for further collaboration.

Ref #184
Inspired by #91
Related #61
2020-11-14 22:06:14 +09:00
Yusuke Kuoka
71436f0466 Final fixes before merging 2020-11-14 22:00:41 +09:00
Yusuke Kuoka
b63879f59f Ensure the chart is passing acceptance tests 2020-11-14 21:58:16 +09:00
Callum Tait
c623ce0765 docs: spelling 🙃 2020-11-14 12:32:14 +00:00
Yusuke Kuoka
01117041b8 chart: controller-manager should not be autoscaled 2020-11-14 20:37:40 +09:00
Yusuke Kuoka
42a272051d chart: Use the correct image 2020-11-14 20:37:22 +09:00
Yusuke Kuoka
6a4c29d30e Set ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm to run acceptance tests with chart 2020-11-14 20:31:37 +09:00
Yusuke Kuoka
ce48dc58e6 Add crds to the chart 2020-11-14 20:31:21 +09:00
Callum Tait
04dde518f0 docs: adding more detail for sync period config 2020-11-14 11:22:09 +00:00
Callum Tait
1b98c8f811 refactor: adding explicit sync-period flag 2020-11-14 11:19:16 +00:00
Yusuke Kuoka
14b34efa77 Start collaborating to develop a Helm chart
Ref #184
2020-11-14 20:15:42 +09:00
Yusuke Kuoka
bbfe03f02b Add acceptance test (#168)
To make it easier to verify that the controller works before submitting/merging PRs and releasing a new version of the controller.
2020-11-14 20:07:14 +09:00
Yusuke Kuoka
8ccf64080c Merge pull request #182 from callum-tait-pbx/docs/improvements
docs/improvements
2020-11-14 12:02:24 +09:00
Callum Tait
492ea4b583 docs: adding a common errors section 2020-11-13 08:38:25 +00:00
Callum Tait
3446d4d761 docs: reordering to be more logical 2020-11-13 08:27:18 +00:00
Javier de Pedro López
1f82032fe3 Improvements in the documentation for permissions and replication (#175) 2020-11-13 09:36:22 +09:00
Elliot Maincourt
5b2272d80a Add read-only permissions to actions for being able to autoscale (#180) 2020-11-13 09:27:01 +09:00
Reinier Timmer
0870250f9b Enable matrix build for runner image (#179) 2020-11-13 09:24:12 +09:00
Shinnosuke Sawada
a4061d0625 gofmt ed 2020-11-12 09:20:06 +09:00
Juho Saarinen
7846f26199 Increase memory limit (#173)
We saw the controller OOMing quite often, so as a first step increase the limit a bit.
2020-11-12 09:14:33 +09:00
Reinier Timmer
8217ebcaac Updated Runner to 2.274.1 (#176)
* Updated Runner to 2.274.1

* Update image tag in workflow
2020-11-12 09:14:06 +09:00
Shinnosuke Sawada
83857ba7e0 use tcp DOCKER_HOST instead of sharing docker.sock 2020-11-12 08:07:52 +09:00
Yusuke Kuoka
e613219a89 Fix token registration broken since v0.11.0 (#167)
Fixes #166
2020-11-11 09:38:05 +09:00
Reinier Timmer
bc35bdfa85 fixed label argument in entrypoint (#162)
* fixed label argument in entrypoint

* Removed quotes from RUNNER_GROUP_ARG
2020-11-11 08:44:51 +09:00
Yusuke Kuoka
ece8fd8fe4 Bump Go to 1.15 (#160)
Closes #104
2020-11-10 17:16:04 +09:00
Dan Webb
dcf8524b5c Adds RUNNER_GROUP argument to the runner registration (#157)
* Adds RUNNER_GROUP argument to the runner registration

Adds the ability to register a runner to a predefined runner_group

Resolves #137

* Update README with runner group example

- Updates the README with instructions of how to add the runner to a
  group
- Fix code fencing for shell and yaml blocks in the README
- Use consistent bullet points (dash not asterisk)
2020-11-10 17:15:54 +09:00
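A minimal sketch of registering runners into a pre-existing runner group via the runner spec (the group name is illustrative):

```
spec:
  template:
    spec:
      organization: myorg
      # The runner group must already exist on GitHub
      group: my-runner-group
```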
Yusuke Kuoka
4eb45d3c7f Fix build error 2020-11-10 17:09:16 +09:00
Juho Saarinen
1c30bdf35b Add GHE URL to transport (#152)
Fixes #149
2020-11-10 17:05:09 +09:00
Yusuke Kuoka
3f335ca628 Fix panic on startup when misconfigured (#154)
Fixes #153
2020-11-10 17:03:33 +09:00
Juho Saarinen
f2a2ab7ede Check token validity only when creating new pod (#159)
Fixes #143
2020-11-10 17:02:30 +09:00
Juho Saarinen
40c5050978 Added support for other than public GitHub URL (#146)
Refactoring a bit
2020-10-28 22:15:53 +09:00
Juho Saarinen
99a53a6e79 Releasing latest controller from master push (#147)
Fixes #135
2020-10-28 22:13:35 +09:00
Yusuke Kuoka
6d78fb07b3 Fix permission error with the default setup since v0.9.4 (#142)
Fixes #138
2020-10-25 11:25:48 +09:00
Yusuke Kuoka
faaca10fba Rename Runner.Spec.dockerWithinRunnerContainer to docker"d"WithinRunnerContainer (#134)
* Rename Runner.Spec.dockerWithinRunnerContainer to dockerdWithinRunnerContainer

Ref https://github.com/summerwind/actions-runner-controller/pull/126#issuecomment-712501790
2020-10-21 21:32:40 +09:00
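A minimal sketch of the renamed field, which runs dockerd inside the runner container instead of as a sidecar (names are illustrative; the runner image must bundle dockerd for this to work):

```
spec:
  template:
    spec:
      repository: myorg/myrepo
      # Run dockerd within the runner container rather than as a dind sidecar
      dockerdWithinRunnerContainer: true
```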
Juho Saarinen
d16dfac0f8 Restart if pod ends up succeeded (#136)
Fixes #132
2020-10-21 21:32:26 +09:00
Juho Saarinen
af483d83da Possibility to define resources for DIND container (#125)
Ref #119
2020-10-21 10:26:19 +09:00
Juho Saarinen
92920926fe Configurable "runner and DinD in a single container" (#126) 2020-10-20 08:48:28 +09:00
Brendan Galloway
7d0bfb77e3 Inject Env Vars into Runner defined Container Spec (#127)
The runner token is now injected into the `runner` container defined within Runner.Spec.Containers[]
2020-10-20 08:43:53 +09:00
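A sketch of a Runner that overrides the `runner` container; with this change the registration-related env vars are injected into that container as well (image and resources are illustrative):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example-runner
spec:
  repository: myorg/myrepo
  containers:
  # Overriding the container named "runner" still receives the injected env vars
  - name: runner
    image: my-registry/my-custom-runner:latest
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
```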
Brendan Galloway
c4074130e8 Patch additional protocol instances during manifest generation (#129)
Fixes #128
2020-10-19 09:57:53 +09:00
Yusuke Kuoka
be2e61f209 Add "alternatives" section to README (#124)
Grow together :)
2020-10-15 08:40:02 +09:00
Juho Saarinen
da818a898a Manifests valid for K8s 1.18 and 1.19 (#123)
Fixes #113
Fixes #116
2020-10-15 08:39:46 +09:00
Jesse Newland
2d250d5e06 Fix release build (#122) 2020-10-14 08:43:14 +09:00
Jesse Newland
231cde1531 Update build-runner workflow to be compatible with forks, fix image push (#117)
Partly revert and enhances #115

This is a follow-up to #115 that replaces the hardcoded `summerwind` portion of the image name with `${{ github.repository_owner }}` to enable contributors to test the image pushing behavior and fixes image building by conditionally passing `--push` to the build step based on the event that triggered the workflow.

After setting the `DOCKER_ACCESS_TOKEN` Secret on my fork of this repository, I was able to use this updated workflow to [build and push](https://github.com/urcomputeringpal/actions-runner-controller/runs/1242793758?check_suite_focus=true) a [set of images](https://hub.docker.com/r/urcomputeringpal/actions-runner/tags) and confirm their functionality. I imagine this will be useful to future contributors who wish to help with the chore of keeping up with https://github.com/actions/runner/releases.
2020-10-13 21:14:36 +09:00
Hayden Fuss
c986c5553d Fixing Docker Build and Push for Runner Image (#115)
A new image tag for the runner stopped being published on master merges due to changes in #86.

This fixes that in the following ways:
- Tests the GH workflow on PRs w/o pushing the images
- Runner and Docker versions are moved from the Makefile to the GH Actions workflow and are passed in as build args
- The GHA workflow runs on PRs, and when the workflow file itself is changed (i.e. a version bump) or the runner Docker source changes (excluding the Makefile since that's just for local dev)
- Images are pushed on push (i.e. a merge)
2020-10-09 09:16:24 +09:00
Harry Gogonis
f12bb76fd1 Update runner to v2.273.5 (#111) 2020-10-08 09:02:01 +09:00
Dominic LoBue
a63860029a Prefer autoscaling based on jobs rather than workflows if available (#114)
Adds the ability to autoscale on jobs in addition to workflows. We fall back to using workflow metrics if job details are not present.

Resolves #89
2020-10-08 09:00:44 +09:00
Yusuke Kuoka
1bc6809c1b Merge pull request #110 from summerwind/ensure-controller-gen-code-manifests-sync
Ensure controller-gen is up-to-date and the code and the manifests are in-sync
2020-10-06 09:44:25 +09:00
Yusuke Kuoka
2e7b77321d Verify manifests on pull request build 2020-10-06 09:30:25 +09:00
Yusuke Kuoka
1e466ad3df Ensure controller-gen is up-to-date and the code and the manifests are in-sync
Follow-up for #95, which added the /finalizers subresource permission, and #103, which upgraded controller-gen from 0.2.4 to 0.3.0
2020-10-06 09:23:03 +09:00
MIℂHΛΞL FѲRИΛRѲ
a309eb1687 Initial multi-arch image commit (#86)
Adding multi-arch image support for `arm64` and `amd64`. This uses Docker's new `buildx` feature; to enable further architectures, more work will be required to update the `runner/Dockerfile` to pull architecture-specific releases.

The Makefile targets should really only be used for local testing and not for releases. Additional work to appropriately tag the release images may need to be added, but for now I've not added that logic.

Fixes: #86 

Signed-off-by: Michael Fornaro <20387402+xUnholy@users.noreply.github.com>
2020-10-05 09:26:46 +09:00
Tomoaki Nakagawa
e8a7733ee7 Change api version of cert manager (#94)
* change apiVersion cert-manager

* change apiVersion kustomization.yaml
2020-10-05 09:13:10 +09:00
Hayden Fuss
729f5fde81 Allowing access to finalizers for all managed resources (#95) 2020-10-05 09:12:01 +09:00
Helder Moreira
7a2fa7fbce runner-controller: do not delete runner if it is busy (#103)
Currently, after refreshing the token, the controller re-creates the runner with the new token. This results in jobs being interrupted. This PR makes sure the pod is not restarted if it is busy.

Closes #74
2020-10-05 09:06:37 +09:00
Dominic LoBue
7b5e62e266 Add gh workflow to run tests on PRs (#108) 2020-10-05 09:01:44 +09:00
John Wiebalk
acb1700b7c Fix a couple typos in readme (#107) 2020-10-05 08:59:34 +09:00
Yury Tsarev
b79ea980b8 Use self update ready entrypoint (#99)
* Use self update ready entrypoint

* Add --once support for runsvc.sh

Run `cd runner; NAME=$DOCKER_USER/actions-runner TAG=dev make docker-build docker-push`,
`kubectl apply -f release/actions-runner-controller.yaml`,
then update the runner image(not the controller image) by updating e.g. `Runner.Spec.Image` to `$DOCKER_USER/actions-runner:$TAG`, for testing.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2020-10-05 08:58:20 +09:00
Moto Ishizawa
b1ba5bf0e8 Merge pull request #101 from summerwind/runner-v2.273.4
Update runner to v2.273.4
2020-09-20 13:26:24 +09:00
Moto Ishizawa
7a25a8962b Update runner to v2.273.4 2020-09-20 13:24:55 +09:00
Moto Ishizawa
9e61a78c62 Merge pull request #93 from summerwind/runner-v2.273.1
Update runner to v2.273.1
2020-09-09 06:46:42 +09:00
Moto Ishizawa
0179abfee5 Update runner to v2.273.1 2020-09-09 06:45:23 +09:00
Moto Ishizawa
c7b560b8cb Merge pull request #88 from summerwind/runner-v2.273.0
Update runner to v2.273.0
2020-08-20 20:21:24 +09:00
Moto Ishizawa
0cc499d77b Update runner to v2.273.0 2020-08-20 20:19:57 +09:00
Moto Ishizawa
35caf436d4 Merge pull request #82 from summerwind/add-hra-crd-to-kustomization
Do include currently missing HRA CRD in the released manifests
2020-08-05 21:35:57 +09:00
Yusuke Kuoka
a136714723 Do include currently missing HRA CRD in the released manifests
The standard installation procedure explained in https://github.com/summerwind/actions-runner-controller#installation has been broken since v0.7.0. This is because I missed adding the CRD to the kustomization.yaml, which is used for kustomize-based deployments and for generating the released manifests. This fixes that.
2020-08-05 08:38:49 +09:00
Moto Ishizawa
fde8df608b Merge pull request #81 from summerwind/add-scaling-down-integration-test
Add scaling-down scenario to integration test
2020-08-04 19:22:17 +09:00
Yusuke Kuoka
4733edc20d Add scaling-down scenario to integration test 2020-08-02 16:10:01 +09:00
Moto Ishizawa
3818e584ec Merge pull request #76 from summerwind/fix-crash-on-startup
Fix crash on startup after the HRDA addition
2020-08-02 13:10:56 +09:00
Yusuke Kuoka
50487bbb54 Fix the HRA controller name 2020-08-02 10:38:15 +09:00
Yusuke Kuoka
e2164f9946 Fix integration test bugs and do verify scaling out 2020-08-02 10:34:58 +09:00
Moto Ishizawa
bdc1279e9e Merge pull request #80 from summerwind/runner-v2.272.0
Update runner to v2.272.0
2020-08-01 17:53:37 +09:00
Moto Ishizawa
3223480bc0 Update docker to v19.03.12 2020-08-01 17:28:40 +09:00
Moto Ishizawa
e642632a50 Update runner to v2.272.0 2020-08-01 17:28:16 +09:00
Yusuke Kuoka
3c3077a11c Fix crash on startup after the HRDA addition
This is a follow-up for #66.

The reconciler for the new HorizontalRunnerDeploymentAutoscaler had a terrible flaw that prevented the controller from launching, due to an error like:

```
indexer conflict: map[field:.metadata.controller:{}]
```

This fixes that, while adding `integration_test.go` to verify it's actually fixed and to prevent regressions in the future.
2020-07-29 21:20:46 +09:00
Moto Ishizawa
e10637ce35 Merge pull request #66 from summerwind/org-runner-autoscale
feat: Organizational RunnerDeployment Autoscaling
2020-07-28 19:17:18 +09:00
Yusuke Kuoka
ae30648985 feat: Use HorizontalRunnerAutoscaler for autoscaling 2020-07-27 20:33:44 +09:00
Moto Ishizawa
be0850a582 Merge pull request #73 from summerwind/runner-v2.267.1
Update runner to v2.267.1
2020-07-13 21:54:51 +09:00
Moto Ishizawa
a995597111 Update runner to v2.267.1 2020-07-13 21:52:30 +09:00
Moto Ishizawa
ba8f61141b Merge pull request #71 from dvdliao/respect-imagepullpolicy
add config to respect image pull policy
2020-07-13 21:49:58 +09:00
David Liao
c0914743b0 add config to respect image pull policy 2020-07-08 23:53:52 -07:00
Yusuke Kuoka
eca6917c6a feat: Organizational RunnerDeployment Autoscaling
Enhances #57 to add support for organizational runners.

As GitHub Actions does not have an appropriate API for this, this is the spec you need:

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: myrunners
spec:
  minReplicas: 1
  maxReplicas: 3
  autoscaling:
    metrics:
    - type: TotalNumberOfQueuedAndProgressingWorkflowRuns
      repositories:
      # Assumes that you have `github.com/myorg/myrepo1` repo
      - myrepo1
      - myrepo2
  template:
    spec:
      organization: myorg
```

It works by collecting "in_progress" and "queued" workflow runs for the repositories `myrepo1` and `myrepo2` to autoscale the number of replicas, assuming you have this organizational runner deployment only for those two repositories.

For example, if `myrepo1` had 1 `in_progress` and 2 `queued` workflow runs, and `myrepo2` had 4 `in_progress` and 8 `queued` workflow runs at the time the reconciliation loop ran on the runner deployment, it will scale replicas to 1 + 2 + 4 + 8 = 15.

Perhaps we would be better off adding a kind of "ratio" setting so that you can configure the controller to create e.g. 2x as many runners as demanded. But that's another story.

Ref #10
2020-07-03 09:12:47 +09:00
Moto Ishizawa
f1556ff060 Merge pull request #63 from summerwind/runner-v2.267.0
Update runner to v2.267.0
2020-06-28 12:00:24 +09:00
Moto Ishizawa
8c35cab1a4 Update runner to v2.267.0 2020-06-28 11:59:16 +09:00
KUOKA Yusuke
5bb2694349 feat: Repository-wide RunnerDeployment Autoscaling (#57)
* feat: Repository-wide RunnerDeployment Autoscaling

This adds `maxReplicas` and `minReplicas` to the RunnerDeploymentSpec. If and only if both fields are set, the controller computes and sets desired `replicas` automatically depending on the demand.

The number of demanded runner replicas is computed by `queued workflow runs + in_progress workflow runs` for the repository. The support for organizational runners is not included.

Ref https://github.com/summerwind/actions-runner-controller/issues/10
2020-06-27 17:26:46 +09:00
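A minimal sketch of the shape described here, with autoscaling bounds on a repository-scoped RunnerDeployment (at this point in history the fields lived on RunnerDeploymentSpec itself; names are illustrative):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: myrunners
spec:
  # With both bounds set, the controller computes replicas from
  # queued + in_progress workflow runs for the repository
  minReplicas: 1
  maxReplicas: 5
  template:
    spec:
      repository: myorg/myrepo
```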
Moto Ishizawa
512cae68a1 Merge pull request #59 from vitali-raikov/patch-1
Add an extra permission to README for organization self hosted runners
2020-06-02 22:49:29 +09:00
Vitali Raikov
905ed18824 Add an extra permission for organization self hosted runners 2020-06-02 16:16:26 +03:00
Moto Ishizawa
6ce35ced7a Merge pull request #56 from summerwind/runner-v2.263.0
Update runner to v2.263.0
2020-05-25 22:58:43 +09:00
Moto Ishizawa
d09b14b629 Update runner to v2.263.0 2020-05-24 20:45:16 +09:00
Moto Ishizawa
eaabf1fc8c Merge pull request #55 from summerwind/runner-v2.262.1
Update runner to v2.262.1
2020-05-13 22:12:26 +09:00
Moto Ishizawa
553dda65a4 Update runner to v2.262.1 2020-05-13 22:10:23 +09:00
Moto Ishizawa
d446893890 Merge pull request #54 from summerwind/runner-v2.262.0
Update runner to v2.262.0
2020-05-13 21:59:30 +09:00
Moto Ishizawa
46883eb742 Update runner to v2.262.0 2020-05-12 22:46:06 +09:00
Moto Ishizawa
390f2a62d9 Merge pull request #50 from summerwind/runner-validation-webhook
Add validation webhooks
2020-05-08 22:39:13 +09:00
naka-gawa
1555651325 fix typo readme 2020-05-06 12:25:30 +09:00
Moto Ishizawa
e7445e286f Merge pull request #52 from reiniertimmer/add-git-unzip-to-runner
Add dependencies from GitHub virtual environment to runner
2020-05-04 22:34:38 +09:00
Reinier Timmer
79655989d0 Add additional software packages to runner image 2020-05-04 06:55:16 +00:00
Moto Ishizawa
55323c3754 Update installation section 2020-05-02 16:57:19 +09:00
Moto Ishizawa
f80c3c1928 Set volume to pod properly 2020-05-01 08:51:25 +09:00
Moto Ishizawa
9a86812214 Add manifests for validation webhook 2020-04-30 22:12:39 +09:00
Moto Ishizawa
e889eaeb04 Add validation webhooks 2020-04-30 22:11:59 +09:00
Reinier Timmer
b96979888c fix delete pod when runner failed to register 2020-04-29 14:23:58 +09:00
Moto Ishizawa
7df119e470 Merge pull request #44 from reiniertimmer/organization-runners
support for organization runners & labels
2020-04-28 22:38:35 +09:00
Reinier Timmer
966e0dca37 fix merge conflict on README 2020-04-28 11:24:59 +02:00
Reinier Timmer
8c42b317ec updated documentation 2020-04-28 11:15:41 +02:00
Reinier Timmer
9f57f52e36 organization and repository are now exclusive 2020-04-28 11:14:31 +02:00
Reinier Timmer
8c5b776807 support runner labels 2020-04-28 11:14:31 +02:00
Reinier Timmer
2567f6ee4e omit empty repository from runner spec 2020-04-28 11:14:31 +02:00
Reinier Timmer
eca3cc7941 add organization info to runner status 2020-04-28 11:14:31 +02:00
Reinier Timmer
75d15ee91b backwards compatibility of dockerfile 2020-04-28 11:14:31 +02:00
Reinier Timmer
fb35dd4131 support for organization runners 2020-04-28 11:14:31 +02:00
Moto Ishizawa
d1429beaa6 Merge pull request #45 from summerwind/share-work-dir
Share runner's working directory with docker sidecar
2020-04-25 10:08:04 +09:00
Moto Ishizawa
3b8ea2991c Share runner's working directory with docker sidecar 2020-04-24 22:36:27 +09:00
Moto Ishizawa
6bdfb5cff6 Merge pull request #41 from arhue/improve-readme
Improve readme
2020-04-23 19:59:28 +09:00
Varun Priolkar
df5eed52d0 Specify language in additional tweaks code block 2020-04-23 12:57:56 +05:30
Varun Priolkar
5f271a3050 Fix code block on additional tweak section 2020-04-23 12:57:15 +05:30
Varun Priolkar
6997cc97c6 Improve readme 2020-04-23 12:53:46 +05:30
Moto Ishizawa
427cc506e1 Merge pull request #36 from summerwind/runner-v2.169.1
Update runner to v2.169.1
2020-04-18 10:41:34 +09:00
Moto Ishizawa
13616ba1b2 Use latest container image 2020-04-16 19:16:15 +09:00
Moto Ishizawa
d8327e9ab8 Update runner to v2.169.1 2020-04-16 19:16:07 +09:00
Moto Ishizawa
fd1b72e4ed Merge pull request #35 from chenrui333/default-docker-image-to-19.03.8
Make defaultDockerImage to 19.03.8
2020-04-16 17:55:02 +09:00
Rui Chen
79f15b4906 Make defaultDockerImage to 19.03.8 2020-04-15 19:47:07 -04:00
Moto Ishizawa
5714459c24 Merge pull request #31 from summerwind/github-package
Use github package to access the GitHub API
2020-04-14 16:30:26 +09:00
Moto Ishizawa
ab28dde0ec Add github package to container image 2020-04-13 22:29:48 +09:00
Moto Ishizawa
3ccc51433f Use github package to access the GitHub API 2020-04-13 22:28:07 +09:00
Moto Ishizawa
5f608058cd Add github package 2020-04-13 22:27:05 +09:00
Moto Ishizawa
a91df5c564 Merge pull request #29 from summerwind/add-github-apps-doc
Add section to use GitHub App
2020-04-11 21:33:37 +09:00
Moto Ishizawa
0bb6f64470 Add section to use GitHub App 2020-04-10 15:42:27 +09:00
Moto Ishizawa
ce40635d1e Merge pull request #28 from chenrui333/update-default-base-image
Update defaultRunnerImage to v2.168.0
2020-04-10 14:12:07 +09:00
Rui Chen
52c0f2e4f3 Update defaultRunnerImage to v2.168.0 2020-04-09 20:16:52 -04:00
Yusuke Kuoka
b411d37f2b fix: RunnerDeployment should clean up old RunnerReplicaSets ASAP
Since the initial implementation of RunnerDeployment and until this change, any update to a runner deployment has been leaving old runner replicasets around until the next resync interval. This fixes that by continuously retrying the reconciliation 10 seconds later to see if there are any old runner replicasets that can be removed.

In addition to that, the cleanup of old runner replicasets has been improved to be deferred until all the runners of the newest replica set are available. This hopefully gives you zero-downtime, or at least lower-downtime, updates of runner deployments.

Fixes #24
2020-04-04 07:55:12 +09:00
Vito Botta
a19cd373db Bump Docker version 2020-04-02 08:32:27 +09:00
Vito Botta
f2dcb5659d Add runner user to sudo group 2020-04-02 08:32:27 +09:00
Moto Ishizawa
b8b4ef4b60 Merge pull request #21 from summerwind/add-permission-events
Add permission to create/patch events resource
2020-03-28 22:18:58 +09:00
Moto Ishizawa
cac199f16e Merge pull request #20 from summerwind/github-apps-support
Add support of GitHub Apps authentication
2020-03-28 22:18:36 +09:00
Moto Ishizawa
5efdc6efe6 Add permission to create/patch events resource 2020-03-27 23:25:37 +09:00
Moto Ishizawa
af81c7f4c9 Add environment variables and volumes for GitHub Apps credentials 2020-03-26 23:12:54 +09:00
Moto Ishizawa
80122a56d7 Add flags for GitHub Apps credentials 2020-03-26 23:12:11 +09:00
Adam Jensen
934ec7f181 Clarify instructions for getting a token (#18)
* Clarify instructions for getting a token

* Fix typo
2020-03-25 21:22:19 +09:00
Moto Ishizawa
49160138ab Merge pull request #19 from summerwind/actions-runner-v2.168.0
Update runner to v2.168.0
2020-03-25 17:25:07 +09:00
Moto Ishizawa
fac211f5d9 Update runner to v2.168.0 2020-03-25 17:10:25 +09:00
Aleksandr Stepanov
d4c849ee09 Add variants of PodTemplate spec fields into the Runner spec (#7)
Resolves #5
Fixes #11
Fixes #12

Changes:

* Added podtemplate spec

* Rework pod creation logic

* Added the most commonly used podspec fields

* Added copy of podspec

* Fixed GitHub List method

* Fixed containers

* Added ability to override runner's containers

* Update controllers/runner_controller.go

Co-Authored-By: Moto Ishizawa <summerwind.jp@gmail.com>

* Remove optional restartpolicy

* Changed naming convention

Co-authored-by: Moto Ishizawa <summerwind.jp@gmail.com>
2020-03-20 22:50:50 +09:00
Moto Ishizawa
23538d43b3 Merge pull request #9 from jorge07/patch-1
Readme fix requirements
2020-03-18 19:52:15 +09:00
Jorge Arco
d3aa21f583 Readme fix requirements 2020-03-17 11:48:41 +01:00
Moto Ishizawa
ccce752259 Add sample manifest of RunnerDeployment and RunnerReplicaSet 2020-03-15 22:08:01 +09:00
Moto Ishizawa
bbcfa10459 Update custom resource information 2020-03-15 22:07:27 +09:00
Moto Ishizawa
9ad8064db6 Split files according to the kubebuilder style 2020-03-15 22:06:50 +09:00
Moto Ishizawa
b1da3092fb Revert test comment 2020-03-15 21:55:38 +09:00
Moto Ishizawa
5aeae6a152 Fix generate name 2020-03-15 21:50:45 +09:00
Moto Ishizawa
a897eee402 Fix RBAC role for RunnerDeployment and RunnerReplicaSet 2020-03-15 18:08:11 +09:00
Moto Ishizawa
2e9fecb983 Includes RunnerReplicaSet and RunnerDeployment 2020-03-14 22:57:33 +09:00
Moto Ishizawa
ce3011fe1b Merge pull request #6 from mumoshu/rename-runnerset
Rename RunnerSet to RunnerReplicaSet
2020-03-11 21:29:55 +09:00
Yusuke Kuoka
c19a1b3ffe Rename RunnerSet to RunnerReplicaSet
To hand over the name `RunnerSet` to the new StatefulSet-based implementation being developed at #4
2020-03-10 09:14:34 +09:00
Moto Ishizawa
de85823c81 Merge pull request #3 from mumoshu/manual-test-feedbacks
Manual test feedbacks
2020-03-09 23:41:16 +09:00
Yusuke Kuoka
d12eca268d Fix validation error on nil for optional slice field (runner.spec.env)
I had observed the exact issue described for the 4th option in https://github.com/elastic/cloud-on-k8s/issues/1822, which resulted in actions-runner-controller being unable to create or update runners. This fixes that.

I've also updated README to introduce RunnerDeployment and manually tested it to work after the fix.

---

`actions-runner-controller` has been failing while creating and updating runners:

```
2020-03-05T11:05:16.610+0900    ERROR   controllers.Runner      Failed to update runner {"runner": "default/example-runner", "error": "Runner.actions.summerwind.dev \"example-runner\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"actions.summerwind.dev/v1alpha1\", \"kind\":\"Runner\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2020-03-05T02:05:16Z\", \"finalizers\":[]interface {}{\"runner.actions.summerwind.dev\"}, \"generation\":2, \"name\":\"example-runner\", \"namespace\":\"default\", \"resourceVersion\":\"911496\", \"selfLink\":\"/apis/actions.summerwind.dev/v1alpha1/namespaces/default/runners/example-runner\", \"uid\":\"48b62d07-ff2c-42d6-878c-d3f951202209\"}, \"spec\":map[string]interface {}{\"env\":interface {}(nil), \"image\":\"\", \"repository\":\"mumoshu/actions-runner-controller-ci\"}}: validation failure list:\nspec.env in body must be of type array: \"null\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /Users/c-ykuoka/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
github.com/summerwind/actions-runner-controller/controllers.(*RunnerReconciler).Reconcile
        /Users/c-ykuoka/p/actions-runner-controller/controllers/runner_controller.go:88
```

This seems like the exact issue seen in the 4th option in https://github.com/elastic/cloud-on-k8s/issues/1822

I also observed the same failure while the runnerset controller was trying to create/update runners:

```
2020-03-05T11:15:01.223+0900    ERROR   controller-runtime.controller   Reconciler error        {"controller": "runnerset", "request": "default/example-runnerset", "error": "Runner.actions.summerwind.dev \"example-runnersetgp56m\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"actions.summerwind.dev/v1alpha1\", \"kind\":\"Runner\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2020-03-05T02:15:01Z\", \"generateName\":\"example-runnerset\", \"generation\":1, \"name\":\"example-runnersetgp56m\", \"namespace\":\"default\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"apiVersion\":\"actions.summerwind.dev/v1alpha1\", \"blockOwnerDeletion\":true, \"controller\":true, \"kind\":\"RunnerSet\", \"name\":\"example-runnerset\", \"uid\":\"e26f7d01-3168-496d-931b-8e6f97b776ea\"}}, \"uid\":\"4ee490f5-9a8c-4f30-86f9-61dea799b972\"}, \"spec\":map[string]interface {}{\"env\":interface {}(nil), \"image\":\"\", \"repository\":\"mumoshu/actions-runner-controller-ci\"}}: validation failure list:\nspec.env in body must be of type array: \"null\""}
github.com/go-logr/zapr.(*zapLogger).Error
```

and while the runnerdeployment controller is trying to create/update runners.

I've fixed it so that the new `RunnerDeployment` example added to README just works.
2020-03-09 22:03:07 +09:00
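As context for this class of CRD validation failure, here is a hedged sketch of how an optional slice field is typically declared so that a nil value is omitted from the serialized object instead of being sent as `null` (which the generated schema rejects). The `RunnerSpec` shape below is illustrative and not necessarily the exact patch applied in this commit.

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// RunnerSpec sketches the usual pattern: mark the slice optional and add
// `omitempty` so a nil Env is dropped during serialization rather than being
// rendered as null, which fails "spec.env in body must be of type array".
type RunnerSpec struct {
	Repository string `json:"repository"`

	// +optional
	Env []corev1.EnvVar `json:"env,omitempty"`
}
```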
Yusuke Kuoka
4b6806fda3 fix: RunnerDeployment was not working at all after field indexing enhancement 2020-03-06 08:53:46 +09:00
Moto Ishizawa
0edf0d59f7 Merge pull request #2 from mumoshu/runnerdeployment
feat: RunnerDeployments
2020-03-04 22:57:40 +09:00
Yusuke Kuoka
70a8c3db0d chore: Tidy up the deployment controller code
Removes an unnecessary condition from the deployment controller code. We had assumed that the client would return a not-found error on an empty runnerset list, but that is clearly not the case.
2020-03-03 10:50:52 +09:00
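To make the point above concrete, here is a small, hedged sketch of how emptiness is detected from the returned items rather than from the error when listing with controller-runtime's client. `PodList` stands in for the runner replicaset list, and the helper name is hypothetical.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listIsEmpty shows that client.List does not return a not-found error for an
// empty result: real errors are propagated, and emptiness is checked via the
// length of the returned items.
func listIsEmpty(ctx context.Context, c client.Client, ns string) (bool, error) {
	var list corev1.PodList
	if err := c.List(ctx, &list, client.InNamespace(ns)); err != nil {
		return false, err
	}
	return len(list.Items) == 0, nil
}

func main() {
	fmt.Println("listIsEmpty checks len(list.Items), not the error")
}
```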
Yusuke Kuoka
31fb7cc113 feat: Efficient runner set lookups for the deployment controller
Enhances the deployment controller to use indexed lookups against runner sets for better scalability.
2020-03-03 10:45:39 +09:00
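For readers unfamiliar with field indexing in controller-runtime, the following is a hedged sketch of the indexed-lookup idea: register an index keyed by the owning object's name, then List with a field selector instead of scanning everything. Pods stand in for runner sets here, and the names are illustrative, not ARC's actual code.

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const ownerKey = ".metadata.controller"

// setupIndex registers a field index so owned objects can be looked up by the
// name of their controlling owner.
func setupIndex(mgr ctrl.Manager) error {
	return mgr.GetFieldIndexer().IndexField(context.Background(), &corev1.Pod{}, ownerKey,
		func(obj client.Object) []string {
			if owner := metav1.GetControllerOf(obj); owner != nil {
				return []string{owner.Name}
			}
			return nil
		})
}

// listOwned uses the index to fetch only the objects owned by ownerName.
func listOwned(ctx context.Context, c client.Client, ns, ownerName string) (*corev1.PodList, error) {
	var list corev1.PodList
	err := c.List(ctx, &list, client.InNamespace(ns), client.MatchingFields{ownerKey: ownerName})
	return &list, err
}
```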
Moto Ishizawa
338da818be Merge pull request #1 from mumoshu/runnerset
feat: RunnerSets
2020-03-01 19:35:40 +09:00
Yusuke Kuoka
9d634d88ff feat: RunnerDeployment
Adds the initial version of RunnerDeployment that is intended to manage RunnerSets (#1), much like Deployment manages ReplicaSets.

This is the initial version and is therefore bare-bones. The only update strategy it supports is `Recreate`, which recreates the underlying RunnerSet when the runner template changes. I'd like to add a `RollingUpdate` strategy once this is merged.

This depends on #1 so the diff contains that of #1, too. Please see only the latest commit for review.

Also see https://github.com/mumoshu/actions-runner-controller-ci/runs/471329823?check_suite_focus=true to confirm that `make tests` is passing after changes made in this commit.
2020-02-27 10:46:23 +09:00
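A minimal sketch of how a `Recreate`-style strategy can detect that the runner template changed, assuming the template is hashed and the underlying set is recreated when the recorded hash differs. The `templateHash` helper is hypothetical and not ARC's actual implementation.

```go
package controllers

import (
	"crypto/sha1"
	"encoding/json"
	"fmt"
)

// templateHash returns a stable hash of the runner template; a controller can
// store this on the RunnerSet and recreate the set when the hash changes.
func templateHash(template interface{}) (string, error) {
	b, err := json.Marshal(template)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", sha1.Sum(b)), nil
}
```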
Yusuke Kuoka
d8d829b734 feat: RunnerSets
RunnerSet is basically ReplicaSet for Runners.

It is responsible for maintaining the number of runners so that it matches the desired count. That is, it creates missing runners from `.Spec.Template` and deletes redundant runners.

Similar to ReplicaSet, this does not support rolling update of runners on its own. We might want to later add `RunnerDeployment` for that. But that's another story.
2020-02-24 10:32:44 +09:00
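Here is a hedged sketch of the reconciliation idea described above: compare the desired replica count with the runners that currently exist, create what is missing from the template, and delete what is redundant. The types and names are illustrative only.

```go
package controllers

// scalePlan describes what the reconciler should do for one pass.
type scalePlan struct {
	Create int // runners to create from .Spec.Template
	Delete int // redundant runners to delete
}

// planScale compares the desired replica count with the current one.
func planScale(desired, current int) scalePlan {
	switch {
	case current < desired:
		return scalePlan{Create: desired - current}
	case current > desired:
		return scalePlan{Delete: current - desired}
	default:
		return scalePlan{}
	}
}
```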
Moto Ishizawa
7dd3ab43d7 Update default container image 2020-02-19 22:41:11 +09:00
Moto Ishizawa
58cac20109 Docker CLI v19.03.6 2020-02-19 22:39:58 +09:00
Moto Ishizawa
cac45f284a actions/runner v2.165.2 2020-02-19 22:37:38 +09:00
Moto Ishizawa
f2d3ca672f Unset environment variables for runner config 2020-02-06 22:15:26 +09:00
Moto Ishizawa
829a167303 Add 'env' field to runner resource 2020-02-06 22:09:07 +09:00
Moto Ishizawa
c66916a4ee Use dumb-init to handle signal properly 2020-02-06 18:47:50 +09:00
Moto Ishizawa
f5c8a0e655 Update README 2020-02-03 21:52:28 +09:00
226 changed files with 71478 additions and 868 deletions

14
.dockerignore Normal file

@@ -0,0 +1,14 @@
Makefile
acceptance
runner
hack
test-assets
config
charts
.github
.envrc
.env
*.md
*.txt
*.sh
test/e2e/.docker-build

177
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,177 @@
name: Bug Report
description: File a bug report
title: "Bug"
labels: ["bug"]
body:
- type: input
id: controller-version
attributes:
label: Controller Version
description: Refer to semver-like release tags for controller versions. Any release tags prefixed with `actions-runner-controller-` are for chart releases
placeholder: ex. 0.18.2 or git commit ID
validations:
required: true
- type: input
id: chart-version
attributes:
label: Helm Chart Version
description: Run `helm list` and see what's shown under CHART VERSION. Any release tags prefixed with `actions-runner-controller-` are for chart releases
placeholder: ex. 0.11.0
- type: input
id: cert-manager-version
attributes:
label: CertManager Version
description: Run `kubectl get po -o yaml $CERT_MANAGER_POD` and see the image tag, or run `helm list` and see what's shown under APP VERSION for your cert-manager Helm release.
placeholder: ex. 1.8
- type: dropdown
id: deployment-method
attributes:
label: Deployment Method
description: Which deployment method did you use to install ARC?
options:
- Helm
- Kustomize
- ArgoCD
- Other
validations:
required: true
- type: textarea
id: cert-manager
attributes:
label: cert-manager installation
description: Confirm that you've installed cert-manager correctly by answering a few questions
placeholder: |
- Did you follow https://github.com/actions-runner-controller/actions-runner-controller#installation? If not, describe the installation process so that we can reproduce your environment.
- Are you sure you've installed cert-manager from an official source?
(Note that we won't provide user support for cert-manager itself. Make sure cert-manager is fully working before testing ARC or reporting a bug.)
validations:
required: true
- type: checkboxes
id: checks
attributes:
label: Checks
description: Please check the boxes below before submitting
options:
- label: This isn't a question or user support case (For Q&A and community support, go to [Discussions](https://github.com/actions-runner-controller/actions-runner-controller/discussions). It might also be a good idea to contract with any of the contributors and maintainers if this is business-critical for you and you therefore need priority support)
required: true
- label: I've read the [release notes](https://github.com/actions-runner-controller/actions-runner-controller/tree/master/docs/releasenotes) before submitting this issue and I'm sure it's not due to any recently introduced backward-incompatible changes
required: true
- label: My actions-runner-controller version (v0.x.y) does support the feature
required: true
- label: I've already upgraded ARC (including the CRDs, see charts/actions-runner-controller/docs/UPGRADING.md for details) to the latest and it didn't fix the issue
required: true
- type: textarea
id: resource-definitions
attributes:
label: Resource Definitions
description: "Add copy(s) of your resource definition(s) (RunnerDeployment or RunnerSet, and HorizontalRunnerAutoscaler. If RunnerSet, also include the StorageClass being used)"
render: yaml
placeholder: |
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example
spec:
#snip
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
name: example
spec:
#snip
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: example
provisioner: ...
reclaimPolicy: ...
volumeBindingMode: ...
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name:
spec:
#snip
validations:
required: true
- type: textarea
id: reproduction-steps
attributes:
label: To Reproduce
description: "Steps to reproduce the behavior"
render: markdown
placeholder: |
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
validations:
required: true
- type: textarea
id: actual-behavior
attributes:
label: Describe the bug
description: Also tell us, what actually happened?
placeholder: A clear and concise description of what happened.
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Describe the expected behavior
description: Also tell us, what did you expect to happen?
placeholder: A clear and concise description of what the expected behavior is.
validations:
required: true
- type: textarea
id: controller-logs
attributes:
label: Controller Logs
description: "NEVER EVER OMIT THIS! Include logs from `actions-runner-controller`'s controller-manager pod"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab controller logs:
# Set NS according to your setup
NS=actions-runner-system
# Grab the pod name and set it to $POD_NAME
kubectl -n $NS get po
kubectl -n $NS logs $POD_NAME > arc.log
validations:
required: true
- type: textarea
id: runner-pod-logs
attributes:
label: Runner Pod Logs
description: "Include logs from runner pod(s)"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab the runner pod logs:
# Set NS according to your setup. It should match your RunnerDeployment's metadata.namespace.
NS=default
# Grab the name of the problematic runner pod and set it to $POD_NAME
kubectl -n $NS get po
kubectl -n $NS logs $POD_NAME -c runner > runnerpod_runner.log
kubectl -n $NS logs $POD_NAME -c docker > runnerpod_docker.log
validations:
required: true
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: |
Add any other context about the problem here.
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.

15
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,15 @@
# Blank issues are mainly for maintainers who are known to write complete issue descriptions without needing to follow a form
blank_issues_enabled: true
contact_links:
- name: Sponsor ARC Maintainers
about: If your business relies on the continued maintenance of actions-runner-controller, please consider sponsoring the project and the maintainers.
url: https://github.com/actions-runner-controller/actions-runner-controller/tree/master/CODEOWNERS
- name: Ideas and Feature Requests
about: Wanna request a feature? Create a discussion and collect :+1:s first.
url: https://github.com/actions-runner-controller/actions-runner-controller/discussions/new?category=ideas
- name: Questions and User Support
about: Need support using ARC? We use Discussions as the place to provide community support.
url: https://github.com/actions-runner-controller/actions-runner-controller/discussions/new?category=questions
- name: Need Paid Support?
about: Consider contracting with any of the actions-runner-controller maintainers and contributors.
url: https://github.com/actions-runner-controller/actions-runner-controller/tree/master/CODEOWNERS


@@ -0,0 +1,19 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

34
.github/RELEASE_NOTE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,34 @@
# Release Note Template
This is the template of actions-runner-controller's release notes.
Whenever a new release is made, I start by manually copy-pasting this template onto the GitHub UI for creating the release.
I then walk through all the changes, take some time to think about the best one-sentence explanations to tell users about the changes, write it all,
and click the publish button.
If you think you can improve future release notes in any way, please do submit a pull request to change the template below.
Note that even though it looks like a Go template, I don't use any templating to generate the changelog.
It's just that I'm used to reading and interpreting Go templates myself, not a computer program :)
**Title**:
```
v{{ .Version }}: {{ .TitlesOfImportantChanges }}
```
**Body**:
```
**CAUTION:** If you're using the Helm chart, be sure to review changes to CRDs and manually upgrade the CRDs! Helm installs CRDs only when installing a chart; it doesn't automatically upgrade them. Otherwise you end up with troubles like #427, #467, and #468. Please refer to the [UPGRADING](charts/actions-runner-controller/docs/UPGRADING.md) docs for the latest process.
This release includes the following changes from contributors. Thank you!
- @{{ .GitHubUser }} fixed {{ .Feature }} to not break when ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} enhanced {{ .Feature }} to ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} added {{ .Feature }} for ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} fixed {{ .Topic }} in the documentation so that ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} added {{ .Topic }} to the documentation (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} improved the documentation about {{ .Topic }} to also cover ... (#{{ .PullRequestNumber }})
```


@@ -0,0 +1,52 @@
name: "Setup Docker"
inputs:
username:
description: "Username"
required: true
password:
description: "Password"
required: true
ghcr_username:
description: "GHCR username. Usually set from the github.actor variable"
required: true
ghcr_password:
description: "GHCR password. Usually set from the secrets.GITHUB_TOKEN variable"
required: true
outputs:
sha_short:
description: "The short SHA used for image builds"
value: ${{ steps.vars.outputs.sha_short }}
runs:
using: "composite"
steps:
- name: Get Short SHA
id: vars
run: |
echo ::set-output name=sha_short::${GITHUB_SHA::7}
shell: bash
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: latest
- name: Login to DockerHub
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
uses: docker/login-action@v2
with:
username: ${{ inputs.username }}
password: ${{ inputs.password }}
- name: Login to GitHub Container Registry
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ inputs.ghcr_username }}
password: ${{ inputs.ghcr_password }}

41
.github/renovate.json5 vendored Normal file

@@ -0,0 +1,41 @@
{
"extends": ["config:base"],
"labels": ["dependencies"],
"packageRules": [
{
// automatically merge an update of runner
"matchPackageNames": ["actions/runner"],
"extractVersion": "^v(?<version>.*)$",
"automerge": true
}
],
"regexManagers": [
{
// use https://github.com/actions/runner/releases
"fileMatch": [
".github/workflows/runners.yaml"
],
"matchStrings": ["RUNNER_VERSION: +(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
},
{
"fileMatch": [
"runner/Makefile",
"Makefile"
],
"matchStrings": ["RUNNER_VERSION \\?= +(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
},
{
"fileMatch": [
"runner/actions-runner.dockerfile",
"runner/actions-runner-dind.dockerfile"
],
"matchStrings": ["RUNNER_VERSION=+(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
}
]
}


@@ -1,22 +0,0 @@
on:
push:
branches:
- master
paths:
- 'runner/**'
jobs:
build:
runs-on: ubuntu-latest
name: Build runner
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Build container image
run: make docker-build
working-directory: runner
- name: Docker Login
run: docker login -u summerwind -p ${{ secrets.DOCKER_ACCESS_TOKEN }}
- name: Push container image
run: make docker-push
working-directory: runner


@@ -1,26 +0,0 @@
on:
push:
branches:
- master
paths-ignore:
- 'runner/**'
- '.github/**'
jobs:
build:
runs-on: ubuntu-latest
name: Build
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
- name: Build container image
run: make docker-build
- name: Docker Login
run: docker login -u summerwind -p ${{ secrets.DOCKER_ACCESS_TOKEN }}
- name: Push container image
run: make docker-push

70
.github/workflows/publish-arc.yaml vendored Normal file

@@ -0,0 +1,70 @@
name: Publish ARC
on:
release:
types:
- published
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: write
packages: write
jobs:
release-controller:
name: Release
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
steps:
- name: Checkout
uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: '1.18.2'
- name: Install tools
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash
sudo mv kustomize /usr/local/bin
curl -L -O https://github.com/tcnksm/ghr/releases/download/v0.13.0/ghr_v0.13.0_linux_amd64.tar.gz
tar zxvf ghr_v0.13.0_linux_amd64.tar.gz
sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
- name: Set version
run: echo "VERSION=$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')" >> $GITHUB_ENV
- name: Upload artifacts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
make github-release
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
ghcr.io/actions-runner-controller/actions-runner-controller:latest
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
cache-from: type=gha
cache-to: type=gha,mode=max

58
.github/workflows/publish-canary.yaml vendored Normal file

@@ -0,0 +1,58 @@
name: Publish Canary Image
on:
push:
branches:
- master
paths-ignore:
- '**.md'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: read
packages: write
jobs:
canary-build:
name: Build and Publish Canary Image
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
# Considered unstable builds
# See Issue #285, PR #286, and PR #323 for more information
- name: Build and Push
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:canary
ghcr.io/actions-runner-controller/actions-runner-controller:canary
cache-from: type=gha,scope=arc-canary
cache-to: type=gha,mode=max,scope=arc-canary

127
.github/workflows/publish-chart.yaml vendored Normal file

@@ -0,0 +1,127 @@
name: Publish Helm Chart
on:
push:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.8.0
permissions:
contents: read
jobs:
lint-chart:
name: Lint Chart
runs-on: ubuntu-latest
outputs:
publish-chart: ${{ steps.publish-chart-step.outputs.publish }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.0
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
if: steps.list-changed.outputs.changed == 'true'
uses: helm/kind-action@v1.3.0
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
if: steps.list-changed.outputs.changed == 'true'
run: |
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: ct install --config charts/.ci/ct-config.yaml
# WARNING: This relies on the latest release being at the top of the JSON from GitHub and a clean chart.yaml
- name: Check if Chart Publish is Needed
id: publish-chart-step
run: |
CHART_TEXT=$(curl -fs https://raw.githubusercontent.com/actions-runner-controller/actions-runner-controller/master/charts/actions-runner-controller/Chart.yaml)
NEW_CHART_VERSION=$(echo "$CHART_TEXT" | grep version: | cut -d ' ' -f 2)
RELEASE_LIST=$(curl -fs https://api.github.com/repos/actions-runner-controller/actions-runner-controller/releases | jq .[].tag_name | grep actions-runner-controller | cut -d '"' -f 2 | cut -d '-' -f 4)
LATEST_RELEASED_CHART_VERSION=$(echo $RELEASE_LIST | cut -d ' ' -f 1)
echo "Chart version in master : $NEW_CHART_VERSION"
echo "Latest release chart version : $LATEST_RELEASED_CHART_VERSION"
if [[ $NEW_CHART_VERSION != $LATEST_RELEASED_CHART_VERSION ]]; then
echo "::set-output name=publish::true"
fi
publish-chart:
if: needs.lint-chart.outputs.publish-chart == 'true'
needs: lint-chart
name: Publish Chart
runs-on: ubuntu-latest
permissions:
contents: write # for helm/chart-releaser-action to push chart release and create a release
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.4.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"


@@ -1,33 +0,0 @@
on:
release:
types: [published]
jobs:
build:
runs-on: ubuntu-latest
name: Release
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install tools
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash
sudo mv kustomize /usr/local/bin
curl -L -O https://github.com/tcnksm/ghr/releases/download/v0.13.0/ghr_v0.13.0_linux_amd64.tar.gz
tar zxvf ghr_v0.13.0_linux_amd64.tar.gz
sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
- name: Set version
run: echo "::set-env name=VERSION::$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')"
- name: Upload artifacts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: make github-release
- name: Build container image
run: make docker-build
- name: Docker Login
run: docker login -u summerwind -p ${{ secrets.DOCKER_ACCESS_TOKEN }}
- name: Push container image
run: make docker-push

32
.github/workflows/run-codeql.yaml vendored Normal file

@@ -0,0 +1,32 @@
name: Run CodeQL
on:
push:
branches:
- master
pull_request:
branches:
- master
schedule:
- cron: '30 1 * * 0'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v2
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2

25
.github/workflows/run-stale.yaml vendored Normal file

@@ -0,0 +1,25 @@
name: Run Stale Bot
on:
schedule:
- cron: '30 1 * * *'
permissions:
contents: read
jobs:
stale:
name: Run Stale
runs-on: ubuntu-latest
permissions:
issues: write # for actions/stale to close stale issues
pull-requests: write # for actions/stale to close stale PRs
steps:
- uses: actions/stale@v5
with:
stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
# turn off stale for both issues and PRs
days-before-stale: -1
# turn stale back on for issues only
days-before-issue-stale: 30
days-before-issue-close: 14
exempt-issue-labels: 'pinned,security,enhancement,refactor,documentation,chore,bug,dependencies,needs-investigation'

83
.github/workflows/runners.yaml vendored Normal file

@@ -0,0 +1,83 @@
name: Runners
on:
pull_request:
types:
- opened
- synchronize
- reopened
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- '.github/workflows/runners.yaml'
- '!**.md'
# We must trigger on push: instead of types: closed so that GitHub Secrets
# are available to the workflow run
push:
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- '.github/workflows/runners.yaml'
- '!**.md'
env:
RUNNER_VERSION: 2.294.0
DOCKER_VERSION: 20.10.12
RUNNER_CONTAINER_HOOKS_VERSION: 0.1.2
DOCKERHUB_USERNAME: summerwind
jobs:
build-runners:
name: Build ${{ matrix.name }}-${{ matrix.os-name }}-${{ matrix.os-version }}
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
strategy:
fail-fast: false
matrix:
include:
- name: actions-runner
os-name: ubuntu
os-version: 20.04
- name: actions-runner-dind
os-name: ubuntu
os-version: 20.04
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push Versioned Tags
uses: docker/build-push-action@v3
with:
context: ./runner
file: ./runner/${{ matrix.name }}.dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
build-args: |
RUNNER_VERSION=${{ env.RUNNER_VERSION }}
DOCKER_VERSION=${{ env.DOCKER_VERSION }}
RUNNER_CONTAINER_HOOKS_VERSION=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}
tags: |
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}-${{ steps.vars.outputs.sha_short }}
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:latest
ghcr.io/${{ github.repository }}/${{ matrix.name }}:latest
ghcr.io/${{ github.repository }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}
ghcr.io/${{ github.repository }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}-${{ steps.vars.outputs.sha_short }}
cache-from: type=gha,scope=build-${{ matrix.name }}
cache-to: type=gha,mode=max,scope=build-${{ matrix.name }}

60
.github/workflows/validate-arc.yaml vendored Normal file

@@ -0,0 +1,60 @@
name: Validate ARC
on:
pull_request:
branches:
- master
paths-ignore:
- '**.md'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/publish-canary.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
permissions:
contents: read
jobs:
test-controller:
name: Test ARC
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set-up Go
uses: actions/setup-go@v3
with:
go-version: '1.18.2'
check-latest: false
- uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
- name: Run tests
run: |
make test
- name: Verify manifests are up-to-date
run: |
make manifests
git diff --exit-code

82
.github/workflows/validate-chart.yaml vendored Normal file

@@ -0,0 +1,82 @@
name: Validate Helm Chart
on:
push:
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.8.0
permissions:
contents: read
jobs:
validate-chart:
name: Lint Chart
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.0
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
uses: helm/kind-action@v1.3.0
if: steps.list-changed.outputs.changed == 'true'
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
if: steps.list-changed.outputs.changed == 'true'
run: |
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
- name: Run chart-testing (install)
run: |
ct install --config charts/.ci/ct-config.yaml

25
.github/workflows/validate-runners.yaml vendored Normal file

@@ -0,0 +1,25 @@
name: Validate Runners
on:
pull_request:
branches:
- '**'
paths:
- 'runner/**'
- 'test/entrypoint/**'
- '!**.md'
permissions:
contents: read
jobs:
test-runner-entrypoint:
name: Test entrypoint
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Run tests
run: |
make acceptance/runner/entrypoint

13
.gitignore vendored

@@ -1,3 +1,4 @@
# Deploy Assets
release
# Binaries for programs and plugins
@@ -15,11 +16,21 @@ bin
*.out
# Kubernetes Generated files - skip generated files, except for vendored files
!vendor/**/zz_generated.*
# editor and IDE paraphernalia
.vscode
.idea
*.swp
*.swo
*~
.envrc
.env
.test.env
*.pem
# OS
.DS_STORE
/test-assets

2
CODEOWNERS Normal file

@@ -0,0 +1,2 @@
# actions-runner-controller maintainers
* @mumoshu @toast-gear

157
CONTRIBUTING.md Normal file

@@ -0,0 +1,157 @@
## Contributing
### Testing Controller Built from a Pull Request
We always appreciate your help in testing open pull requests by deploying custom builds of actions-runner-controller onto your own environment, so that we are extra sure we didn't break anything.
It is especially true when the pull request is about GitHub Enterprise, both GHEC and GHES, as [maintainers don't have GitHub Enterprise environments for testing](/README.md#github-enterprise-support).
The process would look like the below:
- Clone this repository locally
- Checkout the branch. If you use the `gh` command, run `gh pr checkout $PR_NUMBER`
- Run `NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push` for a custom container image build
- Update your actions-runner-controller's controller-manager deployment to use the new image, `$DOCKER_USER/actions-runner-controller:canary`
Please also note that you need to replace `$DOCKER_USER` with your own DockerHub account name.
### How to Contribute a Patch
What you are patching determines how you should go about it. Below are some guides on how to test patches locally as well as develop the controller and runners.
When submitting a PR for a change, please provide evidence that your change works, as we are still working on improving the project's CI. Some resources are provided to help achieve this; see this guide for details.
#### Running an End to End Test
> **Notes for Ubuntu 20.04+ users**
>
> If you're using Ubuntu 20.04 or greater, you might have installed `docker` with `snap`.
>
> If you want to stick with `snap`-provided `docker`, do not forget to set `TMPDIR` to
> somewhere under `$HOME`.
> Otherwise `kind load docker-image` fails while running `docker save`.
> See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap for more information.
To test your local changes against both PAT- and App-based authentication, please run the `acceptance` make target with the authentication configuration details provided:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make acceptance
```
**Rerunning a failed test**
When one of tests run by `make acceptance` failed, you'd probably like to rerun only the failed one.
This can be done by running `make acceptance/run` with the combination of `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm|kubectl` and `ACCEPTANCE_TEST_SECRET_TYPE=token|app` values that failed (note that you only need to set the corresponding authentication configuration in this case).
In the example below, we rerun the test for the combination `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token` only:
```shell
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make acceptance/run
```
**Testing in a non-kind cluster**
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build acceptance/setup \
acceptance/deploy \
acceptance/tests
```
#### Developing the Controller
Rerunning the whole acceptance test suite from scratch on every little change to the controller, the runner, and the chart would be counter-productive.
To make your development cycle faster, use the command below to rebuild, deploy, and update all three:
```shell
# Let assume we have all other envvars like DOCKER_USER, GITHUB_TOKEN already set,
# The below command will (re)build `actions-runner-controller:controller1` and `actions-runner:runner1`,
# load those into kind nodes, and then rerun kubectl or helm to install/upgrade the controller,
# and finally upgrade the runner deployment to use the new runner image.
#
# As helm 3 and kubectl are unable to recreate a pod when the image tag hasn't changed,
# you either need to bump VERSION and RUNNER_TAG on each run,
# or manually run `kubectl delete pod $POD` on respective pods for changes to actually take effect.
# Makefile
VERSION=controller1 \
RUNNER_TAG=runner1 \
make acceptance/pull acceptance/kind docker-build acceptance/load acceptance/deploy
```
If you've already deployed actions-runner-controller and only want to recreate pods to use the newer image, you can run:
```shell
# Makefile
NAME=$DOCKER_USER/actions-runner-controller \
make docker-build acceptance/load && \
kubectl -n actions-runner-system delete po $(kubectl -n actions-runner-system get po -ojsonpath={.items[*].metadata.name})
```
Similarly, if you'd like to recreate runner pods with the newer runner image, you can use the runner-specific [Makefile](runner/Makefile) to build and/or push new runner images:
```shell
# runner/Makefile
NAME=$DOCKER_USER/actions-runner make \
-C runner docker-{build,push}-ubuntu && \
(kubectl get po -ojsonpath={.items[*].metadata.name} | xargs -n1 kubectl delete po)
```
#### Developing the Runners
**Tests**
A set of example pipelines (./acceptance/pipelines) is provided in this repository, which you can use to validate that your runners are working as expected. When raising a PR please run the relevant suites to prove your change hasn't broken anything.
**Running Ginkgo Tests**
You can run the integration test suite that is written in Ginkgo with:
```shell
make test-with-deps
```
This will first install a few binaries required to set up the integration test environment and then run `go test` to start the Ginkgo tests.
If you don't want to use `make`, like when you're running tests from your IDE, install required binaries to `/usr/local/kubebuilder/bin`. That's the directory in which controller-runtime's `envtest` framework locates the binaries.
```shell
sudo mkdir -p /usr/local/kubebuilder/bin
make kube-apiserver etcd
sudo mv test-assets/{etcd,kube-apiserver} /usr/local/kubebuilder/bin/
go test -v -run TestAPIs github.com/actions-runner-controller/actions-runner-controller/controllers
```
To run Ginkgo tests selectively, set the pattern of target test names to `GINKGO_FOCUS`.
All the Ginkgo tests that match `GINKGO_FOCUS` will be run.
```shell
GINKGO_FOCUS='[It] should create a new Runner resource from the specified template, add a another Runner on replicas increased, and removes all the replicas when set to 0' \
go test -v -run TestAPIs github.com/actions-runner-controller/actions-runner-controller/controllers
```
#### Helm Version Bumps
In general we ask you not to bump the chart version in your PR; the maintainers manage the publishing of new chart versions.


@@ -1,27 +1,54 @@
# Build the manager binary
FROM golang:1.13 as builder
FROM --platform=$BUILDPLATFORM golang:1.18.3 as builder
WORKDIR /workspace
# Make it runnable on a distroless image/without libc
ENV CGO_ENABLED=0
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
COPY go.mod go.sum ./
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
# and so that source changes don't invalidate our downloaded layer.
#
# Also, we need to do this before setting TARGETPLATFORM/TARGETOS/TARGETARCH/TARGETVARIANT
# so that go mod cache is shared across platforms.
RUN go mod download
# Copy the go source
COPY main.go main.go
COPY api/ api/
COPY controllers/ controllers/
# COPY . .
# Usage:
# docker buildx build --tag repo/img:tag -f ./Dockerfile . --platform linux/amd64,linux/arm64,linux/arm/v7
#
# With the above command,
# TARGETOS can be "linux", TARGETARCH can be "amd64", "arm64", and "arm", TARGETVARIANT can be "v7".
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT
# We intentionally avoid `--mount=type=cache,mode=0777,target=/go/pkg/mod` in the `go mod download` and the `go build` runs
# to avoid https://github.com/moby/buildkit/issues/2334
# We can use the docker layer cache so the build is fast enough anyway
# We also use per-platform GOCACHE for the same reason.
env GOCACHE /build/${TARGETPLATFORM}/root/.cache/go-build
# Build
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go
RUN --mount=target=. \
--mount=type=cache,mode=0777,target=${GOCACHE} \
export GOOS=${TARGETOS} GOARCH=${TARGETARCH} GOARM=${TARGETVARIANT#v} && \
go build -o /out/manager main.go && \
go build -o /out/github-webhook-server ./cmd/githubwebhookserver
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
COPY --from=builder /out/manager .
COPY --from=builder /out/github-webhook-server .
USER nonroot:nonroot
ENTRYPOINT ["/manager"]

253
Makefile

@@ -1,8 +1,27 @@
NAME ?= summerwind/actions-runner-controller
ifdef DOCKER_USER
NAME ?= ${DOCKER_USER}/actions-runner-controller
else
NAME ?= summerwind/actions-runner-controller
endif
DOCKER_USER ?= $(shell echo ${NAME} | cut -d / -f1)
VERSION ?= latest
RUNNER_VERSION ?= 2.294.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
TEST_REPO ?= ${DOCKER_USER}/actions-runner-controller
TEST_ORG ?=
TEST_ORG_REPO ?=
TEST_EPHEMERAL ?= false
SYNC_PERIOD ?= 1m
USE_RUNNERSET ?=
KUBECONTEXT ?= kind-acceptance
CLUSTER ?= acceptance
CERT_MANAGER_VERSION ?= v1.1.1
KUBE_RBAC_PROXY_VERSION ?= v0.11.0
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:trivialVersions=true"
CRD_OPTIONS ?= "crd:generateEmbeddedObjectMeta=true"
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@@ -11,11 +30,40 @@ else
GOBIN=$(shell go env GOBIN)
endif
TEST_ASSETS=$(PWD)/test-assets
# default list of platforms for which multiarch image is built
ifeq (${PLATFORMS}, )
export PLATFORMS="linux/amd64,linux/arm64"
endif
# if IMG_RESULT is unspecified, by default the image will be pushed to registry
ifeq (${IMG_RESULT}, load)
export PUSH_ARG="--load"
# if load is specified, image will be built only for the build machine architecture.
export PLATFORMS="local"
else ifeq (${IMG_RESULT}, cache)
# if cache is specified, image will only be available in the build cache, it won't be pushed or loaded
# therefore no PUSH_ARG will be specified
else
export PUSH_ARG="--push"
endif
all: manager
GO_TEST_ARGS ?= -short
# Run tests
test: generate fmt vet manifests
go test ./... -coverprofile cover.out
go test $(GO_TEST_ARGS) ./... -coverprofile cover.out
go test -fuzz=Fuzz -fuzztime=10s -run=Fuzz* ./controllers
test-with-deps: kube-apiserver etcd kubectl
# See https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest#pkg-constants
TEST_ASSET_KUBE_APISERVER=$(KUBE_APISERVER_BIN) \
TEST_ASSET_ETCD=$(ETCD_BIN) \
TEST_ASSET_KUBECTL=$(KUBECTL_BIN) \
make test
# Build manager binary
manager: generate fmt vet
@@ -39,8 +87,16 @@ deploy: manifests
kustomize build config/default | kubectl apply -f -
# Generate manifests e.g. CRD, RBAC etc.
manifests: controller-gen
manifests: manifests-gen-crds chart-crds
manifests-gen-crds: controller-gen yq
$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
for YAMLFILE in config/crd/bases/actions*.yaml; do \
$(YQ) write --inplace "$$YAMLFILE" spec.preserveUnknownFields false; \
done
chart-crds:
cp config/crd/bases/*.yaml charts/actions-runner-controller/crds/
# Run go fmt against code
fmt:
@@ -54,13 +110,23 @@ vet:
generate: controller-gen
$(CONTROLLER_GEN) object:headerFile=./hack/boilerplate.go.txt paths="./..."
# Build the docker image
docker-build: test
docker build . -t ${NAME}:${VERSION}
docker-buildx:
export DOCKER_CLI_EXPERIMENTAL=enabled ;\
export DOCKER_BUILDKIT=1
@if ! docker buildx ls | grep -q container-builder; then\
docker buildx create --platform ${PLATFORMS} --name container-builder --use;\
fi
docker buildx build --platform ${PLATFORMS} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-t "${NAME}:${VERSION}" \
-f Dockerfile \
. ${PUSH_ARG}
# Push the docker image
docker-push:
docker push ${NAME}:${VERSION}
docker push ${RUNNER_NAME}:${RUNNER_TAG}
# Generate the release manifest file
release: manifests
@@ -68,23 +134,190 @@ release: manifests
mkdir -p release
kustomize build config/default > release/actions-runner-controller.yaml
.PHONY: release/clean
release/clean:
rm -rf release
.PHONY: acceptance
acceptance: release/clean acceptance/pull docker-build release
ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/run
ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/run
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/run
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/run
acceptance/run: acceptance/kind acceptance/load acceptance/setup acceptance/deploy acceptance/tests acceptance/teardown
acceptance/kind:
kind create cluster --name ${CLUSTER} --config acceptance/kind.yaml
# Set TMPDIR to somewhere under $HOME when you use docker installed with Ubuntu snap
# Otherwise `load docker-image` fail while running `docker save`.
# See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap
acceptance/load:
kind load docker-image ${NAME}:${VERSION} --name ${CLUSTER}
kind load docker-image quay.io/brancz/kube-rbac-proxy:$(KUBE_RBAC_PROXY_VERSION) --name ${CLUSTER}
kind load docker-image ${RUNNER_NAME}:${RUNNER_TAG} --name ${CLUSTER}
kind load docker-image docker:dind --name ${CLUSTER}
kind load docker-image quay.io/jetstack/cert-manager-controller:$(CERT_MANAGER_VERSION) --name ${CLUSTER}
kind load docker-image quay.io/jetstack/cert-manager-cainjector:$(CERT_MANAGER_VERSION) --name ${CLUSTER}
kind load docker-image quay.io/jetstack/cert-manager-webhook:$(CERT_MANAGER_VERSION) --name ${CLUSTER}
kubectl cluster-info --context ${KUBECONTEXT}
# Pull the docker images for acceptance
acceptance/pull:
docker pull quay.io/brancz/kube-rbac-proxy:$(KUBE_RBAC_PROXY_VERSION)
docker pull docker:dind
docker pull quay.io/jetstack/cert-manager-controller:$(CERT_MANAGER_VERSION)
docker pull quay.io/jetstack/cert-manager-cainjector:$(CERT_MANAGER_VERSION)
docker pull quay.io/jetstack/cert-manager-webhook:$(CERT_MANAGER_VERSION)
acceptance/setup:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/$(CERT_MANAGER_VERSION)/cert-manager.yaml #kubectl create namespace actions-runner-system
kubectl -n cert-manager wait deploy/cert-manager-cainjector --for condition=available --timeout 90s
kubectl -n cert-manager wait deploy/cert-manager-webhook --for condition=available --timeout 60s
kubectl -n cert-manager wait deploy/cert-manager --for condition=available --timeout 60s
kubectl create namespace actions-runner-system || true
# Wait for some time (ad hoc) until cert-manager's admission webhook gets ready
sleep 5
acceptance/teardown:
kind delete cluster --name ${CLUSTER}
acceptance/deploy:
NAME=${NAME} DOCKER_USER=${DOCKER_USER} VERSION=${VERSION} RUNNER_NAME=${RUNNER_NAME} RUNNER_TAG=${RUNNER_TAG} TEST_REPO=${TEST_REPO} \
TEST_ORG=${TEST_ORG} TEST_ORG_REPO=${TEST_ORG_REPO} SYNC_PERIOD=${SYNC_PERIOD} \
USE_RUNNERSET=${USE_RUNNERSET} \
TEST_EPHEMERAL=${TEST_EPHEMERAL} \
acceptance/deploy.sh
acceptance/tests:
acceptance/checks.sh
acceptance/runner/entrypoint:
cd test/entrypoint/ && bash test.sh
# We use -count=1 instead of `go clean -testcache`
# See https://terratest.gruntwork.io/docs/testing-best-practices/avoid-test-caching/
.PHONY: e2e
e2e:
go test -count=1 -v -timeout 600s -run '^TestE2E$$' ./test/e2e
# Upload release file to GitHub.
github-release: release
ghr ${VERSION} release/
# find or download controller-gen
# download controller-gen if necessary
# Find or download controller-gen
#
# Note that controller-gen newer than 0.4.1 is needed for https://github.com/kubernetes-sigs/controller-tools/issues/444#issuecomment-680168439
# Otherwise we get errors like the below:
# Error: failed to install CRD crds/actions.summerwind.dev_runnersets.yaml: CustomResourceDefinition.apiextensions.k8s.io "runnersets.actions.summerwind.dev" is invalid: [spec.validation.openAPIV3Schema.properties[spec].properties[template].properties[spec].properties[containers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property, spec.validation.openAPIV3Schema.properties[spec].properties[template].properties[spec].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property]
#
# Note that controller-gen newer than 0.6.0 is needed due to https://github.com/kubernetes-sigs/controller-tools/issues/448
# Otherwise ObjectMeta embedded in Spec results in empty on the storage.
controller-gen:
ifeq (, $(shell which controller-gen))
ifeq (, $(wildcard $(GOBIN)/controller-gen))
@{ \
set -e ;\
CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
cd $$CONTROLLER_GEN_TMP_DIR ;\
go mod init tmp ;\
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.2.4 ;\
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0 ;\
rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
}
endif
CONTROLLER_GEN=$(GOBIN)/controller-gen
else
CONTROLLER_GEN=$(shell which controller-gen)
endif
# find or download yq
# download yq if necessary
# Always use the Go-installed version to get consistent line wraps etc.
yq:
ifeq (, $(wildcard $(GOBIN)/yq))
echo "Downloading yq"
@{ \
set -e ;\
YQ_TMP_DIR=$$(mktemp -d) ;\
cd $$YQ_TMP_DIR ;\
go mod init tmp ;\
go install github.com/mikefarah/yq/v3@3.4.0 ;\
rm -rf $$YQ_TMP_DIR ;\
}
endif
YQ=$(GOBIN)/yq
OS_NAME := $(shell uname -s | tr A-Z a-z)
# find or download etcd
etcd:
ifeq (, $(shell which etcd))
ifeq (, $(wildcard $(TEST_ASSETS)/etcd))
@{ \
set -xe ;\
INSTALL_TMP_DIR=$$(mktemp -d) ;\
cd $$INSTALL_TMP_DIR ;\
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/etcd $(TEST_ASSETS)/etcd ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kube-apiserver $(TEST_ASSETS)/kube-apiserver ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kubectl $(TEST_ASSETS)/kubectl ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
ETCD_BIN=$(TEST_ASSETS)/etcd
else
ETCD_BIN=$(TEST_ASSETS)/etcd
endif
else
ETCD_BIN=$(shell which etcd)
endif
# find or download kube-apiserver
kube-apiserver:
ifeq (, $(shell which kube-apiserver))
ifeq (, $(wildcard $(TEST_ASSETS)/kube-apiserver))
@{ \
set -xe ;\
INSTALL_TMP_DIR=$$(mktemp -d) ;\
cd $$INSTALL_TMP_DIR ;\
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/etcd $(TEST_ASSETS)/etcd ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kube-apiserver $(TEST_ASSETS)/kube-apiserver ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kubectl $(TEST_ASSETS)/kubectl ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
KUBE_APISERVER_BIN=$(TEST_ASSETS)/kube-apiserver
else
KUBE_APISERVER_BIN=$(TEST_ASSETS)/kube-apiserver
endif
else
KUBE_APISERVER_BIN=$(shell which kube-apiserver)
endif
# find or download kubectl
kubectl:
ifeq (, $(shell which kubectl))
ifeq (, $(wildcard $(TEST_ASSETS)/kubectl))
@{ \
set -xe ;\
INSTALL_TMP_DIR=$$(mktemp -d) ;\
cd $$INSTALL_TMP_DIR ;\
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/etcd $(TEST_ASSETS)/etcd ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kube-apiserver $(TEST_ASSETS)/kube-apiserver ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kubectl $(TEST_ASSETS)/kubectl ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
KUBECTL_BIN=$(TEST_ASSETS)/kubectl
else
KUBECTL_BIN=$(TEST_ASSETS)/kubectl
endif
else
KUBECTL_BIN=$(shell which kubectl)
endif


@@ -1,7 +1,13 @@
domain: summerwind.dev
repo: github.com/summerwind/actions-runner-controller
repo: github.com/actions-runner-controller/actions-runner-controller
resources:
- group: actions
kind: Runner
version: v1alpha1
- group: actions
kind: RunnerReplicaSet
version: v1alpha1
- group: actions
kind: RunnerDeployment
version: v1alpha1
version: "2"

1730
README.md

File diff suppressed because it is too large

22
SECURITY.md Normal file

@@ -0,0 +1,22 @@
# Security Policy
## Sponsoring the project
This project is maintained by a small team of two and therefore lacks the resources to provide security fixes in a timely manner.
If you have an important business that relies on this project, please consider sponsoring the project so that the maintainer(s) can commit to providing such a service.
Please refer to https://github.com/sponsors/actions-runner-controller for available tiers.
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| 0.23.0 | :white_check_mark: |
| < 0.23.0| :x: |
## Reporting a Vulnerability
To report a security issue, please email ykuoka+arcsecurity(at)gmail.com with a description of the issue, the steps you took to create the issue, affected versions, and, if known, mitigations for the issue.
A maintainer will try to respond within 5 working days. If the issue is confirmed as a vulnerability, a Security Advisory will be opened. This project tries to follow a 90 day disclosure timeline.

272
TROUBLESHOOTING.md Normal file

@@ -0,0 +1,272 @@
# Troubleshooting
* [Tools](#tools)
* [Installation](#installation)
* [InternalError when calling webhook: context deadline exceeded](#internalerror-when-calling-webhook-context-deadline-exceeded)
* [Invalid header field value](#invalid-header-field-value)
* [Helm chart install failure: certificate signed by unknown authority](#helm-chart-install-failure-certificate-signed-by-unknown-authority)
* [Operations](#operations)
* [Stuck runner kind or backing pod](#stuck-runner-kind-or-backing-pod)
* [Delay in jobs being allocated to runners](#delay-in-jobs-being-allocated-to-runners)
* [Runner coming up before network available](#runner-coming-up-before-network-available)
* [Outgoing network action hangs indefinitely](#outgoing-network-action-hangs-indefinitely)
* [Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns](#unable-to-scale-to-zero-with-totalnumberofqueuedandinprogressworkflowruns)
## Tools
A list of tools which are helpful for troubleshooting
* https://github.com/rewanthtammana/kubectl-fields Kubernetes resources hierarchy parsing tool
* https://github.com/stern/stern Multi pod and container log tailing for Kubernetes
## Installation
Troubleshooting runbooks that relate to ARC installation problems
### InternalError when calling webhook: context deadline exceeded
**Problem**
This issue can come up for various reasons, such as leftovers from previous installations, or the Kubernetes control plane being unable to reach the cluster IP of the Service that fronts ARC's admission webhook server.
```
Internal error occurred: failed calling webhook "mutate.runnerdeployment.actions.summerwind.dev":
Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment?timeout=10s": context deadline exceeded
```
**Solution**
First, try the common solution of checking for webhook leftovers from previous installations:
1. ```bash
kubectl get validatingwebhookconfiguration -A
kubectl get mutatingwebhookconfiguration -A
```
2. If you see any webhooks related to actions-runner-controller, delete them:
```bash
kubectl delete mutatingwebhookconfiguration actions-runner-controller-mutating-webhook-configuration
kubectl delete validatingwebhookconfiguration actions-runner-controller-validating-webhook-configuration
```
If that didn't work, your K8s control plane is probably unable to reach the cluster IP of the Service associated with the admission webhook server. Common causes are:
1. You're running the apiserver as a binary on the host and you didn't make Service cluster IPs routable from the host network.
2. You're running the apiserver as a pod, but your pod network (i.e. the CNI plugin installation and configuration) is broken, so pods on the control-plane nodes (like kube-apiserver) can't reach ARC's admission webhook server pod(s), which usually run on data-plane nodes.
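A quick sanity check for both cases is to confirm that the webhook Service exists and has healthy Endpoints backed by a running controller pod. A minimal sketch (the Service name, namespace, and label below match the default install used elsewhere in this document; adjust them if yours differ):
```bash
# Check the webhook Service and its Endpoints in the controller's namespace
kubectl -n actions-runner-system get svc actions-runner-controller-webhook
kubectl -n actions-runner-system get endpoints actions-runner-controller-webhook

# If the Endpoints list is empty, inspect the controller pods backing the webhook
kubectl -n actions-runner-system get pods -l app.kubernetes.io/name=actions-runner-controller
```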
Another reason could be GKE's firewall settings: on a private GKE cluster the control plane is blocked from reaching the webhook port by default, so you may run into similar errors when trying to deploy runners.
To fix this, you may either:
1. Configure the webhook to use another port, such as 443 or 10250, [each of
which allows traffic by default](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules).
```sh
# With helm, you'd set `webhookPort` to the port number of your choice
# See https://github.com/actions-runner-controller/actions-runner-controller/pull/1410/files for more information
helm upgrade --install --namespace actions-runner-system --create-namespace \
--wait actions-runner-controller actions-runner-controller/actions-runner-controller \
--set webhookPort=10250
```
2. Set up a firewall rule to allow the master node to connect to the default
webhook port. The exact way to do this may vary, but the following script
should point you in the right direction:
```sh
# 1) Retrieve the network tag automatically given to the worker nodes
# NOTE: this only works if you have only one cluster in your GCP project. You will have to manually inspect the result of this command to find the tag for the cluster you want to target
WORKER_NODES_TAG=$(gcloud compute instances list --format='text(tags.items[0])' --filter='metadata.kubelet-config:*' | grep tags | awk '{print $2}' | sort | uniq)
# 2) Take note of the VPC network in which you deployed your cluster
# NOTE this only works if you have only one network in which you deploy your clusters
NETWORK=$(gcloud compute instances list --format='text(networkInterfaces[0].network)' --filter='metadata.kubelet-config:*' | grep networks | awk -F'/' '{print $NF}' | sort | uniq)
# 3) Get the master source ip block
SOURCE=$(gcloud container clusters describe <cluster-name> --region <region> | grep masterIpv4CidrBlock| cut -d ':' -f 2 | tr -d ' ')
gcloud compute firewall-rules create k8s-cert-manager --source-ranges $SOURCE --target-tags $WORKER_NODES_TAG --allow TCP:9443 --network $NETWORK
```
### Invalid header field value
**Problem**
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error
{
"controller": "runner",
"request": "actions-runner-system/runner-deployment-dk7q8-dk5c9",
"error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"
}
```
**Solution**
Your base64-encoded PAT has a trailing newline; the secret needs to be created without a `\n` appended. Either (see the example after this list):
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs using the shell and documented flags
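For example, recreating the controller secret with a clean token might look like this (a minimal sketch; it assumes the default secret name `controller-manager` and namespace `actions-runner-system`):
```bash
# --from-literal stores the value as-is, so no trailing newline sneaks into the token
kubectl -n actions-runner-system delete secret controller-manager
kubectl -n actions-runner-system create secret generic controller-manager \
  --from-literal=github_token="${GITHUB_TOKEN}"
```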
### Helm chart install failure: certificate signed by unknown authority
**Problem**
```
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
```
This happens while `helm` is creating one of the resources defined in the ARC chart; the cause is that cert-manager's webhook is not working correctly due to a missing or invalid CA certificate.
If you tail the logs of `cert-manager-cainjector`, you'll likely see it failing with an error like:
```
$ kubectl -n cert-manager logs cert-manager-cainjector-7cdbb9c945-g6bt4
I0703 03:31:55.159339 1 start.go:91] "starting" version="v1.1.1" revision="3ac7418070e22c87fae4b22603a6b952f797ae96"
I0703 03:31:55.615061 1 leaderelection.go:243] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election...
I0703 03:32:10.738039 1 leaderelection.go:253] successfully acquired lease kube-system/cert-manager-cainjector-leader-election
I0703 03:32:10.739941 1 recorder.go:52] cert-manager/controller-runtime/manager/events "msg"="Normal" "message"="cert-manager-cainjector-7cdbb9c945-g6bt4_88e4bc70-eded-4343-a6fb-0ddd6434eb55 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"942a021e-364c-461a-978c-f54a95723cdc","apiVersion":"v1","resourceVersion":"1576"} "reason"="LeaderElection"
E0703 03:32:11.192128 1 start.go:119] cert-manager/ca-injector "msg"="manager goroutine exited" "error"=null
I0703 03:32:12.339197 1 request.go:645] Throttling request took 1.047437675s, request: GET:https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
E0703 03:32:13.143790 1 start.go:151] cert-manager/ca-injector "msg"="Error registering certificate based controllers. Retrying after 5 seconds." "error"="no matches for kind \"MutatingWebhookConfiguration\" in version \"admissionregistration.k8s.io/v1beta1\""
Error: error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
```
**Solution**
Your cluster runs Kubernetes 1.22 or greater, which no longer serves the legacy `admissionregistration.k8s.io/v1beta1` API, and your `cert-manager` is not up-to-date, hence it is still trying to use the legacy Kubernetes API.
In many cases, it's not an option to downgrade Kubernetes. So, just upgrade `cert-manager` to a more recent version that supports the specific Kubernetes version you're using, for example as shown below.
See https://cert-manager.io/docs/installation/supported-releases/ for the list of available cert-manager versions.
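For example, an upgrade with Helm might look like the following (a minimal sketch using the official jetstack chart; the version shown is only an example, pick one that supports your Kubernetes version per the link above):
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
# installCRDs=true makes Helm manage (and upgrade) the cert-manager CRDs as well
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true \
  --version v1.8.2
```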
## Operations
Troubleshooting runbooks that relate to ARC operational problems
### Stuck runner kind or backing pod
**Problem**
Sometimes either the runner kind (`kubectl get runners`) or its underlying pod can get stuck in a terminating state for various reasons. You can usually get it unstuck by removing its finalizer.
**Solution**
Remove the finalizer from the relevant runner kind or pod:
```
# Get all kind runners and remove the finalizer
$ kubectl get runners --no-headers | awk {'print $1'} | xargs kubectl patch runner --type merge -p '{"metadata":{"finalizers":null}}'
# Get all pods that are stuck terminating and remove the finalizer
$ kubectl get pods | grep Terminating | awk {'print $1'} | xargs kubectl patch pod -p '{"metadata":{"finalizers":null}}'
```
_Note the code assumes you have already selected the namespace your runners are in and that they
are in a namespace not shared with anything else_
### Delay in jobs being allocated to runners
**Problem**
ARC isn't involved in jobs actually getting allocated to a runner; ARC is responsible for orchestrating runners and the runner lifecycle. Why some people see large delays in job allocation is not clear, however it has been reported in https://github.com/actions-runner-controller/actions-runner-controller/issues/1387#issuecomment-1122593984 that this is somehow caused by the runner's self-update process.
**Solution**
Disable the self-update process in your runner manifests
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeployment-with-sleep
spec:
template:
spec:
...
env:
- name: DISABLE_RUNNER_UPDATE
value: "true"
```
### Runner coming up before network available
**Problem**
If you're running your action runners on a service mesh like Istio, you might
have problems with runner configuration accompanied by logs like:
```
....
runner Starting Runner listener with startup type: service
runner Started listener process
runner An error occurred: Not configured
runner Runner listener exited with error code 2
runner Runner listener exit with retryable error, re-launch runner in 5 seconds.
....
```
This is because the `istio-proxy` has not completed configuring itself when the
configuration script tries to communicate with the network.
More broadly, there are many other circumstances where the runner pod coming up first can cause issues.
**Solution**<br />
> Added originally to help users with older istio instances.
> Newer Istio instances can use Istio's `holdApplicationUntilProxyStarts` attribute ([istio/istio#11130](https://github.com/istio/istio/issues/11130)) to avoid having to delay starting up the runner.
> Please read the discussion in [#592](https://github.com/actions-runner-controller/actions-runner-controller/pull/592) for more information.
You can add a delay to the runner's entrypoint script by setting the `STARTUP_DELAY_IN_SECONDS` environment variable for the runner pod. This causes the script to sleep for that many seconds before starting the runner, and it works with any runner kind.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeployment-with-sleep
spec:
template:
spec:
...
env:
- name: STARTUP_DELAY_IN_SECONDS
value: "5"
```
### Outgoing network action hangs indefinitely
**Problem**
Some outgoing network actions hang indefinitely. This could be because your cluster does not give Docker the standard MTU of 1500. You can check this by running `ip link` in a pod that encounters the problem and reading the outgoing interface's MTU value (see the example below). If it is smaller than 1500, try the following.
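For example (a minimal sketch; `runner` is the container name used by ARC's runner pods, and `eth0` is an assumption about which interface carries the outgoing traffic):
```bash
# Print the interface and its MTU inside a runner pod; compare against 1500
kubectl exec -it <runner-pod-name> -c runner -- ip link show eth0
```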
**Solution**
Add a `dockerMTU` key in your runner's spec with the value you read on the outgoing interface. For instance:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: github-runner
namespace: github-system
spec:
replicas: 6
template:
spec:
dockerMTU: 1400
repository: $username/$repo
env: []
```
There may be more places you need to tweak for MTU.
Please consult issues like #651 for more information.
### Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns
**Problem**
HRA doesn't scale the RunnerDeployment to zero, even though you configured HRA correctly with the pull-based scaling metric `TotalNumberOfQueuedAndInProgressWorkflowRuns` and `minReplicas: 0`.
**Solution**
You very likely have some dangling workflow jobs stuck in `queued` or `in_progress` as seen in [#1057](https://github.com/actions-runner-controller/actions-runner-controller/issues/1057#issuecomment-1133439061).
Manually call [the "list workflow runs" API](https://docs.github.com/en/rest/actions/workflow-runs#list-workflow-runs-for-a-repository), and [remove the dangling workflow run(s)](https://docs.github.com/en/rest/actions/workflow-runs#delete-a-workflow-run), for example as shown below.
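For example, with the `gh` CLI (a minimal sketch; `OWNER/REPO` and the run id are placeholders):
```bash
# List runs that are still queued or in progress
gh api 'repos/OWNER/REPO/actions/runs?status=queued' --jq '.workflow_runs[].id'
gh api 'repos/OWNER/REPO/actions/runs?status=in_progress' --jq '.workflow_runs[].id'

# Cancel an in-progress run first if needed, then delete the dangling run
gh api -X POST repos/OWNER/REPO/actions/runs/<run_id>/cancel
gh api -X DELETE repos/OWNER/REPO/actions/runs/<run_id>
```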

97
acceptance/argotunnel.sh Executable file

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
# See https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel/
kubectl create ns tunnel || :
kubectl -n tunnel delete secret tunnel-credentials || :
kubectl -n tunnel create secret generic tunnel-credentials \
--from-file=credentials.json=$HOME/.cloudflared/${TUNNEL_ID}.json || :
cat <<MANIFEST | kubectl -n tunnel ${OP} -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloudflared
spec:
selector:
matchLabels:
app: cloudflared
replicas: 2 # You could also consider elastic scaling for this deployment
template:
metadata:
labels:
app: cloudflared
spec:
containers:
- name: cloudflared
image: cloudflare/cloudflared:latest
args:
- tunnel
# Points cloudflared to the config file, which configures what
# cloudflared will actually do. This file is created by a ConfigMap
# below.
- --config
- /etc/cloudflared/config/config.yaml
- run
livenessProbe:
httpGet:
# Cloudflared has a /ready endpoint which returns 200 if and only if
# it has an active connection to the edge.
path: /ready
port: 2000
failureThreshold: 1
initialDelaySeconds: 10
periodSeconds: 10
volumeMounts:
- name: config
mountPath: /etc/cloudflared/config
readOnly: true
# Each tunnel has an associated "credentials file" which authorizes machines
# to run the tunnel. cloudflared will read this file from its local filesystem,
# and it'll be stored in a k8s secret.
- name: creds
mountPath: /etc/cloudflared/creds
readOnly: true
volumes:
- name: creds
secret:
secretName: tunnel-credentials
# Create a config.yaml file from the ConfigMap below.
- name: config
configMap:
name: cloudflared
items:
- key: config.yaml
path: config.yaml
---
# This ConfigMap is just a way to define the cloudflared config.yaml file in k8s.
# It's useful to define it in k8s, rather than as a stand-alone .yaml file, because
# this lets you use various k8s templating solutions (e.g. Helm charts) to
# parameterize your config, instead of just using string literals.
apiVersion: v1
kind: ConfigMap
metadata:
name: cloudflared
data:
config.yaml: |
# Name of the tunnel you want to run
tunnel: ${TUNNEL_NAME}
credentials-file: /etc/cloudflared/creds/credentials.json
# Serves the metrics server under /metrics and the readiness server under /ready
metrics: 0.0.0.0:2000
# Autoupdates applied in a k8s pod will be lost when the pod is removed or restarted, so
# autoupdate doesn't make sense in Kubernetes. However, outside of Kubernetes, we strongly
# recommend using autoupdate.
no-autoupdate: true
ingress:
# The first rule proxies traffic to the httpbin sample Service defined in app.yaml
- hostname: ${TUNNEL_HOSTNAME}
service: http://actions-runner-controller-github-webhook-server.actions-runner-system:80
# This rule matches any traffic which didn't match a previous rule, and responds with HTTP 404.
- service: http_status:404
MANIFEST
kubectl -n tunnel delete po -l app=cloudflared || :

84
acceptance/checks.sh Executable file

@@ -0,0 +1,84 @@
#!/usr/bin/env bash
set +e
repo_runnerdeployment_passed="skipped"
repo_runnerset_passed="skipped"
echo "Checking if RunnerDeployment repo test is set"
if [ "${TEST_REPO}" ] && [ ! "${USE_RUNNERSET}" ]; then
runner_name=
count=0
while [ $count -le 30 ]; do
echo "Finding Runner ..."
runner_name=$(kubectl get runner --output=jsonpath="{.items[*].metadata.name}")
if [ "${runner_name}" ]; then
while [ $count -le 30 ]; do
runner_pod_name=
echo "Found Runner \""${runner_name}"\""
echo "Finding underlying pod ..."
runner_pod_name=$(kubectl get pod --output=jsonpath="{.items[*].metadata.name}" | grep ${runner_name})
if [ "${runner_pod_name}" ]; then
echo "Found underlying pod \""${runner_pod_name}"\""
echo "Waiting for pod \""${runner_pod_name}"\" to become ready..."
kubectl wait pod/${runner_pod_name} --for condition=ready --timeout 270s
break 2
fi
sleep 1
let "count=count+1"
done
fi
sleep 1
let "count=count+1"
done
if [ $count -ge 30 ]; then
repo_runnerdeployment_passed=false
else
repo_runnerdeployment_passed=true
fi
echo "Checking if RunnerSet repo test is set"
elif [ "${TEST_REPO}" ] && [ "${USE_RUNNERSET}" ]; then
runnerset_name=
count=0
while [ $count -le 30 ]; do
echo "Finding RunnerSet ..."
runnerset_name=$(kubectl get runnerset --output=jsonpath="{.items[*].metadata.name}")
if [ "${runnerset_name}" ]; then
while [ $count -le 30 ]; do
runnerset_pod_name=
echo "Found RunnerSet \""${runnerset_name}"\""
echo "Finding underlying pod ..."
runnerset_pod_name=$(kubectl get pod --output=jsonpath="{.items[*].metadata.name}" | grep ${runnerset_name})
echo "BEFORE IF"
if [ "${runnerset_pod_name}" ]; then
echo "AFTER IF"
echo "Found underlying pod \""${runnerset_pod_name}"\""
echo "Waiting for pod \""${runnerset_pod_name}"\" to become ready..."
kubectl wait pod/${runnerset_pod_name} --for condition=ready --timeout 270s
break 2
fi
sleep 1
let "count=count+1"
done
fi
sleep 1
let "count=count+1"
done
if [ $count -ge 30 ]; then
repo_runnerset_passed=false
else
repo_runnerset_passed=true
fi
fi
if { [ "${repo_runnerset_passed}" == true ] || [ "${repo_runnerset_passed}" == "skipped" ]; } && \
   { [ "${repo_runnerdeployment_passed}" == true ] || [ "${repo_runnerdeployment_passed}" == "skipped" ]; }; then
echo "INFO : All tests passed or skipped"
echo "RunnerSet Repo Test Status : ${repo_runnerset_passed}"
echo "RunnerDeployment Repo Test Status : ${repo_runnerdeployment_passed}"
else
echo "ERROR : Some tests failed"
echo "RunnerSet Repo Test Status : ${repo_runnerset_passed}"
echo "RunnerDeployment Repo Test Status : ${repo_runnerdeployment_passed}"
exit 1
fi

81
acceptance/deploy.sh Executable file

@@ -0,0 +1,81 @@
#!/usr/bin/env bash
set -e
tpe=${ACCEPTANCE_TEST_SECRET_TYPE}
VALUES_FILE=${VALUES_FILE:-$(dirname $0)/values.yaml}
kubectl delete secret -n actions-runner-system controller-manager || :
if [ "${tpe}" == "token" ]; then
if ! kubectl get secret controller-manager -n actions-runner-system >/dev/null; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_token=${GITHUB_TOKEN:?GITHUB_TOKEN must not be empty}
fi
elif [ "${tpe}" == "app" ]; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_app_id=${APP_ID:?must not be empty} \
--from-literal=github_app_installation_id=${APP_INSTALLATION_ID:?must not be empty} \
--from-file=github_app_private_key=${APP_PRIVATE_KEY_FILE:?must not be empty}
else
echo "ACCEPTANCE_TEST_SECRET_TYPE must be set to either \"token\" or \"app\"" 1>&2
exit 1
fi
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ]; then
kubectl -n actions-runner-system delete secret \
github-webhook-server || :
kubectl -n actions-runner-system create secret generic \
github-webhook-server \
--from-literal=github_token=${WEBHOOK_GITHUB_TOKEN:?WEBHOOK_GITHUB_TOKEN must not be empty}
else
echo 'Skipped deploying secret "github-webhook-server". Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
fi
tool=${ACCEPTANCE_TEST_DEPLOYMENT_TOOL}
TEST_ID=${TEST_ID:-default}
if [ "${tool}" == "helm" ]; then
set -v
helm upgrade --install actions-runner-controller \
charts/actions-runner-controller \
-n actions-runner-system \
--create-namespace \
--set syncPeriod=${SYNC_PERIOD} \
--set authSecret.create=false \
--set image.repository=${NAME} \
--set image.tag=${VERSION} \
--set podAnnotations.test-id=${TEST_ID} \
--set githubWebhookServer.podAnnotations.test-id=${TEST_ID} \
--set imagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
--set image.actionsRunnerImagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
--set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
-f ${VALUES_FILE}
set +v
# To prevent `CustomResourceDefinition.apiextensions.k8s.io "runners.actions.summerwind.dev" is invalid: metadata.annotations: Too long: must have at most 262144 bytes`
# errors
kubectl create -f charts/actions-runner-controller/crds || kubectl replace -f charts/actions-runner-controller/crds
# This wait fails due to timeout when it's already in crashloopback and this update doesn't change the image tag.
# That's why we add `|| :`. With that we prevent stopping the script in case of timeout and
# proceed to delete (possibly in crashloopback and/or running with outdated image) pods so that they are recreated by K8s.
kubectl -n actions-runner-system wait deploy/actions-runner-controller --for condition=available --timeout 60s || :
else
kubectl apply \
-n actions-runner-system \
-f release/actions-runner-controller.yaml
kubectl -n actions-runner-system wait deploy/controller-manager --for condition=available --timeout 120s || :
fi
# Restart all ARC pods
kubectl -n actions-runner-system delete po -l app.kubernetes.io/name=actions-runner-controller
echo Waiting for all ARC pods to be up and running after restart
kubectl -n actions-runner-system wait deploy/actions-runner-controller --for condition=available --timeout 120s
# Adhocly wait for some time until actions-runner-controller's admission webhook gets ready
sleep 20
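# Example invocation (hypothetical values for illustration; adjust for your environment):
#   ACCEPTANCE_TEST_SECRET_TYPE=token GITHUB_TOKEN=<pat> \
#   ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm NAME=<controller-image-repo> VERSION=<controller-image-tag> \
#   SYNC_PERIOD=10m TEST_ID=default IMAGE_PULL_SECRET=<pull-secret-name> \
#   bash acceptance/deploy.sh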

58
acceptance/deploy_runners.sh Executable file

@@ -0,0 +1,58 @@
#!/usr/bin/env bash
set -e
OP=${OP:-apply}
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
if [ -n "${TEST_REPO}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerset envsubst | kubectl ${OP} -f -
else
echo "Running ${OP} runnerdeployment and hra. Set USE_RUNNERSET if you want to deploy runnerset instead."
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} for runnerdeployment and hra. Set TEST_REPO to "yourorg/yourrepo" to deploy."
fi
if [ -n "${TEST_ORG}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ORG_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ORG_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on organizational runnerdeployment. Set TEST_ORG to ${OP}."
fi
if [ -n "${TEST_ENTERPRISE}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ENTERPRISE_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE to ${OP}."
fi

10
acceptance/kind.yaml Normal file

@@ -0,0 +1,10 @@
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 31000
hostPort: 31000
listenAddress: "0.0.0.0"
protocol: tcp
#- role: worker


@@ -0,0 +1,36 @@
name: EKS Integration Tests
on:
workflow_dispatch:
env:
IRSA_ROLE_ARN:
ASSUME_ROLE_ARN:
AWS_REGION:
jobs:
assume-role-in-runner-test:
runs-on: ['self-hosted', 'Linux']
steps:
- name: Test aws-actions/configure-aws-credentials Action
uses: aws-actions/configure-aws-credentials@v1
with:
aws-region: ${{ env.AWS_REGION }}
role-to-assume: ${{ env.ASSUME_ROLE_ARN }}
role-duration-seconds: 900
assume-role-in-container-test:
runs-on: ['self-hosted', 'Linux']
container:
image: amazon/aws-cli
env:
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
AWS_ROLE_ARN: ${{ env.IRSA_ROLE_ARN }}
volumes:
- /var/run/secrets/eks.amazonaws.com/serviceaccount/token:/var/run/secrets/eks.amazonaws.com/serviceaccount/token
steps:
- name: Test aws-actions/configure-aws-credentials Action in container
uses: aws-actions/configure-aws-credentials@v1
with:
aws-region: ${{ env.AWS_REGION }}
role-to-assume: ${{ env.ASSUME_ROLE_ARN }}
role-duration-seconds: 900


@@ -0,0 +1,83 @@
name: Runner Integration Tests
on:
workflow_dispatch:
env:
ImageOS: ubuntu18 # Used by ruby/setup-ruby action | Update me for the runner OS version you are testing against
jobs:
run-step-in-container-test:
runs-on: ['self-hosted', 'Linux']
container:
image: alpine
steps:
- name: Test we are working in the container
run: |
if [[ $(sed -n '2p' < /etc/os-release | cut -d "=" -f2) != "alpine" ]]; then
echo "::error ::Failed OS detection test, could not match /etc/os-release with alpine. Are we really running in the container?"
echo "/etc/os-release below:"
cat /etc/os-release
exit 1
fi
setup-python-test:
runs-on: ['self-hosted', 'Linux']
steps:
- name: Print native Python environment
run: |
which python
python --version
- uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Test actions/setup-python works
run: |
VERSION=$(python --version 2>&1 | cut -d ' ' -f2 | cut -d '.' -f1-2)
if [[ $VERSION != '3.9' ]]; then
echo "Python version detected : $(python --version 2>&1)"
echo "::error ::Detected python failed setup version test, could not match version with version specified in the setup action"
exit 1
else
echo "Python version detected : $(python --version 2>&1)"
fi
setup-node-test:
runs-on: ['self-hosted', 'Linux']
steps:
- uses: actions/setup-node@v2
with:
node-version: '12'
- name: Test actions/setup-node works
run: |
VERSION=$(node --version | cut -c 2- | cut -d '.' -f1)
if [[ $VERSION != '12' ]]; then
echo "Node version detected : $(node --version 2>&1)"
echo "::error ::Detected node failed setup version test, could not match version with version specified in the setup action"
exit 1
else
echo "Node version detected : $(node --version 2>&1)"
fi
setup-ruby-test:
runs-on: ['self-hosted', 'Linux']
steps:
- uses: ruby/setup-ruby@v1
with:
ruby-version: 3.0
bundler-cache: true
- name: Test ruby/setup-ruby works
run: |
VERSION=$(ruby --version | cut -d ' ' -f2 | cut -d '.' -f1-2)
if [[ $VERSION != '3.0' ]]; then
echo "Ruby version detected : $(ruby --version 2>&1)"
echo "::error ::Detected ruby failed setup version test, could not match version with version specified in the setup action"
exit 1
else
echo "Ruby version detected : $(ruby --version 2>&1)"
fi
python-shell-test:
runs-on: ['self-hosted', 'Linux']
steps:
- name: Test Python shell works
run: |
import os
print(os.environ['PATH'])
shell: python


@@ -0,0 +1,61 @@
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: ${NAME}
spec:
# replicas: 1
template:
spec:
enterprise: ${TEST_ENTERPRISE}
group: ${TEST_GROUP}
organization: ${TEST_ORG}
repository: ${TEST_REPO}
#
# Custom runner image
#
image: ${RUNNER_NAME}:${RUNNER_TAG}
imagePullPolicy: IfNotPresent
ephemeral: ${TEST_EPHEMERAL}
#
# dockerd within runner container
#
## Replace `mumoshu/actions-runner-dind:dev` with your dind image
#dockerdWithinRunnerContainer: true
#image: mumoshu/actions-runner-dind:dev
dockerdWithinRunnerContainer: ${RUNNER_DOCKERD_WITHIN_RUNNER_CONTAINER}
#
# Set the MTU used by dockerd-managed network interfaces (including docker-build-ubuntu)
#
#dockerMTU: 1450
#Runner group
# labels:
# - "mylabel 1"
# - "mylabel 2"
labels:
- "${RUNNER_LABEL}"
#
# Non-standard working directory
#
# workDir: "/"
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: ${NAME}
spec:
scaleTargetRef:
name: ${NAME}
scaleUpTriggers:
- githubEvent:
workflowJob: {}
amount: 1
duration: "10m"
minReplicas: ${RUNNER_MIN_REPLICAS}
maxReplicas: 10
scaleDownDelaySecondsAfterScaleOut: ${RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT}


@@ -0,0 +1,263 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-runner-work-dir
labels:
content: ${NAME}-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}
# In kind environments, the provider writes:
# /var/lib/docker/volumes/KIND_NODE_CONTAINER_VOL_ID/_data/local-path-provisioner/PV_NAME
# It can be hundreds of gigabytes depending on what you cache in the test workflow. Beware to not encounter `no space left on device` errors!
# If you did encounter no-space errors, try:
# docker system prune
# docker buildx prune #=> frees up /var/lib/docker/volumes/buildx_buildkit_container-builder0_state
# sudo rm -rf /var/lib/docker/volumes/KIND_NODE_CONTAINER_VOL_ID/_data/local-path-provisioner #=> frees up local-path-provisioner's data
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-var-lib-docker
labels:
content: ${NAME}-var-lib-docker
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-cache
labels:
content: ${NAME}-cache
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-runner-tool-cache
labels:
content: ${NAME}-runner-tool-cache
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
name: ${NAME}
spec:
# MANDATORY because it is based on StatefulSet: Results in a below error when omitted:
# missing required field "selector" in dev.summerwind.actions.v1alpha1.RunnerSet.spec
selector:
matchLabels:
app: ${NAME}
# MANDATORY because it is based on StatefulSet: Results in a below error when omitted:
# missing required field "serviceName" in dev.summerwind.actions.v1alpha1.RunnerSet.spec]
serviceName: ${NAME}
#replicas: 1
# From my limited testing, `ephemeral: true` is more reliable.
# Sometimes, updating already deployed runners from `ephemeral: false` to `ephemeral: true` seems to
# result in queued jobs hanging forever.
ephemeral: ${TEST_EPHEMERAL}
enterprise: ${TEST_ENTERPRISE}
group: ${TEST_GROUP}
organization: ${TEST_ORG}
repository: ${TEST_REPO}
#
# Custom runner image
#
image: ${RUNNER_NAME}:${RUNNER_TAG}
#
# dockerd within runner container
#
## Replace `mumoshu/actions-runner-dind:dev` with your dind image
#dockerdWithinRunnerContainer: true
dockerdWithinRunnerContainer: ${RUNNER_DOCKERD_WITHIN_RUNNER_CONTAINER}
#
# Set the MTU used by dockerd-managed network interfaces (including docker-build-ubuntu)
#
#dockerMTU: 1450
#Runner group
# labels:
# - "mylabel 1"
# - "mylabel 2"
labels:
- "${RUNNER_LABEL}"
#
# Non-standard working directory
#
# workDir: "/"
template:
metadata:
labels:
app: ${NAME}
spec:
containers:
- name: runner
imagePullPolicy: IfNotPresent
env:
- name: RUNNER_FEATURE_FLAG_EPHEMERAL
value: "${RUNNER_FEATURE_FLAG_EPHEMERAL}"
- name: GOMODCACHE
value: "/home/runner/.cache/go-mod"
# PV-backed runner work dir
volumeMounts:
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# The volume and mount with the same names will be created by workVolumeClaimTemplate and the kubernetes container mode support.
# - name: work
# mountPath: /runner/_work
# Cache docker image layers, in case dockerdWithinRunnerContainer=true
- name: var-lib-docker
mountPath: /var/lib/docker
# Cache go modules and builds
# - name: gocache
# # Run `goenv | grep GOCACHE` to verify the path is correct for your env
# mountPath: /home/runner/.cache/go-build
# - name: gomodcache
# # Run `goenv | grep GOMODCACHE` to verify the path is correct for your env
# # mountPath: /home/runner/go/pkg/mod
- name: cache
# go: could not create module cache: stat /home/runner/.cache/go-mod: permission denied
mountPath: "/home/runner/.cache"
- name: runner-tool-cache
# This corresponds to our runner image's default setting of RUNNER_TOOL_CACHE=/opt/hostedtoolcache.
#
# In case you customize the envvar in both runner and docker containers of the runner pod spec,
# You'd need to change this mountPath accordingly.
#
# The tool cache directory is defined in actions/toolkit's tool-cache module:
# https://github.com/actions/toolkit/blob/2f164000dcd42fb08287824a3bc3030dbed33687/packages/tool-cache/src/tool-cache.ts#L621-L638
#
# Many setup-* actions like setup-go utilizes the tool-cache module to download and cache installed binaries:
# https://github.com/actions/setup-go/blob/56a61c9834b4a4950dbbf4740af0b8a98c73b768/src/installer.ts#L144
mountPath: "/opt/hostedtoolcache"
# Valid only when dockerdWithinRunnerContainer=false
- name: docker
# PV-backed runner work dir
volumeMounts:
- name: work
mountPath: /runner/_work
# Cache docker image layers, in case dockerdWithinRunnerContainer=false
- name: var-lib-docker
mountPath: /var/lib/docker
# image: mumoshu/actions-runner-dind:dev
# For buildx cache
- name: cache
mountPath: "/home/runner/.cache"
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# volumes:
# - name: work
# ephemeral:
# volumeClaimTemplate:
# spec:
# accessModes:
# - ReadWriteOnce
# storageClassName: "${NAME}-runner-work-dir"
# resources:
# requests:
# storage: 10Gi
volumeClaimTemplates:
- metadata:
name: vol1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}
## Dunno which provider supports auto-provisioning with selector.
## At least the rancher local path provider stopped with:
## waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
# selector:
# matchLabels:
# runnerset-volume-id: ${NAME}-vol1
- metadata:
name: vol2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}
# selector:
# matchLabels:
# runnerset-volume-id: ${NAME}-vol2
- metadata:
name: var-lib-docker
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}-var-lib-docker
- metadata:
name: cache
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}-cache
- metadata:
name: runner-tool-cache
# It turns out labels don't distinguish PVs across PVCs, and the
# end result is that PVs are reused by the wrong PVCs.
# The correct way seems to be to differentiate the storage class per PVC template.
# labels:
# id: runner-tool-cache
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}-runner-tool-cache
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: ${NAME}
spec:
scaleTargetRef:
kind: RunnerSet
name: ${NAME}
scaleUpTriggers:
- githubEvent:
workflowJob: {}
amount: 1
duration: "10m"
minReplicas: ${RUNNER_MIN_REPLICAS}
maxReplicas: 10
scaleDownDelaySecondsAfterScaleOut: ${RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT}
# Comment out the whole metrics if you'd like to solely test webhook-based scaling
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75'
scaleDownThreshold: '0.25'
scaleUpFactor: '2'
scaleDownFactor: '0.5'

30
acceptance/values.yaml Normal file

@@ -0,0 +1,30 @@
# Set actions-runner-controller settings for testing
logLevel: "-4"
imagePullSecrets:
- name:
image:
actionsRunnerImagePullSecrets:
- name:
githubWebhookServer:
imagePullSecrets:
- name:
logLevel: "-4"
enabled: true
labels: {}
replicaCount: 1
syncPeriod: 10m
useRunnerGroupsVisibility: true
secret:
enabled: true
# create: true
name: "github-webhook-server"
### GitHub Webhook Configuration
#github_webhook_secret_token: ""
service:
type: NodePort
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
nodePort: 31000


@@ -0,0 +1,266 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerSpec struct {
// ScaleTargetRef is the reference to the scaled resource, e.g. a RunnerDeployment
ScaleTargetRef ScaleTargetRef `json:"scaleTargetRef,omitempty"`
// MinReplicas is the minimum number of replicas the deployment is allowed to scale
// +optional
MinReplicas *int `json:"minReplicas,omitempty"`
// MaxReplicas is the maximum number of replicas the deployment is allowed to scale
// +optional
MaxReplicas *int `json:"maxReplicas,omitempty"`
// ScaleDownDelaySecondsAfterScaleUp is the approximate delay applied before a scale down that follows a scale up.
// Used to prevent flapping (down->up->down->... loop)
// +optional
ScaleDownDelaySecondsAfterScaleUp *int `json:"scaleDownDelaySecondsAfterScaleOut,omitempty"`
// Metrics is the collection of various metric targets to calculate desired number of runners
// +optional
Metrics []MetricSpec `json:"metrics,omitempty"`
// ScaleUpTriggers is an experimental feature to increase the desired replicas by 1
// on each webhook request received by the webhookBasedAutoscaler.
//
// This feature requires you to also enable and deploy the webhookBasedAutoscaler onto your cluster.
//
// Note that the added runners remain until the next sync period at least,
// and they may or may not be used by GitHub Actions depending on the timing.
// They are intended to be used to gain "resource slack" immediately after you
// receive a webhook from GitHub, so that you can loosely expect MinReplicas runners to be always available.
ScaleUpTriggers []ScaleUpTrigger `json:"scaleUpTriggers,omitempty"`
CapacityReservations []CapacityReservation `json:"capacityReservations,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
// ScheduledOverrides is the list of ScheduledOverride.
// It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule.
// The earlier a scheduled override is, the higher it is prioritized.
// +optional
ScheduledOverrides []ScheduledOverride `json:"scheduledOverrides,omitempty"`
}
type ScaleUpTrigger struct {
GitHubEvent *GitHubEventScaleUpTriggerSpec `json:"githubEvent,omitempty"`
Amount int `json:"amount,omitempty"`
Duration metav1.Duration `json:"duration,omitempty"`
}
type GitHubEventScaleUpTriggerSpec struct {
CheckRun *CheckRunSpec `json:"checkRun,omitempty"`
PullRequest *PullRequestSpec `json:"pullRequest,omitempty"`
Push *PushSpec `json:"push,omitempty"`
WorkflowJob *WorkflowJobSpec `json:"workflowJob,omitempty"`
}
// https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
type CheckRunSpec struct {
// One of: created, rerequested, or completed
Types []string `json:"types,omitempty"`
Status string `json:"status,omitempty"`
// Names is a list of GitHub Actions glob patterns.
// Any check_run event whose name matches one of patterns in the list can trigger autoscaling.
// Note that the check_run name seems to equal the job name you've defined in your Actions workflow YAML file.
// So it is very likely that you can utilize this to trigger autoscaling depending on the job.
Names []string `json:"names,omitempty"`
// Repositories is a list of GitHub repositories.
// Any check_run event whose repository matches one of repositories in the list can trigger autoscaling.
Repositories []string `json:"repositories,omitempty"`
}
// https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#workflow_job
type WorkflowJobSpec struct {
}
// https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
type PullRequestSpec struct {
Types []string `json:"types,omitempty"`
Branches []string `json:"branches,omitempty"`
}
// PushSpec is the condition for triggering scale-up on push event
// Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type PushSpec struct {
}
// CapacityReservation specifies the number of replicas temporarily added
// to the scale target until ExpirationTime.
type CapacityReservation struct {
Name string `json:"name,omitempty"`
ExpirationTime metav1.Time `json:"expirationTime,omitempty"`
Replicas int `json:"replicas,omitempty"`
// +optional
EffectiveTime metav1.Time `json:"effectiveTime,omitempty"`
}
type ScaleTargetRef struct {
// Kind is the type of resource being referenced
// +optional
// +kubebuilder:validation:Enum=RunnerDeployment;RunnerSet
Kind string `json:"kind,omitempty"`
// Name is the name of resource being referenced
Name string `json:"name,omitempty"`
}
type MetricSpec struct {
// Type is the type of metric to be used for autoscaling.
// The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
Type string `json:"type,omitempty"`
// RepositoryNames is the list of repository names to be used for calculating the metric.
// For example, a repository name is the REPO part of `github.com/USER/REPO`.
// +optional
RepositoryNames []string `json:"repositoryNames,omitempty"`
// ScaleUpThreshold is the percentage of busy runners greater than which will
// trigger the hpa to scale runners up.
// +optional
ScaleUpThreshold string `json:"scaleUpThreshold,omitempty"`
// ScaleDownThreshold is the percentage of busy runners less than which will
// trigger the hpa to scale the runners down.
// +optional
ScaleDownThreshold string `json:"scaleDownThreshold,omitempty"`
// ScaleUpFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be added.
// +optional
ScaleUpFactor string `json:"scaleUpFactor,omitempty"`
// ScaleDownFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be removed.
// +optional
ScaleDownFactor string `json:"scaleDownFactor,omitempty"`
// ScaleUpAdjustment is the number of runners added on scale-up.
// You can only specify either ScaleUpFactor or ScaleUpAdjustment.
// +optional
ScaleUpAdjustment int `json:"scaleUpAdjustment,omitempty"`
// ScaleDownAdjustment is the number of runners removed on scale-down.
// You can only specify either ScaleDownFactor or ScaleDownAdjustment.
// +optional
ScaleDownAdjustment int `json:"scaleDownAdjustment,omitempty"`
}
// ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule.
// A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
type ScheduledOverride struct {
// StartTime is the time at which the first override starts.
StartTime metav1.Time `json:"startTime"`
// EndTime is the time at which the first override ends.
EndTime metav1.Time `json:"endTime"`
// MinReplicas is the number of runners while overriding.
// If omitted, it doesn't override minReplicas.
// +optional
// +nullable
// +kubebuilder:validation:Minimum=0
MinReplicas *int `json:"minReplicas,omitempty"`
// +optional
RecurrenceRule RecurrenceRule `json:"recurrenceRule,omitempty"`
}
type RecurrenceRule struct {
// Frequency is the name of a predefined interval of each recurrence.
// The valid values are "Daily", "Weekly", "Monthly", and "Yearly".
// If empty, the corresponding override happens only once.
// +optional
// +kubebuilder:validation:Enum=Daily;Weekly;Monthly;Yearly
Frequency string `json:"frequency,omitempty"`
// UntilTime is the time of the final recurrence.
// If empty, the schedule recurs forever.
// +optional
UntilTime metav1.Time `json:"untilTime,omitempty"`
}
type HorizontalRunnerAutoscalerStatus struct {
// ObservedGeneration is the most recent generation observed for the target. It corresponds to e.g.
// RunnerDeployment's generation, which is updated on mutation by the API Server.
// +optional
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas,omitempty"`
// +optional
// +nullable
LastSuccessfulScaleOutTime *metav1.Time `json:"lastSuccessfulScaleOutTime,omitempty"`
// +optional
CacheEntries []CacheEntry `json:"cacheEntries,omitempty"`
// ScheduledOverridesSummary is the summary of active and upcoming scheduled overrides to be shown in e.g. a column of a `kubectl get hra` output
// for observability.
// +optional
ScheduledOverridesSummary *string `json:"scheduledOverridesSummary,omitempty"`
}
const CacheEntryKeyDesiredReplicas = "desiredReplicas"
type CacheEntry struct {
Key string `json:"key,omitempty"`
Value int `json:"value,omitempty"`
ExpirationTime metav1.Time `json:"expirationTime,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=hra
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.minReplicas",name=Min,type=number
// +kubebuilder:printcolumn:JSONPath=".spec.maxReplicas",name=Max,type=number
// +kubebuilder:printcolumn:JSONPath=".status.desiredReplicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.scheduledOverridesSummary",name=Schedule,type=string
// HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler API
type HorizontalRunnerAutoscaler struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec HorizontalRunnerAutoscalerSpec `json:"spec,omitempty"`
Status HorizontalRunnerAutoscalerStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// HorizontalRunnerAutoscalerList contains a list of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []HorizontalRunnerAutoscaler `json:"items"`
}
func init() {
SchemeBuilder.Register(&HorizontalRunnerAutoscaler{}, &HorizontalRunnerAutoscalerList{})
}
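To make the fields above concrete, here is a minimal sketch of a HorizontalRunnerAutoscaler that combines a webhook scale-up trigger with a weekly scheduled override (illustrative only; the target RunnerDeployment name and the override times are hypothetical):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runnerdeploy
spec:
  scaleTargetRef:
    kind: RunnerDeployment
    name: example-runnerdeploy
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}
    amount: 1
    duration: "10m"
  scheduledOverrides:
  # Scale to zero over the weekend, recurring weekly
  - startTime: "2022-07-16T00:00:00+09:00"
    endTime: "2022-07-18T00:00:00+09:00"
    minReplicas: 0
    recurrenceRule:
      frequency: Weekly
EOF
```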


@@ -17,37 +17,307 @@ limitations under the License.
package v1alpha1
import (
"errors"
"fmt"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/validation/field"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// RunnerSpec defines the desired state of Runner
type RunnerSpec struct {
// +kubebuilder:validation:MinLength=3
RunnerConfig `json:",inline"`
RunnerPodSpec `json:",inline"`
}
type RunnerConfig struct {
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+$`
Enterprise string `json:"enterprise,omitempty"`
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+$`
Organization string `json:"organization,omitempty"`
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+/[^/]+$`
Repository string `json:"repository"`
Repository string `json:"repository,omitempty"`
// +optional
Labels []string `json:"labels,omitempty"`
// +optional
Group string `json:"group,omitempty"`
// +optional
Ephemeral *bool `json:"ephemeral,omitempty"`
// +optional
Image string `json:"image"`
// +optional
WorkDir string `json:"workDir,omitempty"`
// +optional
DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
// +optional
DockerEnabled *bool `json:"dockerEnabled,omitempty"`
// +optional
DockerMTU *int64 `json:"dockerMTU,omitempty"`
// +optional
DockerRegistryMirror *string `json:"dockerRegistryMirror,omitempty"`
// +optional
VolumeSizeLimit *resource.Quantity `json:"volumeSizeLimit,omitempty"`
// +optional
VolumeStorageMedium *string `json:"volumeStorageMedium,omitempty"`
// +optional
ContainerMode string `json:"containerMode,omitempty"`
}
// RunnerPodSpec defines the desired pod spec fields of the runner pod
type RunnerPodSpec struct {
// +optional
DockerdContainerResources corev1.ResourceRequirements `json:"dockerdContainerResources,omitempty"`
// +optional
DockerVolumeMounts []corev1.VolumeMount `json:"dockerVolumeMounts,omitempty"`
// +optional
DockerEnv []corev1.EnvVar `json:"dockerEnv,omitempty"`
// +optional
Containers []corev1.Container `json:"containers,omitempty"`
// +optional
ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
// +optional
Env []corev1.EnvVar `json:"env,omitempty"`
// +optional
EnvFrom []corev1.EnvFromSource `json:"envFrom,omitempty"`
// +optional
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
// +optional
VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
// +optional
Volumes []corev1.Volume `json:"volumes,omitempty"`
// +optional
EnableServiceLinks *bool `json:"enableServiceLinks,omitempty"`
// +optional
InitContainers []corev1.Container `json:"initContainers,omitempty"`
// +optional
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// +optional
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// +optional
AutomountServiceAccountToken *bool `json:"automountServiceAccountToken,omitempty"`
// +optional
SidecarContainers []corev1.Container `json:"sidecarContainers,omitempty"`
// +optional
SecurityContext *corev1.PodSecurityContext `json:"securityContext,omitempty"`
// +optional
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
// +optional
Affinity *corev1.Affinity `json:"affinity,omitempty"`
// +optional
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
// +optional
EphemeralContainers []corev1.EphemeralContainer `json:"ephemeralContainers,omitempty"`
// +optional
HostAliases []corev1.HostAlias `json:"hostAliases,omitempty"`
// +optional
TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
// RuntimeClassName is the container runtime configuration that containers should run under.
// More info: https://kubernetes.io/docs/concepts/containers/runtime-class
// +optional
RuntimeClassName *string `json:"runtimeClassName,omitempty"`
// +optional
DnsConfig *corev1.PodDNSConfig `json:"dnsConfig,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
}
func (rs *RunnerSpec) Validate(rootPath *field.Path) field.ErrorList {
var (
errList field.ErrorList
err error
)
err = rs.validateRepository()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("repository"), rs.Repository, err.Error()))
}
err = rs.validateWorkVolumeClaimTemplate()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("workVolumeClaimTemplate"), rs.WorkVolumeClaimTemplate, err.Error()))
}
err = rs.validateIsServiceAccountNameSet()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("serviceAccountName"), rs.ServiceAccountName, err.Error()))
}
return errList
}
// ValidateRepository validates repository field.
func (rs *RunnerSpec) validateRepository() error {
// Enterprise, Organization and Repository are mutually exclusive.
foundCount := 0
if len(rs.Organization) > 0 {
foundCount += 1
}
if len(rs.Repository) > 0 {
foundCount += 1
}
if len(rs.Enterprise) > 0 {
foundCount += 1
}
if foundCount == 0 {
return errors.New("Spec needs enterprise, organization or repository")
}
if foundCount > 1 {
return errors.New("Spec cannot have many fields defined enterprise, organization and repository")
}
return nil
}
func (rs *RunnerSpec) validateWorkVolumeClaimTemplate() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.WorkVolumeClaimTemplate == nil {
return errors.New("Spec.ContainerMode: kubernetes must have workVolumeClaimTemplate field specified")
}
return rs.WorkVolumeClaimTemplate.validate()
}
func (rs *RunnerSpec) validateIsServiceAccountNameSet() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.ServiceAccountName == "" {
return errors.New("service account name is required if container mode is kubernetes")
}
return nil
}
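
For reference, a Runner manifest that satisfies these container-mode checks might look roughly like the following (a minimal sketch; the repository, storage class and service account names are placeholders):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: k8s-mode-runner
spec:
  repository: example-org/example-repo   # exactly one of enterprise/organization/repository
  containerMode: kubernetes              # triggers the two validations above
  serviceAccountName: runner-work-sa     # required when containerMode is kubernetes
  workVolumeClaimTemplate:               # required when containerMode is kubernetes
    storageClassName: standard
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```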
// RunnerStatus defines the observed state of Runner
type RunnerStatus struct {
// Turns true only if the runner pod is ready.
// +optional
Ready bool `json:"ready"`
// +optional
Registration RunnerStatusRegistration `json:"registration"`
// +optional
Phase string `json:"phase,omitempty"`
// +optional
Reason string `json:"reason,omitempty"`
// +optional
Message string `json:"message,omitempty"`
// +optional
// +nullable
LastRegistrationCheckTime *metav1.Time `json:"lastRegistrationCheckTime,omitempty"`
}
// RunnerStatusRegistration contains runner registration status
type RunnerStatusRegistration struct {
Enterprise string `json:"enterprise,omitempty"`
Organization string `json:"organization,omitempty"`
Repository string `json:"repository,omitempty"`
Labels []string `json:"labels,omitempty"`
Token string `json:"token"`
ExpiresAt metav1.Time `json:"expiresAt"`
}
type WorkVolumeClaimTemplate struct {
StorageClassName string `json:"storageClassName"`
AccessModes []corev1.PersistentVolumeAccessMode `json:"accessModes"`
Resources corev1.ResourceRequirements `json:"resources"`
}
func (w *WorkVolumeClaimTemplate) validate() error {
if len(w.AccessModes) == 0 {
return errors.New("Access mode should have at least one mode specified")
}
for _, accessMode := range w.AccessModes {
switch accessMode {
case corev1.ReadWriteOnce, corev1.ReadWriteMany:
default:
return fmt.Errorf("Access mode %v is not supported", accessMode)
}
}
return nil
}
func (w *WorkVolumeClaimTemplate) V1Volume() corev1.Volume {
return corev1.Volume{
Name: "work",
VolumeSource: corev1.VolumeSource{
Ephemeral: &corev1.EphemeralVolumeSource{
VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: w.AccessModes,
StorageClassName: &w.StorageClassName,
Resources: w.Resources,
},
},
},
},
}
}
func (w *WorkVolumeClaimTemplate) V1VolumeMount(mountPath string) corev1.VolumeMount {
return corev1.VolumeMount{
MountPath: mountPath,
Name: "work",
}
}
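
Given a work volume claim template like the illustrative one above, V1Volume and V1VolumeMount produce a generic ephemeral volume named `work`. Rendered into a pod, it would look roughly like this (a sketch; the mount path is whatever the controller passes to V1VolumeMount and is shown here only as an illustration):

```yaml
# pod-level volume produced by V1Volume()
volumes:
  - name: work
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard
          resources:
            requests:
              storage: 10Gi
# container-level mount produced by V1VolumeMount(mountPath)
volumeMounts:
  - name: work
    mountPath: /runner/_work   # illustrative mount path
```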
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// Runner is the Schema for the runners API
type Runner struct {

View File

@@ -0,0 +1,76 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
// log is for logging in this package.
var runnerLog = logf.Log.WithName("runner-resource")
func (r *Runner) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
For(r).
Complete()
}
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runner,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runners,versions=v1alpha1,name=mutate.runner.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
var _ webhook.Defaulter = &Runner{}
// Default implements webhook.Defaulter so a webhook will be registered for the type
func (r *Runner) Default() {
// Nothing to do.
}
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runner,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runners,versions=v1alpha1,name=validate.runner.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
var _ webhook.Validator = &Runner{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *Runner) ValidateCreate() error {
runnerLog.Info("validate resource to be created", "name", r.Name)
return r.Validate()
}
// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *Runner) ValidateUpdate(old runtime.Object) error {
runnerLog.Info("validate resource to be updated", "name", r.Name)
return r.Validate()
}
// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *Runner) ValidateDelete() error {
return nil
}
// Validate validates resource spec.
func (r *Runner) Validate() error {
errList := r.Spec.Validate(field.NewPath("spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)
}
return nil
}

View File

@@ -0,0 +1,106 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns = "TotalNumberOfQueuedAndInProgressWorkflowRuns"
AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy"
)
// RunnerDeploymentSpec defines the desired state of RunnerDeployment
type RunnerDeploymentSpec struct {
// +optional
// +nullable
Replicas *int `json:"replicas,omitempty"`
// EffectiveTime is the time the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA.
// The value is inherited by the RunnerReplicaSet(s) and used to prevent ephemeral runners from being unnecessarily recreated.
//
// +optional
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime"`
// +optional
// +nullable
Selector *metav1.LabelSelector `json:"selector"`
Template RunnerTemplate `json:"template"`
}
type RunnerDeploymentStatus struct {
// See K8s deployment controller code for reference
// https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/controller/deployment/sync.go#L487-L505
// AvailableReplicas is the total number of available runners which have been successfully registered to GitHub and still running.
// This corresponds to the sum of status.availableReplicas of all the runner replica sets.
// +optional
AvailableReplicas *int `json:"availableReplicas"`
// ReadyReplicas is the total number of available runners which have been successfully registered to GitHub and still running.
// This corresponds to the sum of status.readyReplicas of all the runner replica sets.
// +optional
ReadyReplicas *int `json:"readyReplicas"`
// UpdatedReplicas is the total number of runners created from the runner replica set with the desired template hash.
// This corresponds to status.replicas of the runner replica set that has the desired template hash.
// +optional
UpdatedReplicas *int `json:"updatedReplicas"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas"`
// Replicas is the total number of replicas
// +optional
Replicas *int `json:"replicas"`
}
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=rdeploy
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.updatedReplicas",name=Up-To-Date,type=number
// +kubebuilder:printcolumn:JSONPath=".status.availableReplicas",name=Available,type=number
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// RunnerDeployment is the Schema for the runnerdeployments API
type RunnerDeployment struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerDeploymentSpec `json:"spec,omitempty"`
Status RunnerDeploymentStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// RunnerDeploymentList contains a list of RunnerDeployment
type RunnerDeploymentList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RunnerDeployment `json:"items"`
}
func init() {
SchemeBuilder.Register(&RunnerDeployment{}, &RunnerDeploymentList{})
}
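
As a quick illustration of the spec above, a RunnerDeployment keeping two organization-level runners could be written as follows (a sketch; the organization and label values are placeholders):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  replicas: 2
  template:
    spec:
      organization: example-org
      labels:
        - custom-runner
```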

View File

@@ -0,0 +1,76 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
// log is for logging in this package.
var runnerDeploymentLog = logf.Log.WithName("runnerdeployment-resource")
func (r *RunnerDeployment) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
For(r).
Complete()
}
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerdeployments,versions=v1alpha1,name=mutate.runnerdeployment.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
var _ webhook.Defaulter = &RunnerDeployment{}
// Default implements webhook.Defaulter so a webhook will be registered for the type
func (r *RunnerDeployment) Default() {
// Nothing to do.
}
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runnerdeployment,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerdeployments,versions=v1alpha1,name=validate.runnerdeployment.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
var _ webhook.Validator = &RunnerDeployment{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerDeployment) ValidateCreate() error {
runnerDeploymentLog.Info("validate resource to be created", "name", r.Name)
return r.Validate()
}
// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerDeployment) ValidateUpdate(old runtime.Object) error {
runnerDeploymentLog.Info("validate resource to be updated", "name", r.Name)
return r.Validate()
}
// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerDeployment) ValidateDelete() error {
return nil
}
// Validate validates resource spec.
func (r *RunnerDeployment) Validate() error {
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)
}
return nil
}

View File

@@ -0,0 +1,94 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// RunnerReplicaSetSpec defines the desired state of RunnerReplicaSet
type RunnerReplicaSetSpec struct {
// +optional
// +nullable
Replicas *int `json:"replicas,omitempty"`
// EffectiveTime is the time the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA and RunnerDeployment.
// The value is used to prevent runnerreplicaset controller from unnecessarily recreating ephemeral runners
// based on potentially outdated Replicas value.
//
// +optional
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime"`
// +optional
// +nullable
Selector *metav1.LabelSelector `json:"selector"`
Template RunnerTemplate `json:"template"`
}
type RunnerReplicaSetStatus struct {
// See K8s replicaset controller code for reference
// https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/controller/replicaset/replica_set_utils.go#L101-L106
// Replicas is the number of runners that are created and still being managed by this runner replica set.
// +optional
Replicas *int `json:"replicas"`
// ReadyReplicas is the number of runners that are created and Running.
ReadyReplicas *int `json:"readyReplicas"`
// AvailableReplicas is the number of runners that are created and Running.
// This is currently the same as ReadyReplicas but is preserved for future use.
AvailableReplicas *int `json:"availableReplicas"`
}
type RunnerTemplate struct {
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerSpec `json:"spec,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=rrs
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.readyReplicas",name=Ready,type=number
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// RunnerReplicaSet is the Schema for the runnerreplicasets API
type RunnerReplicaSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerReplicaSetSpec `json:"spec,omitempty"`
Status RunnerReplicaSetStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// RunnerReplicaSetList contains a list of RunnerReplicaSet
type RunnerReplicaSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RunnerReplicaSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&RunnerReplicaSet{}, &RunnerReplicaSetList{})
}

View File

@@ -0,0 +1,76 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
// log is for logging in this package.
var runnerReplicaSetLog = logf.Log.WithName("runnerreplicaset-resource")
func (r *RunnerReplicaSet) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
For(r).
Complete()
}
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerreplicasets,versions=v1alpha1,name=mutate.runnerreplicaset.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
var _ webhook.Defaulter = &RunnerReplicaSet{}
// Default implements webhook.Defaulter so a webhook will be registered for the type
func (r *RunnerReplicaSet) Default() {
// Nothing to do.
}
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runnerreplicaset,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerreplicasets,versions=v1alpha1,name=validate.runnerreplicaset.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
var _ webhook.Validator = &RunnerReplicaSet{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerReplicaSet) ValidateCreate() error {
runnerReplicaSetLog.Info("validate resource to be created", "name", r.Name)
return r.Validate()
}
// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerReplicaSet) ValidateUpdate(old runtime.Object) error {
runnerReplicaSetLog.Info("validate resource to be updated", "name", r.Name)
return r.Validate()
}
// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerReplicaSet) ValidateDelete() error {
return nil
}
// Validate validates resource spec.
func (r *RunnerReplicaSet) Validate() error {
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)
}
return nil
}

View File

@@ -0,0 +1,102 @@
/*
Copyright 2021 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
appsv1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// RunnerSetSpec defines the desired state of RunnerSet
type RunnerSetSpec struct {
RunnerConfig `json:",inline"`
// EffectiveTime is the time the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA.
// It is used to prevent ephemeral runners from being unnecessarily recreated.
//
// +optional
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime,omitempty"`
// +optional
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
appsv1.StatefulSetSpec `json:",inline"`
}
type RunnerSetStatus struct {
// See K8s deployment controller code for reference
// https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/controller/deployment/sync.go#L487-L505
// CurrentReplicas is the total number of available runners which have been successfully registered to GitHub and are still running.
// +optional
CurrentReplicas *int `json:"availableReplicas"`
// ReadyReplicas is the total number of available runners which have been successfully registered to GitHub and still running.
// This corresponds to the sum of status.readyReplicas of all the runner replica sets.
// +optional
ReadyReplicas *int `json:"readyReplicas"`
// UpdatedReplicas is the total number of runners created from the pod template with the desired hash.
// +optional
UpdatedReplicas *int `json:"updatedReplicas"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas"`
// Replicas is the total number of replicas
// +optional
Replicas *int `json:"replicas"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.updatedReplicas",name=Up-To-Date,type=number
// +kubebuilder:printcolumn:JSONPath=".status.availableReplicas",name=Available,type=number
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// RunnerSet is the Schema for the runnersets API
type RunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerSetSpec `json:"spec,omitempty"`
Status RunnerSetStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// RunnerSetList contains a list of RunnerSet
type RunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&RunnerSet{}, &RunnerSetList{})
}
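
Since RunnerSetSpec inlines appsv1.StatefulSetSpec, a RunnerSet manifest also carries the StatefulSet-style selector, serviceName and pod template. A minimal sketch, with placeholder names and assuming the usual StatefulSet label-matching rules apply, could look like:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
  name: example-runnerset
spec:
  replicas: 1
  organization: example-org
  selector:
    matchLabels:
      app: example-runnerset
  serviceName: example-runnerset
  template:
    metadata:
      labels:
        app: example-runnerset
```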

File diff suppressed because it is too large

View File

@@ -0,0 +1,5 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped

View File

@@ -0,0 +1,6 @@
rules:
# One blank line is OK
empty-lines:
max-start: 1
max-end: 1
max: 1

View File

@@ -0,0 +1,3 @@
#!/bin/bash
docker run --rm -it -w /repo -v $(pwd):/repo quay.io/helmpack/chart-testing ct lint --all --config charts/.ci/ct-config.yaml

View File

@@ -0,0 +1,15 @@
#!/bin/bash
for chart in `ls charts`;
do
helm template --values charts/$chart/ci/ci-values.yaml charts/$chart | kube-score score - \
--ignore-test pod-networkpolicy \
--ignore-test deployment-has-poddisruptionbudget \
--ignore-test deployment-has-host-podantiaffinity \
--ignore-test pod-probes \
--ignore-test container-image-tag \
--enable-optional-test container-security-context-privileged \
--enable-optional-test container-security-context-readonlyrootfilesystem \
--ignore-test container-security-context
done

View File

@@ -0,0 +1,25 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Docs
docs/

View File

@@ -0,0 +1,30 @@
apiVersion: v2
name: actions-runner-controller
description: A Kubernetes controller that operates self-hosted runners for GitHub Actions on your Kubernetes cluster.
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.20.0
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.25.0
home: https://github.com/actions-runner-controller/actions-runner-controller
sources:
- https://github.com/actions-runner-controller/actions-runner-controller
maintainers:
- name: actions-runner-controller
url: https://github.com/actions-runner-controller

View File

@@ -0,0 +1,111 @@
## Docs
All additional docs are kept in the `docs/` folder; this README is solely for documenting the `values.yaml` keys and values.
## Values
**_The values are documented as of HEAD; to review the configuration options for your chart version, ensure you view this file at the relevant [tag](https://github.com/actions-runner-controller/actions-runner-controller/tags)_**
> _Default values are the defaults set in the chart's `values.yaml`; some properties have default configurations in the code for when the property is omitted or invalid_
| Key | Description | Default |
|----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------|
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of controller pods | 1 |
| `webhookPort` | Set the containerPort for the webhook Pod | 9443 |
| `syncPeriod`                                               | Set the period in which the controller reconciles the desired runner count                                                  | 10m                                                                    |
| `enableLeaderElection` | Enable election configuration | true |
| `leaderElectionId` | Set the election ID for the controller group | |
| `githubEnterpriseServerURL` | Set the URL for a self-hosted GitHub Enterprise Server | |
| `githubURL` | Override GitHub URL to be used for GitHub API calls | |
| `githubUploadURL` | Override GitHub Upload URL to be used for GitHub API calls | |
| `runnerGithubURL` | Override GitHub URL to be used by runners during registration | |
| `logLevel` | Set the log level of the controller container | |
| `additionalVolumes` | Set additional volumes to add to the manager container | |
| `additionalVolumeMounts` | Set additional volume mounts to add to the manager container | |
| `authSecret.create` | Deploy the controller auth secret | false |
| `authSecret.name` | Set the name of the auth secret | controller-manager |
| `authSecret.annotations` | Set annotations for the auth Secret | |
| `authSecret.github_app_id` | The ID of your GitHub App. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_installation_id` | The ID of your GitHub App installation. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_private_key` | The multiline string of your GitHub App's private key. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_token` | Your chosen GitHub PAT token. **This can't be set at the same time as the `authSecret.github_app_*`** | |
| `authSecret.github_basicauth_username` | Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `authSecret.github_basicauth_password` | Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `dockerRegistryMirror` | The default Docker Registry Mirror used by runners. | |
| `hostNetwork` | The "hostNetwork" of the controller container | false |
| `image.repository` | The "repository/image" of the controller container | summerwind/actions-runner-controller |
| `image.tag` | The tag of the controller container | |
| `image.actionsRunnerRepositoryAndTag` | The "repository/image" of the actions runner container | summerwind/actions-runner:latest |
| `image.actionsRunnerImagePullSecrets` | Optional image pull secrets to be included in the runner pod's ImagePullSecrets | |
| `image.dindSidecarRepositoryAndTag` | The "repository/image" of the dind sidecar container | docker:dind |
| `image.pullPolicy` | The pull policy of the controller image | IfNotPresent |
| `metrics.serviceMonitor`                                   | Deploy serviceMonitor kind for use with prometheus-operator CRDs                                                            | false                                                                  |
| `metrics.serviceAnnotations` | Set annotations for the provisioned metrics service resource | |
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `metrics.proxy.image.repository`                           | The "repository/image" of the kube-rbac-proxy container                                                                     | quay.io/brancz/kube-rbac-proxy                                         |
| `metrics.proxy.image.tag`                                  | The tag of the kube-rbac-proxy image to use when pulling the container                                                      | v0.10.0                                                                |
| `metrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
| `imagePullSecrets` | Specifies the secret to be used when pulling the controller pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `serviceAccount.annotations` | Set annotations to the service account | |
| `serviceAccount.create` | Deploy the controller pod under a service account | true |
| `podAnnotations` | Set annotations for the controller pod | |
| `podLabels` | Set labels for the controller pod | |
| `serviceAccount.name` | Set the name of the service account | |
| `securityContext` | Set the security context for each container in the controller pod | |
| `podSecurityContext` | Set the security context to controller pod | |
| `service.annotations` | Set annotations for the provisioned webhook service resource | |
| `service.port` | Set controller service ports | |
| `service.type` | Set controller service type | |
| `topologySpreadConstraints` | Set the controller pod topologySpreadConstraints | |
| `nodeSelector` | Set the controller pod nodeSelector | |
| `resources` | Set the controller pod resources | |
| `affinity` | Set the controller pod affinity rules | |
| `podDisruptionBudget.enabled` | Enables a PDB to ensure HA of controller pods | false |
| `podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `tolerations` | Set the controller pod tolerations | |
| `env` | Set environment variables for the controller container | |
| `priorityClassName` | Set the controller pod priorityClassName | |
| `scope.watchNamespace` | Tells the controller and the github webhook server which namespace to watch if `scope.singleNamespace` is true | `Release.Namespace` (the default namespace of the helm chart). |
| `scope.singleNamespace` | Limit the controller to watch a single namespace | false |
| `certManagerEnabled` | Enable cert-manager. If disabled you must set admissionWebHooks.caBundle and create TLS secrets manually | true |
| `admissionWebHooks.caBundle` | Base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate | |
| `githubWebhookServer.logLevel` | Set the log level of the githubWebhookServer container | |
| `githubWebhookServer.replicaCount` | Set the number of webhook server pods | 1 |
| `githubWebhookServer.useRunnerGroupsVisibility` | Enable support for runner groups with custom visibility. This incurs extra API calls and may quickly consume your API rate-limit budget. Currently, you also need to set `githubWebhookServer.secret.enabled` to enable this feature. | false |
| `githubWebhookServer.syncPeriod` | Set the period in which the controller reconciles the resources | 10m |
| `githubWebhookServer.enabled` | Deploy the webhook server pod | false |
| `githubWebhookServer.secret.enabled` | Passes the webhook hook secret to the github-webhook-server | false |
| `githubWebhookServer.secret.create` | Deploy the webhook hook secret | false |
| `githubWebhookServer.secret.name` | Set the name of the webhook hook secret | github-webhook-server |
| `githubWebhookServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `githubWebhookServer.imagePullSecrets` | Specifies the secret to be used when pulling the githubWebhookServer pod containers | |
| `githubWebhookServer.nameOverride` | Override the resource name prefix | |
| `githubWebhookServer.fullnameOverride` | Override the full resource names | |
| `githubWebhookServer.serviceAccount.create` | Deploy the githubWebhookServer under a service account | true |
| `githubWebhookServer.serviceAccount.annotations` | Set annotations for the service account | |
| `githubWebhookServer.serviceAccount.name` | Set the service account name | |
| `githubWebhookServer.podAnnotations` | Set annotations for the githubWebhookServer pod | |
| `githubWebhookServer.podLabels` | Set labels for the githubWebhookServer pod | |
| `githubWebhookServer.podSecurityContext` | Set the security context to githubWebhookServer pod | |
| `githubWebhookServer.securityContext` | Set the security context for each container in the githubWebhookServer pod | |
| `githubWebhookServer.resources` | Set the githubWebhookServer pod resources | |
| `githubWebhookServer.topologySpreadConstraints` | Set the githubWebhookServer pod topologySpreadConstraints | |
| `githubWebhookServer.nodeSelector` | Set the githubWebhookServer pod nodeSelector | |
| `githubWebhookServer.tolerations` | Set the githubWebhookServer pod tolerations | |
| `githubWebhookServer.affinity` | Set the githubWebhookServer pod affinity rules | |
| `githubWebhookServer.priorityClassName` | Set the githubWebhookServer pod priorityClassName | |
| `githubWebhookServer.service.type` | Set githubWebhookServer service type | |
| `githubWebhookServer.service.ports`                       | Set githubWebhookServer service ports                                                                                        | `[{"port":80, "targetPort":"http", "protocol":"TCP", "name":"http"}]`  |
| `githubWebhookServer.ingress.enabled` | Deploy an ingress kind for the githubWebhookServer | false |
| `githubWebhookServer.ingress.annotations` | Set annotations for the ingress kind | |
| `githubWebhookServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `githubWebhookServer.ingress.tls` | Set tls configuration for ingress | |
| `githubWebhookServer.ingress.ingressClassName` | Set ingress class name | |
| `githubWebhookServer.podDisruptionBudget.enabled` | Enables a PDB to ensure HA of githubwebhook pods | false |
| `githubWebhookServer.podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `githubWebhookServer.podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
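
For example, a minimal `values.yaml` override that enables the webhook server together with a PAT-based auth secret could look like the following (the token and webhook secret values are placeholders):

```yaml
authSecret:
  create: true
  github_token: "<your PAT>"

githubWebhookServer:
  enabled: true
  secret:
    enabled: true
    create: true
    github_webhook_secret_token: "<your webhook secret>"
```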

View File

@@ -0,0 +1,30 @@
# This file sets some opinionated values for kube-score to use
# when parsing the chart
image:
pullPolicy: Always
podSecurityContext:
fsGroup: 2000
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2000
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
authSecret:
create: false
# Set the following to true to create a dummy secret, allowing the manager pod to start
# This is only useful in CI
createDummySecret: true

View File

@@ -0,0 +1,249 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
creationTimestamp: null
name: horizontalrunnerautoscalers.actions.summerwind.dev
spec:
group: actions.summerwind.dev
names:
kind: HorizontalRunnerAutoscaler
listKind: HorizontalRunnerAutoscalerList
plural: horizontalrunnerautoscalers
shortNames:
- hra
singular: horizontalrunnerautoscaler
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.minReplicas
name: Min
type: number
- jsonPath: .spec.maxReplicas
name: Max
type: number
- jsonPath: .status.desiredReplicas
name: Desired
type: number
- jsonPath: .status.scheduledOverridesSummary
name: Schedule
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
properties:
capacityReservations:
items:
description: CapacityReservation specifies the number of replicas temporarily added to the scale target until ExpirationTime.
properties:
effectiveTime:
format: date-time
type: string
expirationTime:
format: date-time
type: string
name:
type: string
replicas:
type: integer
type: object
type: array
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer
metrics:
description: Metrics is the collection of various metric targets to calculate desired number of runners
items:
properties:
repositoryNames:
description: RepositoryNames is the list of repository names to be used for calculating the metric. For example, a repository name is the REPO part of `github.com/USER/REPO`.
items:
type: string
type: array
scaleDownAdjustment:
description: ScaleDownAdjustment is the number of runners removed on scale-down. You can only specify either ScaleDownFactor or ScaleDownAdjustment.
type: integer
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied to the current number of runners used to determine how many pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners less than which will trigger the hpa to scale the runners down.
type: string
scaleUpAdjustment:
description: ScaleUpAdjustment is the number of runners added on scale-up. You can only specify either ScaleUpFactor or ScaleUpAdjustment.
type: integer
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied to the current number of runners used to determine how many pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling. The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
type: string
type: object
type: array
minReplicas:
description: MinReplicas is the minimum number of replicas the deployment is allowed to scale
type: integer
scaleDownDelaySecondsAfterScaleOut:
description: ScaleDownDelaySecondsAfterScaleOut is the approximate delay for a scale down that follows a scale up. Used to prevent flapping (down->up->down->... loop)
type: integer
scaleTargetRef:
description: ScaleTargetRef is the reference to the scaled resource like RunnerDeployment
properties:
kind:
description: Kind is the type of resource being referenced
enum:
- RunnerDeployment
- RunnerSet
type: string
name:
description: Name is the name of resource being referenced
type: string
type: object
scaleUpTriggers:
description: "ScaleUpTriggers is an experimental feature to increase the desired replicas by 1 on each webhook requested received by the webhookBasedAutoscaler. \n This feature requires you to also enable and deploy the webhookBasedAutoscaler onto your cluster. \n Note that the added runners remain until the next sync period at least, and they may or may not be used by GitHub Actions depending on the timing. They are intended to be used to gain \"resource slack\" immediately after you receive a webhook from GitHub, so that you can loosely expect MinReplicas runners to be always available."
items:
properties:
amount:
type: integer
duration:
type: string
githubEvent:
properties:
checkRun:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
properties:
names:
description: Names is a list of GitHub Actions glob patterns. Any check_run event whose name matches one of the patterns in the list can trigger autoscaling. Note that the check_run name seems to be equal to the job name you've defined in your actions workflow yaml file, so it is very likely that you can utilize this to trigger autoscaling depending on the job.
items:
type: string
type: array
repositories:
description: Repositories is a list of GitHub repositories. Any check_run event whose repository matches one of repositories in the list can trigger autoscaling.
items:
type: string
type: array
status:
type: string
types:
description: 'One of: created, rerequested, or completed'
items:
type: string
type: array
type: object
pullRequest:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
properties:
branches:
items:
type: string
type: array
types:
items:
type: string
type: array
type: object
push:
description: PushSpec is the condition for triggering scale-up on a push event. Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type: object
workflowJob:
description: https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#workflow_job
type: object
type: object
type: object
type: array
scheduledOverrides:
description: ScheduledOverrides is the list of ScheduledOverride. It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. The earlier a scheduled override is, the higher it is prioritized.
items:
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
properties:
endTime:
description: EndTime is the time at which the first override ends.
format: date-time
type: string
minReplicas:
description: MinReplicas is the number of runners while overriding. If omitted, it doesn't override minReplicas.
minimum: 0
nullable: true
type: integer
recurrenceRule:
properties:
frequency:
description: Frequency is the name of a predefined interval of each recurrence. The valid values are "Daily", "Weekly", "Monthly", and "Yearly". If empty, the corresponding override happens only once.
enum:
- Daily
- Weekly
- Monthly
- Yearly
type: string
untilTime:
description: UntilTime is the time of the final recurrence. If empty, the schedule recurs forever.
format: date-time
type: string
type: object
startTime:
description: StartTime is the time at which the first override starts.
format: date-time
type: string
required:
- endTime
- startTime
type: object
type: array
type: object
status:
properties:
cacheEntries:
items:
properties:
expirationTime:
format: date-time
type: string
key:
type: string
value:
type: integer
type: object
type: array
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet. This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
type: integer
lastSuccessfulScaleOutTime:
format: date-time
nullable: true
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed for the target. It corresponds to e.g. RunnerDeployment's generation, which is updated on mutation by the API Server.
format: int64
type: integer
scheduledOverridesSummary:
description: ScheduledOverridesSummary is the summary of active and upcoming scheduled overrides to be shown in e.g. a column of a `kubectl get hra` output for observability.
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
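
For reference, a HorizontalRunnerAutoscaler that exercises the fields above might look roughly like this (a sketch; the target name, thresholds and the override window are illustrative):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra
spec:
  scaleTargetRef:
    kind: RunnerDeployment
    name: example-runnerdeploy
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: PercentageRunnersBusy
      scaleUpThreshold: "0.75"
      scaleDownThreshold: "0.25"
      scaleUpAdjustment: 2
      scaleDownAdjustment: 1
  scheduledOverrides:
    - startTime: "2022-07-16T00:00:00Z"
      endTime: "2022-07-18T00:00:00Z"
      recurrenceRule:
        frequency: Weekly
      minReplicas: 0
```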

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,44 @@
## Upgrading
This project makes extensive use of CRDs to provide much of its functionality. Helm unfortunately does not support [managing](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/) CRDs by design:
_The full breakdown as to how they came to this decision and why they have taken the approach they have for dealing with CRDs can be found in [Helm Improvement Proposal 11](https://github.com/helm/community/blob/main/hips/hip-0011.md)_
```
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much
community discussion due to the danger for unintentional data loss. Furthermore, there is currently no community
consensus around how to handle CRDs and their lifecycle. As this evolves, Helm will add support for those use cases.
```
Helm will do an initial install of CRDs but it will not touch them afterwards (update or delete).
Additionally, because the project leverages CRDs so extensively, you **MUST** run the controller app container with its matching CRDs, i.e. always redeploy your CRDs when changing the app version.
Due to the above, you can't just do a `helm upgrade` to release the latest version of the chart; the best-practice steps are recorded below:
## Steps
1. Upgrade the CRDs. This isn't optional; the CRDs you are using must be those that correspond to the version of the controller you are installing
```shell
# REMEMBER TO UPDATE THE CHART_VERSION TO THE RELEVANT CHART VERSION!!!!
CHART_VERSION=0.18.0
curl -L https://github.com/actions-runner-controller/actions-runner-controller/releases/download/actions-runner-controller-${CHART_VERSION}/actions-runner-controller-${CHART_VERSION}.tgz | tar zxv --strip 1 actions-runner-controller/crds
kubectl replace -f crds/
```
2. Upgrade the Helm release
```shell
# helm repo [command]
helm repo update
# helm upgrade [RELEASE] [CHART] [flags]
helm upgrade actions-runner-controller \
actions-runner-controller/actions-runner-controller \
--install \
--namespace actions-runner-system \
--version ${CHART_VERSION}
```

View File

@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.githubWebhookServer.ingress.enabled }}
{{- range $host := .Values.githubWebhookServer.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.githubWebhookServer.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "actions-runner-controller.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "actions-runner-controller.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "actions-runner-controller.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "actions-runner-controller.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

View File

@@ -0,0 +1,64 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller-github-webhook-server.name" -}}
{{- default .Chart.Name .Values.githubWebhookServer.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.instance" -}}
{{- printf "%s-%s" .Release.Name "github-webhook-server" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller-github-webhook-server.fullname" -}}
{{- if .Values.githubWebhookServer.fullnameOverride }}
{{- .Values.githubWebhookServer.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.githubWebhookServer.nameOverride }}
{{- $instance := include "actions-runner-controller-github-webhook-server.instance" . }}
{{- if contains $name $instance }}
{{- $instance | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s-%s" .Release.Name $name "github-webhook-server" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller-github-webhook-server.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller-github-webhook-server.name" . }}
app.kubernetes.io/instance: {{ include "actions-runner-controller-github-webhook-server.instance" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller-github-webhook-server.serviceAccountName" -}}
{{- if .Values.githubWebhookServer.serviceAccount.create }}
{{- default (include "actions-runner-controller-github-webhook-server.fullname" .) .Values.githubWebhookServer.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.githubWebhookServer.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.secretName" -}}
{{- default (include "actions-runner-controller-github-webhook-server.fullname" .) .Values.githubWebhookServer.secret.name }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.roleName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.serviceMonitorName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . | trunc 47 }}-service-monitor
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.pdbName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . | trunc 59 }}-pdb
{{- end }}

View File

@@ -0,0 +1,117 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "actions-runner-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "actions-runner-controller.labels" -}}
helm.sh/chart: {{ include "actions-runner-controller.chart" . }}
{{ include "actions-runner-controller.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "actions-runner-controller.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller.secretName" -}}
{{- default (include "actions-runner-controller.fullname" .) .Values.authSecret.name -}}
{{- end }}
{{- define "actions-runner-controller.githubWebhookServerSecretName" -}}
{{- default (include "actions-runner-controller.fullname" .) .Values.githubWebhookServer.secret.name -}}
{{- end }}
{{- define "actions-runner-controller.leaderElectionRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-leader-election
{{- end }}
{{- define "actions-runner-controller.authProxyRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-proxy
{{- end }}
{{- define "actions-runner-controller.managerRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-manager
{{- end }}
{{- define "actions-runner-controller.runnerEditorRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-runner-editor
{{- end }}
{{- define "actions-runner-controller.runnerViewerRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-runner-viewer
{{- end }}
{{- define "actions-runner-controller.webhookServiceName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 55 }}-webhook
{{- end }}
{{- define "actions-runner-controller.metricsServiceName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 47 }}-metrics-service
{{- end }}
{{- define "actions-runner-controller.serviceMonitorName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 47 }}-service-monitor
{{- end }}
{{- define "actions-runner-controller.selfsignedIssuerName" -}}
{{- include "actions-runner-controller.fullname" . }}-selfsigned-issuer
{{- end }}
{{- define "actions-runner-controller.servingCertName" -}}
{{- include "actions-runner-controller.fullname" . }}-serving-cert
{{- end }}
{{- define "actions-runner-controller.pdbName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 59 }}-pdb
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- if .Values.metrics.proxy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
rules:
- apiGroups: ["authentication.k8s.io"]
resources:
- tokenreviews
verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
resources:
- subjectaccessreviews
verbs: ["create"]
{{- end }}

View File

@@ -0,0 +1,14 @@
{{- if .Values.metrics.proxy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,26 @@
{{- if .Values.certManagerEnabled }}
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More documentation can be found at https://docs.cert-manager.io
# WARNING: Targets cert-manager 0.11. Check https://docs.cert-manager.io/en/latest/tasks/upgrading/index.html for breaking changes
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ include "actions-runner-controller.selfsignedIssuerName" . }}
namespace: {{ .Release.Namespace }}
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ include "actions-runner-controller.servingCertName" . }}
namespace: {{ .Release.Namespace }}
spec:
dnsNames:
- {{ include "actions-runner-controller.webhookServiceName" . }}.{{ .Release.Namespace }}.svc
- {{ include "actions-runner-controller.webhookServiceName" . }}.{{ .Release.Namespace }}.svc.cluster.local
issuerRef:
kind: Issuer
name: {{ include "actions-runner-controller.selfsignedIssuerName" . }}
secretName: {{ include "actions-runner-controller.servingCertName" . }}
{{- end }}

View File

@@ -0,0 +1,14 @@
# This template only exists to facilitate CI testing of the chart, since
# a secret is expected to be found in the namespace by the controller manager
{{ if .Values.createDummySecret -}}
apiVersion: v1
data:
github_token: dGVzdA==
kind: Secret
metadata:
name: controller-manager
{{- if .Values.authSecret.annotations }}
annotations:
{{ toYaml .Values.authSecret.annotations | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
name: {{ include "actions-runner-controller.metricsServiceName" . }}
namespace: {{ .Release.Namespace }}
{{- with .Values.metrics.serviceAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
ports:
- name: metrics-port
port: {{ .Values.metrics.port }}
targetPort: metrics-port
selector:
{{- include "actions-runner-controller.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,24 @@
{{- if .Values.metrics.serviceMonitor }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.metrics.serviceMonitorLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller.serviceMonitorName" . }}
spec:
endpoints:
- path: /metrics
port: metrics-port
{{- if .Values.metrics.proxy.enabled }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
scheme: https
tlsConfig:
insecureSkipVerify: true
{{- end }}
selector:
matchLabels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 6 }}
{{- end }}

View File

@@ -0,0 +1,19 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
name: {{ include "actions-runner-controller.pdbName" . }}
namespace: {{ .Release.Namespace }}
spec:
{{- if .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 6 }}
{{- end -}}

View File

@@ -0,0 +1,206 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
kubectl.kubernetes.io/default-logs-container: "manager"
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
{{- with .Values.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- args:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
{{- if .Values.enableLeaderElection }}
- "--enable-leader-election"
{{- end }}
{{- if .Values.leaderElectionId }}
- "--leader-election-id={{ .Values.leaderElectionId }}"
{{- end }}
- "--port={{ .Values.webhookPort }}"
- "--sync-period={{ .Values.syncPeriod }}"
- "--default-scale-down-delay={{ .Values.defaultScaleDownDelay }}"
- "--docker-image={{ .Values.image.dindSidecarRepositoryAndTag }}"
- "--runner-image={{ .Values.image.actionsRunnerRepositoryAndTag }}"
{{- range .Values.image.actionsRunnerImagePullSecrets }}
- "--runner-image-pull-secret={{ . }}"
{{- end }}
{{- if .Values.dockerRegistryMirror }}
- "--docker-registry-mirror={{ .Values.dockerRegistryMirror }}"
{{- end }}
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
{{- if .Values.githubAPICacheDuration }}
- "--github-api-cache-duration={{ .Values.githubAPICacheDuration }}"
{{- end }}
{{- if .Values.logLevel }}
- "--log-level={{ .Values.logLevel }}"
{{- end }}
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
command:
- "/manager"
env:
{{- if .Values.githubEnterpriseServerURL }}
- name: GITHUB_ENTERPRISE_URL
value: {{ .Values.githubEnterpriseServerURL }}
{{- end }}
{{- if .Values.githubURL }}
- name: GITHUB_URL
value: {{ .Values.githubURL }}
{{- end }}
{{- if .Values.githubUploadURL }}
- name: GITHUB_UPLOAD_URL
value: {{ .Values.githubUploadURL }}
{{- end }}
{{- if .Values.authSecret.enabled }}
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
key: github_token
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
- name: GITHUB_APP_PRIVATE_KEY
valueFrom:
secretKeyRef:
key: github_app_private_key
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- if .Values.authSecret.github_basicauth_username }}
- name: GITHUB_BASICAUTH_USERNAME
value: {{ .Values.authSecret.github_basicauth_username }}
{{- end }}
- name: GITHUB_BASICAUTH_PASSWORD
valueFrom:
secretKeyRef:
key: github_basicauth_password
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: manager
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.webhookPort }}
name: webhook-server
protocol: TCP
{{- if not .Values.metrics.proxy.enabled }}
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
protocol: TCP
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
volumeMounts:
{{- if .Values.authSecret.enabled }}
- mountPath: "/etc/actions-runner-controller"
name: secret
readOnly: true
{{- end }}
- mountPath: /tmp
name: tmp
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
{{- if .Values.additionalVolumeMounts }}
{{- toYaml .Values.additionalVolumeMounts | nindent 8 }}
{{- end }}
{{- if .Values.metrics.proxy.enabled }}
- args:
- "--secure-listen-address=0.0.0.0:{{ .Values.metrics.port }}"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: "{{ .Values.metrics.proxy.image.repository }}:{{ .Values.metrics.proxy.image.tag }}"
name: kube-rbac-proxy
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
volumes:
{{- if .Values.authSecret.enabled }}
- name: secret
secret:
secretName: {{ include "actions-runner-controller.secretName" . }}
{{- end }}
- name: cert
secret:
defaultMode: 420
secretName: {{ include "actions-runner-controller.servingCertName" . }}
- name: tmp
emptyDir: {}
{{- if .Values.additionalVolumes }}
{{- toYaml .Values.additionalVolumes | nindent 6}}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.hostNetwork }}
hostNetwork: {{ .Values.hostNetwork }}
{{- end }}
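A minimal values sketch (hypothetical values, not part of the chart) showing how the manager Deployment above is typically driven: with metrics.proxy.enabled=true the manager binds metrics to 127.0.0.1:8080 and the kube-rbac-proxy sidecar exposes them on metrics.port, and with authSecret.enabled=true the GITHUB_* env vars are sourced from the referenced secret.

authSecret:
  enabled: true
  name: controller-manager    # pre-created secret holding github_token or GitHub App keys
metrics:
  port: 8443
  proxy:
    enabled: true             # adds the kube-rbac-proxy sidecar shown above
logLevel: info                # rendered as --log-level=info
scope:
  singleNamespace: false      # set true to add --watch-namespace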

View File

@@ -0,0 +1,163 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.githubWebhookServer.replicaCount }}
selector:
matchLabels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.githubWebhookServer.podAnnotations }}
annotations:
kubectl.kubernetes.io/default-logs-container: "github-webhook-server"
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 8 }}
{{- with .Values.githubWebhookServer.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.githubWebhookServer.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller-github-webhook-server.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.githubWebhookServer.podSecurityContext | nindent 8 }}
{{- with .Values.githubWebhookServer.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- args:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
- "--sync-period={{ .Values.githubWebhookServer.syncPeriod }}"
{{- if .Values.githubWebhookServer.logLevel }}
- "--log-level={{ .Values.githubWebhookServer.logLevel }}"
{{- end }}
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
command:
- "/github-webhook-server"
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
secretKeyRef:
key: github_webhook_secret_token
name: {{ include "actions-runner-controller-github-webhook-server.secretName" . }}
optional: true
{{- if .Values.githubEnterpriseServerURL }}
- name: GITHUB_ENTERPRISE_URL
value: {{ .Values.githubEnterpriseServerURL }}
{{- end }}
{{- if .Values.githubURL }}
- name: GITHUB_URL
value: {{ .Values.githubURL }}
{{- end }}
{{- if .Values.githubUploadURL }}
- name: GITHUB_UPLOAD_URL
value: {{ .Values.githubUploadURL }}
{{- end }}
{{- if and .Values.githubWebhookServer.useRunnerGroupsVisibility .Values.githubWebhookServer.secret.enabled }}
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
key: github_token
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
optional: true
- name: GITHUB_APP_PRIVATE_KEY
valueFrom:
secretKeyRef:
key: github_app_private_key
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
optional: true
{{- if .Values.authSecret.github_basicauth_username }}
- name: GITHUB_BASICAUTH_USERNAME
value: {{ .Values.authSecret.github_basicauth_username }}
{{- end }}
- name: GITHUB_BASICAUTH_PASSWORD
valueFrom:
secretKeyRef:
key: github_basicauth_password
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- range $key, $val := .Values.githubWebhookServer.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: github-webhook-server
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8000
name: http
protocol: TCP
{{- if not .Values.metrics.proxy.enabled }}
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
protocol: TCP
{{- end }}
resources:
{{- toYaml .Values.githubWebhookServer.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.githubWebhookServer.securityContext | nindent 12 }}
{{- if .Values.metrics.proxy.enabled }}
- args:
- "--secure-listen-address=0.0.0.0:{{ .Values.metrics.port }}"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: "{{ .Values.metrics.proxy.image.repository }}:{{ .Values.metrics.proxy.image.tag }}"
name: kube-rbac-proxy
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
{{- with .Values.githubWebhookServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.githubWebhookServer.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.githubWebhookServer.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.githubWebhookServer.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
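A values sketch that enables the github-webhook-server Deployment above; note that the GITHUB_* credential env vars only render when both useRunnerGroupsVisibility and secret.enabled are true. The IDs and tokens below are placeholders.

githubWebhookServer:
  enabled: true
  syncPeriod: 10m
  useRunnerGroupsVisibility: true
  secret:
    enabled: true
    create: true
    name: github-webhook-server
    github_webhook_secret_token: "<webhook-secret>"
    github_app_id: "12345"                 # keep IDs as quoted strings
    github_app_installation_id: "678910"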

View File

@@ -0,0 +1,60 @@
{{- if .Values.githubWebhookServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-github-webhook-server.fullname" . -}}
{{- $svcPort := (index .Values.githubWebhookServer.service.ports 0).port -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else if .Capabilities.APIVersions.Has "extensions/v1beta1/Ingress" }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.githubWebhookServer.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.githubWebhookServer.ingress.tls }}
tls:
{{- range .Values.githubWebhookServer.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
{{- with .Values.githubWebhookServer.ingress.ingressClassName }}
ingressClassName: {{ . }}
{{- end }}
rules:
{{- range .Values.githubWebhookServer.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- if .extraPaths }}
{{- toYaml .extraPaths | nindent 10 }}
{{- end }}
{{- range .paths }}
- path: {{ .path }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
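An illustrative set of ingress values consumed by the template above, assuming an nginx ingress class and a hypothetical hostname:

githubWebhookServer:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - host: arc-webhook.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: arc-webhook-tls
        hosts:
          - arc-webhook.example.com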

View File

@@ -0,0 +1,19 @@
{{- if .Values.githubWebhookServer.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
name: {{ include "actions-runner-controller-github-webhook-server.pdbName" . }}
namespace: {{ .Release.Namespace }}
spec:
{{- if .Values.githubWebhookServer.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.githubWebhookServer.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.githubWebhookServer.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.githubWebhookServer.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 6 }}
{{- end -}}

View File

@@ -0,0 +1,90 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnersets
verbs:
- get
- list
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/status
verbs:
- get
- patch
- update
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
{{- end }}

View File

@@ -0,0 +1,14 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller-github-webhook-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,28 @@
{{- if .Values.githubWebhookServer.enabled }}
{{- if .Values.githubWebhookServer.secret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.githubWebhookServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.githubWebhookServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_id }}
github_app_id: {{ .Values.githubWebhookServer.secret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_installation_id }}
github_app_installation_id: {{ .Values.githubWebhookServer.secret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_private_key }}
github_app_private_key: {{ .Values.githubWebhookServer.secret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_token }}
github_token: {{ .Values.githubWebhookServer.secret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,26 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- if .Values.githubWebhookServer.service.annotations }}
annotations:
{{ toYaml .Values.githubWebhookServer.service.annotations | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.githubWebhookServer.service.type }}
ports:
{{ range $_, $port := .Values.githubWebhookServer.service.ports -}}
- {{ $port | toYaml | nindent 6 }}
{{- end }}
{{- if .Values.metrics.serviceMonitor }}
- name: metrics-port
port: {{ .Values.metrics.port }}
targetPort: metrics-port
{{- end }}
selector:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,24 @@
{{- if and .Values.githubWebhookServer.enabled .Values.metrics.serviceMonitor }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.metrics.serviceMonitorLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller-github-webhook-server.serviceMonitorName" . }}
spec:
endpoints:
- path: /metrics
port: metrics-port
{{- if .Values.metrics.proxy.enabled }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
scheme: https
tlsConfig:
insecureSkipVerify: true
{{- end }}
selector:
matchLabels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 6 }}
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- if .Values.githubWebhookServer.enabled -}}
{{- if .Values.githubWebhookServer.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.githubWebhookServer.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,33 @@
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- update
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create

View File

@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,260 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.managerRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnersets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnersets/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnersets/status
verbs:
- get
- patch
- update
- apiGroups:
- "apps"
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- "apps"
resources:
- statefulsets/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- list
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller.managerRoleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller.managerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,32 @@
{{- if .Values.authSecret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller.secretName" . }}
namespace: {{ .Release.Namespace }}
{{- if .Values.authSecret.annotations }}
annotations:
{{ toYaml .Values.authSecret.annotations | nindent 4 }}
{{- end }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.authSecret.github_app_id }}
# Keep this as a string as strings integrate better with things like AWS Parameter Store, see PR #882 for an example
github_app_id: {{ .Values.authSecret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_app_installation_id }}
# Keep this as a string as strings integrate better with things like AWS Parameter Store, see PR #882 for an example
github_app_installation_id: {{ .Values.authSecret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_app_private_key }}
github_app_private_key: {{ .Values.authSecret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_token }}
github_token: {{ .Values.authSecret.github_token | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_basicauth_password }}
github_basicauth_password: {{ .Values.authSecret.github_basicauth_password | toString | b64enc }}
{{- end }}
{{- end }}
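A values sketch for the chart-managed auth secret rendered above, using GitHub App authentication with placeholder credentials; as the comments note, the IDs are kept as quoted strings.

authSecret:
  enabled: true
  create: true
  github_app_id: "12345"
  github_app_installation_id: "678910"
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    <private key material>
    -----END RSA PRIVATE KEY-----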

View File

@@ -0,0 +1,26 @@
# permissions to edit runners.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.runnerEditorRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get
- patch
- update

View File

@@ -0,0 +1,20 @@
# permissions to view runners.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.runnerViewerRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- get
- list
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get

View File

@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,221 @@
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.fullname" . }}-mutating-webhook-configuration
{{- if .Values.certManagerEnabled }}
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "actions-runner-controller.servingCertName" . }}
{{- end }}
webhooks:
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runner
failurePolicy: Fail
name: mutate.runner.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runners
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runnerdeployment
failurePolicy: Fail
name: mutate.runnerdeployment.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerdeployments
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset
failurePolicy: Fail
name: mutate.runnerreplicaset.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerreplicasets
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-runner-set-pod
failurePolicy: Fail
name: mutate-runner-pod.webhook.actions.summerwind.dev
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
resources:
- pods
sideEffects: None
objectSelector:
matchLabels:
"actions-runner-controller/inject-registration-token": "true"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.fullname" . }}-validating-webhook-configuration
{{- if .Values.certManagerEnabled }}
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "actions-runner-controller.servingCertName" . }}
{{- end }}
webhooks:
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runner
failurePolicy: Fail
name: validate.runner.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runners
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runnerdeployment
failurePolicy: Fail
name: validate.runnerdeployment.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerdeployments
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runnerreplicaset
failurePolicy: Fail
name: validate.runnerreplicaset.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerreplicasets
sideEffects: None
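When scope.singleNamespace is true, every webhook above carries a namespaceSelector matching a name: <watchNamespace> label, so the watched namespace must carry that label. A minimal sketch with a hypothetical namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: actions-runners          # hypothetical watchNamespace
  labels:
    name: actions-runners        # must match the namespaceSelector above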

View File

@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: 443
targetPort: {{ .Values.webhookPort }}
protocol: TCP
name: https
selector:
{{- include "actions-runner-controller.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,257 @@
# Default values for actions-runner-controller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
labels: {}
replicaCount: 1
webhookPort: 9443
syncPeriod: 1m
defaultScaleDownDelay: 10m
enableLeaderElection: true
# Specifies the controller id for leader election.
# Must be unique if more than one controller is installed in the same namespace.
#leaderElectionId: "actions-runner-controller"
# DEPRECATED: This has been removed as unnecessary in #1192
# The controller tries its best not to repeat the same GitHub API call
# within this duration.
# Defaults to syncPeriod - 10s.
#githubAPICacheDuration: 30s
# The URL of your GitHub Enterprise server, if you're using one.
#githubEnterpriseServerURL: https://github.example.com
# Override GitHub URLs in case of using proxy APIs
#githubURL: ""
#githubUploadURL: ""
#runnerGithubURL: ""
# Only 1 authentication method can be deployed at a time
# Uncomment the configuration you are applying and fill in the details
#
# If authSecret.enabled=true, these values are passed into the env of actions-runner-controller's controller-manager container.
#
# Set authSecret.enabled=false and use env if you want full control over
# the GitHub authn-related envvars of the container.
# See https://github.com/actions-runner-controller/actions-runner-controller/pull/937 for more details.
authSecret:
enabled: true
create: false
name: "controller-manager"
annotations: {}
### GitHub Apps Configuration
## NOTE: IDs MUST be strings, use quotes
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
#github_token: ""
### Basic auth for github API proxy
#github_basicauth_username: ""
#github_basicauth_password: ""
dockerRegistryMirror: ""
image:
repository: "summerwind/actions-runner-controller"
actionsRunnerRepositoryAndTag: "summerwind/actions-runner:latest"
dindSidecarRepositoryAndTag: "docker:dind"
pullPolicy: IfNotPresent
# The default image-pull secrets name for self-hosted runner container.
# It's added to spec.ImagePullSecrets of self-hosted runner pods.
actionsRunnerImagePullSecrets: []
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# Webhook service resource
service:
type: ClusterIP
port: 443
annotations: {}
# Metrics service resource
metrics:
serviceAnnotations: {}
serviceMonitor: false
serviceMonitorLabels: {}
port: 8443
proxy:
enabled: true
image:
repository: quay.io/brancz/kube-rbac-proxy
tag: v0.13.0
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances that charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
# Only one of minAvailable or maxUnavailable can be set
podDisruptionBudget:
enabled: false
# minAvailable: 1
# maxUnavailable: 3
# Leverage a PriorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# PriorityClass: system-cluster-critical
priorityClassName: ""
env:
{}
# http_proxy: "proxy.com:8080"
# https_proxy: "proxy.com:8080"
# no_proxy: ""
## Specify additional volumes to mount in the manager container. This can be used
## to provide additional storage or to inject files from ConfigMaps
## into the running container.
additionalVolumes: []
## specify where the additional volumes are mounted in the manager container
additionalVolumeMounts: []
scope:
# If true, the controller will only watch custom resources in a single namespace
singleNamespace: false
# If `scope.singleNamespace=true`, the controller will only watch custom resources in this namespace
# The default value is "", which means the namespace of the controller
watchNamespace: ""
certManagerEnabled: true
admissionWebHooks:
{}
#caBundle: "Ci0tLS0tQk...<base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate>...tLS0K"
# There may be alternatives to setting `hostNetwork: true`, see
# https://github.com/actions-runner-controller/actions-runner-controller/issues/1005#issuecomment-993097155
#hostNetwork: true
githubWebhookServer:
enabled: false
replicaCount: 1
syncPeriod: 10m
useRunnerGroupsVisibility: false
secret:
enabled: false
create: false
name: "github-webhook-server"
### GitHub Webhook Configuration
github_webhook_secret_token: ""
### GitHub Apps Configuration
## NOTE: IDs MUST be strings, use quotes
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
#github_token: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
priorityClassName: ""
service:
type: ClusterIP
annotations: {}
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
#nodePort: someFixedPortForUseWithTerraformCdkCfnEtc
ingress:
enabled: false
ingressClassName: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
# - path: /*
# pathType: ImplementationSpecific
# Extra paths that are not automatically connected to the server. This is useful when working with annotation based services.
extraPaths: []
# - path: /*
# backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
## for Kubernetes >=1.19 (when "networking.k8s.io/v1" is used)
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: ssl-redirect
# port:
# name: use-annotation
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# Only one of minAvailable or maxUnavailable can be set
podDisruptionBudget:
enabled: false
# minAvailable: 1
# maxUnavailable: 3

View File

@@ -0,0 +1,225 @@
/*
Copyright 2021 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"context"
"errors"
"flag"
"fmt"
"net/http"
"os"
"sync"
"time"
actionsv1alpha1 "github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/controllers"
"github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/actions-runner-controller/actions-runner-controller/logging"
"github.com/kelseyhightower/envconfig"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/exec"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
ctrl "sigs.k8s.io/controller-runtime"
// +kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
const (
webhookSecretTokenEnvName = "GITHUB_WEBHOOK_SECRET_TOKEN"
)
func init() {
_ = clientgoscheme.AddToScheme(scheme)
_ = actionsv1alpha1.AddToScheme(scheme)
// +kubebuilder:scaffold:scheme
}
func main() {
var (
err error
webhookAddr string
metricsAddr string
// The secret token of the GitHub Webhook. See https://docs.github.com/en/developers/webhooks-and-events/securing-your-webhooks
webhookSecretToken string
webhookSecretTokenEnv string
watchNamespace string
enableLeaderElection bool
syncPeriod time.Duration
logLevel string
queueLimit int
ghClient *github.Client
)
var c github.Config
err = envconfig.Process("github", &c)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: processing environment variables: %v\n", err)
os.Exit(1)
}
webhookSecretTokenEnv = os.Getenv(webhookSecretTokenEnvName)
flag.StringVar(&webhookAddr, "webhook-addr", ":8000", "The address the webhook endpoint binds to.")
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&watchNamespace, "watch-namespace", "", "The namespace to watch for HorizontalRunnerAutoscalers to scale on webhook events. Set to empty to let it watch all namespaces.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set it to a lower value such as 10 minutes, because this corresponds to the minimum time to react to demand changes")
flag.StringVar(&logLevel, "log-level", logging.LogLevelDebug, `The verbosity of the logging. Valid values are "debug", "info", "warn", "error". Defaults to "debug".`)
flag.IntVar(&queueLimit, "queue-limit", controllers.DefaultQueueLimit, `The maximum length of the scale operation queue. A scale operation is enqueued for every matching webhook event, and the server returns a 500 HTTP status when the queue is already full on the enqueue attempt.`)
flag.StringVar(&webhookSecretToken, "github-webhook-secret-token", "", "The webhook secret token used to validate GitHub webhook payloads.")
flag.StringVar(&c.Token, "github-token", c.Token, "The personal access token of GitHub.")
flag.Int64Var(&c.AppID, "github-app-id", c.AppID, "The application ID of GitHub App.")
flag.Int64Var(&c.AppInstallationID, "github-app-installation-id", c.AppInstallationID, "The installation ID of GitHub App.")
flag.StringVar(&c.AppPrivateKey, "github-app-private-key", c.AppPrivateKey, "The path of a private key file to authenticate as a GitHub App")
flag.StringVar(&c.URL, "github-url", c.URL, "GitHub URL to be used for GitHub API calls")
flag.StringVar(&c.UploadURL, "github-upload-url", c.UploadURL, "GitHub Upload URL to be used for GitHub API calls")
flag.StringVar(&c.BasicauthUsername, "github-basicauth-username", c.BasicauthUsername, "Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API")
flag.StringVar(&c.BasicauthPassword, "github-basicauth-password", c.BasicauthPassword, "Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API")
flag.StringVar(&c.RunnerGitHubURL, "runner-github-url", c.RunnerGitHubURL, "GitHub URL to be used by runners during registration")
flag.Parse()
if webhookSecretToken == "" && webhookSecretTokenEnv != "" {
setupLog.Info(fmt.Sprintf("Using the value from %s for -github-webhook-secret-token", webhookSecretTokenEnvName))
webhookSecretToken = webhookSecretTokenEnv
}
if webhookSecretToken == "" {
setupLog.Info(fmt.Sprintf("-github-webhook-secret-token and %s are missing or empty. Create one following https://docs.github.com/en/developers/webhooks-and-events/securing-your-webhooks and specify it via the flag or the envvar", webhookSecretTokenEnvName))
}
if watchNamespace == "" {
setupLog.Info("-watch-namespace is empty. HorizontalRunnerAutoscalers in all the namespaces are watched, cached, and considered as scale targets.")
} else {
setupLog.Info("-watch-namespace is %q. Only HorizontalRunnerAutoscalers in %q are watched, cached, and considered as scale targets.")
}
logger := logging.NewLogger(logLevel)
ctrl.SetLogger(logger)
// In order to support runner groups with custom visibility (selected repositories), we need to perform some GitHub API calls.
// Let the user define if they want to opt-in supporting this option by providing the proper GitHub authentication parameters
// Without an opt-in, runner groups with custom visibility won't be supported to save API calls
// That is, all runner groups managed by ARC are assumed to be visible to any repositories,
// which is wrong when you have one or more non-default runner groups in your organization or enterprise.
if len(c.Token) > 0 || (c.AppID > 0 && c.AppInstallationID > 0 && c.AppPrivateKey != "") || (len(c.BasicauthUsername) > 0 && len(c.BasicauthPassword) > 0) {
c.Log = &logger
ghClient, err = c.NewClient()
if err != nil {
fmt.Fprintln(os.Stderr, "Error: Client creation failed.", err)
setupLog.Error(err, "unable to create GitHub client")
os.Exit(1)
}
} else {
setupLog.Info("GitHub client is not initialized. Runner groups with custom visibility are not supported. If needed, please provide GitHub authentication. This will incur in extra GitHub API calls")
}
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
SyncPeriod: &syncPeriod,
LeaderElection: enableLeaderElection,
Namespace: watchNamespace,
MetricsBindAddress: metricsAddr,
Port: 9443,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
hraGitHubWebhook := &controllers.HorizontalRunnerAutoscalerGitHubWebhook{
Name: "webhookbasedautoscaler",
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("webhookbasedautoscaler"),
Recorder: nil,
Scheme: mgr.GetScheme(),
SecretKeyBytes: []byte(webhookSecretToken),
Namespace: watchNamespace,
GitHubClient: ghClient,
QueueLimit: queueLimit,
}
if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "webhookbasedautoscaler")
os.Exit(1)
}
var wg sync.WaitGroup
ctx, cancel := context.WithCancel(context.Background())
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
setupLog.Info("starting webhook server")
if err := mgr.Start(ctx); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}()
mux := http.NewServeMux()
mux.HandleFunc("/", hraGitHubWebhook.Handle)
srv := http.Server{
Addr: webhookAddr,
Handler: mux,
}
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
go func() {
<-ctx.Done()
srv.Shutdown(context.Background())
}()
if err := srv.ListenAndServe(); err != nil {
if !errors.Is(err, http.ErrServerClosed) {
setupLog.Error(err, "problem running http server")
}
}
}()
go func() {
<-ctrl.SetupSignalHandler().Done()
cancel()
}()
wg.Wait()
}
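The binary above is the webhook-based autoscaler that serves GitHub webhook events and adjusts HorizontalRunnerAutoscalers. As a rough usage sketch, a HorizontalRunnerAutoscaler with a workflow_job scale-up trigger (hypothetical names and replica counts) would look like this:

apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-autoscaler
spec:
  scaleTargetRef:
    kind: RunnerDeployment
    name: example-runner-deployment
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
    - githubEvent:
        workflowJob: {}
      amount: 1
      duration: "5m"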

View File

@@ -1,7 +1,7 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More documentation can be found at https://docs.cert-manager.io
# WARNING: Targets cert-manager 0.11. Check https://docs.cert-manager.io/en/latest/tasks/upgrading/index.html for breaking changes
apiVersion: cert-manager.io/v1alpha2
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned-issuer
@@ -9,7 +9,7 @@ metadata:
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml

View File

@@ -0,0 +1,249 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
creationTimestamp: null
name: horizontalrunnerautoscalers.actions.summerwind.dev
spec:
group: actions.summerwind.dev
names:
kind: HorizontalRunnerAutoscaler
listKind: HorizontalRunnerAutoscalerList
plural: horizontalrunnerautoscalers
shortNames:
- hra
singular: horizontalrunnerautoscaler
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.minReplicas
name: Min
type: number
- jsonPath: .spec.maxReplicas
name: Max
type: number
- jsonPath: .status.desiredReplicas
name: Desired
type: number
- jsonPath: .status.scheduledOverridesSummary
name: Schedule
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
properties:
capacityReservations:
items:
description: CapacityReservation specifies the number of replicas temporarily added to the scale target until ExpirationTime.
properties:
effectiveTime:
format: date-time
type: string
expirationTime:
format: date-time
type: string
name:
type: string
replicas:
type: integer
type: object
type: array
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer
metrics:
description: Metrics is the collection of various metric targets to calculate desired number of runners
items:
properties:
repositoryNames:
description: RepositoryNames is the list of repository names to be used for calculating the metric. For example, a repository name is the REPO part of `github.com/USER/REPO`.
items:
type: string
type: array
scaleDownAdjustment:
description: ScaleDownAdjustment is the number of runners removed on scale-down. You can only specify either ScaleDownFactor or ScaleDownAdjustment.
type: integer
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied to the current number of runners used to determine how many pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners less than which will trigger the hpa to scale the runners down.
type: string
scaleUpAdjustment:
description: ScaleUpAdjustment is the number of runners added on scale-up. You can only specify either ScaleUpFactor or ScaleUpAdjustment.
type: integer
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied to the current number of runners used to determine how many pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling. The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
type: string
type: object
type: array
minReplicas:
description: MinReplicas is the minimum number of replicas the deployment is allowed to scale
type: integer
scaleDownDelaySecondsAfterScaleOut:
description: ScaleDownDelaySecondsAfterScaleUp is the approximate delay for a scale down after a scale up. Used to prevent flapping (down->up->down->... loop)
type: integer
scaleTargetRef:
description: ScaleTargetRef is the reference to the scaled resource, such as a RunnerDeployment
properties:
kind:
description: Kind is the type of resource being referenced
enum:
- RunnerDeployment
- RunnerSet
type: string
name:
description: Name is the name of resource being referenced
type: string
type: object
scaleUpTriggers:
description: "ScaleUpTriggers is an experimental feature to increase the desired replicas by 1 on each webhook requested received by the webhookBasedAutoscaler. \n This feature requires you to also enable and deploy the webhookBasedAutoscaler onto your cluster. \n Note that the added runners remain until the next sync period at least, and they may or may not be used by GitHub Actions depending on the timing. They are intended to be used to gain \"resource slack\" immediately after you receive a webhook from GitHub, so that you can loosely expect MinReplicas runners to be always available."
items:
properties:
amount:
type: integer
duration:
type: string
githubEvent:
properties:
checkRun:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
properties:
names:
description: Names is a list of GitHub Actions glob patterns. Any check_run event whose name matches one of the patterns in the list can trigger autoscaling. Note that the check_run name seems to equal the job name you've defined in your Actions workflow YAML file, so it is very likely that you can use this to trigger scaling depending on the job.
items:
type: string
type: array
repositories:
description: Repositories is a list of GitHub repositories. Any check_run event whose repository matches one of the repositories in the list can trigger autoscaling.
items:
type: string
type: array
status:
type: string
types:
description: 'One of: created, rerequested, or completed'
items:
type: string
type: array
type: object
pullRequest:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
properties:
branches:
items:
type: string
type: array
types:
items:
type: string
type: array
type: object
push:
description: PushSpec is the condition for triggering scale-up on a push event. Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type: object
workflowJob:
description: https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#workflow_job
type: object
type: object
type: object
type: array
scheduledOverrides:
description: ScheduledOverrides is the list of ScheduledOverride. It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. An override that appears earlier in the list is prioritized higher than later ones.
items:
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
properties:
endTime:
description: EndTime is the time at which the first override ends.
format: date-time
type: string
minReplicas:
description: MinReplicas is the number of runners while overriding. If omitted, it doesn't override minReplicas.
minimum: 0
nullable: true
type: integer
recurrenceRule:
properties:
frequency:
description: Frequency is the name of a predefined interval of each recurrence. The valid values are "Daily", "Weekly", "Monthly", and "Yearly". If empty, the corresponding override happens only once.
enum:
- Daily
- Weekly
- Monthly
- Yearly
type: string
untilTime:
description: UntilTime is the time of the final recurrence. If empty, the schedule recurs forever.
format: date-time
type: string
type: object
startTime:
description: StartTime is the time at which the first override starts.
format: date-time
type: string
required:
- endTime
- startTime
type: object
type: array
type: object
status:
properties:
cacheEntries:
items:
properties:
expirationTime:
format: date-time
type: string
key:
type: string
value:
type: integer
type: object
type: array
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated, and latest pods to be set for the primary RunnerSet. This doesn't include outdated pods while upgrading the deployment and replacing the RunnerSet.
type: integer
lastSuccessfulScaleOutTime:
format: date-time
nullable: true
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed for the target. It corresponds to e.g. RunnerDeployment's generation, which is updated on mutation by the API Server.
format: int64
type: integer
scheduledOverridesSummary:
description: ScheduledOverridesSummary is the summary of active and upcoming scheduled overrides to be shown in e.g. a column of a `kubectl get hra` output for observability.
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
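
For reference, the fields described in the schema above come together in a HorizontalRunnerAutoscaler manifest roughly like the following. This is a minimal sketch of pull-based (metrics-driven) autoscaling, not a definitive configuration: the names example-hra, example-runner-deployment, and example-repo are hypothetical placeholders, and the replica counts, delay, and override window are illustrative only.

apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra                        # hypothetical name
spec:
  scaleTargetRef:
    kind: RunnerDeployment                 # or RunnerSet, per the enum above
    name: example-runner-deployment        # hypothetical scale target
  minReplicas: 1
  maxReplicas: 10
  scaleDownDelaySecondsAfterScaleOut: 300  # wait ~5 minutes before scaling down after a scale out
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - example-repo                         # the REPO part of github.com/USER/REPO
  scheduledOverrides:
  - startTime: "2022-07-16T00:00:00+09:00" # first override window (startTime and endTime are required)
    endTime: "2022-07-18T00:00:00+09:00"
    recurrenceRule:
      frequency: Weekly                    # repeat the window every week
    minReplicas: 0                         # e.g. scale to zero over the weekend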

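The scaleUpTriggers field, in contrast, is intended for webhook-driven scaling and is typically used instead of the metrics block. A minimal sketch, assuming the webhook-based autoscaler is deployed and receiving workflow_job events; the resource names are again hypothetical placeholders:

apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-webhook-driven-hra         # hypothetical name
spec:
  scaleTargetRef:
    kind: RunnerDeployment
    name: example-runner-deployment        # hypothetical scale target
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}                      # react to workflow_job webhook events
    amount: 1                              # add one replica's worth of capacity per event
    duration: "5m"                         # the added capacity expires after this duration

Each trigger effectively adds a temporary capacity reservation (see capacityReservations in the schema above) rather than permanently raising the replica count, which is why the added runners expire after the configured duration.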

@@ -3,6 +3,10 @@
# It should be run by config/default
resources:
- bases/actions.summerwind.dev_runners.yaml
- bases/actions.summerwind.dev_runnerreplicasets.yaml
- bases/actions.summerwind.dev_runnerdeployments.yaml
- bases/actions.summerwind.dev_horizontalrunnerautoscalers.yaml
- bases/actions.summerwind.dev_runnersets.yaml
# +kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
