Compare commits

...

224 Commits

Author SHA1 Message Date
Jesse Haka
332548093a feat: replace v1beta1 api with v1 (#1931)
* replace v1beta1 api with v1
2022-10-25 20:12:31 +01:00
renovate[bot]
b4e143dadc fix(deps): update module golang.org/x/oauth2 to v0.1.0 (#1938)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-23 09:20:49 +09:00
Cory Miller
93c4dd856e Add first interaction workflow to greet new contributors and users (#1918) 2022-10-21 22:58:06 +09:00
Cory Miller
93aea48c38 Revamp the contribution guide (#1917)
* Revamp the contribution guide

* Fix link

* Update CONTRIBUTING.md

Co-authored-by: Ava Stancu <avastancu@github.com>

* Update CONTRIBUTING.md

Co-authored-by: Ava Stancu <avastancu@github.com>

* add guidance on image tool requests

Co-authored-by: Ava Stancu <avastancu@github.com>
2022-10-21 22:56:42 +09:00
DongHo Jung
14b17cca73 docs: fix typo for syncPeriod in chart README (#1942) 2022-10-21 09:54:59 +01:00
Ayoola Ajebeku
5298c6ea29 docs: remove duplicate word (#1930) 2022-10-17 19:47:13 +01:00
renovate[bot]
8fa08d59b1 fix(deps): update golang.org/x/oauth2 digest to 6fdb5e3 (#1922)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-16 17:17:25 +09:00
renovate[bot]
003c552c34 fix(deps): update kubernetes packages to v0.25.3 (#1919)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-16 17:17:11 +09:00
Callum Tait
83370d7f95 ci: use github-pr-check reporter for shellcheck (#1927)
* ci: use github-pr-review reporter for shellcheck

* ci: use default reporter

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-10-16 17:15:56 +09:00
Connor Murphy
7da9d0ae19 Docs: fix broken link to readme (#1912) 2022-10-14 06:43:35 +09:00
Callum Tait
b56fa6a748 docs: minor grammar fix (#1915) 2022-10-13 09:08:09 +09:00
Callum Tait
a22ee8a5f1 chore: add new label to bug form (#1913) 2022-10-13 09:07:26 +09:00
Yusuke Kuoka
e1762ba746 Fix inability to configure MTU for rootless dind runner (#1856)
Follow-up for https://github.com/actions-runner-controller/actions-runner-controller/pull/1644
2022-10-13 09:04:56 +09:00
Yusuke Kuoka
710e2fbc3a Prevent runner controller from recreating runner pod when pod was terminated externally (#1851) 2022-10-13 09:04:50 +09:00
Yusuke Kuoka
433552770e Let it be a bug only when it's reproducible with official runner image (#1839)
* Let it be a bug only when it's reproducible with official runner image

A custom runner image can break runners and ARC in interesting ways. Probably it's better to clearly state that ARC is not guaranteed to work with every custom runner image in the wild.

* Update .github/ISSUE_TEMPLATE/bug_report.yml
2022-10-13 09:04:40 +09:00
renovate[bot]
ba2a32eef6 fix(deps): update module github.com/onsi/gomega to v1.22.1 (#1911)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-12 10:54:15 +09:00
renovate[bot]
de0d7ad78c fix(deps): update module github.com/onsi/gomega to v1.22.0 (#1909)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-12 06:37:40 +09:00
renovate[bot]
0382f3bbd5 chore(deps): update quay.io/brancz/kube-rbac-proxy docker tag to v0.13.1 (#1899)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-09 18:09:24 +09:00
renovate[bot]
b6640a033c fix(deps): update module github.com/onsi/gomega to v1.21.1 (#1901)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-09 18:09:12 +09:00
renovate[bot]
998c028d90 fix(deps): update golang.org/x/oauth2 digest to b44042a (#1900)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-09 16:53:46 +09:00
Yusuke Kuoka
f8dffab19d Add workflow for validating runner scripts with shellcheck (#1853)
* Add workflow for validating runner scripts with shellcheck

I am about to revisit #1517, #1454, #1561, and #1560 as part of our ongoing effort toward a major enhancement to the runner entrypoints being made in #1759.

This change is the counterpart of #1852: #1852 lets you easily run shellcheck locally, while this one runs shellcheck automatically on every pull request.

Currently, a shellcheck error does not fail the workflow job. Once we've addressed all the shellcheck findings, we can flip the fail_on_error option to true and let jobs start failing on pull requests that introduce invalid or suspicious bash code.
2022-10-09 16:53:22 +09:00
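For context on the workflow described above, here is a minimal sketch of what such a setup can look like, assuming reviewdog's action-shellcheck (the action the reporter-related commits in this list refer to) and a runner/ script directory; the actual workflow file in the repository may differ.

```yaml
# Sketch only: assumed paths and inputs, not the repository's exact workflow file.
name: Shellcheck
on:
  pull_request:
    paths:
      - "runner/**"                   # assumed location of the runner entrypoint scripts
jobs:
  shellcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: shellcheck via reviewdog
        uses: reviewdog/action-shellcheck@v1
        with:
          path: runner
          pattern: "*.sh"
          reporter: github-pr-check    # reporter choice discussed in the ci commits above
          fail_on_error: false         # flip to true once all findings are addressed
```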
Yusuke Kuoka
7ff5b7da8c Handle missing runner ID more gracefully (#1855)
so that ARC respects the registration timeout, terminationGracePeriodSeconds, and RUNNER_GRACEFUL_STOP_TIMEOUT (#1759) when the runner pod is terminated externally too early after its creation

While I was running E2E tests for #1759, I discovered a potential issue that ARC can terminate runner pods without waiting for the registration timeout of 10 minutes.

You won't be affected by this in normal circumstances, as this failure scenario can be triggered only when you or another K8s controller like cluster-autoscaler deletes the runner or the runner pod immediately after it has been created. But it's probably worth fixing anyway, since it's not impossible to trigger.
2022-10-09 16:52:51 +09:00
renovate[bot]
6aaff4ecee chore(deps): update golang docker tag to v1.19.2 (#1894)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-05 08:30:09 +09:00
renovate[bot]
437d0173b0 chore(deps): update dependency actions/runner to v2.298.2 (#1891)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-05 08:16:38 +09:00
Nicholas Farley
a389292478 Allow RunnerDeployments to configure dnsPolicy for runners (#1892)
* Add DnsPolicy field to RunnerPodSpec struct

* Ensure the runnerSpec's DNSPolicy is mirrored to the pod.Spec

* Run `make manifests`
2022-10-05 08:16:11 +09:00
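As a rough illustration of the new field, a RunnerDeployment could set the DNS policy like this (a minimal sketch assuming the usual actions.summerwind.dev/v1alpha1 API group; the names are placeholders):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy          # placeholder name
spec:
  replicas: 2
  template:
    spec:
      repository: example/repo        # placeholder repository
      dnsPolicy: Default              # the field exposed by this change, mirrored to the runner pod spec
```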
Vijay-train
6eadb03669 Update links to QuickStartGuide.md (#1890)
* Update detailed-docs.md

Quick start instructions are now inline in README.md. Updating the link to point to the `Getting Started` section of README.md

* Update Actions-Runner-Controller-Overview.md

Quick start instructions are now inline in README.md. Updating the link to point to the `Getting Started` section of README.md

* Delete QuickStartGuide.md

This guide is no longer needed as the details of it are merged with README.md.
2022-10-04 20:32:16 +09:00
Yusuke Kuoka
aa60021ab0 e2e: Make docker build timeout longer (#1862)
Depending on the host machine spec and load, it can take more than 5 minutes. This change increases the timeout by 1.5x as a stopgap to address that.
2022-10-04 20:31:10 +09:00
Yusuke Kuoka
50e26bd2f6 makefile: Add shellcheck installation and run targets (#1852)
I am about to revisit #1517, #1454, #1561, and #1560 as part of our ongoing effort toward a major enhancement to the runner entrypoints being made in #1759.
This commit adds the makefile targets to install and run shellcheck locally, so that any contributor can use it before submitting a pull request.
2022-10-04 20:30:43 +09:00
Yusuke Kuoka
2dd13b4a19 runner: Address all shellcheck findings (#1854)
I am about to revisit #1517, #1454, #1561, and #1560 as part of our ongoing effort toward a major enhancement to the runner entrypoints being made in #1759.

This change updates and reintroduces #1517 contributed by @CASABECI in a way it becomes applicable to today's code-base.
2022-10-04 20:30:27 +09:00
Yusuke Kuoka
35af24cf03 Enhance log-related fields in the bug report form (#1838)
Both fields can end up useless when the reporter thinks only one or two lines of the respective logs are relevant, and it later turns out we need to see other lines.
To avoid that, I'd like to change the field labels to include `Whole`, so it reads e.g. `Whole Runner Pod Logs`, and to ask in the field description not to omit any logs.
2022-10-04 20:29:51 +09:00
Yusuke Kuoka
ca96b66fbe Update bug_report.yml 2022-10-04 20:28:34 +09:00
Yusuke Kuoka
4db5fbc7a1 Update bug_report.yml 2022-10-04 20:27:00 +09:00
Yusuke Kuoka
add83bc7bc Update bug_report.yml (#1837)
Honestly, I'm a bit tired of seeing issues filed with the default title "Bug" that are seemingly not related to real bugs!
I'm emptying it so that the reporter is more encouraged to write a sentence describing the problem, and hopefully they notice along the way that there are places to check before considering it a bug.
2022-10-04 20:25:30 +09:00
Yusuke Kuoka
666bba784c e2e: Bump runner version to 2.297.0 (#1850)
* e2e: Bump runner version to 2.296.2

* Update e2e_test.go
2022-10-04 20:25:13 +09:00
Karthik KN
0672ff0ff9 Fix broken links in documentation (#1882)
* Fix links on Actions-Runner-Controller-Overview.md

* Fix link on QuickStartGuide.md

* Fix link on Contributing.md

* Fix links on detailed-docs.md

* Update CONTRIBUTING.md

* Update docs/Actions-Runner-Controller-Overview.md

* Update docs/Actions-Runner-Controller-Overview.md

* Update docs/QuickStartGuide.md

* Update docs/Actions-Runner-Controller-Overview.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-30 09:53:32 +09:00
pratikbin
f2f22827a6 Use -trimpath and ldflags -s -w build flags (#1880) 2022-09-30 09:19:52 +09:00
Vijay-train
a3a801757d Fix link to 'detailed docs' (#1879) 2022-09-30 08:42:59 +09:00
Vijay-train
0c003f20d4 Create Simplified README.md with only 'Getting Started' steps and with links to additional detailed documentation. (#1864)
* Update with recent changes from README.md

* Update README.md

This is the final phase of simplifying README.md:
- include only the `Getting Started` steps (these steps were earlier reviewed as part of /docs/QuickStartGuide.md)
- link to the detailed documentation (the detailed documentation is a copy of the current README.md)
Once this is merged, any new detailed docs should be captured in /docs/detailed-docs.md

* Update detailed-docs.md

Redo the change made in #1873

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-29 21:13:29 +09:00
renovate[bot]
863760828a chore(deps): update helm/chart-releaser-action action to v1.4.1 (#1870)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-29 21:13:00 +09:00
renovate[bot]
517fae4119 chore(deps): update helm/chart-testing-action action to v2.3.1 (#1871)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-29 20:45:02 +09:00
gitulisca
d9e1e64dc6 Fix repositoryNames in workflow runs queued based HRA example (#1873) 2022-09-29 20:32:38 +09:00
Mike
3c4ab2d479 Add terraform deployment method to contrib/examples (#1559)
Co-authored-by: Mike Joseph <mike@Mikes-MacBook-Pro-5618.local>
2022-09-28 13:31:52 +09:00
Saravanan Palanisamy
3ca96557a6 helm charts for actions runner (#1375)
Fixes: #942 

Helm charts for the actions runner; currently it has only the RunnerDeployment and Autoscaler resources.

Deployment order seems to matter here: we hit the issue below if the Autoscaler is deployed first, and then autoscaling does not work as expected.
```
2022-04-21T12:13:08Z    DEBUG    controllers.webhookbasedautoscaler    RunnerDeployment not found with scale target ref name test-actions-runner for hra test-actions-runner-autoscaler
```
Helm doesn't support [ordering](https://github.com/helm/helm/issues/8439) for custom resources, so we use a List to overcome this issue. We didn't use Helm chart hooks for ordering since hook resources aren't [tracked](https://helm.sh/docs/topics/charts_hooks/#hook-resources-are-not-managed-with-corresponding-releases) after creation.

Co-authored-by: Josh Feierman <joshua.feierman@warnermedia.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-28 11:07:56 +09:00
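The List-based workaround described above boils down to rendering both resources in a single manifest so they are applied together, roughly like this sketch (names are taken from the quoted log line, the repository is a placeholder, and the real chart templates may differ):

```yaml
apiVersion: v1
kind: List
items:
  - apiVersion: actions.summerwind.dev/v1alpha1
    kind: RunnerDeployment
    metadata:
      name: test-actions-runner                  # name referenced in the log line above
    spec:
      template:
        spec:
          repository: example/repo               # placeholder repository
  - apiVersion: actions.summerwind.dev/v1alpha1
    kind: HorizontalRunnerAutoscaler
    metadata:
      name: test-actions-runner-autoscaler
    spec:
      scaleTargetRef:
        kind: RunnerDeployment
        name: test-actions-runner                # must match the RunnerDeployment above
      minReplicas: 1
      maxReplicas: 5
```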
renovate[bot]
5fd6ec4bc8 chore(deps): update dependency actions/runner to v2.297.0 (#1860)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-27 09:11:53 +09:00
renovate[bot]
6863bdb208 chore(deps): update helm/kind-action action to v1.4.0 2022-09-25 10:35:05 +09:00
Yusuke Kuoka
f3fcb428ae rootless-dind-dockerfile: Add comment about installation path 2022-09-25 07:50:12 +09:00
Yusuke Kuoka
41bae32a9f runner: Dump supervisor log on dockerd timeout 2022-09-25 07:50:12 +09:00
Yusuke Kuoka
e4879e7ae4 Tweak E2E and documentation about MTU configuration 2022-09-25 07:50:12 +09:00
Yusuke Kuoka
e5bb130fda Add MTU propagation docker-shim also to rootless dind runner images
Related to #1201
2022-09-25 07:50:12 +09:00
Tiago Melo
e7a21cfc53 feat: Add container to propagate host network MTU (#1201)
* feat: Add container to propagate host network MTU

Some network environments use non-standard MTU values. In these
situations, the `DockerMTU` setting might be used to specify the MTU
setting for the `bridge` network created by Docker. However, when the
Github Actions workflow creates networks, it doesn't propagate the
`bridge` network MTU which can lead to `connection reset by peer`
messages.

To overcome this, I've created a new docker image called
`summerwind/actions-runner-mtu` that shims the docker binary in order to
propagate the MTU setting to networks created by Github workflows.

This is a follow-up on the discussion in
[#1046](https://github.com/actions-runner-controller/actions-runner-controller/issues/1046)
and uses a separate image since there might be some unintended
side-effects with this approach.

* fixup! feat: Add container to propagate host network MTU

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-23 17:08:28 +09:00
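To make the MTU handling above concrete, a RunnerDeployment that opts into the MTU-propagating image could look like the following sketch (the 1400 value, image tag, and repository are placeholders; `dockerMTU` is the existing RunnerSpec field mentioned in the commit):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-mtu-runnerdeploy
spec:
  template:
    spec:
      repository: example/repo                       # placeholder repository
      image: summerwind/actions-runner-mtu:latest    # image introduced by this change (placeholder tag)
      dockerMTU: 1400                                # MTU propagated to Docker-created networks
```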
Sebastian N
8f54644b08 Create app-version-mapping.md (#1820) 2022-09-23 10:36:28 +09:00
renovate[bot]
c56d6a6c85 chore(deps): update actions/stale action to v6 (#1827)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-23 10:25:36 +09:00
renovate[bot]
a96c3e1102 fix(deps): update kubernetes packages to v0.25.2 (#1829)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-23 10:25:24 +09:00
Cristian Calin
d29de8d454 feat: use helm genCA to generate a certificate for the mutating web hook if no cert-manager is available (#1780) 2022-09-23 10:21:00 +09:00
renovate[bot]
12c4d96250 fix(deps): update kubernetes packages to v0.25.1 (#1729)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-21 11:50:52 +09:00
renovate[bot]
46a13c0626 fix(deps): update module github.com/onsi/gomega to v1.20.2 (#1757)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-21 11:22:27 +09:00
Frederic MARTIN
e32a8054d0 🍱 add git-lfs package as standard tool (#1821) 2022-09-21 11:04:43 +09:00
renovate[bot]
0deb6809b9 fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0 (#1775)
* fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

* fixup! fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

* fixup! fixup! fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

* fixup! fixup! fixup! fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-21 11:04:07 +09:00
David Young
21af1ec19d Use numeric USER for nonroot:nonroot in Dockerfile (#1765)
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2022-09-21 10:50:27 +09:00
renovate[bot]
dfadb86d66 fix(deps): update module github.com/google/go-github/v47 to v47.1.0 (#1813)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-21 10:23:37 +09:00
Cory Miller
c91e76f169 Add golangci-lint to CI (#1794)
This introduces a linter to PRs to help with code reviews and code hygiene. I've also gone ahead and fixed (or ignored) the existing lints.

I've only set up the default linters right now. There are many more options that are documented at https://golangci-lint.run/.

The GitHub Action should add appropriate annotations to the lint job for the PR. Contributors can also lint locally using `make lint`.
2022-09-21 09:08:22 +09:00
Yusuke Kuoka
718232f8f4 Add ArtifactHub badge (#1816)
Ref #1502
2022-09-20 18:49:50 +09:00
renovate[bot]
c7f5f7d161 fix(deps): update golang.org/x/oauth2 digest to f213421 (#1792)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-20 09:32:56 +09:00
renovate[bot]
33bb6902bc fix(deps): update module github.com/google/go-cmp to v0.5.9 (#1787)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-20 09:32:07 +09:00
Yusuke Kuoka
aeb0601147 Update bug_report.yml (#1812)
I think we should at least recommend the reporter read the troubleshooting guide before submitting a bug report. It would give them more chances to realize it isn't a bug in some cases.
2022-09-20 09:03:36 +09:00
Yusuke Kuoka
991c0b3211 chore: disable blank issues (#1809)
Even though we have a bug report form, we've seen many issues written freestyle, and every one of them required multiple back-and-forth exchanges to gather the information necessary for diagnosing the issue. I hope every bug report becomes more actionable. We now disable blank issues so that the reporter is likely to choose the bug report form or Discussions, depending on their situation.
2022-09-16 16:17:49 +01:00
Vijay-train
71da6d5271 [Docs] Move into [docs] folder and other minor fixes (#1769)
* [Docs] Move into docs folder and other minor fixes

* Move into [docs] folder

* Fix minor formatting

* Create detailed-docs.md

Moving the current Readme contents into a separate detailed-docs.md file. Added one additional section, "Getting Started", to hold the getting started link. The rest of the file is as-is from the current Readme.md

This file would be linked from a new simplified Readme (to be raised as a separate PR)
2022-09-16 10:37:22 +09:00
David Girón
e4fd4bc99c Update dependency docker/cli to v20.10.18 (#1803) 2022-09-16 10:25:12 +09:00
Yusuke Kuoka
d9a8dc7e84 chart: Bump chart and app versions for ARC 0.26.0 (#1799) 2022-09-13 09:09:29 +09:00
Yusuke Kuoka
795cf8b1de Add releasenote for 0.26.0 (#1796) 2022-09-13 08:43:28 +09:00
renovate[bot]
0615c2adb1 chore(deps): update dependency actions/runner to v2.296.2 (#1791)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-09 18:43:00 +09:00
Yusuke Kuoka
a918e56ece Merge pull request #1784 from actions-runner-controller/renovate/golang-1.x
chore(deps): update golang docker tag to v1.19.1
2022-09-09 18:42:44 +09:00
Yusuke Kuoka
546b5251ed Merge pull request #1781 from THG-Site-Reliability-Engineering/master
Fix bug with enterpriseURL for multi-tenancy
2022-09-09 18:40:46 +09:00
renovate[bot]
74dda4ea1b chore(deps): update golang docker tag to v1.19.1 2022-09-06 22:14:03 +00:00
Barun Mishra
b29816290a Merge branch 'actions-runner-controller:master' into master 2022-09-05 13:58:49 +01:00
Barun Mishra
921daff61b Add cmd line arg for enterprise url. Fix enterprise bug. (#1)
* Add cmd line arg for enterprise url. Fix enterprise bug.

* Fix package import order

* Fix comment
2022-09-05 13:50:17 +01:00
renovate[bot]
e233f7ad6a chore(deps): update dependency actions/runner to v2.296.1 2022-09-01 12:31:39 +00:00
Yusuke Kuoka
623c84fa52 Merge pull request #1758 from actions-runner-controller/fix-e2e
e2e: A bunch of fixes
2022-08-27 16:29:56 +09:00
Yusuke Kuoka
d4fb6204cb Add TODO comment to the PVC reconciler 2022-08-27 07:14:16 +00:00
Yusuke Kuoka
f8e07c7fe4 e2e: Update RunnerSet template for rootless-dind test 2022-08-27 07:12:55 +00:00
Yusuke Kuoka
f73713859c e2e: Fix workflow for rootless-dind test to actually pass 2022-08-27 07:12:06 +00:00
Yusuke Kuoka
e0a7be253e e2e: Change the default runner rolling-update interval from 10s to 60s to let the runners actually get jobs assigned by GitHub Actions 2022-08-27 07:11:17 +00:00
Yusuke Kuoka
915739b972 e2e: Fix broken token expiration checks 2022-08-27 07:10:10 +00:00
Yusuke Kuoka
4925880e5e e2e: Install workflow before starting continuous rolling-updates of runners 2022-08-27 07:08:56 +00:00
Yusuke Kuoka
c143fd50b5 e2e: Use newer version of actions/runner(0.296.0) 2022-08-27 07:07:56 +00:00
Yusuke Kuoka
dbd668ae2d e2e: Set ARC_E2E_SKIP_RUNNERDEPLOYMENT to skip RunnerDeployment test 2022-08-26 01:48:54 +00:00
Yusuke Kuoka
5c1be3265b e2e: Fix the token check to actually fail on expiration 2022-08-26 01:48:36 +00:00
Yusuke Kuoka
ebcd838501 e2e: Continuous rolling-update of runners while workflow jobs are running
This should help reveal issues like https://github.com/actions-runner-controller/actions-runner-controller/issues/1535, if any.
2022-08-26 01:28:08 +00:00
Yusuke Kuoka
6ef276b239 e2e: Custom RBAC resources for make test success reporting work when k8s container mode or runner update hook is enabled 2022-08-26 01:28:08 +00:00
Yusuke Kuoka
f70f325f48 e2e: Set ARC_E2E_DO_DOCKER_BUILD to verify docker-build 2022-08-26 01:28:08 +00:00
Yusuke Kuoka
f7c336f9dd e2e: Mention maintained versions of cert-manager for reference 2022-08-26 01:28:08 +00:00
renovate[bot]
ae380f5987 fix(deps): update module go.uber.org/zap to v1.23.0 (#1752)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-25 10:25:52 +09:00
Yusuke Kuoka
4bf1c12a98 e2e: Fix inability to install the stable version of ARC before the edge / Validate GH token on start (#1748)
Let me improve two things I had found while I was E2E-testing ARC for the upcoming 0.26.0 release.

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-25 10:25:06 +09:00
Callum Tait
cb561d8db4 docs: webhook scaling (#1709)
* docs: remove legacy webhook scaling triggers

* docs: remove runnerset limitations

* docs: noddy whitespace

* docs: more technically correct wording

* docs: wording

* docs: correct EffectiveTime logic

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* docs: remove non workflow_job events

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* docs: stuff

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-24 22:23:44 +09:00
Callum Tait
eaf6d2f2e2 docs: bump the required min GHES version (#1749) 2022-08-24 21:38:30 +09:00
Vijay-train
5ae7ce16e0 Fixing typo to render link properly (#1750) 2022-08-24 21:38:14 +09:00
Yusuke Kuoka
bdcde44642 chore: Bump go-github and minimum GHES version to 3.6 (#1747)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1574
2022-08-24 13:08:40 +09:00
renovate[bot]
5116e3800e fix(deps): update module go.uber.org/zap to v1.22.0 (#1704)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-24 11:23:34 +09:00
renovate[bot]
4e107a4e50 fix(deps): update module github.com/prometheus/client_golang to v1.13.0 (#1699)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-24 11:22:33 +09:00
renovate[bot]
93238697d9 fix(deps): update module github.com/onsi/gomega to v1.20.0 (#1661)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-24 10:56:23 +09:00
Evan Hines
48f62b4c89 Allow customization of ServiceMonitor namespace for helm-template (#1491)
* Allow users to customize which namespace they deploy their service monitors into

* Add missing metrics object reference

* Update charts/actions-runner-controller/templates/githubwebhook.serviceMonitor.yaml

* Update charts/actions-runner-controller/templates/controller.metrics.serviceMonitor.yaml

* Update charts/actions-runner-controller/values.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-24 10:55:44 +09:00
Yusuke Kuoka
ea94b3cc5b e2e: Add new option to test rootless docker (#1742)
Related to #1644

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-24 10:42:45 +09:00
Callum Tait
0cac005ab2 ci: include sha in canary version (#1744) 2022-08-24 10:21:46 +09:00
renovate[bot]
55ca7bfdf5 chore(deps): update dependency actions/runner to v2.296.0 2022-08-23 19:47:18 +00:00
Viktor Lindgren
ca97f39fcb Print Version Number on startup (#1659)
* Changed Dockerfile to get the environment variable from the GitHub Actions workflow and pass it to the main.go file

Added a function in main.go to fetch the environment variable and to have a fallback if the env variable isn't there

Added a test for the version to use for this branch only

* Update test-version.yaml

* Update test-version.yaml

* Removed the test because it's not needed when we push upstream

* Moved the version print in main.go to the Log codeblock as requested by toast-gear

Added version as issue #1161 requests.

Decided to use a docker tag structure for the userAgent string, with : being the separator between the name and version

* Used ldflags instead like mumoshu recommended

Changed Dockerfile to use $VERSION from the workflow

Added version.go and the build package
Removed the getVersion function as we can just get the value directly

* Used ldflags instead like mumoshu recommended

Changed Dockerfile to use $VERSION from the workflow

Added version.go and the build package
Removed the getVersion function as we can just get the value directly

* * Removed the default from the go code (set it as N/A)
* Changed version from latest to dev inside makefile
* Added build arg for version to the dockerfile in the makefile
* Added VERSION with default dev value as arg inside dockerfile
* Cleaned up inside dockerfile

* Fix failing test

* Fix possible missing VERSION in the ARC UA suffix due to missing build arg in docker-build-push step

Co-authored-by: S8338C <viktor.lindgren@seb.se>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-23 13:40:16 +09:00
renovate[bot]
f0c8c07428 fix(deps): update golang.org/x/oauth2 digest to 0ebed06 (#1678)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-23 13:33:49 +09:00
renovate[bot]
e54edea918 chore(deps): update golang docker tag to v1.19.0 (#1682)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-23 13:33:21 +09:00
Ian Flores Siaca
e58f82bfce Document how to add Windows self-hosted runners (#1608)
* adding windows docs

* adding windows docs

* Editing the explanations

* Update README.md

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-23 13:32:37 +09:00
Alex Dubov
244e0dd987 Fix Typos in Readme (#1741) 2022-08-23 13:04:16 +09:00
renovate[bot]
02009cef17 chore(deps): update module go to 1.19 (#1664)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-23 13:00:17 +09:00
Vijay-train
2b5af62184 [Doc] Create ARC Overview doc (#1707)
* [Doc] Create ARC overview doc

The purpose of this doc is to provide a starting-point overview of ARC, with links to the Quick start guide within.

* Update Actions-Runner-Controller-Overview.md

Fixed some formatting

* Update for minor formatting

Fixed links to include quotes, where missing. Added spaces after periods, where missing.

* Updated links to the QuickStart guide

* Updated Images and scaling sections

Updated the following based on PR feedback
- `The Runner container image` now calls out more explicitly the recommended way to install additional software
- `Scaling runners - dynamically with Pull Driven Scaling` - Removed mentions of `TotalNumberOfQueuedAndInProgressWorkflowRuns` as it's not fully implemented

* Apply suggestions from code review

Incorporated review feedback from  @andyfeller, @sethrylan, @debuger24 and @mumoshu. Thank you all.

Co-authored-by: Andy Feller <andyfeller@github.com>
Co-authored-by: Rahul Kumar <rahulcomp24@gmail.com>
Co-authored-by: Seth Rylan Gainey <sethrylan@github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Apply suggestions from code review

Add more detailed config for PercentageRunnersBusy metric

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Updated link text to "Pull Driven Scaling"

Co-authored-by: Andy Feller <andyfeller@github.com>
Co-authored-by: Rahul Kumar <rahulcomp24@gmail.com>
Co-authored-by: Seth Rylan Gainey <sethrylan@github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-23 12:53:22 +09:00
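The "more detailed config for PercentageRunnersBusy" referenced in that review feedback usually takes a shape like the sketch below; the threshold and factor values are illustrative only, not recommendations:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-autoscaler
spec:
  scaleTargetRef:
    kind: RunnerDeployment
    name: example-runnerdeploy      # placeholder target
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: PercentageRunnersBusy
      scaleUpThreshold: "0.75"      # scale up when more than 75% of runners are busy
      scaleDownThreshold: "0.25"    # scale down when fewer than 25% are busy
      scaleUpFactor: "2"            # multiply desired replicas by 2 on scale up
      scaleDownFactor: "0.5"        # halve them on scale down
```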
Sajad Orouji
ec58ad19e0 feat: add queue size limit to github webhook server helm template (#1712)
* Update githubwebhook.deployment.yaml

* Update values.yaml

* Update README.md

* Update charts/actions-runner-controller/values.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update values.yaml

* chore: comment out queuelimit setting

* docs: format cleanup

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-08-23 09:40:50 +09:00
Adam Panzer
cc9fe33ef5 Change type: to kind: (#1740)
Per the CRD spec here https://github.com/actions-runner-controller/actions-runner-controller/blob/master/charts/actions-runner-controller/crds/actions.summerwind.dev_horizontalrunnerautoscalers.yaml#L115-L127

It's `kind:` not `type:`
2022-08-23 09:35:14 +09:00
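In other words, the corrected documentation snippet should use `kind:` under scaleTargetRef, roughly as follows (the target name is a placeholder):

```yaml
spec:
  scaleTargetRef:
    kind: RunnerDeployment        # `kind:`, not `type:`, per the CRD lines linked above
    name: example-runnerdeploy    # placeholder name
```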
Matt Domko
4a5a85fd61 Replaced 'kubectl apply' with 'kubectl create' in README (#1728)
- Updated as per issue 1317
- Version bump so that folks copy/pasting get the latest version

https://github.com/actions-runner-controller/actions-runner-controller/issues/1317
2022-08-21 22:54:31 +09:00
David Young
56b26fd751 Fix minor spelling error (#1727)
Just a typo fix :)
2022-08-21 22:00:52 +09:00
João Carlos Ferra de Almeida
36e95dad47 Fix/multitenancy enterprise url (#1725)
* Fix #1714

* Add Comment
2022-08-16 20:20:06 +09:00
Callum Tait
3724b46033 chore(deps): update dependency actions/runner to v2.295.0 (#1723) 2022-08-16 20:11:46 +09:00
Rahul Kumar
538e2783d7 Update Metric Types and typos (#1719)
* Update valid options in metrics types

* FIX: Typos

* FIX: Update metric types in helm chart
2022-08-15 23:12:22 +09:00
Rahul Kumar
72ca998266 Add Additional Autoscaling Metrics to Prometheus (#1720)
* Add prometheus metrics for autoscaling

* Add desc for prometheus-metrics

* FIX: Typo

* Remove replicas_desired_before in metrics

* Remove Num prefix in metrics
2022-08-15 23:12:00 +09:00
Rahul Kumar
d439ed5c81 Update GHCR name to repo name in publish wf (#1721) 2022-08-15 09:46:50 +09:00
Vijay-train
58c2bdf2bb Create QuickStartGuide.md (#1691)
* Create QuickStartGuide.md

Creating a new Quickstart guide that captures simple onboarding instructions. The intent is for first-time users to be able to follow this guide, get their environment running, and try out ARC. A link to this guide will be added to the repo readme once this PR gets merged.

* Update QuickStartGuide.md

Fixed a typo - removed "$" from codeblock "$ kubectl apply -f runnerdeployment.yaml"

* Update QuickStartGuide.md

Eliminated the need to specify the PAT in Custom_values.yaml; instead it is passed as a parameter while installing the Helm chart. This removes the need to store the PAT in a file and also eliminates a setup step.

* Fixed minor typos

Fixed typos identified by @nebuk89

* Minor formatting in links and periods.

Fixed formatting to include space after period and commas. Fixed formatting on some links to include quotes
2022-08-14 13:19:41 +09:00
Yusuke Kuoka
fe9164b025 doc: Encourage everyone to explicitly set HRA scaleTargetRef kind (#1633)
Ref #1343
2022-08-14 13:04:03 +09:00
renovate[bot]
06141b39b4 chore(deps): update helm/chart-testing-action action to v2.3.0 (#1710)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-13 14:30:59 +01:00
renovate[bot]
ac4c3fd365 chore(deps): update azure/setup-helm action to v3.3 (#1667)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-13 12:51:30 +01:00
Callum Tait
dc29e31bcc ci: add rootless dnd to renovate (#1711) 2022-08-12 10:41:30 +09:00
renovate[bot]
784019f3d7 chore(deps): update dependency actions/runner to v2.295.0 2022-08-11 11:36:27 +00:00
Natalie Somersall
fc55477c1c remove fuse-overlayfs (#1690) 2022-08-04 13:25:55 +09:00
Yusuke Kuoka
3f78f71137 Start publishing runner-dind-rootless image (#1689)
Follow-up for #1644
2022-08-04 10:37:12 +09:00
oreonl
e511401e51 fix: don't base64 decode secret strings (#1683) 2022-08-03 11:47:07 +09:00
Natalie Somersall
37aa1a0b8c Add rootless DinD runner (#1644)
* add rootless dind images

* add small blurb on rootless dind

* Add ToC entry for README section
2022-08-03 11:45:02 +09:00
João Carlos Ferra de Almeida
bea0775bec Fix small typo in README (#1687)
Changed from `Kuernetes` to `Kubernetes` in the **Multitenancy** chapter.

By the way why not use [the vale-action](https://github.com/errata-ai/vale-action) to automate linting in the markdown files? If you'd like I can probably find some time to do it. Just a small token of appreciation for an awesome project!
2022-08-03 11:28:40 +09:00
Yusuke Kuoka
79a494b2aa doc: Note to fully populate the pool of PVs before checking if the cache is effective (#1655) 2022-07-17 19:44:07 +09:00
Yusuke Kuoka
97404144eb Fix excessive runnerreplicaset update issue since 0.25.0 (#1650)
Fixes #1643
2022-07-17 19:43:24 +09:00
Yusuke Kuoka
b77489d098 Fix E2E to not fail due to missing storageclass for RunnerDeployment w/ kubernetes container mode (#1649) 2022-07-17 19:43:13 +09:00
Yusuke Kuoka
4152afbd30 Fix E2E against local cluster to not fail on helm-upgrade (#1648) 2022-07-17 19:43:01 +09:00
Yusuke Kuoka
29f621e1c8 chart: Remove support for extensions/v1beta1 and networking.k8s.io/v1beta1 (#1632)
* chart: Remove support for extensions/v1beta1 and networking.k8s.io/v1beta1

`networking.k8s.io/v1` has been available since v1.19.
As of today, AWS EKS supports v1.19+ and Oracle Cloud supports v1.20+. GKE and AKS supports v1.21+. The upstream Kubernetes project maintains v1.22+.
So it should be safe to remove it now.

* fixup! chart: Remove support for extensions/v1beta1 and networking.k8s.io/v1beta1
2022-07-17 19:42:35 +09:00
renovate[bot]
5651ba6ead fix(deps): update kubernetes packages to v0.24.3 (#1647)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:59:44 +09:00
Cory Miller
759cc4b47f Update version of YQ in Makefile (#1634) 2022-07-15 10:59:13 +09:00
Yusuke Kuoka
4ede0c18d0 Fix the new ct chart lint error 2022-07-15 10:23:33 +09:00
Yusuke Kuoka
9091d9b756 chart: Bump version/appVersion to 0.20.2/0.25.2 2022-07-15 10:23:33 +09:00
renovate[bot]
a09c2564d9 fix(deps): update module github.com/bradleyfalzon/ghinstallation/v2 to v2.1.0 (#1637)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:20:42 +09:00
renovate[bot]
a555c90fd5 chore(deps): update dependency golang to v1.18.4 (#1639)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:20:29 +09:00
Yusuke Kuoka
38644cf4e8 Remove redundant flags from webhook-based autoscaler (#1630)
* Remove redundant flags from webhook-based autoscaler

Ref #623

* fixup! Remove redundant flags from webhook-based autoscaler
2022-07-15 09:58:30 +09:00
Jonathan Wiemers
23f357db10 Adds way to allow additional environment variables from secretKeyRef (#1565)
* adds additionalFullEnv to allow additional secret refs

* Update charts/actions-runner-controller/templates/deployment.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* adds examples into values.yaml

* fix

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-15 09:57:30 +09:00
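A rough sketch of what that chart value enables, assuming the key is named `additionalFullEnv` as in the commit message (verify the exact key against the released chart's values.yaml); the variable and Secret names are hypothetical:

```yaml
# values.yaml fragment (sketch only; key name taken from the commit message above)
additionalFullEnv:
  - name: SOME_CONTROLLER_SETTING      # hypothetical environment variable
    valueFrom:
      secretKeyRef:
        name: my-existing-secret       # placeholder Secret
        key: some-key
```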
Felipe Galindo Sanchez
584745b67d Minor improvements for runner groups
- Add group in runners columns
- Add constant for runner group and labels
2022-07-15 09:47:25 +09:00
AJ Schmidt
df9592dc99 docs: Update README.md (#1645) 2022-07-13 18:13:11 +01:00
Yusuke Kuoka
8071ac7066 Remove github-api-cache-duration flag and code (#1631)
This removes the flag and code for the legacy GitHub API cache. We already migrated to fully use the new HTTP-cache-based API cache functionality, which was added via #1127 and has been available since ARC 0.22.0. Since then, the legacy one has been a no-op, so removing it is safe.

Ref #1412
2022-07-12 20:37:24 +09:00
toast-gear
3c33eca501 docs: remove superfluous file names 2022-07-12 09:45:51 +09:00
toast-gear
aa827474b2 docs: clearer wording 2022-07-12 09:45:51 +09:00
toast-gear
c75c9f9226 docs: use consistent wording 2022-07-12 09:45:51 +09:00
toast-gear
c09a04ec01 docs: add default label considerations 2022-07-12 09:45:51 +09:00
Yusuke Kuoka
618276e3d3 Enhance support for multi-tenancy (#1371)
This enhances every ARC controller and the various K8s custom resources so that the user can now configure custom GitHub API credentials (different from the default ones configured for the ARC instance).

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1067#issuecomment-1043716646
2022-07-12 09:45:00 +09:00
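The per-resource credentials described above are typically wired up along these lines, assuming the `githubAPICredentialsFrom` field documented for ARC's multi-tenancy support (the Secret name, key values, and organization are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: org-b-github-app               # placeholder Secret holding the alternate credentials
stringData:
  github_app_id: "123456"
  github_app_installation_id: "7890123"
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: org-b-runnerdeploy
spec:
  template:
    spec:
      organization: org-b              # placeholder organization
      githubAPICredentialsFrom:
        secretRef:
          name: org-b-github-app       # overrides the ARC-instance-wide credentials for this resource
```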
renovate[bot]
18dd89c884 chore(deps): update azure/setup-helm action to v3.1 (#1628)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-12 09:19:02 +09:00
k.bigwheel (kazufumi nishida)
98b17dc0a5 Fix the dind image to work with the latest entrypoint.sh (#1624)
Fixes #1621
2022-07-12 09:11:04 +09:00
Giovanni Barillari
c658dcfa6d fix #1621: add missing COPY statements to dind docker image 2022-07-11 20:44:35 +09:00
renovate[bot]
c4996d4bbd fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.3 2022-07-11 10:52:14 +09:00
Callum Tait
7a3fa4f362 docs: correct the comparison
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-11 10:43:09 +09:00
toast-gear
1bfd743e69 docs: add pod example too 2022-07-11 10:43:09 +09:00
toast-gear
734f3bd63a docs: put shell k8s commands back 2022-07-11 10:43:09 +09:00
toast-gear
409dc4c114 docs: remove ephemeral and simplify 2022-07-11 10:43:09 +09:00
toast-gear
4b9a6c6700 docs: remove runner kind 2022-07-11 10:43:09 +09:00
Yusuke Kuoka
86e1a4a8f3 Fix helm lint error and the inability to install the chart with the default values 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
544d620bc3 e2e: Ensure ARC is roll-updated on deployment even if the container image tag name does not change 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
1cfe1974c4 Add missing job-related permissions to runner pods with k8s container mode 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
7e4b6ebd6d chart: Add rbac.allowGrantingKubernetesContainerModePermissions 2022-07-10 16:16:32 +09:00
Felipe Galindo Sanchez
11cb9b7882 feat: allow to discover runner statuses (#1268)
* feat: allow to discover runner statuses

* fix manifests

* Bump runner version to 2.289.1 which includes the hooks support

* Add feedback from review

* Update reference to newRunnerPod

* Fix TestNewRunnerPodFromRunnerController and make hooks file names job specific

* Fix additional TestNewRunnerPod test

* Cover additional feedback from review

* fix rbac manager role

* Add permissions to service account for container mode if not provided

* Rename flag to runner.statusUpdateHook.enabled and fix needsServiceAccount

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-10 15:11:29 +09:00
Tamás Kádár
10b88bf070 Fix typos in README (#1613) 2022-07-10 08:49:35 +09:00
Callum Tait
8b619e7c6f chore: bump helm chart (#1619) 2022-07-10 08:25:55 +09:00
everpcpc
fea1457f12 fix: annotate pod instead of label on UnregistrationFailureMessage (#1615)
The error message should go into an annotation instead of a pod label, since we only check annotations on autoscale:

https://github.com/actions-runner-controller/actions-runner-controller/blob/master/controllers/autoscaling.go#L338-L342

Then there is no need to truncate or normalize the error message.
2022-07-09 11:45:05 +09:00
Yusuke Kuoka
473295e3fc Enhance the E2E test to be runnable against remote clusters on e.g. AWS EKS (#1610)
This contains apparently enough changes to the current E2E test code to make it runnable against remote Kubernetes clusters. I was actually able to make the test passing against my AWS EKS based test clusters with these changes. You still need to trigger it manually from a local checkout of the ARC repo today. But this might be the foundation for automated E2E tests against major cloud providers.
2022-07-07 20:48:07 +09:00
Yusuke Kuoka
9f6f962fc7 Add troubleshooting for cert-manager ca error (#1598)
I encountered this once while E2E testing ARC with K8s 1.22 and cert-manager 1.1.1. Either the K8s version is too high or the cert-manager version is too low, so you generally need to fix one of them. In a standard scenario, it should be more feasible and meaningful to upgrade cert-manager to a recent enough version that supports the new Kubernetes version.
2022-07-07 11:27:49 +09:00
Yusuke Kuoka
2a475f25c7 Use Argo Tunnel for exposing the autoscaler's webhook server (#1595)
I've been manually setting up Argo Tunnel to expose the webhook server while running E2E tests so that I can cover the webhook-based autoscaling. This automates the setup process so that we can automatically bring cloudflared up and down before/after the test run, making it part of our upcoming automated E2E test.
2022-07-07 11:27:27 +09:00
Viktor Lindgren
dd9f25ea78 Update README.md (#1606) 2022-07-06 08:57:54 +09:00
Yusuke Kuoka
b8e4eee904 Make it easier to E2E test on various K8s versions (#1599) 2022-07-06 08:57:21 +09:00
Yusuke Kuoka
edbdef8d20 Bump chart version to 0.20.0 for ARC 0.25.0 (#1600)
We'll be merging this immediately after ARC 0.25.0 gets released.
2022-07-05 11:19:24 +09:00
Nguyễn Đức Chiến
a190fa97bb Fix helm charts (#1603) 2022-07-05 10:35:57 +09:00
Yusuke Kuoka
bfc5ea4727 Fix a regression in webhook-based autoscaler (#1596)
The regression made the webhook-based autoscaler unable to find visible runner groups, and therefore unable to scale the target RunnerDeployment/RunnerSet up and down at all, when the webhook-based autoscaler was given GitHub API credentials to enable runner groups support. This fixes that.

The regression was introduced via #1578 which is not released yet. Users of existing ARC releases are therefore not affected.
2022-07-04 20:17:09 +09:00
renovate[bot]
5a9e8545aa fix(deps): update golang.org/x/oauth2 digest to 2104d58 (#1593)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-02 14:06:21 +09:00
Yusuke Kuoka
4446ba57e1 Cover ARC upgrade in E2E test (#1592)
* Cover ARC upgrade in E2E test

so that we can make extra sure that you can upgrade an existing installation of ARC to the next version and (hopefully) that it is backward-compatible, or at least does not break immediately after upgrading.

* Consolidate E2E tests for RS and RD

* Fix E2E for RD to pass

* Add some comment in E2E for how to release disk consumed after dozens of test runs
2022-07-01 21:32:05 +09:00
Martin Moon (문성주)
d62c8a4697 fix typo. (#1594) 2022-07-01 10:24:41 +09:00
Yusuke Kuoka
946d5b1fa7 Add release note for v0.25.0 (#1591) 2022-06-30 22:11:22 +09:00
renovate[bot]
da6b07660e fix(deps): update golang.org/x/oauth2 digest to 02e64fa (#1480)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 11:32:26 +09:00
Callum Tait
e3deb0d752 chore: move runner docker check (#1548) 2022-06-30 11:31:50 +09:00
Callum Tait
82641e5036 chore: move HOME to more logical place (#1460)
* chore: move HOME to more logical place

* chore: don't break the PATH

* chore: don't break the PATH

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-30 11:21:05 +09:00
Vladyslav Miletskyi
2fe6adf5b7 Runner Entrypoint: fix daemon.json (#1409)
* Runner Entrypoint: fix daemon.json

Do not overwrite daemon.json if it already exists.
Use case: custom images that use the public image as their source.

* Update runner/startup.sh

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-06-30 11:03:12 +09:00
renovate[bot]
736126b793 chore(deps): update helm values quay.io/brancz/kube-rbac-proxy to v0.13.0 (#1589)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 09:51:38 +09:00
renovate[bot]
6abf5bbac8 fix(deps): update module github.com/stretchr/testify to v1.8.0 (#1584)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 09:50:55 +09:00
Yusuke Kuoka
dc4f116bda Reflect manual test scenario for containerMode=kubernetes to E2E (#1588)
With this my semi-automatic E2E manual testing becomes even easier :)
2022-06-30 09:09:58 +09:00
Callum Tait
cda10fd243 docs: remove once feature flag env var (#1590) 2022-06-30 09:09:37 +09:00
Yusuke Kuoka
b5d1a63bdf Enhance the acceptance runnerset yaml template for manual E2E (#1587)
The primary goal of this change is to let the tester know about the config difference between the explicitly configured ephemeral work volume vs the automatically configured work volume with workVolumeClaimTemplate+containerMode=kubernetes.
2022-06-29 22:15:50 +09:00
Yusuke Kuoka
6f3e23973d Bump E2E runner version to 2.294.0 (#1586)
so that runners do not end up auto-updating themselves on startup in E2E, which makes E2E take longer to complete.
2022-06-29 22:05:50 +09:00
Yusuke Kuoka
a517c1ff66 Fix old runner pods stuck in Terminating since #1579 (#1585)
Ref #1579
2022-06-29 22:02:42 +09:00
Yusuke Kuoka
9b28e633c1 Drop support for --once (#1580)
Ref #1196
2022-06-29 21:49:52 +09:00
Yusuke Kuoka
8161136cbd Fix PercentageRunnersBusy scaling delay (#1579)
* Use a dedicated pod label to say it is a runner pod

Follow-up for #1546

* Fix PercentageRunnersBusy scaling delay

Ref #1374
2022-06-29 20:49:21 +09:00
Nikola Jokic
a9ac5a1cbf extracted validations to a single point (#1582) 2022-06-29 20:32:00 +09:00
Callum Tait
d4f35cff4f ci: add paths to push trigger (#1583) 2022-06-29 20:30:07 +09:00
Yusuke Kuoka
f661249f07 Use the go-github impl of ListRunnerGroups with visible_to_repository (#1578)
Ref #1402
2022-06-29 09:53:03 +01:00
Mike
73e430ce54 Add a solution to InternalError webhook context timedout (#1558)
* added troubleshooting solution

* added error example

* added entry to the pages index

* sorted

Co-authored-by: Mike Joseph <mike@Mikes-MacBook-Pro-5618.local>
Co-authored-by: Mike Joseph <mike@efrontier.com>
2022-06-29 09:40:23 +09:00
renovate[bot]
858ef8979d chore(deps): update helm/kind-action action to v1.3.0 (#1532)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 09:05:26 +09:00
renovate[bot]
1ce0a183a6 chore(deps): update azure/setup-helm action to v3 (#1571)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 09:04:33 +09:00
renovate[bot]
63935d2053 fix(deps): update module github.com/stretchr/testify to v1.7.5 (#1510)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 08:39:09 +09:00
John Delivuk
fc63d6d26e Fix: Match Ingress API Version correctly. (#1541)
* Updating conditional to match the api version and kind

mend

* Updating conditional to match the api version and kind

mend
2022-06-29 08:30:11 +09:00
renovate[bot]
5ea08411e6 chore(deps): update dependency golang to v1.18.3 (#1509)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 08:29:14 +09:00
Giuseppe Crinò
067ed2e5ec docs: fix logic explanation for scale down delay (#1562)
Signed-off: Giuseppe Crinò <giuscri@gmail.com>
2022-06-29 08:26:28 +09:00
renovate[bot]
d86bd2bcd7 fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.2 (#1449)
* fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.2

* Regenerate manfiests with the updated k8s and controller-runtime deps

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-29 06:42:17 +09:00
Yusuke Kuoka
ddd417f756 Bump go-github to v45 (#1573)
* Bump go-github to v45

Ref #1402

* fixup! Bump go-github to v45
2022-06-29 06:34:58 +09:00
Thomas Boop
0386c0734c containerMode option to allow running jobs in k8s instead of docker (#1546)
* added containerMode=kubernetes env variables to the runner

* removed unused logging

* restored configs and charts

* restored makefile cert version and acceptance/run

* added workVolumeClaimTemplate in pod definition, including logic

* added claim template name based on the runner

* Apply suggestions from code review

update errors

* added concurrent cleanup before runner pod is deleted

* update manifests

* added retry after 30s if pod cleanup contains err

* added admission webhook check, made workVolumeClaimTemplate mandatory for k8s

* style changes and added comments

* added IsZero timestamp check for deleting runner-linked pods

* changed order of local variable to avoid copy if p is deleted

* removed docker from container mode k8s

* restored charts, config, makefile

* restored forked files back and not the ARC ones

* created PersistentVolume on containerMode k8s

* create pv only if storage class name is local-storage

* removed actions if storage class name is local-storage

* added service account validation if container mode kubernetes

* changed the coding style to match rest of the ARC

* added validation to the runnerdeployment webhook

* specified fields more precisely, added webhook validation to the replicaset as well

* remake manifests

* wrapped delete runner-linked-pods in kube mode

* fixed empty line

* fixed import

* makefile changes for hooks

* added cleanup secrets

* create manifests

* docs

* update access modes

* update dockerfile

* nit changes

* fixed dockerfile

* rewrite allowing reuse for runners and runnersets

* deepcopy forgot to stage

* changed privileged

* make manifests

* partly moved to finalizer, still need to apply finalizer first

* finalizer added if env variable used in container mode exists

* bump runner version

* error message moved from Error to Info on cleanup pods/secrets

* removed useless dereferencing, added transformation tests of workVolumeClaimTemplate

* Apply suggestions from code review

* Update controllers/utils_test.go

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update controllers/utils_test.go

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* add hook version to cli, update to 0.1.2

* Apply suggestions from code review

* Update controllers/utils_test.go

* Update runner/Makefile

* Fix missing secret permission and the error handling

* Fix a runnerpod reconciler finalizer to not trigger unnecessary retry

Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
Co-authored-by: Nikola Jokic <97525037+nikola-jokic@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-28 14:12:40 +09:00
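Putting the long list of changes above together, a RunnerDeployment using the new mode ends up looking roughly like this sketch (storage class, size, and repository are placeholders; `containerMode: kubernetes` requires `workVolumeClaimTemplate`, which the admission webhook added in this change enforces):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-k8s-mode-runnerdeploy
spec:
  template:
    spec:
      repository: example/repo           # placeholder repository
      containerMode: kubernetes          # run job/service containers via the Kubernetes hooks instead of docker
      workVolumeClaimTemplate:           # mandatory when containerMode is kubernetes
        storageClassName: standard       # placeholder storage class
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```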
Yusuke Kuoka
af96de6184 Fix completed runner pod recreation not to be blocked after max out (#1568)
Ref https://github.com/actions-runner-controller/actions-runner-controller/pull/1477#issuecomment-1164154496
2022-06-28 13:50:07 +09:00
Arnaud
abb8615796 Webhook server configuration with kustomize (#1312)
* webhook server configuration with kustomize

* Update README.md

* Update README.md

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-28 09:08:25 +09:00
Sam Weston
bc7a3cab1b Add priorityClassName to CRDs (#1513)
* Add pod priorityClassName to controller and crds

* Add missing bits in bases directory

* Regenerate crds
2022-06-28 08:45:19 +09:00
Yusuke Kuoka
e2c8163b8c Make webhook-based scale race-free (#1477)
* Make webhook-based scale operation asynchronous

This prevents race condition in the webhook-based autoscaler when it received another webhook event while processing another webhook event and both ended up
scaling up the same horizontal runner autoscaler.

Ref #1321

* Fix typos

* Update rather than Patch HRA to avoid race among webhook-based autoscaler servers

* Batch capacity reservation updates for efficient use of apiserver

* Fix potential never-ending HRA update conflicts in batch update

* Extract batchScaler out of webhook-based autoscaler for testability

* Fix log levels and batch scaler hang on start

* Correlate webhook event with scale trigger amount in logs

* Fix log message
2022-06-27 18:31:48 +09:00
Callum Tait
84d16c1c12 revert: "Overhauled startup.sh Script (#1454)" (#1561)
This reverts commit 071898c96b.
2022-06-23 12:39:32 +01:00
Richard Fussenegger
071898c96b Overhauled startup.sh Script (#1454)
This overhaul turns it into a shellcheck valid script with explicit error handling for all possible situations I could think of. This change takes https://github.com/actions-runner-controller/actions-runner-controller/pull/1409 into account and things can be merged in any order. There are a few important changes here to the logic:

- The wait logic for checking if docker comes up was fundamentally flawed because it checks for the PID. Docker will always come up and thus become visible in the process list, just to immediately die when it encounters an issue, after which supervisor starts it again. This means that our check so far is flaky: due to the `sleep 1` it might encounter a PID, or it might not, and the existence of the PID does not mean anything. The `docker ps` check we have in the `entrypoint.sh` script does not suffer from this as it checks for a feature of docker and not a PID. I thus entirely removed the PID check, and instead I am handing things over to our `entrypoint.sh` script by setting the environment variables correctly.
- This change has an influence on the `docker0` interface MTU configuration, because the interface might or might not exist after we started docker. Hence, I changed this to a time boxed loop that tries for one minute to set up the interface's MTU. In case the command fails we log an error and continue with the run.
- I changed the entire MTU handling by validating its value before configuring it, logging an error and continuing without if it is set incorrectly. This ensures that we are not going to send our users on a bug hunt.
- The way we started supervisord did not make much sense to me. It sends itself into the background automatically, there is no need for us to do so with Bash.

The decision to not fail on errors but continue is a deliberate choice, because I believe that running a build is more important than having a perfectly configured system. However, this strategy might also hide issues for all users who are not properly checking their logs. It also makes testing harder. Hence, we could change all error conditions from graceful to panicking. We should then align the exit codes across `startup.sh` and `entrypoint.sh` to ensure that every possible error condition has its own unique error code for easy debugging.
2022-06-23 09:37:01 +09:00
renovate[bot]
f24e2fa44e chore(deps): update dependency actions/runner to v2.294.0 2022-06-22 21:45:32 +00:00
Callum Tait
3c7d3d6b57 ci: hardcode dockerhub username (#1555) 2022-06-22 16:15:50 +01:00
Callum Tait
23f091d7fa ci: don't login on a pr (#1554)
* ci: don't login on a pr

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-22 16:03:36 +01:00
Callum Tait
667764e027 chore: suggest gist first (#1539) 2022-06-18 17:38:37 +09:00
Callum Tait
de693c4191 ci: runners trigger on push (#1549)
* ci: runners trigger on push

* ci: comments

* ci: comments
2022-06-18 17:34:40 +09:00
Callum Tait
510fc9c834 ci: add GitHub packages to arc release (#1525)
* ci: add GitHub packages to arc release

* ci: use restrictive permissions
2022-06-15 11:37:19 +09:00
Callum Tait
7fd5e24961 chore: bump chart to app 0.24.1 (#1531) 2022-06-15 11:34:55 +09:00
Yusuke Kuoka
9974b1a2b7 e2e: Enable buildx in more images (#1530) 2022-06-14 09:29:30 +01:00
Yusuke Kuoka
bd91b73fd9 chore: update bug_report.yml (#1529) 2022-06-14 09:29:06 +01:00
Callum Tait
a7ae910ee4 docs: add CRD disclaimer to bug report (#1516) 2022-06-14 09:42:31 +09:00
Callum Tait
2733c36d0e ci: publish controller canary to github packages (#1524)
* ci: publish controller canary to github packages

* ci: include image name
2022-06-14 09:10:13 +09:00
145 changed files with 9603 additions and 4579 deletions

.github/ISSUE_TEMPLATE/bug_report.yml

@@ -1,8 +1,18 @@
name: Bug Report
description: File a bug report
title: "Bug"
title: "<Please write what didn't work for you here>"
labels: ["bug"]
body:
- type: checkboxes
id: read-troubleshooting-guide
attributes:
label: Checks
description: Please check all the boxes below before submitting
options:
- label: I've already read https://github.com/actions-runner-controller/actions-runner-controller/blob/master/TROUBLESHOOTING.md and I'm sure my issue is not covered in the troubleshooting guide.
required: true
- label: I'm not using a custom entrypoint in my runner image
required: true
- type: input
id: controller-version
attributes:
@@ -17,6 +27,12 @@ body:
label: Helm Chart Version
description: Run `helm list` and see what's shown under CHART VERSION. Any release tags prefixed with `actions-runner-controller-` are for chart releases
placeholder: ex. 0.11.0
- type: input
id: cert-manager-version
attributes:
label: CertManager Version
description: Run `kubectl get po -o yaml $CERT_MANAGER_POD` and see the image tag, or run `helm list` and see what's shown under APP VERSION for your cert-manager Helm release.
placeholder: ex. 1.8
- type: dropdown
id: deployment-method
attributes:
@@ -29,11 +45,22 @@ body:
- Other
validations:
required: true
- type: textarea
id: cert-manager
attributes:
label: cert-manager installation
description: Confirm that you've installed cert-manager correctly by answering a few questions
placeholder: |
- Did you follow https://github.com/actions-runner-controller/actions-runner-controller#installation? If not, describe the installation process so that we can reproduce your environment.
- Are you sure you've installed cert-manager from an official source?
(Note that we won't provide user support for cert-manager itself. Make sure cert-manager is fully working before testing ARC or reporting a bug.)
validations:
required: true
- type: checkboxes
id: checks
attributes:
label: Checks
description: Please check the boxes below before submitting
description: Please check all the boxes below before submitting
options:
- label: This isn't a question or user support case (For Q&A and community support, go to [Discussions](https://github.com/actions-runner-controller/actions-runner-controller/discussions). It might also be a good idea to contract with any of the contributors and maintainers if your business is critical and you need priority support.)
required: true
@@ -41,7 +68,9 @@ body:
required: true
- label: My actions-runner-controller version (v0.x.y) does support the feature
required: true
- label: I've already upgraded ARC to the latest and it didn't fix the issue
- label: I've already upgraded ARC (including the CRDs, see charts/actions-runner-controller/docs/UPGRADING.md for details) to the latest and it didn't fix the issue
required: true
- label: I've migrated to the workflow job webhook event (if you're using webhook-driven scaling)
required: true
- type: textarea
id: resource-definitions
@@ -112,10 +141,12 @@ body:
- type: textarea
id: controller-logs
attributes:
label: Controller Logs
description: "Include logs from `actions-runner-controller`'s controller-manager pod"
label: Whole Controller Logs
description: "NEVER EVER OMIT THIS! Include logs from `actions-runner-controller`'s controller-manager pod. Don't omit the parts you think irrelevant!"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab controller logs:
# Set NS according to your setup
@@ -125,17 +156,17 @@ body:
kubectl -n $NS get po
kubectl -n $NS logs $POD_NAME > arc.log
Upload it to e.g. https://gist.github.com/ and paste the link to it here.
validations:
required: true
- type: textarea
id: runner-pod-logs
attributes:
label: Runner Pod Logs
description: "Include logs from runner pod(s)"
label: Whole Runner Pod Logs
description: "Include logs from runner pod(s). Please don't omit the parts you think irrelevant!"
render: shell
placeholder: |
PROVIDE THE WHOLE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab the runner pod logs:
# Set NS according to your setup. It should match your RunnerDeployment's metadata.namespace.
@@ -146,8 +177,8 @@ body:
kubectl -n $NS logs $POD_NAME -c runner > runnerpod_runner.log
kubectl -n $NS logs $POD_NAME -c docker > runnerpod_docker.log
Upload it to e.g. https://gist.github.com/ and paste the link to it here.
If any of the containers are getting terminated immediately, try adding `--previous` to the kubectl-logs command to obtain logs emitted before the termination.
validations:
required: true
- type: textarea

View File

@@ -1,5 +1,4 @@
# Blank issues are mainly for maintainers who are known to write complete issue descriptions without needing to follow a form
blank_issues_enabled: true
blank_issues_enabled: false
contact_links:
- name: Sponsor ARC Maintainers
about: If your business relies on the continued maintenance of actions-runner-controller, please consider sponsoring the project and the maintainers.

View File

@@ -1,19 +1,21 @@
---
name: Feature request
about: Suggest an idea for this project
labels: enhancement
title: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
### What would you like added?
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
*A clear and concise description of what you want to happen.*
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Note: Feature requests to integrate vendor specific cloud tools (e.g. `awscli`, `gcloud-sdk`, `azure-cli`) will likely be rejected as the Runner image aims to be vendor agnostic.
**Additional context**
Add any other context or screenshots about the feature request here.
### Why is this needed?
*A clear and concise description of any alternative solutions or features you've considered.*
### Additional context
*Add any other context or screenshots about the feature request here.*

View File

@@ -37,12 +37,14 @@ runs:
version: latest
- name: Login to DockerHub
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
uses: docker/login-action@v2
with:
username: ${{ inputs.username }}
password: ${{ inputs.password }}
- name: Login to GitHub Container Registry
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
uses: docker/login-action@v2
with:
registry: ghcr.io

View File

@@ -31,7 +31,8 @@
{
"fileMatch": [
"runner/actions-runner.dockerfile",
"runner/actions-runner-dind.dockerfile"
"runner/actions-runner-dind.dockerfile",
"runner/actions-runner-dind-rootless.dockerfile"
],
"matchStrings": ["RUNNER_VERSION=+(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",

23
.github/workflows/golangci-lint.yaml vendored Normal file
View File

@@ -0,0 +1,23 @@
name: golangci-lint
on:
push:
branches:
- master
pull_request:
permissions:
contents: read
pull-requests: read
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: 1.19
- uses: actions/checkout@v3
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.49.0

View File

@@ -5,6 +5,11 @@ on:
types:
- published
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: write
packages: write
jobs:
release-controller:
name: Release
@@ -53,11 +58,14 @@ jobs:
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=${{ env.VERSION }}
push: true
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
ghcr.io/actions-runner-controller/actions-runner-controller:latest
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
cache-from: type=gha
cache-to: type=gha,mode=max

View File

@@ -22,11 +22,11 @@ on:
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: read
packages: write
packages: write
jobs:
canary-build:
name: Build and Publish Canary Image
name: Build and Publish Canary Image
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
@@ -50,8 +50,10 @@ jobs:
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=canary-${{ github.sha }}
push: true
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:canary
ghcr.io/${{ github.repository }}:canary
cache-from: type=gha,scope=arc-canary
cache-to: type=gha,mode=max,scope=arc-canary

View File

@@ -31,7 +31,7 @@ jobs:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v2.1
uses: azure/setup-helm@v3.3
with:
version: ${{ env.HELM_VERSION }}
@@ -57,7 +57,7 @@ jobs:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
uses: helm/chart-testing-action@v2.3.1
- name: Run chart-testing (list-changed)
id: list-changed
@@ -73,7 +73,7 @@ jobs:
- name: Create kind cluster
if: steps.list-changed.outputs.changed == 'true'
uses: helm/kind-action@v1.2.0
uses: helm/kind-action@v1.4.0
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
@@ -121,7 +121,7 @@ jobs:
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.4.0
uses: helm/chart-releaser-action@v1.4.1
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

View File

@@ -0,0 +1,29 @@
name: first-interaction
on:
issues:
types: [opened]
pull_request:
branches: [master]
types: [opened]
jobs:
check_for_first_interaction:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/first-interaction@main
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-message: |
Hello! Thank you for filing an issue.
The maintainers will triage your issue shortly.
In the meantime, please take a look at the [troubleshooting guide](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/TROUBLESHOOTING.md) for bug reports.
If this is a feature request, please review our [contribution guidelines](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/CONTRIBUTING.md).
pr-message: |
Hello! Thank you for your contribution.
Please review our [contribution guidelines](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/CONTRIBUTING.md) to understand the project's testing and code conventions.

View File

@@ -14,7 +14,7 @@ jobs:
issues: write # for actions/stale to close stale issues
pull-requests: write # for actions/stale to close stale PRs
steps:
- uses: actions/stale@v5
- uses: actions/stale@v6
with:
stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
# turn off stale for both issues and PRs

View File

@@ -6,7 +6,16 @@ on:
- opened
- synchronize
- reopened
- closed
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- '.github/workflows/runners.yaml'
- '!**.md'
# We must do a trigger on a push: instead of a types: closed so GitHub Secrets
# are available to the workflow run
push:
branches:
- 'master'
paths:
@@ -16,9 +25,10 @@ on:
- '!**.md'
env:
RUNNER_VERSION: 2.293.0
RUNNER_VERSION: 2.298.2
DOCKER_VERSION: 20.10.12
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
RUNNER_CONTAINER_HOOKS_VERSION: 0.1.2
DOCKERHUB_USERNAME: summerwind
jobs:
build-runners:
@@ -37,6 +47,9 @@ jobs:
- name: actions-runner-dind
os-name: ubuntu
os-version: 20.04
- name: actions-runner-dind-rootless
os-name: ubuntu
os-version: 20.04
steps:
- name: Checkout
@@ -57,10 +70,11 @@ jobs:
context: ./runner
file: ./runner/${{ matrix.name }}.dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.ref == 'master' && github.event.pull_request.merged == true }}
push: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
build-args: |
RUNNER_VERSION=${{ env.RUNNER_VERSION }}
DOCKER_VERSION=${{ env.DOCKER_VERSION }}
RUNNER_CONTAINER_HOOKS_VERSION=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}
tags: |
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}-${{ steps.vars.outputs.sha_short }}

View File

@@ -26,7 +26,7 @@ jobs:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v2.1
uses: azure/setup-helm@v3.3
with:
version: ${{ env.HELM_VERSION }}
@@ -52,7 +52,7 @@ jobs:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
uses: helm/chart-testing-action@v2.3.1
- name: Run chart-testing (list-changed)
id: list-changed
@@ -67,7 +67,7 @@ jobs:
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
uses: helm/kind-action@v1.2.0
uses: helm/kind-action@v1.4.0
if: steps.list-changed.outputs.changed == 'true'
# We need cert-manager already installed in the cluster because we assume the CRDs exist

View File

@@ -13,6 +13,26 @@ permissions:
contents: read
jobs:
shellcheck:
name: runner / shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: shellcheck
uses: reviewdog/action-shellcheck@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
path: "./runner"
pattern: |
*.sh
*.bash
update-status
# Make this consistent with `make shellcheck`
shellcheck_flags: "--shell bash --source-path runner"
exclude: "./.git/*"
check_all_files_with_shebangs: "false"
# Set this to "true" once we addressed all the shellcheck findings
fail_on_error: "false"
test-runner-entrypoint:
name: Test entrypoint
runs-on: ubuntu-latest

17
.golangci.yaml Normal file
View File

@@ -0,0 +1,17 @@
run:
timeout: 3m
output:
format: github-actions
linters-settings:
errcheck:
exclude-functions:
- (net/http.ResponseWriter).Write
- (*net/http.Server).Shutdown
- (*github.com/actions-runner-controller/actions-runner-controller/simulator.VisibleRunnerGroups).Add
- (*github.com/actions-runner-controller/actions-runner-controller/testing.Kind).Stop
issues:
exclude-rules:
- path: controllers/suite_test.go
linters:
- staticcheck
text: "SA1019"

View File

@@ -1,85 +1,53 @@
## Contributing
# Contribution Guide
### Testing Controller Built from a Pull Request
- [Contribution Guide](#contribution-guide)
- [Welcome](#welcome)
- [Before contributing code](#before-contributing-code)
- [How to Contribute a Patch](#how-to-contribute-a-patch)
- [Developing the Controller](#developing-the-controller)
- [Developing the Runners](#developing-the-runners)
- [Tests](#tests)
- [Running Ginkgo Tests](#running-ginkgo-tests)
- [Running End to End Tests](#running-end-to-end-tests)
- [Rerunning a failed test](#rerunning-a-failed-test)
- [Testing in a non-kind cluster](#testing-in-a-non-kind-cluster)
- [Code conventions](#code-conventions)
- [Opening the Pull Request](#opening-the-pull-request)
- [Helm Version Changes](#helm-version-changes)
- [Testing Controller Built from a Pull Request](#testing-controller-built-from-a-pull-request)
We always appreciate your help in testing open pull requests by deploying custom builds of actions-runner-controller onto your own environment, so that we are extra sure we didn't break anything.
## Welcome
It is especially true when the pull request is about GitHub Enterprise, both GHEC and GHES, as [maintainers don't have GitHub Enterprise environments for testing](/README.md#github-enterprise-support).
This document is the single source of truth for how to contribute to the code base.
Feel free to browse the [open issues](https://github.com/actions-runner-controller/actions-runner-controller/issues) or file a new one, all feedback is welcome!
By reading this guide, we hope to give you all of the information you need to be able to pick up issues, contribute new features, and get your work
reviewed and merged.
The process would look like the below:
## Before contributing code
- Clone this repository locally
- Checkout the branch. If you use the `gh` command, run `gh pr checkout $PR_NUMBER`
- Run `NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push` for a custom container image build
- Update your actions-runner-controller's controller-manager deployment to use the new image, `$DOCKER_USER/actions-runner-controller:canary`
We welcome code patches, but to make sure things are well coordinated you should discuss any significant change before starting the work.
The maintainers ask that you signal your intention to contribute to the project using the issue tracker.
If there is an existing issue that you want to work on, please let us know so we can get it assigned to you.
If you noticed a bug or want to add a new feature, there are issue templates you can fill out.
Please also note that you need to replace `$DOCKER_USER` with your own DockerHub account name.
When filing a feature request, the maintainers will review the change and give you a decision on whether we are willing to accept the feature into the project.
For significantly large and/or complex features, we may request that you write up an architectural decision record ([ADR](https://github.blog/2020-08-13-why-write-adrs/)) detailing the change.
Please use the [template](/adrs/0000-TEMPLATE.md) as guidance.
### How to Contribute a Patch
<!--
TODO: Add a pre-requisite section describing what developers should
install in order get started on ARC.
-->
Depending on what you are patching depends on how you should go about it. Below are some guides on how to test patches locally as well as develop the controller and runners.
## How to Contribute a Patch
When submitting a PR for a change please provide evidence that your change works as we still need to work on improving the CI of the project. Some resources are provided for helping achieve this, see this guide for details.
How you should go about a patch depends on what you are patching.
Below are some guides on how to test patches locally as well as how to develop the controller and runners.
#### Running an End to End Test
When submitting a PR for a change please provide evidence that your change works as we still need to work on improving the CI of the project.
Some resources are provided for helping achieve this, see this guide for details.
> **Notes for Ubuntu 20.04+ users**
>
> If you're using Ubuntu 20.04 or greater, you might have installed `docker` with `snap`.
>
> If you want to stick with `snap`-provided `docker`, do not forget to set `TMPDIR` to
> somewhere under `$HOME`.
> Otherwise `kind load docker-image` fails while running `docker save`.
> See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap for more information.
To test your local changes against both PAT and App based authentication please run the `acceptance` make target with the authentication configuration details provided:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make acceptance
```
**Rerunning a failed test**
When one of the tests run by `make acceptance` fails, you'd probably like to rerun only the failed one.
This can be done with `make acceptance/run` by setting the combination of `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm|kubectl` and `ACCEPTANCE_TEST_SECRET_TYPE=token|app` values that failed (note that you only need to set the corresponding authentication configuration in this case).
In the example below, we rerun the test for the combination `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token` only:
```shell
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm
ACCEPTANCE_TEST_SECRET_TYPE=token \
make acceptance/run
```
**Testing in a non-kind cluster**
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build acceptance/setup \
acceptance/deploy \
acceptance/tests
```
#### Developing the Controller
### Developing the Controller
Rerunning the whole acceptance test suite from scratch on every little change to the controller, the runner, and the chart would be counter-productive.
@@ -119,13 +87,14 @@ NAME=$DOCKER_USER/actions-runner make \
(kubectl get po -ojsonpath={.items[*].metadata.name} | xargs -n1 kubectl delete po)
```
#### Developing the Runners
### Developing the Runners
**Tests**
#### Tests
A set of example pipelines (./acceptance/pipelines) are provided in this repository which you can use to validate your runners are working as expected. When raising a PR please run the relevant suites to prove your change hasn't broken anything.
A set of example pipelines (./acceptance/pipelines) is provided in this repository which you can use to validate that your runners are working as expected.
When raising a PR please run the relevant suites to prove your change hasn't broken anything.
**Running Ginkgo Tests**
#### Running Ginkgo Tests
You can run the integration test suite that is written in Ginkgo with:
@@ -135,7 +104,8 @@ make test-with-deps
This will firstly install a few binaries required to setup the integration test environment and then runs `go test` to start the Ginkgo test.
If you don't want to use `make`, like when you're running tests from your IDE, install required binaries to `/usr/local/kubebuilder/bin`. That's the directory in which controller-runtime's `envtest` framework locates the binaries.
If you don't want to use `make`, like when you're running tests from your IDE, install required binaries to `/usr/local/kubebuilder/bin`.
That's the directory in which controller-runtime's `envtest` framework locates the binaries.
```shell
sudo mkdir -p /usr/local/kubebuilder/bin
@@ -152,6 +122,92 @@ GINKGO_FOCUS='[It] should create a new Runner resource from the specified templa
go test -v -run TestAPIs github.com/actions-runner-controller/actions-runner-controller/controllers
```
#### Helm Version Bumps
### Running End to End Tests
In general we ask you not to bump the version in your PR, the maintainers in general manage the publishing of a new chart.
> **Notes for Ubuntu 20.04+ users**
>
> If you're using Ubuntu 20.04 or greater, you might have installed `docker` with `snap`.
>
> If you want to stick with `snap`-provided `docker`, do not forget to set `TMPDIR` to somewhere under `$HOME`.
> Otherwise `kind load docker-image` fails while running `docker save`.
> See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap for more information.
To test your local changes against both PAT and App based authentication please run the `acceptance` make target with the authentication configuration details provided:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make acceptance
```
#### Rerunning a failed test
When one of the tests run by `make acceptance` fails, you'd probably like to rerun only the failed one.
This can be done with `make acceptance/run` by setting the combination of `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm|kubectl` and `ACCEPTANCE_TEST_SECRET_TYPE=token|app` values that failed (note that you only need to set the corresponding authentication configuration in this case).
In the example below, we rerun the test for the combination `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token` only:
```shell
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make acceptance/run
```
#### Testing in a non-kind cluster
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build acceptance/setup \
acceptance/deploy \
acceptance/tests
```
### Code conventions
Before shipping your PR, please check the following items to make sure CI passes (a minimal command sketch follows this list).
- Run `go mod tidy` if you made changes to dependencies.
- Format the code using `gofmt`
- Run the `golangci-lint` tool locally.
- We recommend you use `make lint` to run the tool using a Docker container matching the CI version.
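
As a minimal sketch, the checklist above roughly corresponds to the following commands; `make lint` is the Makefile target mentioned in this guide, the rest are standard Go tooling.

```shell
go mod tidy   # only needed if you changed dependencies
gofmt -w .    # format the code
make lint     # runs golangci-lint in a Docker container matching the CI version
```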
### Opening the Pull Request
Send PR, add issue number to description
## Helm Version Changes
In general we ask you not to bump the version in your PR.
The maintainers will manage releases and publishing new charts.
## Testing Controller Built from a Pull Request
We always appreciate your help in testing open pull requests by deploying custom builds of actions-runner-controller onto your own environment, so that we are extra sure we didn't break anything.
It is especially true when the pull request is about GitHub Enterprise, both GHEC and GHES, as [maintainers don't have GitHub Enterprise environments for testing](docs/detailed-docs.md#github-enterprise-support).
The process would look like the below:
- Clone this repository locally
- Checkout the branch. If you use the `gh` command, run `gh pr checkout $PR_NUMBER`
- Run `NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push` for a custom container image build
- Update your actions-runner-controller's controller-manager deployment to use the new image, `$DOCKER_USER/actions-runner-controller:canary`
Please also note that you need to replace `$DOCKER_USER` with your own DockerHub account name.

View File

@@ -1,11 +1,10 @@
# Build the manager binary
FROM --platform=$BUILDPLATFORM golang:1.18.2 as builder
FROM --platform=$BUILDPLATFORM golang:1.19.2 as builder
WORKDIR /workspace
# Make it runnable on a distroless image/without libc
ENV CGO_ENABLED=0
# Copy the Go Modules manifests
COPY go.mod go.sum ./
@@ -25,20 +24,20 @@ RUN go mod download
# With the above command,
# TARGETOS can be "linux", TARGETARCH can be "amd64", "arm64", and "arm", TARGETVARIANT can be "v7".
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT VERSION=dev
# We intentionally avoid `--mount=type=cache,mode=0777,target=/go/pkg/mod` in the `go mod download` and the `go build` runs
# to avoid https://github.com/moby/buildkit/issues/2334
# We can use docker layer cache so the build is fast enough anyway
# We also use per-platform GOCACHE for the same reason.
env GOCACHE /build/${TARGETPLATFORM}/root/.cache/go-build
ENV GOCACHE /build/${TARGETPLATFORM}/root/.cache/go-build
# Build
RUN --mount=target=. \
--mount=type=cache,mode=0777,target=${GOCACHE} \
export GOOS=${TARGETOS} GOARCH=${TARGETARCH} GOARM=${TARGETVARIANT#v} && \
go build -o /out/manager main.go && \
go build -o /out/github-webhook-server ./cmd/githubwebhookserver
go build -trimpath -ldflags="-s -w -X 'github.com/actions-runner-controller/actions-runner-controller/build.Version=${VERSION}'" -o /out/manager main.go && \
go build -trimpath -ldflags="-s -w" -o /out/github-webhook-server ./cmd/githubwebhookserver
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
@@ -49,6 +48,6 @@ WORKDIR /
COPY --from=builder /out/manager .
COPY --from=builder /out/github-webhook-server .
USER nonroot:nonroot
USER 65532:65532
ENTRYPOINT ["/manager"]

View File

@@ -4,8 +4,8 @@ else
NAME ?= summerwind/actions-runner-controller
endif
DOCKER_USER ?= $(shell echo ${NAME} | cut -d / -f1)
VERSION ?= latest
RUNNER_VERSION ?= 2.293.0
VERSION ?= dev
RUNNER_VERSION ?= 2.298.2
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
@@ -19,6 +19,7 @@ KUBECONTEXT ?= kind-acceptance
CLUSTER ?= acceptance
CERT_MANAGER_VERSION ?= v1.1.1
KUBE_RBAC_PROXY_VERSION ?= v0.11.0
SHELLCHECK_VERSION ?= 0.8.0
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:generateEmbeddedObjectMeta=true"
@@ -31,6 +32,7 @@ GOBIN=$(shell go env GOBIN)
endif
TEST_ASSETS=$(PWD)/test-assets
TOOLS_PATH=$(PWD)/.tools
# default list of platforms for which multiarch image is built
ifeq (${PLATFORMS}, )
@@ -51,10 +53,13 @@ endif
all: manager
lint:
docker run --rm -v $(PWD):/app -w /app golangci/golangci-lint:v1.49.0 golangci-lint run
GO_TEST_ARGS ?= -short
# Run tests
test: generate fmt vet manifests
test: generate fmt vet manifests shellcheck
go test $(GO_TEST_ARGS) ./... -coverprofile cover.out
go test -fuzz=Fuzz -fuzztime=10s -run=Fuzz* ./controllers
@@ -92,7 +97,7 @@ manifests: manifests-gen-crds chart-crds
manifests-gen-crds: controller-gen yq
$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
for YAMLFILE in config/crd/bases/actions*.yaml; do \
$(YQ) write --inplace "$$YAMLFILE" spec.preserveUnknownFields false; \
$(YQ) '.spec.preserveUnknownFields = false' --inplace "$$YAMLFILE" ; \
done
chart-crds:
@@ -110,6 +115,10 @@ vet:
generate: controller-gen
$(CONTROLLER_GEN) object:headerFile=./hack/boilerplate.go.txt paths="./..."
# Run shellcheck on runner scripts
shellcheck: shellcheck-install
$(TOOLS_PATH)/shellcheck --shell bash --source-path runner runner/*.bash runner/*.sh
docker-buildx:
export DOCKER_CLI_EXPERIMENTAL=enabled ;\
export DOCKER_BUILDKIT=1
@@ -119,6 +128,7 @@ docker-buildx:
docker buildx build --platform ${PLATFORMS} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
--build-arg VERSION=${VERSION} \
-t "${NAME}:${VERSION}" \
-f Dockerfile \
. ${PUSH_ARG}
@@ -242,12 +252,31 @@ ifeq (, $(wildcard $(GOBIN)/yq))
YQ_TMP_DIR=$$(mktemp -d) ;\
cd $$YQ_TMP_DIR ;\
go mod init tmp ;\
go install github.com/mikefarah/yq/v3@3.4.0 ;\
go install github.com/mikefarah/yq/v4@v4.25.3 ;\
rm -rf $$YQ_TMP_DIR ;\
}
endif
YQ=$(GOBIN)/yq
# find or download shellcheck
# download shellcheck if necessary
shellcheck-install:
ifeq (, $(wildcard $(TOOLS_PATH)/shellcheck))
echo "Downloading shellcheck"
@{ \
set -e ;\
SHELLCHECK_TMP_DIR=$$(mktemp -d) ;\
cd $$SHELLCHECK_TMP_DIR ;\
curl -LO https://github.com/koalaman/shellcheck/releases/download/v$(SHELLCHECK_VERSION)/shellcheck-v$(SHELLCHECK_VERSION).linux.x86_64.tar.xz ;\
tar Jxvf shellcheck-v$(SHELLCHECK_VERSION).linux.x86_64.tar.xz ;\
cd $(CURDIR) ;\
mkdir -p $(TOOLS_PATH) ;\
mv $$SHELLCHECK_TMP_DIR/shellcheck-v$(SHELLCHECK_VERSION)/shellcheck $(TOOLS_PATH)/ ;\
rm -rf $$SHELLCHECK_TMP_DIR ;\
}
endif
SHELLCHECK=$(TOOLS_PATH)/shellcheck
OS_NAME := $(shell uname -s | tr A-Z a-z)
# find or download etcd

1706
README.md

File diff suppressed because it is too large

View File

@@ -2,8 +2,9 @@
* [Tools](#tools)
* [Installation](#installation)
* [InternalError when calling webhook: context deadline exceeded](#internalerror-when-calling-webhook-context-deadline-exceeded)
* [Invalid header field value](#invalid-header-field-value)
* [Deployment fails on GKE due to webhooks](#deployment-fails-on-gke-due-to-webhooks)
* [Helm chart install failure: certificate signed by unknown authority](#helm-chart-install-failure-certificate-signed-by-unknown-authority)
* [Operations](#operations)
* [Stuck runner kind or backing pod](#stuck-runner-kind-or-backing-pod)
* [Delay in jobs being allocated to runners](#delay-in-jobs-being-allocated-to-runners)
@@ -22,39 +23,37 @@ A list of tools which are helpful for troubleshooting
Troubleshooting runbooks that relate to ARC installation problems
### Invalid header field value
### InternalError when calling webhook: context deadline exceeded
**Problem**
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error
{
"controller": "runner",
"request": "actions-runner-system/runner-deployment-dk7q8-dk5c9",
"error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"
}
This issue can come up for various reasons like leftovers from previous installations or not being able to access the K8s service's clusterIP associated with the admission webhook server (of ARC).
```
Internal error occurred: failed calling webhook "mutate.runnerdeployment.actions.summerwind.dev":
Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment?timeout=10s": context deadline exceeded
```
**Solution**
Your base64'ed PAT token has a new line at the end, it needs to be created without a `\n` added, either:
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs using the shell and documented flags
First we will try the common solution of checking webhook leftovers from previous installations:
1. ```bash
kubectl get validatingwebhookconfiguration -A
kubectl get mutatingwebhookconfiguration -A
```
2. If you see any webhooks related to actions-runner-controller, delete them:
```bash
kubectl delete mutatingwebhookconfiguration actions-runner-controller-mutating-webhook-configuration
kubectl delete validatingwebhookconfiguration actions-runner-controller-validating-webhook-configuration
```
If that didn't work, then your K8s control plane is probably unable to access the cluster IP of the K8s Service associated with the admission webhook server. Two common causes (a connectivity-check sketch follows this list):
1. You're running the apiserver as a binary and didn't make Service cluster IPs available to the host network.
2. You're running the apiserver in a pod, but your pod network (i.e. the CNI plugin installation and config) is broken, so pods like kube-apiserver on the K8s control-plane nodes can't reach ARC's admission webhook server pod(s), which typically run on the data-plane nodes.
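
As a quick, hypothetical way to narrow this down (assuming the default `actions-runner-system` namespace and the Service name from the error message above), check whether the webhook Service resolves and is reachable from inside the cluster:

```shell
# Check that the webhook Service and its endpoints exist.
kubectl -n actions-runner-system get svc,endpoints actions-runner-controller-webhook

# Probe the webhook port from a throwaway pod. A TLS/certificate error means the
# data-plane network path works; a timeout means even pods can't reach it. If this
# succeeds but the apiserver still times out, the control plane specifically cannot
# reach the pod network (cases 1 and 2 above).
kubectl run webhook-probe --rm -it --restart=Never --image=curlimages/curl -- \
  curl -vk --max-time 5 https://actions-runner-controller-webhook.actions-runner-system.svc:443/
```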
### Deployment fails on GKE due to webhooks
**Problem**
Due to GKEs firewall settings you may run into the following errors when trying to deploy runners on a private GKE cluster:
```
Internal error occurred: failed calling webhook "mutate.runner.actions.summerwind.dev":
Post https://webhook-service.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runner?timeout=10s:
context deadline exceeded
```
**Solution**<br />
Another reason could be GKE's firewall settings: you may run into the following errors when trying to deploy runners on a private GKE cluster:
To fix this, you may either:
@@ -88,6 +87,57 @@ To fix this, you may either:
gcloud compute firewall-rules create k8s-cert-manager --source-ranges $SOURCE --target-tags $WORKER_NODES_TAG --allow TCP:9443 --network $NETWORK
```
### Invalid header field value
**Problem**
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error
{
"controller": "runner",
"request": "actions-runner-system/runner-deployment-dk7q8-dk5c9",
"error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"
}
```
**Solution**
Your base64'ed PAT token has a new line at the end, it needs to be created without a `\n` added, either:
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs using the shell and documented flags
### Helm chart install failure: certificate signed by unknown authority
**Problem**
```
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
```
Apparently, it's failing while `helm` is creating one of the resources defined in the ARC chart, and the cause is that cert-manager's webhook is not working correctly due to a missing or invalid CA certificate.
You'd try to tail logs from the `cert-manager-cainjector` and see it's failing with an error like:
```
$ kubectl -n cert-manager logs cert-manager-cainjector-7cdbb9c945-g6bt4
I0703 03:31:55.159339 1 start.go:91] "starting" version="v1.1.1" revision="3ac7418070e22c87fae4b22603a6b952f797ae96"
I0703 03:31:55.615061 1 leaderelection.go:243] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election...
I0703 03:32:10.738039 1 leaderelection.go:253] successfully acquired lease kube-system/cert-manager-cainjector-leader-election
I0703 03:32:10.739941 1 recorder.go:52] cert-manager/controller-runtime/manager/events "msg"="Normal" "message"="cert-manager-cainjector-7cdbb9c945-g6bt4_88e4bc70-eded-4343-a6fb-0ddd6434eb55 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"942a021e-364c-461a-978c-f54a95723cdc","apiVersion":"v1","resourceVersion":"1576"} "reason"="LeaderElection"
E0703 03:32:11.192128 1 start.go:119] cert-manager/ca-injector "msg"="manager goroutine exited" "error"=null
I0703 03:32:12.339197 1 request.go:645] Throttling request took 1.047437675s, request: GET:https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
E0703 03:32:13.143790 1 start.go:151] cert-manager/ca-injector "msg"="Error registering certificate based controllers. Retrying after 5 seconds." "error"="no matches for kind \"MutatingWebhookConfiguration\" in version \"admissionregistration.k8s.io/v1beta1\""
Error: error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
```
**Solution**
Your cluster is running Kubernetes 1.22 or greater, which no longer supports the legacy `admissionregistration.k8s.io/v1beta1` API, while your `cert-manager` is not up-to-date and hence still tries to use the legacy Kubernetes API.
In many cases, downgrading Kubernetes is not an option, so just upgrade `cert-manager` to a more recent version that supports the specific Kubernetes version you're using.
See https://cert-manager.io/docs/installation/supported-releases/ for the list of available cert-manager versions.
## Operations
Troubleshooting runbooks that relate to ARC operational problems
@@ -117,7 +167,7 @@ are in a namespace not shared with anything else_
**Problem**
ARC isn't involved in jobs actually getting allocated to a runner. ARC is responsible for orchestrating runners and the runner lifecycle. Why some people see large delays in job allocation is not clear however it has been https://github.com/actions-runner-controller/actions-runner-controller/issues/1387#issuecomment-1122593984 that this is caused from the self-update process somehow.
ARC isn't involved in jobs actually getting allocated to a runner. ARC is responsible for orchestrating runners and the runner lifecycle. Why some people see large delays in job allocation is not clear, however it has been confirmed in https://github.com/actions-runner-controller/actions-runner-controller/issues/1387#issuecomment-1122593984 that this is somehow caused by the self-update process.
**Solution**
@@ -206,8 +256,28 @@ spec:
env: []
```
There may be more places you need to tweak for MTU.
Please consult issues like #651 for more information.
If the issue still persists, you can set the `ARC_DOCKER_MTU_PROPAGATION` environment variable to propagate the host MTU to networks created
by the GitHub Runner. For instance:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: github-runner
namespace: github-system
spec:
replicas: 6
template:
spec:
dockerMTU: 1400
repository: $username/$repo
env:
- name: ARC_DOCKER_MTU_PROPAGATION
value: "true"
```
You can read the discussion regarding this issue in
[#1406](https://github.com/actions-runner-controller/actions-runner-controller/issues/1046).
## Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns

97
acceptance/argotunnel.sh Executable file
View File

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
# See https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel/
kubectl create ns tunnel || :
kubectl -n tunnel delete secret tunnel-credentials || :
kubectl -n tunnel create secret generic tunnel-credentials \
--from-file=credentials.json=$HOME/.cloudflared/${TUNNEL_ID}.json || :
cat <<MANIFEST | kubectl -n tunnel ${OP} -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloudflared
spec:
selector:
matchLabels:
app: cloudflared
replicas: 2 # You could also consider elastic scaling for this deployment
template:
metadata:
labels:
app: cloudflared
spec:
containers:
- name: cloudflared
image: cloudflare/cloudflared:latest
args:
- tunnel
# Points cloudflared to the config file, which configures what
# cloudflared will actually do. This file is created by a ConfigMap
# below.
- --config
- /etc/cloudflared/config/config.yaml
- run
livenessProbe:
httpGet:
# Cloudflared has a /ready endpoint which returns 200 if and only if
# it has an active connection to the edge.
path: /ready
port: 2000
failureThreshold: 1
initialDelaySeconds: 10
periodSeconds: 10
volumeMounts:
- name: config
mountPath: /etc/cloudflared/config
readOnly: true
# Each tunnel has an associated "credentials file" which authorizes machines
# to run the tunnel. cloudflared will read this file from its local filesystem,
# and it'll be stored in a k8s secret.
- name: creds
mountPath: /etc/cloudflared/creds
readOnly: true
volumes:
- name: creds
secret:
secretName: tunnel-credentials
# Create a config.yaml file from the ConfigMap below.
- name: config
configMap:
name: cloudflared
items:
- key: config.yaml
path: config.yaml
---
# This ConfigMap is just a way to define the cloudflared config.yaml file in k8s.
# It's useful to define it in k8s, rather than as a stand-alone .yaml file, because
# this lets you use various k8s templating solutions (e.g. Helm charts) to
# parameterize your config, instead of just using string literals.
apiVersion: v1
kind: ConfigMap
metadata:
name: cloudflared
data:
config.yaml: |
# Name of the tunnel you want to run
tunnel: ${TUNNEL_NAME}
credentials-file: /etc/cloudflared/creds/credentials.json
# Serves the metrics server under /metrics and the readiness server under /ready
metrics: 0.0.0.0:2000
# Autoupdates applied in a k8s pod will be lost when the pod is removed or restarted, so
# autoupdate doesn't make sense in Kubernetes. However, outside of Kubernetes, we strongly
# recommend using autoupdate.
no-autoupdate: true
ingress:
# The first rule proxies traffic to the httpbin sample Service defined in app.yaml
- hostname: ${TUNNEL_HOSTNAME}
service: http://actions-runner-controller-github-webhook-server.actions-runner-system:80
# This rule matches any traffic which didn't match a previous rule, and responds with HTTP 404.
- service: http_status:404
MANIFEST
kubectl -n tunnel delete po -l app=cloudflared || :

View File

@@ -41,8 +41,23 @@ TEST_ID=${TEST_ID:-default}
if [ "${tool}" == "helm" ]; then
set -v
CHART=${CHART:-charts/actions-runner-controller}
flags=()
if [ "${IMAGE_PULL_SECRET}" != "" ]; then
flags+=( --set imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set image.actionsRunnerImagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
fi
if [ "${CHART_VERSION}" != "" ]; then
flags+=( --version ${CHART_VERSION})
fi
set -vx
helm upgrade --install actions-runner-controller \
charts/actions-runner-controller \
${CHART} \
-n actions-runner-system \
--create-namespace \
--set syncPeriod=${SYNC_PERIOD} \
@@ -51,6 +66,7 @@ if [ "${tool}" == "helm" ]; then
--set image.tag=${VERSION} \
--set podAnnotations.test-id=${TEST_ID} \
--set githubWebhookServer.podAnnotations.test-id=${TEST_ID} \
${flags[@]} --set image.imagePullPolicy=${IMAGE_PULL_POLICY} \
-f ${VALUES_FILE}
set +v
# To prevent `CustomResourceDefinition.apiextensions.k8s.io "runners.actions.summerwind.dev" is invalid: metadata.annotations: Too long: must have at most 262144 bytes`
@@ -76,56 +92,3 @@ kubectl -n actions-runner-system wait deploy/actions-runner-controller --for con
# Adhocly wait for some time until actions-runner-controller's admission webhook gets ready
sleep 20
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
if [ -n "${TEST_REPO}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerset envsubst | kubectl apply -f -
else
echo 'Deploying runnerdeployment and hra. Set USE_RUNNERSET if you want to deploy runnerset instead.'
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerdeploy envsubst | kubectl apply -f -
fi
else
echo 'Skipped deploying runnerdeployment and hra. Set TEST_REPO to "yourorg/yourrepo" to deploy.'
fi
if [ -n "${TEST_ORG}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerdeploy envsubst | kubectl apply -f -
fi
if [ -n "${TEST_ORG_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerdeploy envsubst | kubectl apply -f -
fi
else
echo 'Skipped deploying enterprise runnerdeployment. Set TEST_ORG_GROUP to deploy.'
fi
else
echo 'Skipped deploying organizational runnerdeployment. Set TEST_ORG to deploy.'
fi
if [ -n "${TEST_ENTERPRISE}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerdeploy envsubst | kubectl apply -f -
fi
if [ -n "${TEST_ENTERPRISE_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerdeploy envsubst | kubectl apply -f -
fi
else
echo 'Skipped deploying enterprise runnerdeployment. Set TEST_ENTERPRISE_GROUP to deploy.'
fi
else
echo 'Skipped deploying enterprise runnerdeployment. Set TEST_ENTERPRISE to deploy.'
fi

60
acceptance/deploy_runners.sh Executable file
View File

@@ -0,0 +1,60 @@
#!/usr/bin/env bash
set -e
OP=${OP:-apply}
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=${RUNNER_NAMESPACE} envsubst | kubectl apply -f -
if [ -n "${TEST_REPO}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerset envsubst | kubectl ${OP} -f -
else
echo "Running ${OP} runnerdeployment and hra. Set USE_RUNNERSET if you want to deploy runnerset instead."
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} for runnerdeployment and hra. Set TEST_REPO to "yourorg/yourrepo" to deploy."
fi
if [ -n "${TEST_ORG}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ORG_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ORG_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on organizational runnerdeployment. Set TEST_ORG to ${OP}."
fi
if [ -n "${TEST_ENTERPRISE}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ENTERPRISE_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE to ${OP}."
fi

View File

@@ -0,0 +1,86 @@
# USAGE:
# cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=default envsubst | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-mode-runner
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list", "watch",]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "delete"]
# Needed to report test success by creating a ConfigMap from within a workflow job step
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: runner-status-updater
rules:
- apiGroups: ["actions.summerwind.dev"]
resources: ["runners/status"]
verbs: ["get", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
---
# To verify it's working, try:
# kubectl auth can-i --as system:serviceaccount:default:runner get pod
# If incomplete, workflows and jobs would fail with an error message like:
# Error: Error: The Service account needs the following permissions [{"group":"","verbs":["get","list","create","delete"],"resource":"pods","subresource":""},{"group":"","verbs":["get","create"],"resource":"pods","subresource":"exec"},{"group":"","verbs":["get","list","watch"],"resource":"pods","subresource":"log"},{"group":"batch","verbs":["get","list","create","delete"],"resource":"jobs","subresource":""},{"group":"","verbs":["create","delete","get","list"],"resource":"secrets","subresource":""}] on the pod resource in the 'default' namespace. Please contact your self hosted runner administrator.
# Error: Process completed with exit code 1.
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
# You need to already have a Role named "pod-reader" in that namespace.
kind: RoleBinding
metadata:
name: runner-k8s-mode-runner
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: k8s-mode-runner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: runner-runner-stat-supdater
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: runner-status-updater
apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: org-runnerdeploy-runner-work-dir
labels:
content: org-runnerdeploy-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

View File

@@ -1,3 +1,13 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-runner-work-dir
labels:
content: ${NAME}-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
@@ -39,10 +49,30 @@ spec:
labels:
- "${RUNNER_LABEL}"
env:
- name: ROLLING_UPDATE_PHASE
value: "${ROLLING_UPDATE_PHASE}"
- name: ARC_DOCKER_MTU_PROPAGATION
value: "true"
dockerMTU: 1400
#
# Non-standard working directory
#
# workDir: "/"
# # Uncomment the below to enable the kubernetes container mode
# # See https://github.com/actions-runner-controller/actions-runner-controller#runner-with-k8s-jobs
containerMode: ${RUNNER_CONTAINER_MODE}
workVolumeClaimTemplate:
accessModes:
- ReadWriteOnce
storageClassName: "${NAME}-runner-work-dir"
resources:
requests:
storage: 10Gi
serviceAccountName: ${RUNNER_SERVICE_ACCOUNT_NAME}
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler

View File

@@ -112,6 +112,7 @@ spec:
labels:
app: ${NAME}
spec:
serviceAccountName: ${RUNNER_SERVICE_ACCOUNT_NAME}
containers:
- name: runner
imagePullPolicy: IfNotPresent
@@ -120,10 +121,14 @@ spec:
value: "${RUNNER_FEATURE_FLAG_EPHEMERAL}"
- name: GOMODCACHE
value: "/home/runner/.cache/go-mod"
- name: ROLLING_UPDATE_PHASE
value: "${ROLLING_UPDATE_PHASE}"
# PV-backed runner work dir
volumeMounts:
- name: work
mountPath: /runner/_work
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# The volume and mount with the same names will be created by workVolumeClaimTemplate and the kubernetes container mode support.
# - name: work
# mountPath: /runner/_work
# Cache docker image layers, in case dockerdWithinRunnerContainer=true
- name: var-lib-docker
mountPath: /var/lib/docker
@@ -150,30 +155,31 @@ spec:
# https://github.com/actions/setup-go/blob/56a61c9834b4a4950dbbf4740af0b8a98c73b768/src/installer.ts#L144
mountPath: "/opt/hostedtoolcache"
# Valid only when dockerdWithinRunnerContainer=false
- name: docker
# PV-backed runner work dir
volumeMounts:
- name: work
mountPath: /runner/_work
# Cache docker image layers, in case dockerdWithinRunnerContainer=false
- name: var-lib-docker
mountPath: /var/lib/docker
# image: mumoshu/actions-runner-dind:dev
# - name: docker
# # PV-backed runner work dir
# volumeMounts:
# - name: work
# mountPath: /runner/_work
# # Cache docker image layers, in case dockerdWithinRunnerContainer=false
# - name: var-lib-docker
# mountPath: /var/lib/docker
# # image: mumoshu/actions-runner-dind:dev
# For buildx cache
- name: cache
mountPath: "/home/runner/.cache"
volumes:
- name: work
ephemeral:
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
storageClassName: "${NAME}-runner-work-dir"
resources:
requests:
storage: 10Gi
# # For buildx cache
# - name: cache
# mountPath: "/home/runner/.cache"
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# volumes:
# - name: work
# ephemeral:
# volumeClaimTemplate:
# spec:
# accessModes:
# - ReadWriteOnce
# storageClassName: "${NAME}-runner-work-dir"
# resources:
# requests:
# storage: 10Gi
volumeClaimTemplates:
- metadata:
name: vol1
@@ -251,3 +257,10 @@ spec:
minReplicas: ${RUNNER_MIN_REPLICAS}
maxReplicas: 10
scaleDownDelaySecondsAfterScaleOut: ${RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT}
# Comment out the whole metrics if you'd like to solely test webhook-based scaling
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75'
scaleDownThreshold: '0.25'
scaleUpFactor: '2'
scaleDownFactor: '0.5'

View File

@@ -1,6 +1,18 @@
# Set actions-runner-controller settings for testing
logLevel: "-4"
imagePullSecrets: []
image:
# This needs to be an empty array rather than a single-item array with empty name.
# Otherwise you end up with the following error on helm-upgrade:
# Error: UPGRADE FAILED: failed to create patch: map: map[] does not contain declared merge key: name && failed to create patch: map: map[] does not contain declared merge key: name
actionsRunnerImagePullSecrets: []
runner:
statusUpdateHook:
enabled: true
rbac:
allowGrantingKubernetesContainerModePermissions: true
githubWebhookServer:
imagePullSecrets: []
logLevel: "-4"
enabled: true
labels: {}

adrs/0000-TEMPLATE.md Normal file
View File

@@ -0,0 +1,18 @@
# Title
<!-- ADR titles should typically be imperative sentences. -->
**Status**: (Proposed|Accepted|Rejected|Superseded|Deprecated)
## Context
*What is the issue or background knowledge necessary for future readers
to understand why this ADR was written?*
## Decision
**What** is the change being proposed? / **How** will it be implemented?
## Consequences
*What becomes easier or more difficult to do because of this change?*

View File

@@ -60,6 +60,9 @@ type HorizontalRunnerAutoscalerSpec struct {
// The earlier a scheduled override is, the higher it is prioritized.
// +optional
ScheduledOverrides []ScheduledOverride `json:"scheduledOverrides,omitempty"`
// +optional
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type ScaleUpTrigger struct {
@@ -130,7 +133,7 @@ type ScaleTargetRef struct {
type MetricSpec struct {
// Type is the type of metric to be used for autoscaling.
// The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
// It can be TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy.
Type string `json:"type,omitempty"`
// RepositoryNames is the list of repository names to be used for calculating the metric.
@@ -170,7 +173,7 @@ type MetricSpec struct {
}
// ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule.
// A schedule can optionally be recurring, so that the correspoding override happens every day, week, month, or year.
// A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
type ScheduledOverride struct {
// StartTime is the time at which the first override starts.
StartTime metav1.Time `json:"startTime"`

View File

@@ -18,8 +18,10 @@ package v1alpha1
import (
"errors"
"fmt"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/validation/field"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -71,6 +73,19 @@ type RunnerConfig struct {
VolumeSizeLimit *resource.Quantity `json:"volumeSizeLimit,omitempty"`
// +optional
VolumeStorageMedium *string `json:"volumeStorageMedium,omitempty"`
// +optional
ContainerMode string `json:"containerMode,omitempty"`
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type GitHubAPICredentialsFrom struct {
SecretRef SecretReference `json:"secretRef,omitempty"`
}
type SecretReference struct {
Name string `json:"name"`
}
// RunnerPodSpec defines the desired pod spec fields of the runner pod
@@ -135,6 +150,9 @@ type RunnerPodSpec struct {
// +optional
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
@@ -152,12 +170,37 @@ type RunnerPodSpec struct {
// +optional
RuntimeClassName *string `json:"runtimeClassName,omitempty"`
// +optional
DnsPolicy corev1.DNSPolicy `json:"dnsPolicy,omitempty"`
// +optional
DnsConfig *corev1.PodDNSConfig `json:"dnsConfig,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
}
func (rs *RunnerSpec) Validate(rootPath *field.Path) field.ErrorList {
var (
errList field.ErrorList
err error
)
err = rs.validateRepository()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("repository"), rs.Repository, err.Error()))
}
err = rs.validateWorkVolumeClaimTemplate()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("workVolumeClaimTemplate"), rs.WorkVolumeClaimTemplate, err.Error()))
}
return errList
}
// validateRepository validates the repository field.
func (rs *RunnerSpec) ValidateRepository() error {
func (rs *RunnerSpec) validateRepository() error {
// Enterprise, Organization and repository are mutually exclusive.
foundCount := 0
if len(rs.Organization) > 0 {
@@ -179,6 +222,18 @@ func (rs *RunnerSpec) ValidateRepository() error {
return nil
}
func (rs *RunnerSpec) validateWorkVolumeClaimTemplate() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.WorkVolumeClaimTemplate == nil {
return errors.New("Spec.ContainerMode: kubernetes must have workVolumeClaimTemplate field specified")
}
return rs.WorkVolumeClaimTemplate.validate()
}
// RunnerStatus defines the observed state of Runner
type RunnerStatus struct {
// Turns true only if the runner pod is ready.
@@ -207,13 +262,60 @@ type RunnerStatusRegistration struct {
ExpiresAt metav1.Time `json:"expiresAt"`
}
type WorkVolumeClaimTemplate struct {
StorageClassName string `json:"storageClassName"`
AccessModes []corev1.PersistentVolumeAccessMode `json:"accessModes"`
Resources corev1.ResourceRequirements `json:"resources"`
}
func (w *WorkVolumeClaimTemplate) validate() error {
if len(w.AccessModes) == 0 {
return errors.New("Access mode should have at least one mode specified")
}
for _, accessMode := range w.AccessModes {
switch accessMode {
case corev1.ReadWriteOnce, corev1.ReadWriteMany:
default:
return fmt.Errorf("Access mode %v is not supported", accessMode)
}
}
return nil
}
func (w *WorkVolumeClaimTemplate) V1Volume() corev1.Volume {
return corev1.Volume{
Name: "work",
VolumeSource: corev1.VolumeSource{
Ephemeral: &corev1.EphemeralVolumeSource{
VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: w.AccessModes,
StorageClassName: &w.StorageClassName,
Resources: w.Resources,
},
},
},
},
}
}
func (w *WorkVolumeClaimTemplate) V1VolumeMount(mountPath string) corev1.VolumeMount {
return corev1.VolumeMount{
MountPath: mountPath,
Name: "work",
}
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.group",name=Group,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// Runner is the Schema for the runners API
@@ -235,11 +337,7 @@ func (r Runner) IsRegisterable() bool {
}
now := metav1.Now()
if r.Status.Registration.ExpiresAt.Before(&now) {
return false
}
return true
return !r.Status.Registration.ExpiresAt.Before(&now)
}
// +kubebuilder:object:root=true
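To make the new spec fields concrete, here is a minimal, hypothetical RunnerDeployment exercising the additions above: containerMode: kubernetes (which validateWorkVolumeClaimTemplate requires to be paired with a workVolumeClaimTemplate), the workVolumeClaimTemplate itself (only ReadWriteOnce/ReadWriteMany access modes pass validation), and the new githubAPICredentialsFrom secret reference. The repository, secret name, and storage class are placeholders, not values taken from this diff.

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-k8s-mode-runner        # hypothetical name
spec:
  replicas: 1
  template:
    spec:
      repository: my-org/my-repo       # placeholder; enterprise/organization/repository are mutually exclusive
      containerMode: kubernetes        # must be paired with workVolumeClaimTemplate
      githubAPICredentialsFrom:
        secretRef:
          name: github-app-credentials # placeholder secret name
      workVolumeClaimTemplate:
        storageClassName: local-path   # placeholder storage class
        accessModes:
          - ReadWriteOnce              # only ReadWriteOnce/ReadWriteMany are accepted
        resources:
          requests:
            storage: 10Gi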

View File

@@ -66,15 +66,7 @@ func (r *Runner) ValidateDelete() error {
// Validate validates resource spec.
func (r *Runner) Validate() error {
var (
errList field.ErrorList
err error
)
err = r.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "repository"), r.Spec.Repository, err.Error()))
}
errList := r.Spec.Validate(field.NewPath("spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -33,7 +33,7 @@ type RunnerDeploymentSpec struct {
// EffectiveTime is the time the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA.
// The value is inherited to RunnerRepicaSet(s) and used to prevent ephemeral runners from unnecessarily recreated.
// The value is inherited to RunnerReplicaSet(s) and used to prevent ephemeral runners from being unnecessarily recreated.
//
// +optional
// +nullable

View File

@@ -66,15 +66,7 @@ func (r *RunnerDeployment) ValidateDelete() error {
// Validate validates resource spec.
func (r *RunnerDeployment) Validate() error {
var (
errList field.ErrorList
err error
)
err = r.Spec.Template.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "template", "spec", "repository"), r.Spec.Template.Spec.Repository, err.Error()))
}
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -66,15 +66,7 @@ func (r *RunnerReplicaSet) ValidateDelete() error {
// Validate validates resource spec.
func (r *RunnerReplicaSet) Validate() error {
var (
errList field.ErrorList
err error
)
err = r.Spec.Template.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "template", "spec", "repository"), r.Spec.Template.Spec.Repository, err.Error()))
}
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -33,6 +33,12 @@ type RunnerSetSpec struct {
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime,omitempty"`
// +optional
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
appsv1.StatefulSetSpec `json:",inline"`
}

View File

@@ -90,6 +90,22 @@ func (in *CheckRunSpec) DeepCopy() *CheckRunSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubAPICredentialsFrom) DeepCopyInto(out *GitHubAPICredentialsFrom) {
*out = *in
out.SecretRef = in.SecretRef
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubAPICredentialsFrom.
func (in *GitHubAPICredentialsFrom) DeepCopy() *GitHubAPICredentialsFrom {
if in == nil {
return nil
}
out := new(GitHubAPICredentialsFrom)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubEventScaleUpTriggerSpec) DeepCopyInto(out *GitHubEventScaleUpTriggerSpec) {
*out = *in
@@ -231,6 +247,11 @@ func (in *HorizontalRunnerAutoscalerSpec) DeepCopyInto(out *HorizontalRunnerAuto
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerSpec.
@@ -425,6 +446,11 @@ func (in *RunnerConfig) DeepCopyInto(out *RunnerConfig) {
*out = new(string)
**out = **in
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerConfig.
@@ -741,6 +767,11 @@ func (in *RunnerPodSpec) DeepCopyInto(out *RunnerPodSpec) {
*out = new(v1.PodDNSConfig)
(*in).DeepCopyInto(*out)
}
if in.WorkVolumeClaimTemplate != nil {
in, out := &in.WorkVolumeClaimTemplate, &out.WorkVolumeClaimTemplate
*out = new(WorkVolumeClaimTemplate)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerPodSpec.
@@ -939,6 +970,11 @@ func (in *RunnerSetSpec) DeepCopyInto(out *RunnerSetSpec) {
in, out := &in.EffectiveTime, &out.EffectiveTime
*out = (*in).DeepCopy()
}
if in.WorkVolumeClaimTemplate != nil {
in, out := &in.WorkVolumeClaimTemplate, &out.WorkVolumeClaimTemplate
*out = new(WorkVolumeClaimTemplate)
(*in).DeepCopyInto(*out)
}
in.StatefulSetSpec.DeepCopyInto(&out.StatefulSetSpec)
}
@@ -1126,6 +1162,42 @@ func (in *ScheduledOverride) DeepCopy() *ScheduledOverride {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretReference) DeepCopyInto(out *SecretReference) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SecretReference.
func (in *SecretReference) DeepCopy() *SecretReference {
if in == nil {
return nil
}
out := new(SecretReference)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkVolumeClaimTemplate) DeepCopyInto(out *WorkVolumeClaimTemplate) {
*out = *in
if in.AccessModes != nil {
in, out := &in.AccessModes, &out.AccessModes
*out = make([]v1.PersistentVolumeAccessMode, len(*in))
copy(*out, *in)
}
in.Resources.DeepCopyInto(&out.Resources)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkVolumeClaimTemplate.
func (in *WorkVolumeClaimTemplate) DeepCopy() *WorkVolumeClaimTemplate {
if in == nil {
return nil
}
out := new(WorkVolumeClaimTemplate)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkflowJobSpec) DeepCopyInto(out *WorkflowJobSpec) {
*out = *in

build/version.go Normal file
View File

@@ -0,0 +1,4 @@
package build
// This is overridden at build-time using go-build ldflags. NA is the fallback value
var Version = "NA"
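Version above is meant to be stamped in at build time via -ldflags. As a rough sketch only (the workflow file, toolchain setup, and exact module path are assumptions, not part of this diff), a GitHub Actions build step could override it like this:

# hypothetical release-workflow step; the module path is assumed to be
# github.com/actions-runner-controller/actions-runner-controller
steps:
  - uses: actions/checkout@v3
  - name: Build manager with a stamped version
    run: |
      VERSION=v0.26.0   # illustrative value, normally derived from the git tag
      go build -ldflags "-X github.com/actions-runner-controller/actions-runner-controller/build.Version=${VERSION}" -o manager .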

View File

@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.19.0
version: 0.21.1
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.24.0
appVersion: 0.26.0
home: https://github.com/actions-runner-controller/actions-runner-controller

View File

@@ -8,104 +8,105 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
> _Default values are the defaults set in the charts `values.yaml`, some properties have default configurations in the code for when the property is omitted or invalid_
| Key | Description | Default |
|----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------|
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of controller pods | 1 |
| `webhookPort` | Set the containerPort for the webhook Pod | 9443 |
| `syncPeriod` | Set the period in which the controler reconciles the desired runners count | 10m |
| `enableLeaderElection` | Enable election configuration | true |
| `leaderElectionId` | Set the election ID for the controller group | |
| `githubEnterpriseServerURL` | Set the URL for a self-hosted GitHub Enterprise Server | |
| `githubURL` | Override GitHub URL to be used for GitHub API calls | |
| `githubUploadURL` | Override GitHub Upload URL to be used for GitHub API calls | |
| `runnerGithubURL` | Override GitHub URL to be used by runners during registration | |
| `logLevel` | Set the log level of the controller container | |
| `additionalVolumes` | Set additional volumes to add to the manager container | |
| `additionalVolumeMounts` | Set additional volume mounts to add to the manager container | |
| `authSecret.create` | Deploy the controller auth secret | false |
| `authSecret.name` | Set the name of the auth secret | controller-manager |
| `authSecret.annotations` | Set annotations for the auth Secret | |
| `authSecret.github_app_id` | The ID of your GitHub App. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_installation_id` | The ID of your GitHub App installation. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_private_key` | The multiline string of your GitHub App's private key. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_token` | Your chosen GitHub PAT token. **This can't be set at the same time as the `authSecret.github_app_*`** | |
| `authSecret.github_basicauth_username` | Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `authSecret.github_basicauth_password` | Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `dockerRegistryMirror` | The default Docker Registry Mirror used by runners. | |
| `hostNetwork` | The "hostNetwork" of the controller container | false |
| `image.repository` | The "repository/image" of the controller container | summerwind/actions-runner-controller |
| `image.tag` | The tag of the controller container | |
| `image.actionsRunnerRepositoryAndTag` | The "repository/image" of the actions runner container | summerwind/actions-runner:latest |
| `image.actionsRunnerImagePullSecrets` | Optional image pull secrets to be included in the runner pod's ImagePullSecrets | |
| `image.dindSidecarRepositoryAndTag` | The "repository/image" of the dind sidecar container | docker:dind |
| `image.pullPolicy` | The pull policy of the controller image | IfNotPresent |
| `metrics.serviceMonitor` | Deploy serviceMonitor kind for for use with prometheus-operator CRDs | false |
| `metrics.serviceAnnotations` | Set annotations for the provisioned metrics service resource | |
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `metrics.proxy.image.repository` | The "repository/image" of the kube-proxy container | quay.io/brancz/kube-rbac-proxy |
| `metrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.10.0 |
| `metrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
| `imagePullSecrets` | Specifies the secret to be used when pulling the controller pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `serviceAccount.annotations` | Set annotations to the service account | |
| `serviceAccount.create` | Deploy the controller pod under a service account | true |
| `podAnnotations` | Set annotations for the controller pod | |
| `podLabels` | Set labels for the controller pod | |
| `serviceAccount.name` | Set the name of the service account | |
| `securityContext` | Set the security context for each container in the controller pod | |
| `podSecurityContext` | Set the security context to controller pod | |
| `service.annotations` | Set annotations for the provisioned webhook service resource | |
| `service.port` | Set controller service ports | |
| `service.type` | Set controller service type | |
| `topologySpreadConstraints` | Set the controller pod topologySpreadConstraints | |
| `nodeSelector` | Set the controller pod nodeSelector | |
| `resources` | Set the controller pod resources | |
| `affinity` | Set the controller pod affinity rules | |
| `podDisruptionBudget.enabled` | Enables a PDB to ensure HA of controller pods | false |
| `podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `tolerations` | Set the controller pod tolerations | |
| `env` | Set environment variables for the controller container | |
| `priorityClassName` | Set the controller pod priorityClassName | |
| `scope.watchNamespace` | Tells the controller and the github webhook server which namespace to watch if `scope.singleNamespace` is true | `Release.Namespace` (the default namespace of the helm chart). |
| `scope.singleNamespace` | Limit the controller to watch a single namespace | false |
| `certManagerEnabled` | Enable cert-manager. If disabled you must set admissionWebHooks.caBundle and create TLS secrets manually | true |
| `admissionWebHooks.caBundle` | Base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate | |
| `githubWebhookServer.logLevel` | Set the log level of the githubWebhookServer container | |
| `githubWebhookServer.replicaCount` | Set the number of webhook server pods | 1 |
| `githubWebhookServer.useRunnerGroupsVisibility` | Enable supporting runner groups with custom visibility. This will incur in extra API calls and may blow up your budget. Currently, you also need to set `githubWebhookServer.secret.enabled` to enable this feature. | false |
| `githubWebhookServer.syncPeriod` | Set the period in which the controller reconciles the resources | 10m |
| `githubWebhookServer.enabled` | Deploy the webhook server pod | false |
| `githubWebhookServer.secret.enabled` | Passes the webhook hook secret to the github-webhook-server | false |
| `githubWebhookServer.secret.create` | Deploy the webhook hook secret | false |
| `githubWebhookServer.secret.name` | Set the name of the webhook hook secret | github-webhook-server |
| `githubWebhookServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `githubWebhookServer.imagePullSecrets` | Specifies the secret to be used when pulling the githubWebhookServer pod containers | |
| `githubWebhookServer.nameOverride` | Override the resource name prefix | |
| `githubWebhookServer.fullnameOverride` | Override the full resource names | |
| `githubWebhookServer.serviceAccount.create` | Deploy the githubWebhookServer under a service account | true |
| `githubWebhookServer.serviceAccount.annotations` | Set annotations for the service account | |
| `githubWebhookServer.serviceAccount.name` | Set the service account name | |
| `githubWebhookServer.podAnnotations` | Set annotations for the githubWebhookServer pod | |
| `githubWebhookServer.podLabels` | Set labels for the githubWebhookServer pod | |
| `githubWebhookServer.podSecurityContext` | Set the security context to githubWebhookServer pod | |
| `githubWebhookServer.securityContext` | Set the security context for each container in the githubWebhookServer pod | |
| `githubWebhookServer.resources` | Set the githubWebhookServer pod resources | |
| `githubWebhookServer.topologySpreadConstraints` | Set the githubWebhookServer pod topologySpreadConstraints | |
| `githubWebhookServer.nodeSelector` | Set the githubWebhookServer pod nodeSelector | |
| `githubWebhookServer.tolerations` | Set the githubWebhookServer pod tolerations | |
| `githubWebhookServer.affinity` | Set the githubWebhookServer pod affinity rules | |
| `githubWebhookServer.priorityClassName` | Set the githubWebhookServer pod priorityClassName | |
| `githubWebhookServer.service.type` | Set githubWebhookServer service type | |
| `githubWebhookServer.service.ports` | Set githubWebhookServer service ports | `[{"port":80, "targetPort:"http", "protocol":"TCP", "name":"http"}]` |
| `githubWebhookServer.ingress.enabled` | Deploy an ingress kind for the githubWebhookServer | false |
| `githubWebhookServer.ingress.annotations` | Set annotations for the ingress kind | |
| `githubWebhookServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `githubWebhookServer.ingress.tls` | Set tls configuration for ingress | |
| `githubWebhookServer.ingress.ingressClassName` | Set ingress class name | |
| `githubWebhookServer.podDisruptionBudget.enabled` | Enables a PDB to ensure HA of githubwebhook pods | false |
| `githubWebhookServer.podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `githubWebhookServer.podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| Key | Description | Default |
|----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of controller pods | 1 |
| `webhookPort` | Set the containerPort for the webhook Pod | 9443 |
| `syncPeriod` | Set the period in which the controller reconciles the desired runners count | 1m |
| `enableLeaderElection` | Enable election configuration | true |
| `leaderElectionId` | Set the election ID for the controller group | |
| `githubEnterpriseServerURL` | Set the URL for a self-hosted GitHub Enterprise Server | |
| `githubURL` | Override GitHub URL to be used for GitHub API calls | |
| `githubUploadURL` | Override GitHub Upload URL to be used for GitHub API calls | |
| `runnerGithubURL` | Override GitHub URL to be used by runners during registration | |
| `logLevel` | Set the log level of the controller container | |
| `additionalVolumes` | Set additional volumes to add to the manager container | |
| `additionalVolumeMounts` | Set additional volume mounts to add to the manager container | |
| `authSecret.create` | Deploy the controller auth secret | false |
| `authSecret.name` | Set the name of the auth secret | controller-manager |
| `authSecret.annotations` | Set annotations for the auth Secret | |
| `authSecret.github_app_id` | The ID of your GitHub App. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_installation_id` | The ID of your GitHub App installation. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_private_key` | The multiline string of your GitHub App's private key. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_token` | Your chosen GitHub PAT token. **This can't be set at the same time as the `authSecret.github_app_*`** | |
| `authSecret.github_basicauth_username` | Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `authSecret.github_basicauth_password` | Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `dockerRegistryMirror` | The default Docker Registry Mirror used by runners. | |
| `hostNetwork` | The "hostNetwork" of the controller container | false |
| `image.repository` | The "repository/image" of the controller container | summerwind/actions-runner-controller |
| `image.tag` | The tag of the controller container | |
| `image.actionsRunnerRepositoryAndTag` | The "repository/image" of the actions runner container | summerwind/actions-runner:latest |
| `image.actionsRunnerImagePullSecrets` | Optional image pull secrets to be included in the runner pod's ImagePullSecrets | |
| `image.dindSidecarRepositoryAndTag` | The "repository/image" of the dind sidecar container | docker:dind |
| `image.pullPolicy` | The pull policy of the controller image | IfNotPresent |
| `metrics.serviceMonitor`                                  | Deploy serviceMonitor kind for use with prometheus-operator CRDs | false |
| `metrics.serviceAnnotations` | Set annotations for the provisioned metrics service resource | |
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `metrics.proxy.image.repository` | The "repository/image" of the kube-proxy container | quay.io/brancz/kube-rbac-proxy |
| `metrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.10.0 |
| `metrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
| `imagePullSecrets` | Specifies the secret to be used when pulling the controller pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `serviceAccount.annotations` | Set annotations to the service account | |
| `serviceAccount.create` | Deploy the controller pod under a service account | true |
| `podAnnotations` | Set annotations for the controller pod | |
| `podLabels` | Set labels for the controller pod | |
| `serviceAccount.name` | Set the name of the service account | |
| `securityContext` | Set the security context for each container in the controller pod | |
| `podSecurityContext` | Set the security context to controller pod | |
| `service.annotations` | Set annotations for the provisioned webhook service resource | |
| `service.port` | Set controller service ports | |
| `service.type` | Set controller service type | |
| `topologySpreadConstraints` | Set the controller pod topologySpreadConstraints | |
| `nodeSelector` | Set the controller pod nodeSelector | |
| `resources` | Set the controller pod resources | |
| `affinity` | Set the controller pod affinity rules | |
| `podDisruptionBudget.enabled` | Enables a PDB to ensure HA of controller pods | false |
| `podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `tolerations` | Set the controller pod tolerations | |
| `env` | Set environment variables for the controller container | |
| `priorityClassName` | Set the controller pod priorityClassName | |
| `scope.watchNamespace` | Tells the controller and the github webhook server which namespace to watch if `scope.singleNamespace` is true | `Release.Namespace` (the default namespace of the helm chart). |
| `scope.singleNamespace` | Limit the controller to watch a single namespace | false |
| `certManagerEnabled` | Enable cert-manager. If disabled you must set admissionWebHooks.caBundle and create TLS secrets manually | true |
| `runner.statusUpdateHook.enabled`                         | Use custom RBAC for runners (role, role binding and service account); this enables reporting runner statuses | false |
| `admissionWebHooks.caBundle` | Base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate | |
| `githubWebhookServer.logLevel` | Set the log level of the githubWebhookServer container | |
| `githubWebhookServer.replicaCount` | Set the number of webhook server pods | 1 |
| `githubWebhookServer.useRunnerGroupsVisibility`           | Enable supporting runner groups with custom visibility. This will incur extra API calls and may blow up your budget. Currently, you also need to set `githubWebhookServer.secret.enabled` to enable this feature. | false |
| `githubWebhookServer.enabled` | Deploy the webhook server pod | false |
| `githubWebhookServer.queueLimit` | Set the queue size limit in the githubWebhookServer | |
| `githubWebhookServer.secret.enabled` | Passes the webhook hook secret to the github-webhook-server | false |
| `githubWebhookServer.secret.create` | Deploy the webhook hook secret | false |
| `githubWebhookServer.secret.name` | Set the name of the webhook hook secret | github-webhook-server |
| `githubWebhookServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `githubWebhookServer.imagePullSecrets` | Specifies the secret to be used when pulling the githubWebhookServer pod containers | |
| `githubWebhookServer.nameOverride` | Override the resource name prefix | |
| `githubWebhookServer.fullnameOverride` | Override the full resource names | |
| `githubWebhookServer.serviceAccount.create` | Deploy the githubWebhookServer under a service account | true |
| `githubWebhookServer.serviceAccount.annotations` | Set annotations for the service account | |
| `githubWebhookServer.serviceAccount.name` | Set the service account name | |
| `githubWebhookServer.podAnnotations` | Set annotations for the githubWebhookServer pod | |
| `githubWebhookServer.podLabels` | Set labels for the githubWebhookServer pod | |
| `githubWebhookServer.podSecurityContext` | Set the security context to githubWebhookServer pod | |
| `githubWebhookServer.securityContext` | Set the security context for each container in the githubWebhookServer pod | |
| `githubWebhookServer.resources` | Set the githubWebhookServer pod resources | |
| `githubWebhookServer.topologySpreadConstraints` | Set the githubWebhookServer pod topologySpreadConstraints | |
| `githubWebhookServer.nodeSelector` | Set the githubWebhookServer pod nodeSelector | |
| `githubWebhookServer.tolerations` | Set the githubWebhookServer pod tolerations | |
| `githubWebhookServer.affinity` | Set the githubWebhookServer pod affinity rules | |
| `githubWebhookServer.priorityClassName` | Set the githubWebhookServer pod priorityClassName | |
| `githubWebhookServer.service.type` | Set githubWebhookServer service type | |
| `githubWebhookServer.service.ports` | Set githubWebhookServer service ports | `[{"port":80, "targetPort:"http", "protocol":"TCP", "name":"http"}]` |
| `githubWebhookServer.ingress.enabled` | Deploy an ingress kind for the githubWebhookServer | false |
| `githubWebhookServer.ingress.annotations` | Set annotations for the ingress kind | |
| `githubWebhookServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `githubWebhookServer.ingress.tls` | Set tls configuration for ingress | |
| `githubWebhookServer.ingress.ingressClassName` | Set ingress class name | |
| `githubWebhookServer.podDisruptionBudget.enabled` | Enables a PDB to ensure HA of githubwebhook pods | false |
| `githubWebhookServer.podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `githubWebhookServer.podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
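
As a hedged illustration (the values chosen here are examples, not chart defaults), the keys newly documented in this table could be combined in a values override along these lines:

# illustrative values.yaml override exercising the newly documented keys
runner:
  statusUpdateHook:
    enabled: true          # enables the runner status-update RBAC and the --runner-status-update-hook flag
rbac:
  allowGrantingKubernetesContainerModePermissions: true   # lets ARC grant the kubernetes container-mode RBAC
githubWebhookServer:
  enabled: true
  queueLimit: 100          # caps the webhook scale-operation queue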

View File

@@ -61,6 +61,16 @@ spec:
type: integer
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer
@@ -92,7 +102,7 @@ spec:
description: ScaleUpThreshold is the percentage of busy runners greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling. The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
description: Type is the type of metric to be used for autoscaling. It can be TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy.
type: string
type: object
type: array
@@ -170,7 +180,7 @@ spec:
scheduledOverrides:
description: ScheduledOverrides is the list of ScheduledOverride. It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. The earlier a scheduled override is, the higher it is prioritized.
items:
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the correspoding override happens every day, week, month, or year.
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
properties:
endTime:
description: EndTime is the time at which the first override ends.

View File

@@ -114,4 +114,4 @@ Create the name of the service account to use
{{- define "actions-runner-controller.pdbName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 59 }}-pdb
{{- end }}
{{- end }}

View File

@@ -8,6 +8,7 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller.serviceMonitorName" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics

View File

@@ -1,5 +1,5 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
labels:

View File

@@ -58,15 +58,15 @@ spec:
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
{{- if .Values.githubAPICacheDuration }}
- "--github-api-cache-duration={{ .Values.githubAPICacheDuration }}"
{{- end }}
{{- if .Values.logLevel }}
- "--log-level={{ .Values.logLevel }}"
{{- end }}
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.runner.statusUpdateHook.enabled }}
- "--runner-status-update-hook"
{{- end }}
command:
- "/manager"
env:
@@ -118,10 +118,14 @@ spec:
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- if kindIs "slice" .Values.env }}
{{- toYaml .Values.env | nindent 8 }}
{{- else }}
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: manager
imagePullPolicy: {{ .Values.image.pullPolicy }}

View File

@@ -39,7 +39,6 @@ spec:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
- "--sync-period={{ .Values.githubWebhookServer.syncPeriod }}"
{{- if .Values.githubWebhookServer.logLevel }}
- "--log-level={{ .Values.githubWebhookServer.logLevel }}"
{{- end }}
@@ -49,6 +48,9 @@ spec:
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.githubWebhookServer.queueLimit }}
- "--queue-limit={{ .Values.githubWebhookServer.queueLimit }}"
{{- end }}
command:
- "/github-webhook-server"
env:

View File

@@ -1,13 +1,7 @@
{{- if .Values.githubWebhookServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-github-webhook-server.fullname" . -}}
{{- $svcPort := (index .Values.githubWebhookServer.service.ports 0).port -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
apiVersion: networking.k8s.io/v1beta1
{{- else if .Capabilities.APIVersions.Has "extensions/v1beta1" }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
@@ -42,19 +36,12 @@ spec:
{{- end }}
{{- range .paths }}
- path: {{ .path }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1" }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1" }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
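
With the legacy API branches removed, the template above always renders the networking.k8s.io/v1 form, so each configured path needs a pathType and the service-style backend is used unconditionally. A hypothetical githubWebhookServer ingress configuration (host and class name are placeholders) might look like:

githubWebhookServer:
  enabled: true
  ingress:
    enabled: true
    ingressClassName: nginx          # placeholder class name
    hosts:
      - host: webhooks.example.com   # placeholder host
        paths:
          - path: /
            pathType: Prefix         # required by networking.k8s.io/v1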

View File

@@ -1,5 +1,5 @@
{{- if .Values.githubWebhookServer.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
labels:

View File

@@ -12,5 +12,17 @@ data:
{{- if .Values.githubWebhookServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.githubWebhookServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_id }}
github_app_id: {{ .Values.githubWebhookServer.secret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_installation_id }}
github_app_installation_id: {{ .Values.githubWebhookServer.secret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_private_key }}
github_app_private_key: {{ .Values.githubWebhookServer.secret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_token }}
github_token: {{ .Values.githubWebhookServer.secret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -8,6 +8,7 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller-github-webhook-server.serviceMonitorName" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics

View File

@@ -250,3 +250,72 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
{{- if .Values.runner.statusUpdateHook.enabled }}
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- get
{{- end }}
{{- if .Values.rbac.allowGrantingKubernetesContainerModePermissions }}
{{/* These permissions are required by ARC to create RBAC resources for the runner pod to use the kubernetes container mode. */}}
{{/* See https://github.com/actions-runner-controller/actions-runner-controller/pull/1268/files#r917331632 */}}
- apiGroups:
- ""
resources:
- pods/exec
verbs:
- create
- get
- apiGroups:
- ""
resources:
- pods/log
verbs:
- get
- list
- watch
- apiGroups:
- "batch"
resources:
- jobs
verbs:
- get
- list
- create
- delete
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
{{- end }}

View File

@@ -1,4 +1,8 @@
{{/*
We will use a self managed CA if one is not provided by cert-manager
*/}}
{{- $ca := genCA "actions-runner-ca" 3650 }}
{{- $cert := genSignedCert (printf "%s.%s.svc" (include "actions-runner-controller.webhookServiceName" .) .Release.Namespace) nil (list (printf "%s.%s.svc" (include "actions-runner-controller.webhookServiceName" .) .Release.Namespace)) 3650 $ca }}
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
@@ -20,6 +24,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -48,6 +54,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -76,6 +84,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -104,6 +114,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -145,6 +157,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -173,6 +187,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -201,6 +217,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -219,3 +237,18 @@ webhooks:
resources:
- runnerreplicasets
sideEffects: None
{{ if not (or .Values.admissionWebHooks.caBundle .Values.certManagerEnabled) }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller.servingCertName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: kubernetes.io/tls
data:
tls.crt: {{ $cert.Cert | b64enc | quote }}
tls.key: {{ $cert.Key | b64enc | quote }}
ca.crt: {{ $ca.Cert | b64enc | quote }}
{{- end }}
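
Tying the fallback together: when admissionWebHooks.caBundle is unset and cert-manager is disabled, the chart now generates its own CA and serving certificate and stores them in the Secret rendered above. A minimal values sketch for that path (illustrative only) is:

# illustrative: rely on the chart-generated self-signed CA instead of cert-manager
certManagerEnabled: false
admissionWebHooks:
  caBundle: ""   # left empty so the genCA/genSignedCert fallback kicks in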

View File

@@ -15,12 +15,6 @@ enableLeaderElection: true
# Must be unique if more than one controller installed onto the same namespace.
#leaderElectionId: "actions-runner-controller"
# DEPRECATED: This has been removed as unnecessary in #1192
# The controller tries its best not to repeat the duplicate GitHub API call
# within this duration.
# Defaults to syncPeriod - 10s.
#githubAPICacheDuration: 30s
# The URL of your GitHub Enterprise server, if you're using one.
#githubEnterpriseServerURL: https://github.example.com
@@ -67,6 +61,18 @@ imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
runner:
statusUpdateHook:
enabled: false
rbac:
{}
# # This allows ARC to dynamically create a ServiceAccount and a Role for each Runner pod that uses "kubernetes" container mode,
# # by extending ARC's manager role to have the same permissions required by the pod runs the runner agent in "kubernetes" container mode.
# # Without this, Kubernetes blocks ARC to create the role to prevent a priviledge escalation.
# # See https://github.com/actions-runner-controller/actions-runner-controller/pull/1268/files#r917327010
# allowGrantingKubernetesContainerModePermissions: true
serviceAccount:
# Specifies whether a service account should be created
create: true
@@ -109,7 +115,7 @@ metrics:
enabled: true
image:
repository: quay.io/brancz/kube-rbac-proxy
tag: v0.12.0
tag: v0.13.1
resources:
{}
@@ -143,10 +149,20 @@ priorityClassName: ""
env:
{}
# specify additional environment variables for the controller pod.
# It's possible to specify either key value pairs e.g.:
# http_proxy: "proxy.com:8080"
# https_proxy: "proxy.com:8080"
# no_proxy: ""
# or a list of complete environment variable definitions e.g.:
# - name: GITHUB_APP_INSTALLATION_ID
# valueFrom:
# secretKeyRef:
# key: some_key_in_the_secret
# name: some-secret-name
# optional: true
## specify additional volumes to mount in the manager container, this can be used
## to specify additional storage of material or to inject files from ConfigMaps
## into the running container
@@ -175,7 +191,6 @@ admissionWebHooks:
githubWebhookServer:
enabled: false
replicaCount: 1
syncPeriod: 10m
useRunnerGroupsVisibility: false
secret:
enabled: false
@@ -183,6 +198,13 @@ githubWebhookServer:
name: "github-webhook-server"
### GitHub Webhook Configuration
github_webhook_secret_token: ""
### GitHub Apps Configuration
## NOTE: IDs MUST be strings, use quotes
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
#github_token: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
@@ -248,3 +270,4 @@ githubWebhookServer:
enabled: false
# minAvailable: 1
# maxUnavailable: 3
# queueLimit: 100

View File

@@ -69,9 +69,8 @@ func main() {
watchNamespace string
enableLeaderElection bool
syncPeriod time.Duration
logLevel string
logLevel string
queueLimit int
ghClient *github.Client
)
@@ -88,10 +87,8 @@ func main() {
flag.StringVar(&webhookAddr, "webhook-addr", ":8000", "The address the metric endpoint binds to.")
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&watchNamespace, "watch-namespace", "", "The namespace to watch for HorizontalRunnerAutoscaler's to scale on Webhook. Set to empty for letting it watch for all namespaces.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set this to a lower value like 10 minutes, because it corresponds to the minimum time to react to demand changes")
flag.StringVar(&logLevel, "log-level", logging.LogLevelDebug, `The verbosity of the logging. Valid values are "debug", "info", "warn", "error". Defaults to "debug".`)
flag.IntVar(&queueLimit, "queue-limit", controllers.DefaultQueueLimit, `The maximum length of the scale operation queue. A scale operation is enqueued for every matching webhook event, and the server returns a 500 HTTP status when the queue is already full at enqueue time.`)
flag.StringVar(&webhookSecretToken, "github-webhook-secret-token", "", "The personal access token of GitHub.")
flag.StringVar(&c.Token, "github-token", c.Token, "The personal access token of GitHub.")
flag.Int64Var(&c.AppID, "github-app-id", c.AppID, "The application ID of GitHub App.")
@@ -142,10 +139,10 @@ func main() {
setupLog.Info("GitHub client is not initialized. Runner groups with custom visibility are not supported. If needed, please provide GitHub authentication. This will incur in extra GitHub API calls")
}
syncPeriod := 10 * time.Minute
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
SyncPeriod: &syncPeriod,
LeaderElection: enableLeaderElection,
Namespace: watchNamespace,
MetricsBindAddress: metricsAddr,
Port: 9443,
@@ -164,6 +161,7 @@ func main() {
SecretKeyBytes: []byte(webhookSecretToken),
Namespace: watchNamespace,
GitHubClient: ghClient,
QueueLimit: queueLimit,
}
if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil {

View File

@@ -61,6 +61,16 @@ spec:
type: integer
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer
@@ -92,7 +102,7 @@ spec:
description: ScaleUpThreshold is the percentage of busy runners greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling. The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
description: Type is the type of metric to be used for autoscaling. It can be TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy.
type: string
type: object
type: array
@@ -170,7 +180,7 @@ spec:
scheduledOverrides:
description: ScheduledOverrides is the list of ScheduledOverride. It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. The earlier a scheduled override is, the higher it is prioritized.
items:
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the correspoding override happens every day, week, month, or year.
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
properties:
endTime:
description: EndTime is the time at which the first override ends.

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -22,8 +22,6 @@ bases:
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus
# [GH_WEBHOOK_SERVER] To enable the GitHub webhook server, uncomment all sections with 'GH_WEBHOOK_SERVER'.
#- ../github-webhook-server
patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
@@ -46,10 +44,6 @@ patchesStrategicMerge:
# 'CERTMANAGER' needs to be enabled to use ca injection
- webhookcainjection_patch.yaml
# [GH_WEBHOOK_SERVER] To enable the GitHub webhook server, uncomment all sections with 'GH_WEBHOOK_SERVER'.
# Protect the GitHub webhook server metrics endpoint by putting it behind auth.
# - gh-webhook-server-auth-proxy-patch.yaml
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.

View File

@@ -2,11 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: controller
newName: summerwind/actions-runner-controller
newTag: latest
- name: controller
newName: summerwind/actions-runner-controller
newTag: latest
resources:
- deployment.yaml
- rbac.yaml
- service.yaml
- deployment.yaml
- rbac.yaml
- service.yaml
patchesStrategicMerge:
- gh-webhook-server-auth-proxy-patch.yaml

View File

@@ -249,3 +249,36 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- get

contrib/README.md Normal file
View File

@@ -0,0 +1,6 @@
The `contrib` directory is the place for sharing various example code for deploying and operating `actions-runner-controller`.
Anything contained in this directory is provided as-is. The maintainers of `actions-runner-controller` are not yet committed to providing
full support for using, fixing, and enhancing it. However, they will do their best to collect feedback from early adopters and advanced users like you, and may eventually consider graduating any of the examples into an official addition to the project.
See https://github.com/actions-runner-controller/actions-runner-controller/pull/1375#issuecomment-1258816470 and https://github.com/actions-runner-controller/actions-runner-controller/pull/1559#issuecomment-1258827496 for more context.

View File

@@ -0,0 +1,25 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Docs
docs/

View File

@@ -0,0 +1,10 @@
apiVersion: v2
name: actions-runner
description: Helm Chart for Github Actions Runner
type: application
version: 0.0.1
appVersion: 2.290.1
home: https://github.com/actions-runner-controller/actions-runner-controller/tree/master/runner
sources:
- https://github.com/actions-runner-controller/actions-runner-controller/tree/master/runner

View File

@@ -0,0 +1,36 @@
## Docs
All additional docs are kept in the `docs/` folder, this README is solely for documenting the values.yaml keys and values
## Values
**_The values are documented as of HEAD; to review the configuration options for your chart version, view this file at the relevant [tag](https://github.com/actions-runner-controller/actions-runner-controller/tags)_**
> _Default values are the defaults set in the charts values.yaml, some properties have default configurations in the code for when the property is omitted or invalid_
| Key | Description | Default |
|----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------|
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of runner pods | 1 |
| `image.repository` | The "repository/image" of the runner container | summerwind/actions-runner |
| `image.tag` | The tag of the runner container | |
| `image.pullPolicy` | The pull policy of the runner image | IfNotPresent |
| `imagePullSecrets` | Specifies the secret to be used when pulling the runner pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `podAnnotations` | Set annotations for the runner pod | |
| `podLabels` | Set labels for the runner pod | |
| `podSecurityContext`                                       | Set the security context for the runner pod                                                                                 |                                                                        |
| `nodeSelector` | Set the pod nodeSelector | |
| `affinity` | Set the runner pod affinity rules | |
| `tolerations` | Set the runner pod tolerations | |
| `env` | Set environment variables for the runner container | |
| `organization`                                             | GitHub organization where the runner will be registered                                                                     | test                                                                   |
| `repository`                                               | GitHub repository where the runner will be registered                                                                       |                                                                        |
| `runnerLabels`                                             | Labels you want to add to your runner                                                                                       | test                                                                   |
| `autoscaler.enabled`                                       | Enable the HorizontalRunnerAutoscaler; if it is enabled, `replicaCount` will not be used                                    | true                                                                   |
| `autoscaler.minReplicas`                                   | Minimum number of replicas                                                                                                  | 1                                                                      |
| `autoscaler.maxReplicas`                                   | Maximum number of replicas                                                                                                  | 5                                                                      |
| `autoscaler.scaleDownDelaySecondsAfterScaleOut` | [Anti-Flapping Configuration](https://github.com/actions-runner-controller/actions-runner-controller#anti-flapping-configuration) | 120 |
| `autoscaler.metrics` | [Pull driven scaling](https://github.com/actions-runner-controller/actions-runner-controller#pull-driven-scaling) | default |
| `autoscaler.scaleUpTriggers` | [Webhook driven scaling](https://github.com/actions-runner-controller/actions-runner-controller#webhook-driven-scaling) | |
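
For illustration only, a minimal values override might look like the following sketch. It only uses keys documented in the table above; the repository name and runner label are placeholders, not values shipped with the chart:

```yaml
# example-values.yaml -- hypothetical override passed to helm via the -f flag
image:
  repository: summerwind/actions-runner
  tag: v2.290.1-ubuntu-20.04
organization: ""                  # unset so the runner registers against a repository instead
repository: your-org/your-repo    # placeholder; replace with a real repository
runnerLabels:
  - example-label                 # placeholder label
autoscaler:
  enabled: true                   # replicaCount is ignored while the autoscaler is enabled
  minReplicas: 1
  maxReplicas: 3
```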

View File

@@ -0,0 +1,54 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "actions-runner.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "actions-runner.labels" -}}
helm.sh/chart: {{ include "actions-runner.chart" . }}
{{ include "actions-runner.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -0,0 +1,96 @@
apiVersion: v1
kind: List
items:
- apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: {{ include "actions-runner.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner.labels" . | nindent 6 }}
spec:
{{- if not .Values.autoscaler.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "actions-runner.selectorLabels" . | nindent 8 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
kubectl.kubernetes.io/default-logs-container: "runner"
{{- toYaml . | nindent 10 }}
{{- end }}
labels:
{{- include "actions-runner.selectorLabels" . | nindent 10 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 10 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 10 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 10 }}
{{- with .Values.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.organization }}
organization: {{ .Values.organization }}
{{- end }}
{{- if .Values.repository }}
repository: {{ .Values.repository }}
{{- end }}
group: {{ .Values.group | default "Default" }}
{{- with .Values.runnerLabels }}
labels:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- if .Values.env }}
env:
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
{{- if .Values.autoscaler.enabled }}
- apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: {{ (list (include "actions-runner.fullname" .) "autoscaler" | join "-") }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner.labels" . | nindent 6 }}
spec:
{{- if .Values.autoscaler.scaleDownDelaySecondsAfterScaleOut }}
scaleDownDelaySecondsAfterScaleOut: {{ .Values.autoscaler.scaleDownDelaySecondsAfterScaleOut }}
{{- end }}
scaleTargetRef:
name: {{ include "actions-runner.fullname" . }}
minReplicas: {{ .Values.autoscaler.minReplicas }}
maxReplicas: {{ .Values.autoscaler.maxReplicas }}
{{- with .Values.autoscaler.scaleUpTriggers }}
scaleUpTriggers:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.autoscaler.metrics }}
metrics:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,63 @@
image:
repository: summerwind/actions-runner
tag: v2.290.1-ubuntu-20.04
pullPolicy: IfNotPresent
# Create a runner for an organization or a repository
# Set only one of the two: either organization or repository
# By default, it creates a runner under the GitHub organization "test"
organization: test
# repository: mumoshu/actions-runner-controller-ci
# Labels you want to add to your runner
runnerLabels:
- test
# If you enable the autoscaler, replicaCount below will not be used
replicaCount: 1
# The Runner Group that the runner(s) should be associated with.
# See https://docs.github.com/en/github-ae@latest/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups.
group: Default
autoscaler:
enabled: true
minReplicas: 1
maxReplicas: 5
scaleDownDelaySecondsAfterScaleOut: 120
# metrics (pull method) / scaleUpTriggers (push method)
# https://github.com/actions-runner-controller/actions-runner-controller#pull-driven-scaling
# https://github.com/actions-runner-controller/actions-runner-controller#webhook-driven-scaling
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75'
scaleDownThreshold: '0.25'
scaleUpFactor: '2'
scaleDownFactor: '0.5'
# scaleUpTriggers:
# - githubEvent: {}
# duration: "5m"
podAnnotations: {}
podLabels: {}
imagePullSecrets: []
podSecurityContext:
{}
# fsGroup: 2000
# Leverage a PriorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# PriorityClass: system-cluster-critical
priorityClassName: ""
nodeSelector: {}
tolerations: []
affinity: {}
env:
{}
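
A brief sketch of the webhook-driven alternative hinted at by the commented-out block above: disable the pull-driven metrics and enable scaleUpTriggers instead (assuming the chart template wiring shown earlier); the duration is illustrative:

```yaml
autoscaler:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  metrics: []          # an empty list disables the pull-driven metrics block in the template
  scaleUpTriggers:
    - githubEvent: {}  # scale up on incoming webhook events
      duration: "5m"   # illustrative; how long each capacity reservation lasts
```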

View File

@@ -0,0 +1,60 @@
### Deploying with an exposed GitHub token
resource "kubernetes_namespace" "arc" {
metadata {
name = "actions-runner-system"
}
}
resource "helm_release" "actions-runner-controller" {
count = var.actions_runner_controller
name = "actions-runner-controller"
namespace = kubernetes_namespace.arc.metadata[0].name
create_namespace = true
chart = "actions-runner-controller"
repository = "https://actions-runner-controller.github.io/actions-runner-controller"
version = "v0.19.1"
values = [<<EOF
authSecret:
github_token: hdjasyd7das7d7asd78as87dasdas
create: true
EOF
]
depends_on = [resource.helm_release.cm]
}
#============================================================================================================================================
### Deploying with a secrets manager such as AWS Secrets Manager
# Make sure the name of the secret matches secret_id
data "aws_secretsmanager_secret_version" "creds" {
secret_id = "github/access_token"
}
locals {
github_creds = jsondecode(
data.aws_secretsmanager_secret_version.creds.secret_string
)
}
resource "kubernetes_namespace" "arc" {
metadata {
name = "actions-runner-system"
}
}
resource "helm_release" "actions-runner-controller" {
count = var.actions_runner_controller
name = "actions-runner-controller"
namespace = kubernetes_namespace.arc.metadata[0].name
create_namespace = true
chart = "actions-runner-controller"
repository = "https://actions-runner-controller.github.io/actions-runner-controller"
version = "v0.19.1"
values = [<<EOF
authSecret:
github_token: ${local.github_creds.github_token}
create: true
EOF
]
depends_on = [resource.helm_release.cm]
}

View File

@@ -0,0 +1,27 @@
# cert-manager must be deployed or included via the deployment process
resource "kubernetes_namespace" "cm" {
metadata {
name = "cert-manager"
}
}
resource "helm_release" "cm" {
count = var.actions_runner_controller
name = "cm"
namespace = kubernetes_namespace.cm.metadata[0].name
create_namespace = true
chart = "cert-manager"
repository = "https://charts.jetstack.io"
version = "v1.8.0"
values = [<<EOF
global:
podSecurityPolicy:
enabled: true
useAppArmor: true
prometheus:
enabled: false
installCRDs: true
EOF
]
}

View File

@@ -9,7 +9,11 @@ import (
"strings"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/google/go-github/v39/github"
prometheus_metrics "github.com/actions-runner-controller/actions-runner-controller/controllers/metrics"
arcgithub "github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/google/go-github/v47/github"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
@@ -19,7 +23,7 @@ const (
defaultScaleDownFactor = 0.7
)
func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(ghc *arcgithub.Client, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
if hra.Spec.MinReplicas == nil {
return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing minReplicas", hra.Namespace, hra.Name)
} else if hra.Spec.MaxReplicas == nil {
@@ -46,9 +50,9 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(st scaleTa
switch primaryMetricType {
case v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns:
suggested, err = r.suggestReplicasByQueuedAndInProgressWorkflowRuns(st, hra, &primaryMetric)
suggested, err = r.suggestReplicasByQueuedAndInProgressWorkflowRuns(ghc, st, hra, &primaryMetric)
case v1alpha1.AutoscalingMetricTypePercentageRunnersBusy:
suggested, err = r.suggestReplicasByPercentageRunnersBusy(st, hra, primaryMetric)
suggested, err = r.suggestReplicasByPercentageRunnersBusy(ghc, st, hra, primaryMetric)
default:
return nil, fmt.Errorf("validating autoscaling metrics: unsupported metric type %q", primaryMetric)
}
@@ -81,11 +85,10 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(st scaleTa
)
}
return r.suggestReplicasByQueuedAndInProgressWorkflowRuns(st, hra, &fallbackMetric)
return r.suggestReplicasByQueuedAndInProgressWorkflowRuns(ghc, st, hra, &fallbackMetric)
}
func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgressWorkflowRuns(st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics *v1alpha1.MetricSpec) (*int, error) {
func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgressWorkflowRuns(ghc *arcgithub.Client, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics *v1alpha1.MetricSpec) (*int, error) {
var repos [][]string
repoID := st.repo
if repoID == "" {
@@ -124,7 +127,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
opt := github.ListWorkflowJobsOptions{ListOptions: github.ListOptions{PerPage: 50}}
var allJobs []*github.WorkflowJob
for {
jobs, resp, err := r.GitHubClient.Actions.ListWorkflowJobs(context.TODO(), user, repoName, runID, &opt)
jobs, resp, err := ghc.Actions.ListWorkflowJobs(context.TODO(), user, repoName, runID, &opt)
if err != nil {
r.Log.Error(err, "Error listing workflow jobs")
return //err
@@ -182,7 +185,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
for _, repo := range repos {
user, repoName := repo[0], repo[1]
workflowRuns, err := r.GitHubClient.ListRepositoryWorkflowRuns(context.TODO(), user, repoName)
workflowRuns, err := ghc.ListRepositoryWorkflowRuns(context.TODO(), user, repoName)
if err != nil {
return nil, err
}
@@ -209,6 +212,20 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
necessaryReplicas := queued + inProgress
prometheus_metrics.SetHorizontalRunnerAutoscalerQueuedAndInProgressWorkflowRuns(
hra.ObjectMeta,
st.enterprise,
st.org,
st.repo,
st.kind,
st.st,
necessaryReplicas,
completed,
inProgress,
queued,
unknown,
)
r.Log.V(1).Info(
fmt.Sprintf("Suggested desired replicas of %d by TotalNumberOfQueuedAndInProgressWorkflowRuns", necessaryReplicas),
"workflow_runs_completed", completed,
@@ -224,7 +241,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
return &necessaryReplicas, nil
}
func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunnersBusy(st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics v1alpha1.MetricSpec) (*int, error) {
func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunnersBusy(ghc *arcgithub.Client, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics v1alpha1.MetricSpec) (*int, error) {
ctx := context.Background()
scaleUpThreshold := defaultScaleUpThreshold
scaleDownThreshold := defaultScaleDownThreshold
@@ -293,7 +310,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunner
)
// ListRunners will return all runners managed by GitHub - not restricted to ns
runners, err := r.GitHubClient.ListRunners(
runners, err := ghc.ListRunners(
ctx,
enterprise,
organization,
@@ -314,22 +331,52 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunner
numRunners int
numRunnersRegistered int
numRunnersBusy int
numTerminatingBusy int
)
numRunners = len(runnerMap)
busyTerminatingRunnerPods := map[string]struct{}{}
kindLabel := LabelKeyRunnerDeploymentName
if hra.Spec.ScaleTargetRef.Kind == "RunnerSet" {
kindLabel = LabelKeyRunnerSetName
}
var runnerPodList corev1.PodList
if err := r.Client.List(ctx, &runnerPodList, client.InNamespace(hra.Namespace), client.MatchingLabels(map[string]string{
kindLabel: hra.Spec.ScaleTargetRef.Name,
})); err != nil {
return nil, err
}
for _, p := range runnerPodList.Items {
if p.Annotations[AnnotationKeyUnregistrationFailureMessage] != "" {
busyTerminatingRunnerPods[p.Name] = struct{}{}
}
}
for _, runner := range runners {
if _, ok := runnerMap[*runner.Name]; ok {
numRunnersRegistered++
if runner.GetBusy() {
numRunnersBusy++
} else if _, ok := busyTerminatingRunnerPods[*runner.Name]; ok {
numTerminatingBusy++
}
delete(busyTerminatingRunnerPods, *runner.Name)
}
}
// Remaining busyTerminatingRunnerPods are runners that were not included in the ListRunners API response yet
for range busyTerminatingRunnerPods {
numTerminatingBusy++
}
var desiredReplicas int
fractionBusy := float64(numRunnersBusy) / float64(desiredReplicasBefore)
fractionBusy := float64(numRunnersBusy+numTerminatingBusy) / float64(desiredReplicasBefore)
if fractionBusy >= scaleUpThreshold {
if scaleUpAdjustment > 0 {
desiredReplicas = desiredReplicasBefore + scaleUpAdjustment
@@ -350,6 +397,19 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunner
//
// - num_runners can be up to twice as large as replicas_desired_before while
// the runnerdeployment controller is replacing the RunnerReplicaSet for a runner update.
prometheus_metrics.SetHorizontalRunnerAutoscalerPercentageRunnersBusy(
hra.ObjectMeta,
st.enterprise,
st.org,
st.repo,
st.kind,
st.st,
desiredReplicas,
numRunners,
numRunnersRegistered,
numRunnersBusy,
numTerminatingBusy,
)
r.Log.V(1).Info(
fmt.Sprintf("Suggested desired replicas of %d by PercentageRunnersBusy", desiredReplicas),
@@ -358,6 +418,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunner
"num_runners", numRunners,
"num_runners_registered", numRunnersRegistered,
"num_runners_busy", numRunnersBusy,
"num_terminating_busy", numTerminatingBusy,
"namespace", hra.Namespace,
"kind", st.kind,
"name", st.st,

View File

@@ -330,7 +330,6 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
h := &HorizontalRunnerAutoscalerReconciler{
Log: log,
GitHubClient: client,
Scheme: scheme,
DefaultScaleDownDelay: DefaultScaleDownDelay,
}
@@ -379,7 +378,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
st := h.scaleTargetFromRD(context.Background(), rd)
got, err := h.computeReplicasWithCache(log, metav1Now.Time, st, hra, minReplicas)
got, err := h.computeReplicasWithCache(client, log, metav1Now.Time, st, hra, minReplicas)
if err != nil {
if tc.err == "" {
t.Fatalf("unexpected error: expected none, got %v", err)
@@ -720,7 +719,6 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
h := &HorizontalRunnerAutoscalerReconciler{
Log: log,
Scheme: scheme,
GitHubClient: client,
DefaultScaleDownDelay: DefaultScaleDownDelay,
}
@@ -781,7 +779,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
st := h.scaleTargetFromRD(context.Background(), rd)
got, err := h.computeReplicasWithCache(log, metav1Now.Time, st, hra, minReplicas)
got, err := h.computeReplicasWithCache(client, log, metav1Now.Time, st, hra, minReplicas)
if err != nil {
if tc.err == "" {
t.Fatalf("unexpected error: expected none, got %v", err)

View File

@@ -4,17 +4,22 @@ import "time"
const (
LabelKeyRunnerSetName = "runnerset-name"
LabelKeyRunner = "actions-runner"
)
const (
// These names require at least one slash to work.
// See https://github.com/google/knative-gcp/issues/378
runnerPodFinalizerName = "actions.summerwind.dev/runner-pod"
runnerPodFinalizerName = "actions.summerwind.dev/runner-pod"
runnerLinkedResourcesFinalizerName = "actions.summerwind.dev/linked-resources"
annotationKeyPrefix = "actions-runner/"
AnnotationKeyLastRegistrationCheckTime = "actions-runner-controller/last-registration-check-time"
// AnnotationKeyUnregistrationFailureMessage is the annotation that is added onto the pod once it failed to be unregistered from GitHub due to e.g. 422 error
AnnotationKeyUnregistrationFailureMessage = annotationKeyPrefix + "unregistration-failure-message"
// AnnotationKeyUnregistrationCompleteTimestamp is the annotation that is added onto the pod once the previously started unregistration process has been completed.
AnnotationKeyUnregistrationCompleteTimestamp = annotationKeyPrefix + "unregistration-complete-timestamp"
@@ -61,4 +66,7 @@ const (
EnvVarRunnerName = "RUNNER_NAME"
EnvVarRunnerToken = "RUNNER_TOKEN"
// defaultRunnerHookPath is the path to the hook script used when "containerMode: kubernetes" is specified
defaultRunnerHookPath = "/runner/k8s/index.js"
)

View File

@@ -0,0 +1,206 @@
package controllers
import (
"context"
"fmt"
"sync"
"time"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/go-logr/logr"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type batchScaler struct {
Ctx context.Context
Client client.Client
Log logr.Logger
interval time.Duration
queue chan *ScaleTarget
workerStart sync.Once
}
func newBatchScaler(ctx context.Context, client client.Client, log logr.Logger) *batchScaler {
return &batchScaler{
Ctx: ctx,
Client: client,
Log: log,
interval: 3 * time.Second,
}
}
type batchScaleOperation struct {
namespacedName types.NamespacedName
scaleOps []scaleOperation
}
type scaleOperation struct {
trigger v1alpha1.ScaleUpTrigger
log logr.Logger
}
// Add the scale target to the unbounded queue, blocking until the target is successfully added to the queue.
// All the targets in the queue are dequeued every 3 seconds, grouped by the HRA, and applied.
// In the happy path, batchScaler updates each HRA only once, even when the HRA had two or more associated webhook events within the 3-second interval,
// which results in fewer K8s API calls and fewer HRA update conflicts in case your ARC installation receives a lot of webhook events.
func (s *batchScaler) Add(st *ScaleTarget) {
if st == nil {
return
}
s.workerStart.Do(func() {
var expBackoff = []time.Duration{time.Second, 2 * time.Second, 4 * time.Second, 8 * time.Second, 16 * time.Second}
s.queue = make(chan *ScaleTarget)
log := s.Log
go func() {
log.Info("Starting batch worker")
defer log.Info("Stopped batch worker")
for {
select {
case <-s.Ctx.Done():
return
default:
}
log.V(2).Info("Batch worker is dequeueing operations")
batches := map[types.NamespacedName]batchScaleOperation{}
after := time.After(s.interval)
var ops uint
batch:
for {
select {
case <-after:
break batch
case st := <-s.queue:
nsName := types.NamespacedName{
Namespace: st.HorizontalRunnerAutoscaler.Namespace,
Name: st.HorizontalRunnerAutoscaler.Name,
}
b, ok := batches[nsName]
if !ok {
b = batchScaleOperation{
namespacedName: nsName,
}
}
b.scaleOps = append(b.scaleOps, scaleOperation{
log: *st.log,
trigger: st.ScaleUpTrigger,
})
batches[nsName] = b
ops++
}
}
log.V(2).Info("Batch worker dequeued operations", "ops", ops, "batches", len(batches))
retry:
for i := 0; ; i++ {
failed := map[types.NamespacedName]batchScaleOperation{}
for nsName, b := range batches {
b := b
if err := s.batchScale(context.Background(), b); err != nil {
log.V(2).Info("Failed to scale due to error", "error", err)
failed[nsName] = b
} else {
log.V(2).Info("Successfully ran batch scale", "hra", b.namespacedName)
}
}
if len(failed) == 0 {
break retry
}
batches = failed
delay := 16 * time.Second
if i < len(expBackoff) {
delay = expBackoff[i]
}
time.Sleep(delay)
}
}
}()
})
s.queue <- st
}
func (s *batchScaler) batchScale(ctx context.Context, batch batchScaleOperation) error {
var hra v1alpha1.HorizontalRunnerAutoscaler
if err := s.Client.Get(ctx, batch.namespacedName, &hra); err != nil {
return err
}
copy := hra.DeepCopy()
copy.Spec.CapacityReservations = getValidCapacityReservations(copy)
var added, completed int
for _, scale := range batch.scaleOps {
amount := 1
if scale.trigger.Amount != 0 {
amount = scale.trigger.Amount
}
scale.log.V(2).Info("Adding capacity reservation", "amount", amount)
if amount > 0 {
now := time.Now()
copy.Spec.CapacityReservations = append(copy.Spec.CapacityReservations, v1alpha1.CapacityReservation{
EffectiveTime: metav1.Time{Time: now},
ExpirationTime: metav1.Time{Time: now.Add(scale.trigger.Duration.Duration)},
Replicas: amount,
})
added += amount
} else if amount < 0 {
var reservations []v1alpha1.CapacityReservation
var found bool
for _, r := range copy.Spec.CapacityReservations {
if !found && r.Replicas+amount == 0 {
found = true
} else {
reservations = append(reservations, r)
}
}
copy.Spec.CapacityReservations = reservations
completed += amount
}
}
before := len(hra.Spec.CapacityReservations)
expired := before - len(copy.Spec.CapacityReservations)
after := len(copy.Spec.CapacityReservations)
s.Log.V(1).Info(
fmt.Sprintf("Updating hra %s for capacityReservations update", hra.Name),
"before", before,
"expired", expired,
"added", added,
"completed", completed,
"after", after,
)
if err := s.Client.Update(ctx, copy); err != nil {
return fmt.Errorf("updating horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
}

View File

@@ -20,17 +20,17 @@ import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"net/http"
"strings"
"sync"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/go-logr/logr"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v47/github"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
@@ -46,6 +46,8 @@ const (
keyPrefixEnterprise = "enterprises/"
keyRunnerGroup = "/group/"
DefaultQueueLimit = 100
)
// HorizontalRunnerAutoscalerGitHubWebhook autoscales a HorizontalRunnerAutoscaler and the RunnerDeployment on each
@@ -68,6 +70,13 @@ type HorizontalRunnerAutoscalerGitHubWebhook struct {
// Set to empty for letting it watch for all namespaces.
Namespace string
Name string
// QueueLimit is the maximum length of the bounded queue of scale targets and their associated operations.
// A scale target is enqueued for each eligible webhook event received, so that it is processed asynchronously.
QueueLimit int
worker *worker
workerInit sync.Once
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Reconcile(_ context.Context, request reconcile.Request) (reconcile.Result, error) {
@@ -122,7 +131,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
return
}
} else {
payload, err = ioutil.ReadAll(r.Body)
payload, err = io.ReadAll(r.Body)
if err != nil {
autoscaler.Log.Error(err, "error reading request body")
@@ -312,9 +321,19 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
return
}
if err := autoscaler.tryScale(context.TODO(), target); err != nil {
log.Error(err, "could not scale up")
autoscaler.workerInit.Do(func() {
batchScaler := newBatchScaler(context.Background(), autoscaler.Client, autoscaler.Log)
queueLimit := autoscaler.QueueLimit
if queueLimit == 0 {
queueLimit = DefaultQueueLimit
}
autoscaler.worker = newWorker(context.Background(), queueLimit, batchScaler.Add)
})
target.log = &log
if ok := autoscaler.worker.Add(target); !ok {
log.Error(err, "Could not scale up due to queue full")
return
}
@@ -383,6 +402,8 @@ func matchTriggerConditionAgainstEvent(types []string, eventAction *string) bool
type ScaleTarget struct {
v1alpha1.HorizontalRunnerAutoscaler
v1alpha1.ScaleUpTrigger
log *logr.Logger
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) searchScaleTargets(hras []v1alpha1.HorizontalRunnerAutoscaler, f func(v1alpha1.ScaleUpTrigger) bool) []ScaleTarget {
@@ -501,6 +522,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleUpTargetWithF
if autoscaler.GitHubClient != nil {
simu := &simulator.Simulator{
Client: autoscaler.GitHubClient,
Log: log,
}
// Get available organization runner groups and enterprise runner groups for a repository
// These are the sum of runner groups with repository access = All repositories and runner groups
@@ -770,63 +792,6 @@ HRA:
return nil, nil
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) tryScale(ctx context.Context, target *ScaleTarget) error {
if target == nil {
return nil
}
copy := target.HorizontalRunnerAutoscaler.DeepCopy()
amount := 1
if target.ScaleUpTrigger.Amount != 0 {
amount = target.ScaleUpTrigger.Amount
}
capacityReservations := getValidCapacityReservations(copy)
if amount > 0 {
now := time.Now()
copy.Spec.CapacityReservations = append(capacityReservations, v1alpha1.CapacityReservation{
EffectiveTime: metav1.Time{Time: now},
ExpirationTime: metav1.Time{Time: now.Add(target.ScaleUpTrigger.Duration.Duration)},
Replicas: amount,
})
} else if amount < 0 {
var reservations []v1alpha1.CapacityReservation
var found bool
for _, r := range capacityReservations {
if !found && r.Replicas+amount == 0 {
found = true
} else {
reservations = append(reservations, r)
}
}
copy.Spec.CapacityReservations = reservations
}
before := len(target.HorizontalRunnerAutoscaler.Spec.CapacityReservations)
expired := before - len(capacityReservations)
after := len(copy.Spec.CapacityReservations)
autoscaler.Log.V(1).Info(
fmt.Sprintf("Patching hra %s for capacityReservations update", target.HorizontalRunnerAutoscaler.Name),
"before", before,
"expired", expired,
"amount", amount,
"after", after,
)
if err := autoscaler.Client.Patch(ctx, copy, client.MergeFrom(&target.HorizontalRunnerAutoscaler)); err != nil {
return fmt.Errorf("patching horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
}
func getValidCapacityReservations(autoscaler *v1alpha1.HorizontalRunnerAutoscaler) []v1alpha1.CapacityReservation {
var capacityReservations []v1alpha1.CapacityReservation

View File

@@ -3,7 +3,7 @@ package controllers
import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/pkg/actionsglob"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v47/github"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchCheckRunEvent(event *github.CheckRunEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {

View File

@@ -2,7 +2,7 @@ package controllers
import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v47/github"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchPullRequestEvent(event *github.PullRequestEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {

View File

@@ -2,7 +2,7 @@ package controllers
import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v47/github"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchPushEvent(event *github.PushEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {

View File

@@ -5,7 +5,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
@@ -15,7 +14,7 @@ import (
actionsv1alpha1 "github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/go-logr/logr"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v47/github"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
@@ -504,7 +503,7 @@ func testServerWithInitObjs(t *testing.T, eventType string, event interface{}, w
hraWebhook := &HorizontalRunnerAutoscalerGitHubWebhook{}
client := fake.NewFakeClientWithScheme(sc, initObjs...)
client := fake.NewClientBuilder().WithScheme(sc).WithRuntimeObjects(initObjs...).Build()
logs := installTestLogger(hraWebhook)
@@ -537,7 +536,7 @@ func testServerWithInitObjs(t *testing.T, eventType string, event interface{}, w
t.Error("status:", resp.StatusCode)
}
respBody, err := ioutil.ReadAll(resp.Body)
respBody, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatal(err)
}
@@ -575,7 +574,7 @@ func sendWebhook(server *httptest.Server, eventType string, event interface{}) (
"X-GitHub-Event": {eventType},
"Content-Type": {"application/json"},
},
Body: ioutil.NopCloser(bytes.NewBuffer(reqBody)),
Body: io.NopCloser(bytes.NewBuffer(reqBody)),
}
return http.DefaultClient.Do(req)
@@ -607,7 +606,7 @@ func (l *testLogSink) Info(_ int, msg string, kvs ...interface{}) {
fmt.Fprintf(l.writer, "\n")
}
func (_ *testLogSink) Enabled(level int) bool {
func (*testLogSink) Enabled(level int) bool {
return true
}

View File

@@ -0,0 +1,55 @@
package controllers
import (
"context"
)
// worker has a non-blocking bounded queue of scale targets; it dequeues scale targets and executes the scale operations one by one.
type worker struct {
scaleTargetQueue chan *ScaleTarget
work func(*ScaleTarget)
done chan struct{}
}
func newWorker(ctx context.Context, queueLimit int, work func(*ScaleTarget)) *worker {
w := &worker{
scaleTargetQueue: make(chan *ScaleTarget, queueLimit),
work: work,
done: make(chan struct{}),
}
go func() {
defer close(w.done)
for {
select {
case <-ctx.Done():
return
case t := <-w.scaleTargetQueue:
work(t)
}
}
}()
return w
}
// Add adds the scale target to the bounded queue, returning true on a successful enqueue and false otherwise.
// When it returns false, the queue is already full, so the enqueue operation must be retried later.
// If the enqueue was triggered by an external source and there's no intermediate queue that we can use,
// you must instruct the source to resend the original request later.
// In case you're building a webhook server around this worker, this means that you must return an HTTP error from the webhook server,
// so that (hopefully) the sender can resend the webhook event later, or at least the human operator can notice or be notified about the
// webhook delivery failure so that a manual retry can be done later.
func (w *worker) Add(st *ScaleTarget) bool {
select {
case w.scaleTargetQueue <- st:
return true
default:
return false
}
}
func (w *worker) Done() chan struct{} {
return w.done
}

View File

@@ -0,0 +1,36 @@
package controllers
import (
"context"
"testing"
"github.com/stretchr/testify/require"
)
func TestWorker_Add(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
w := newWorker(ctx, 2, func(st *ScaleTarget) {})
require.True(t, w.Add(&ScaleTarget{}))
require.True(t, w.Add(&ScaleTarget{}))
require.False(t, w.Add(&ScaleTarget{}))
}
func TestWorker_Work(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var count int
w := newWorker(ctx, 1, func(st *ScaleTarget) {
count++
cancel()
})
require.True(t, w.Add(&ScaleTarget{}))
require.False(t, w.Add(&ScaleTarget{}))
<-w.Done()
require.Equal(t, count, 1)
}

View File

@@ -24,7 +24,6 @@ import (
corev1 "k8s.io/api/core/v1"
"github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/go-logr/logr"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
@@ -38,6 +37,7 @@ import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/controllers/metrics"
arcgithub "github.com/actions-runner-controller/actions-runner-controller/github"
)
const (
@@ -47,11 +47,10 @@ const (
// HorizontalRunnerAutoscalerReconciler reconciles a HorizontalRunnerAutoscaler object
type HorizontalRunnerAutoscalerReconciler struct {
client.Client
GitHubClient *github.Client
GitHubClient *MultiGitHubClient
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
CacheDuration time.Duration
DefaultScaleDownDelay time.Duration
Name string
}
@@ -73,6 +72,8 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(ctx context.Context, re
}
if !hra.ObjectMeta.DeletionTimestamp.IsZero() {
r.GitHubClient.DeinitForHRA(&hra)
return ctrl.Result{}, nil
}
@@ -310,7 +311,12 @@ func (r *HorizontalRunnerAutoscalerReconciler) reconcile(ctx context.Context, re
return ctrl.Result{}, err
}
newDesiredReplicas, err := r.computeReplicasWithCache(log, now, st, hra, minReplicas)
ghc, err := r.GitHubClient.InitForHRA(context.Background(), &hra)
if err != nil {
return ctrl.Result{}, err
}
newDesiredReplicas, err := r.computeReplicasWithCache(ghc, log, now, st, hra, minReplicas)
if err != nil {
r.Recorder.Event(&hra, corev1.EventTypeNormal, "RunnerAutoscalingFailure", err.Error())
@@ -461,10 +467,10 @@ func (r *HorizontalRunnerAutoscalerReconciler) getMinReplicas(log logr.Logger, n
return minReplicas, active, upcoming, nil
}
func (r *HorizontalRunnerAutoscalerReconciler) computeReplicasWithCache(log logr.Logger, now time.Time, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, minReplicas int) (int, error) {
func (r *HorizontalRunnerAutoscalerReconciler) computeReplicasWithCache(ghc *arcgithub.Client, log logr.Logger, now time.Time, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, minReplicas int) (int, error) {
var suggestedReplicas int
v, err := r.suggestDesiredReplicas(st, hra)
v, err := r.suggestDesiredReplicas(ghc, st, hra)
if err != nil {
return 0, err
}

View File

@@ -8,7 +8,7 @@ import (
"time"
github2 "github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v47/github"
"github.com/actions-runner-controller/actions-runner-controller/github/fake"
@@ -99,12 +99,14 @@ func SetupIntegrationTest(ctx2 context.Context) *testEnvironment {
return fmt.Sprintf("%s%s", ns.Name, name)
}
multiClient := NewMultiGitHubClient(mgr.GetClient(), env.ghClient)
runnerController := &RunnerReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: env.ghClient,
GitHubClient: multiClient,
RunnerImage: "example/runner:test",
DockerImage: "example/docker:test",
Name: controllerName("runner"),
@@ -116,12 +118,11 @@ func SetupIntegrationTest(ctx2 context.Context) *testEnvironment {
Expect(err).NotTo(HaveOccurred(), "failed to setup runner controller")
replicasetController := &RunnerReplicaSetReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: env.ghClient,
Name: controllerName("runnerreplicaset"),
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
Name: controllerName("runnerreplicaset"),
}
err = replicasetController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup runnerreplicaset controller")
@@ -137,13 +138,12 @@ func SetupIntegrationTest(ctx2 context.Context) *testEnvironment {
Expect(err).NotTo(HaveOccurred(), "failed to setup runnerdeployment controller")
autoscalerController := &HorizontalRunnerAutoscalerReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
GitHubClient: env.ghClient,
Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
CacheDuration: 1 * time.Second,
Name: controllerName("horizontalrunnerautoscaler"),
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
GitHubClient: multiClient,
Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
Name: controllerName("horizontalrunnerautoscaler"),
}
err = autoscalerController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup autoscaler controller")
@@ -1367,7 +1367,7 @@ func (env *testEnvironment) ExpectRegisteredNumberCountEventuallyEquals(want int
return len(rs)
},
time.Second*5, time.Millisecond*500).Should(Equal(want), optionalDescriptions...)
time.Second*10, time.Millisecond*500).Should(Equal(want), optionalDescriptions...)
}
func (env *testEnvironment) SendOrgPullRequestEvent(org, repo, branch, action string) {

View File

@@ -7,8 +7,13 @@ import (
)
const (
hraName = "horizontalrunnerautoscaler"
hraNamespace = "namespace"
hraName = "horizontalrunnerautoscaler"
hraNamespace = "namespace"
stEnterprise = "enterprise"
stOrganization = "organization"
stRepository = "repository"
stKind = "kind"
stName = "name"
)
var (
@@ -16,6 +21,16 @@ var (
horizontalRunnerAutoscalerMinReplicas,
horizontalRunnerAutoscalerMaxReplicas,
horizontalRunnerAutoscalerDesiredReplicas,
horizontalRunnerAutoscalerReplicasDesired,
horizontalRunnerAutoscalerRunners,
horizontalRunnerAutoscalerRunnersRegistered,
horizontalRunnerAutoscalerRunnersBusy,
horizontalRunnerAutoscalerTerminatingBusy,
horizontalRunnerAutoscalerNecessaryReplicas,
horizontalRunnerAutoscalerWorkflowRunsCompleted,
horizontalRunnerAutoscalerWorkflowRunsInProgress,
horizontalRunnerAutoscalerWorkflowRunsQueued,
horizontalRunnerAutoscalerWorkflowRunsUnknown,
}
)
@@ -41,6 +56,78 @@ var (
},
[]string{hraName, hraNamespace},
)
// PercentageRunnersBusy
horizontalRunnerAutoscalerReplicasDesired = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_replicas_desired",
Help: "replicas_desired of PercentageRunnersBusy",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_runners",
Help: "num_runners of PercentageRunnersBusy",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerRunnersRegistered = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_runners_registered",
Help: "num_runners_registered of PercentageRunnersBusy",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerRunnersBusy = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_runners_busy",
Help: "num_runners_busy of PercentageRunnersBusy",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerTerminatingBusy = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_terminating_busy",
Help: "num_terminating_busy of PercentageRunnersBusy",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
// QueuedAndInProgressWorkflowRuns
horizontalRunnerAutoscalerNecessaryReplicas = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_necessary_replicas",
Help: "necessary_replicas of QueuedAndInProgressWorkflowRuns",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerWorkflowRunsCompleted = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_workflow_runs_completed",
Help: "workflow_runs_completed of QueuedAndInProgressWorkflowRuns",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerWorkflowRunsInProgress = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_workflow_runs_in_progress",
Help: "workflow_runs_in_progress of QueuedAndInProgressWorkflowRuns",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerWorkflowRunsQueued = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_workflow_runs_queued",
Help: "workflow_runs_queued of QueuedAndInProgressWorkflowRuns",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
horizontalRunnerAutoscalerWorkflowRunsUnknown = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "horizontalrunnerautoscaler_workflow_runs_unknown",
Help: "workflow_runs_unknown of QueuedAndInProgressWorkflowRuns",
},
[]string{hraName, hraNamespace, stEnterprise, stOrganization, stRepository, stKind, stName},
)
)
func SetHorizontalRunnerAutoscalerSpec(o metav1.ObjectMeta, spec v1alpha1.HorizontalRunnerAutoscalerSpec) {
@@ -65,3 +152,61 @@ func SetHorizontalRunnerAutoscalerStatus(o metav1.ObjectMeta, status v1alpha1.Ho
horizontalRunnerAutoscalerDesiredReplicas.With(labels).Set(float64(*status.DesiredReplicas))
}
}
func SetHorizontalRunnerAutoscalerPercentageRunnersBusy(
o metav1.ObjectMeta,
enterprise string,
organization string,
repository string,
kind string,
name string,
desiredReplicas int,
numRunners int,
numRunnersRegistered int,
numRunnersBusy int,
numTerminatingBusy int,
) {
labels := prometheus.Labels{
hraName: o.Name,
hraNamespace: o.Namespace,
stEnterprise: enterprise,
stOrganization: organization,
stRepository: repository,
stKind: kind,
stName: name,
}
horizontalRunnerAutoscalerReplicasDesired.With(labels).Set(float64(desiredReplicas))
horizontalRunnerAutoscalerRunners.With(labels).Set(float64(numRunners))
horizontalRunnerAutoscalerRunnersRegistered.With(labels).Set(float64(numRunnersRegistered))
horizontalRunnerAutoscalerRunnersBusy.With(labels).Set(float64(numRunnersBusy))
horizontalRunnerAutoscalerTerminatingBusy.With(labels).Set(float64(numTerminatingBusy))
}
func SetHorizontalRunnerAutoscalerQueuedAndInProgressWorkflowRuns(
o metav1.ObjectMeta,
enterprise string,
organization string,
repository string,
kind string,
name string,
necessaryReplicas int,
workflowRunsCompleted int,
workflowRunsInProgress int,
workflowRunsQueued int,
workflowRunsUnknown int,
) {
labels := prometheus.Labels{
hraName: o.Name,
hraNamespace: o.Namespace,
stEnterprise: enterprise,
stOrganization: organization,
stRepository: repository,
stKind: kind,
stName: name,
}
horizontalRunnerAutoscalerNecessaryReplicas.With(labels).Set(float64(necessaryReplicas))
horizontalRunnerAutoscalerWorkflowRunsCompleted.With(labels).Set(float64(workflowRunsCompleted))
horizontalRunnerAutoscalerWorkflowRunsInProgress.With(labels).Set(float64(workflowRunsInProgress))
horizontalRunnerAutoscalerWorkflowRunsQueued.With(labels).Set(float64(workflowRunsQueued))
horizontalRunnerAutoscalerWorkflowRunsUnknown.With(labels).Set(float64(workflowRunsUnknown))
}

View File

@@ -10,12 +10,6 @@ const (
rsNamespace = "namespace"
)
var (
runnerSetMetrics = []prometheus.Collector{
runnerSetReplicas,
}
)
var (
runnerSetReplicas = prometheus.NewGaugeVec(
prometheus.GaugeOpts{

View File

@@ -0,0 +1,334 @@
package controllers
import (
"context"
"crypto/sha1"
"encoding/hex"
"fmt"
"sort"
"strconv"
"sync"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/github"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
// The API creds secret annotation is added by the runner controller or the runnerset controller according to runner.spec.githubAPICredentialsFrom.secretRef.name,
// so that the runner pod controller can share the same GitHub API credentials and the instance of the GitHub API client with the upstream controllers.
annotationKeyGitHubAPICredsSecret = annotationKeyPrefix + "github-api-creds-secret"
)
type runnerOwnerRef struct {
// kind is either StatefulSet or Runner, and is populated via the owner reference in the runner pod controller or via the reconciliation target's kind in
// runnerset and runner controllers.
kind string
ns, name string
}
type secretRef struct {
ns, name string
}
// savedClient is a cache entry that contains the client for a specific set of credentials,
// like a PAT or a pair of key and cert.
// The `hash` is part of savedClient, not the key, because we keep only the client for the latest creds
// in case the operator updated the k8s secret containing the credentials.
type savedClient struct {
hash string
// refs is the map of all the objects that references this client, used for reference counting to gc
// the client if unneeded.
refs map[runnerOwnerRef]struct{}
*github.Client
}
type resourceReader interface {
Get(ctx context.Context, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error
}
type MultiGitHubClient struct {
mu sync.Mutex
client resourceReader
githubClient *github.Client
// The saved client is freed once all its dependents disappear, or the contents of the secret change.
// We track dependents via a Go map embedded within the savedClient struct. Each dependent is checked on its respective Kubernetes finalizer,
// so that we won't miss any dependent's termination.
// A change in the secret is detected using the hash of its contents.
clients map[secretRef]savedClient
}
func NewMultiGitHubClient(client resourceReader, githubClient *github.Client) *MultiGitHubClient {
return &MultiGitHubClient{
client: client,
githubClient: githubClient,
clients: map[secretRef]savedClient{},
}
}
// Init sets up and returns the *github.Client for the object.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForRunnerPod(ctx context.Context, pod *corev1.Pod) (*github.Client, error) {
// These 3 default values are used only when the user created the pod directly, not via Runner, RunnerReplicaSet, RunnerDeployment, or RunnerSet resources.
ref := refFromRunnerPod(pod)
secretName := pod.Annotations[annotationKeyGitHubAPICredsSecret]
// kind can be any of Pod, Runner, RunnerReplicaSet, RunnerDeployment, or RunnerSet depending on which custom resource the user directly created.
return c.initClientWithSecretName(ctx, pod.Namespace, secretName, ref)
}
// Init sets up and returns the *github.Client for the object.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForRunner(ctx context.Context, r *v1alpha1.Runner) (*github.Client, error) {
var secretName string
if r.Spec.GitHubAPICredentialsFrom != nil {
secretName = r.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
// These 3 default values are used only when the user created the runner resource directly, not via RunnerReplicaSet, RunnerDeployment, or RunnerSet resources.
ref := refFromRunner(r)
if ref.ns != r.Namespace {
return nil, fmt.Errorf("referencing github api creds secret from owner in another namespace is not supported yet")
}
// kind can be any of Runner, RunnerReplicaSet, or RunnerDeployment depending on which custom resource the user directly created.
return c.initClientWithSecretName(ctx, r.Namespace, secretName, ref)
}
// Init sets up and returns the *github.Client for the object.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForRunnerSet(ctx context.Context, rs *v1alpha1.RunnerSet) (*github.Client, error) {
ref := refFromRunnerSet(rs)
var secretName string
if rs.Spec.GitHubAPICredentialsFrom != nil {
secretName = rs.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
return c.initClientWithSecretName(ctx, rs.Namespace, secretName, ref)
}
// Init sets up and returns the *github.Client for the object.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForHRA(ctx context.Context, hra *v1alpha1.HorizontalRunnerAutoscaler) (*github.Client, error) {
ref := refFromHorizontalRunnerAutoscaler(hra)
var secretName string
if hra.Spec.GitHubAPICredentialsFrom != nil {
secretName = hra.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
return c.initClientWithSecretName(ctx, hra.Namespace, secretName, ref)
}
func (c *MultiGitHubClient) DeinitForRunnerPod(p *corev1.Pod) {
secretName := p.Annotations[annotationKeyGitHubAPICredsSecret]
c.derefClient(p.Namespace, secretName, refFromRunnerPod(p))
}
func (c *MultiGitHubClient) DeinitForRunner(r *v1alpha1.Runner) {
var secretName string
if r.Spec.GitHubAPICredentialsFrom != nil {
secretName = r.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
c.derefClient(r.Namespace, secretName, refFromRunner(r))
}
func (c *MultiGitHubClient) DeinitForRunnerSet(rs *v1alpha1.RunnerSet) {
var secretName string
if rs.Spec.GitHubAPICredentialsFrom != nil {
secretName = rs.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
c.derefClient(rs.Namespace, secretName, refFromRunnerSet(rs))
}
func (c *MultiGitHubClient) DeinitForHRA(hra *v1alpha1.HorizontalRunnerAutoscaler) {
var secretName string
if hra.Spec.GitHubAPICredentialsFrom != nil {
secretName = hra.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
c.derefClient(hra.Namespace, secretName, refFromHorizontalRunnerAutoscaler(hra))
}
func (c *MultiGitHubClient) initClientForSecret(secret *corev1.Secret, dependent *runnerOwnerRef) (*savedClient, error) {
secRef := secretRef{
ns: secret.Namespace,
name: secret.Name,
}
cliRef := c.clients[secRef]
var ks []string
for k := range secret.Data {
ks = append(ks, k)
}
sort.SliceStable(ks, func(i, j int) bool { return ks[i] < ks[j] })
hash := sha1.New()
for _, k := range ks {
hash.Write(secret.Data[k])
}
hashStr := hex.EncodeToString(hash.Sum(nil))
if cliRef.hash != hashStr {
delete(c.clients, secRef)
conf, err := secretDataToGitHubClientConfig(secret.Data)
if err != nil {
return nil, err
}
// Fallback to the controller-wide setting if EnterpriseURL is not set and the original client is an enterprise client.
if conf.EnterpriseURL == "" && c.githubClient.IsEnterprise {
conf.EnterpriseURL = c.githubClient.GithubBaseURL
}
cli, err := conf.NewClient()
if err != nil {
return nil, err
}
cliRef = savedClient{
hash: hashStr,
refs: map[runnerOwnerRef]struct{}{},
Client: cli,
}
c.clients[secRef] = cliRef
}
if dependent != nil {
c.clients[secRef].refs[*dependent] = struct{}{}
}
return &cliRef, nil
}
func (c *MultiGitHubClient) initClientWithSecretName(ctx context.Context, ns, secretName string, runRef *runnerOwnerRef) (*github.Client, error) {
c.mu.Lock()
defer c.mu.Unlock()
if secretName == "" {
return c.githubClient, nil
}
secRef := secretRef{
ns: ns,
name: secretName,
}
if _, ok := c.clients[secRef]; !ok {
c.clients[secRef] = savedClient{}
}
var sec corev1.Secret
if err := c.client.Get(ctx, types.NamespacedName{Namespace: ns, Name: secretName}, &sec); err != nil {
return nil, err
}
savedClient, err := c.initClientForSecret(&sec, runRef)
if err != nil {
return nil, err
}
return savedClient.Client, nil
}
func (c *MultiGitHubClient) derefClient(ns, secretName string, dependent *runnerOwnerRef) {
c.mu.Lock()
defer c.mu.Unlock()
secRef := secretRef{
ns: ns,
name: secretName,
}
if dependent != nil {
delete(c.clients[secRef].refs, *dependent)
}
cliRef := c.clients[secRef]
if dependent == nil || len(cliRef.refs) == 0 {
delete(c.clients, secRef)
}
}
func secretDataToGitHubClientConfig(data map[string][]byte) (*github.Config, error) {
var (
conf github.Config
err error
)
conf.URL = string(data["github_url"])
conf.UploadURL = string(data["github_upload_url"])
conf.EnterpriseURL = string(data["github_enterprise_url"])
conf.RunnerGitHubURL = string(data["github_runner_url"])
conf.Token = string(data["github_token"])
appID := string(data["github_app_id"])
conf.AppID, err = strconv.ParseInt(appID, 10, 64)
if err != nil {
return nil, err
}
instID := string(data["github_app_installation_id"])
conf.AppInstallationID, err = strconv.ParseInt(instID, 10, 64)
if err != nil {
return nil, err
}
conf.AppPrivateKey = string(data["github_app_private_key"])
return &conf, nil
}
func refFromRunner(r *v1alpha1.Runner) *runnerOwnerRef {
return &runnerOwnerRef{
kind: r.Kind,
ns: r.Namespace,
name: r.Name,
}
}
func refFromRunnerPod(po *corev1.Pod) *runnerOwnerRef {
return &runnerOwnerRef{
kind: po.Kind,
ns: po.Namespace,
name: po.Name,
}
}
func refFromRunnerSet(rs *v1alpha1.RunnerSet) *runnerOwnerRef {
return &runnerOwnerRef{
kind: rs.Kind,
ns: rs.Namespace,
name: rs.Name,
}
}
func refFromHorizontalRunnerAutoscaler(hra *v1alpha1.HorizontalRunnerAutoscaler) *runnerOwnerRef {
return &runnerOwnerRef{
kind: hra.Kind,
ns: hra.Namespace,
name: hra.Name,
}
}

View File

@@ -10,7 +10,9 @@ import (
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"sigs.k8s.io/controller-runtime/pkg/client"
)
func newWorkGenericEphemeralVolume(t *testing.T, storageReq string) corev1.Volume {
@@ -56,7 +58,7 @@ func TestNewRunnerPod(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"runnerset-name": "runner",
"actions-runner": "",
},
},
Spec: corev1.PodSpec{
@@ -125,6 +127,10 @@ func TestNewRunnerPod(t *testing.T) {
Name: "RUNNER_EPHEMERAL",
Value: "true",
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: "false",
},
{
Name: "DOCKER_HOST",
Value: "tcp://localhost:2376",
@@ -198,7 +204,7 @@ func TestNewRunnerPod(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"runnerset-name": "runner",
"actions-runner": "",
},
},
Spec: corev1.PodSpec{
@@ -255,6 +261,10 @@ func TestNewRunnerPod(t *testing.T) {
Name: "RUNNER_EPHEMERAL",
Value: "true",
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: "false",
},
},
VolumeMounts: []corev1.VolumeMount{
{
@@ -276,7 +286,7 @@ func TestNewRunnerPod(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"runnerset-name": "runner",
"actions-runner": "",
},
},
Spec: corev1.PodSpec{
@@ -333,6 +343,10 @@ func TestNewRunnerPod(t *testing.T) {
Name: "RUNNER_EPHEMERAL",
Value: "true",
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: "false",
},
},
VolumeMounts: []corev1.VolumeMount{
{
@@ -515,7 +529,7 @@ func TestNewRunnerPod(t *testing.T) {
for i := range testcases {
tc := testcases[i]
t.Run(tc.description, func(t *testing.T) {
got, err := newRunnerPod("runner", tc.template, tc.config, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL)
got, err := newRunnerPod(tc.template, tc.config, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL, false)
require.NoError(t, err)
require.Equal(t, tc.want, got)
})
@@ -546,7 +560,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"pod-template-hash": "8857b86c7",
"runnerset-name": "runner",
"actions-runner": "",
},
OwnerReferences: []metav1.OwnerReference{
{
@@ -624,6 +638,10 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Name: "RUNNER_EPHEMERAL",
Value: "true",
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: "false",
},
{
Name: "DOCKER_HOST",
Value: "tcp://localhost:2376",
@@ -703,7 +721,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"pod-template-hash": "8857b86c7",
"runnerset-name": "runner",
"actions-runner": "",
},
OwnerReferences: []metav1.OwnerReference{
{
@@ -769,6 +787,10 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Name: "RUNNER_EPHEMERAL",
Value: "true",
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: "false",
},
{
Name: "RUNNER_NAME",
Value: "runner",
@@ -800,7 +822,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"pod-template-hash": "8857b86c7",
"runnerset-name": "runner",
"actions-runner": "",
},
OwnerReferences: []metav1.OwnerReference{
{
@@ -866,6 +888,10 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Name: "RUNNER_EPHEMERAL",
Value: "true",
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: "false",
},
{
Name: "RUNNER_NAME",
Value: "runner",
@@ -1105,13 +1131,20 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
for i := range testcases {
tc := testcases[i]
rr := &testResourceReader{
objects: map[types.NamespacedName]client.Object{},
}
multiClient := NewMultiGitHubClient(rr, &github.Client{GithubBaseURL: githubBaseURL})
t.Run(tc.description, func(t *testing.T) {
r := &RunnerReconciler{
RunnerImage: defaultRunnerImage,
RunnerImagePullSecrets: defaultRunnerImagePullSecrets,
DockerImage: defaultDockerImage,
DockerRegistryMirror: defaultDockerRegistryMirror,
GitHubClient: &github.Client{GithubBaseURL: githubBaseURL},
GitHubClient: multiClient,
Scheme: scheme,
}
got, err := r.newPod(tc.runner)


@@ -6,7 +6,6 @@ import (
"net/http"
"time"
"github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/go-logr/logr"
"gomodules.xyz/jsonpatch/v2"
admissionv1 "k8s.io/api/admission/v1"
@@ -29,7 +28,7 @@ type PodRunnerTokenInjector struct {
Name string
Log logr.Logger
Recorder record.EventRecorder
GitHubClient *github.Client
GitHubClient *MultiGitHubClient
decoder *admission.Decoder
}
@@ -66,7 +65,12 @@ func (t *PodRunnerTokenInjector) Handle(ctx context.Context, req admission.Reque
return newEmptyResponse()
}
rt, err := t.GitHubClient.GetRegistrationToken(context.Background(), enterprise, org, repo, pod.Name)
ghc, err := t.GitHubClient.InitForRunnerPod(ctx, &pod)
if err != nil {
return admission.Errored(http.StatusInternalServerError, err)
}
rt, err := ghc.GetRegistrationToken(context.Background(), enterprise, org, repo, pod.Name)
if err != nil {
t.Log.Error(err, "Failed to get new registration token")
return admission.Errored(http.StatusInternalServerError, err)


@@ -18,7 +18,10 @@ package controllers
import (
"context"
"errors"
"fmt"
"reflect"
"strconv"
"strings"
"time"
@@ -33,10 +36,10 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/github"
)
const (
@@ -49,6 +52,8 @@ const (
EnvVarOrg = "RUNNER_ORG"
EnvVarRepo = "RUNNER_REPO"
EnvVarGroup = "RUNNER_GROUP"
EnvVarLabels = "RUNNER_LABELS"
EnvVarEnterprise = "RUNNER_ENTERPRISE"
EnvVarEphemeral = "RUNNER_EPHEMERAL"
EnvVarTrue = "true"
@@ -60,7 +65,7 @@ type RunnerReconciler struct {
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *github.Client
GitHubClient *MultiGitHubClient
RunnerImage string
RunnerImagePullSecrets []string
DockerImage string
@@ -68,16 +73,20 @@ type RunnerReconciler struct {
Name string
RegistrationRecheckInterval time.Duration
RegistrationRecheckJitter time.Duration
UnregistrationRetryDelay time.Duration
UseRunnerStatusUpdateHook bool
UnregistrationRetryDelay time.Duration
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch;delete
// +kubebuilder:rbac:groups=core,resources=pods/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
// +kubebuilder:rbac:groups=core,resources=serviceaccounts,verbs=create;delete;get
// +kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=roles,verbs=create;delete;get
// +kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=rolebindings,verbs=create;delete;get
func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("runner", req.NamespacedName)
@@ -112,6 +121,9 @@ func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
// Pod was not found
return r.processRunnerDeletion(runner, ctx, log, nil)
}
r.GitHubClient.DeinitForRunner(&runner)
return r.processRunnerDeletion(runner, ctx, log, &pod)
}
@@ -131,7 +143,7 @@ func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
ready := runnerPodReady(&pod)
if runner.Status.Phase != phase || runner.Status.Ready != ready {
if (runner.Status.Phase != phase || runner.Status.Ready != ready) && !r.UseRunnerStatusUpdateHook || runner.Status.Phase == "" && r.UseRunnerStatusUpdateHook {
if pod.Status.Phase == corev1.PodRunning {
// Seeing this message, you can expect the runner to become `Running` soon.
log.V(1).Info(
@@ -252,6 +264,96 @@ func (r *RunnerReconciler) processRunnerCreation(ctx context.Context, runner v1a
return ctrl.Result{}, err
}
needsServiceAccount := runner.Spec.ServiceAccountName == "" && (r.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes")
if needsServiceAccount {
serviceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
}
if res := r.createObject(ctx, serviceAccount, serviceAccount.ObjectMeta, &runner, log); res != nil {
return *res, nil
}
rules := []rbacv1.PolicyRule{}
if r.UseRunnerStatusUpdateHook {
rules = append(rules, []rbacv1.PolicyRule{
{
APIGroups: []string{"actions.summerwind.dev"},
Resources: []string{"runners/status"},
Verbs: []string{"get", "update", "patch"},
ResourceNames: []string{runner.ObjectMeta.Name},
},
}...)
}
if runner.Spec.ContainerMode == "kubernetes" {
// Permissions based on https://github.com/actions/runner-container-hooks/blob/main/packages/k8s/README.md
rules = append(rules, []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"pods"},
Verbs: []string{"get", "list", "create", "delete"},
},
{
APIGroups: []string{""},
Resources: []string{"pods/exec"},
Verbs: []string{"get", "create"},
},
{
APIGroups: []string{""},
Resources: []string{"pods/log"},
Verbs: []string{"get", "list", "watch"},
},
{
APIGroups: []string{"batch"},
Resources: []string{"jobs"},
Verbs: []string{"get", "list", "create", "delete"},
},
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{"get", "list", "create", "delete"},
},
}...)
}
role := &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
Rules: rules,
}
if res := r.createObject(ctx, role, role.ObjectMeta, &runner, log); res != nil {
return *res, nil
}
roleBinding := &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Kind: "Role",
Name: runner.ObjectMeta.Name,
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
},
}
if res := r.createObject(ctx, roleBinding, roleBinding.ObjectMeta, &runner, log); res != nil {
return *res, nil
}
}
if err := r.Create(ctx, &newPod); err != nil {
if kerrors.IsAlreadyExists(err) {
// Gracefully handle pod-already-exists errors due to informer cache delay.
@@ -274,6 +376,27 @@ func (r *RunnerReconciler) processRunnerCreation(ctx context.Context, runner v1a
return ctrl.Result{}, nil
}
func (r *RunnerReconciler) createObject(ctx context.Context, obj client.Object, meta metav1.ObjectMeta, runner *v1alpha1.Runner, log logr.Logger) *ctrl.Result {
kind := strings.Split(reflect.TypeOf(obj).String(), ".")[1]
if err := ctrl.SetControllerReference(runner, obj, r.Scheme); err != nil {
log.Error(err, fmt.Sprintf("Could not add owner reference to %s %s. %s", kind, meta.Name, err.Error()))
return &ctrl.Result{Requeue: true}
}
if err := r.Create(ctx, obj); err != nil {
if kerrors.IsAlreadyExists(err) {
log.Info(fmt.Sprintf("Failed to create %s %s as it already exists. Reusing existing %s", kind, meta.Name, kind))
r.Recorder.Event(runner, corev1.EventTypeNormal, fmt.Sprintf("%sReused", kind), fmt.Sprintf("Reused %s '%s'", kind, meta.Name))
return nil
}
log.Error(err, fmt.Sprintf("Retrying as failed to create %s %s resource", kind, meta.Name))
return &ctrl.Result{Requeue: true}
}
r.Recorder.Event(runner, corev1.EventTypeNormal, fmt.Sprintf("%sCreated", kind), fmt.Sprintf("Created %s '%s'", kind, meta.Name))
log.Info(fmt.Sprintf("Created %s", kind), "name", meta.Name)
return nil
}
func (r *RunnerReconciler) updateRegistrationToken(ctx context.Context, runner v1alpha1.Runner) (bool, error) {
if runner.IsRegisterable() {
return false, nil
@@ -281,7 +404,12 @@ func (r *RunnerReconciler) updateRegistrationToken(ctx context.Context, runner v
log := r.Log.WithValues("runner", runner.Name)
rt, err := r.GitHubClient.GetRegistrationToken(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
ghc, err := r.GitHubClient.InitForRunner(ctx, &runner)
if err != nil {
return false, err
}
rt, err := ghc.GetRegistrationToken(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
// An error can be a permanent permission issue like the one below:
// POST https://api.github.com/enterprises/YOUR_ENTERPRISE/actions/runners/registration-token: 403 Resource not accessible by integration []
@@ -321,6 +449,11 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
labels[k] = v
}
ghc, err := r.GitHubClient.InitForRunner(context.Background(), &runner)
if err != nil {
return corev1.Pod{}, err
}
// This implies that...
//
// (1) We recreate the runner pod whenever the runner has changes in:
@@ -344,7 +477,7 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
filterLabels(runner.ObjectMeta.Labels, LabelKeyRunnerTemplateHash),
runner.ObjectMeta.Annotations,
runner.Spec,
r.GitHubClient.GithubBaseURL,
ghc.GithubBaseURL,
// Token change should trigger replacement.
// We need to include this explicitly here because
// runner.Spec does not contain the possibly updated token stored in the
@@ -412,7 +545,17 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
template.Spec.SecurityContext = runner.Spec.SecurityContext
template.Spec.EnableServiceLinks = runner.Spec.EnableServiceLinks
pod, err := newRunnerPod(runner.Name, template, runner.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, r.GitHubClient.GithubBaseURL)
if runner.Spec.ContainerMode == "kubernetes" {
workDir := runner.Spec.WorkDir
if workDir == "" {
workDir = "/runner/_work"
}
if err := applyWorkVolumeClaimTemplateToPod(&template, runner.Spec.WorkVolumeClaimTemplate, workDir); err != nil {
return corev1.Pod{}, err
}
}
pod, err := newRunnerPodWithContainerMode(runner.Spec.ContainerMode, template, runner.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, ghc.GithubBaseURL, r.UseRunnerStatusUpdateHook)
if err != nil {
return pod, err
}
@@ -424,6 +567,9 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
// if operator provides a work volume mount, use that
isPresent, _ := workVolumeMountPresent(runnerSpec.VolumeMounts)
if isPresent {
if runnerSpec.ContainerMode == "kubernetes" {
return pod, errors.New("volume mount \"work\" should be specified by workVolumeClaimTemplate in container mode kubernetes")
}
// remove work volume since it will be provided from runnerSpec.Volumes
// if we don't remove it here we would get a duplicate key error, i.e. two volumes named work
_, index := workVolumeMountPresent(pod.Spec.Containers[0].VolumeMounts)
@@ -437,6 +583,9 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
// if operator provides a work volume, use that
isPresent, _ := workVolumePresent(runnerSpec.Volumes)
if isPresent {
if runnerSpec.ContainerMode == "kubernetes" {
return pod, errors.New("volume \"work\" should be specified by workVolumeClaimTemplate in container mode kubernetes")
}
_, index := workVolumePresent(pod.Spec.Volumes)
// remove work volume since it will be provided from runnerSpec.Volumes
@@ -446,6 +595,7 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
pod.Spec.Volumes = append(pod.Spec.Volumes, runnerSpec.Volumes...)
}
if len(runnerSpec.InitContainers) != 0 {
pod.Spec.InitContainers = append(pod.Spec.InitContainers, runnerSpec.InitContainers...)
}
@@ -453,9 +603,13 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
if runnerSpec.NodeSelector != nil {
pod.Spec.NodeSelector = runnerSpec.NodeSelector
}
if runnerSpec.ServiceAccountName != "" {
pod.Spec.ServiceAccountName = runnerSpec.ServiceAccountName
} else if r.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes" {
pod.Spec.ServiceAccountName = runner.ObjectMeta.Name
}
if runnerSpec.AutomountServiceAccountToken != nil {
pod.Spec.AutomountServiceAccountToken = runnerSpec.AutomountServiceAccountToken
}
@@ -476,6 +630,10 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
pod.Spec.Tolerations = runnerSpec.Tolerations
}
if runnerSpec.PriorityClassName != "" {
pod.Spec.PriorityClassName = runnerSpec.PriorityClassName
}
if len(runnerSpec.TopologySpreadConstraints) != 0 {
pod.Spec.TopologySpreadConstraints = runnerSpec.TopologySpreadConstraints
}
@@ -492,6 +650,10 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
pod.Spec.HostAliases = runnerSpec.HostAliases
}
if runnerSpec.DnsPolicy != "" {
pod.Spec.DNSPolicy = runnerSpec.DnsPolicy
}
if runnerSpec.DnsConfig != nil {
pod.Spec.DNSConfig = runnerSpec.DnsConfig
}
@@ -526,7 +688,45 @@ func mutatePod(pod *corev1.Pod, token string) *corev1.Pod {
return updated
}
func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string) (corev1.Pod, error) {
func runnerHookEnvs(pod *corev1.Pod) ([]corev1.EnvVar, error) {
isRequireSameNode, err := isRequireSameNode(pod)
if err != nil {
return nil, err
}
return []corev1.EnvVar{
{
Name: "ACTIONS_RUNNER_CONTAINER_HOOKS",
Value: defaultRunnerHookPath,
},
{
Name: "ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER",
Value: "true",
},
{
Name: "ACTIONS_RUNNER_POD_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.name",
},
},
},
{
Name: "ACTIONS_RUNNER_JOB_NAMESPACE",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.namespace",
},
},
},
{
Name: "ACTIONS_RUNNER_REQUIRE_SAME_NODE",
Value: strconv.FormatBool(isRequireSameNode),
},
}, nil
}
func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string, useRunnerStatusUpdateHook bool) (corev1.Pod, error) {
var (
privileged bool = true
dockerdInRunner bool = runnerSpec.DockerdWithinRunnerContainer != nil && *runnerSpec.DockerdWithinRunnerContainer
@@ -535,11 +735,20 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
dockerdInRunnerPrivileged bool = dockerdInRunner
)
if containerMode == "kubernetes" {
dockerdInRunner = false
dockerEnabled = false
dockerdInRunnerPrivileged = false
}
template = *template.DeepCopy()
// This label selector is used by default when rd.Spec.Selector is empty.
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyRunnerSetName, runnerName)
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyRunner, "")
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyPodMutation, LabelValuePodMutation)
if runnerSpec.GitHubAPICredentialsFrom != nil {
template.ObjectMeta.Annotations = CloneAndAddLabel(template.ObjectMeta.Annotations, annotationKeyGitHubAPICredsSecret, runnerSpec.GitHubAPICredentialsFrom.SecretRef.Name)
}
workDir := runnerSpec.WorkDir
if workDir == "" {
@@ -569,11 +778,11 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
Value: runnerSpec.Enterprise,
},
{
Name: "RUNNER_LABELS",
Name: EnvVarLabels,
Value: strings.Join(runnerSpec.Labels, ","),
},
{
Name: "RUNNER_GROUP",
Name: EnvVarGroup,
Value: runnerSpec.Group,
},
{
@@ -596,6 +805,10 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
Name: EnvVarEphemeral,
Value: fmt.Sprintf("%v", ephemeral),
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: fmt.Sprintf("%v", useRunnerStatusUpdateHook),
},
}
var seLinuxOptions *corev1.SELinuxOptions
@@ -621,6 +834,17 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
}
}
if containerMode == "kubernetes" {
if dockerdContainer != nil {
template.Spec.Containers = append(template.Spec.Containers[:dockerdContainerIndex], template.Spec.Containers[dockerdContainerIndex+1:]...)
}
if dockerdContainerIndex < runnerContainerIndex {
runnerContainerIndex--
}
dockerdContainer = nil
dockerdContainerIndex = -1
}
if runnerContainer == nil {
runnerContainerIndex = -1
runnerContainer = &corev1.Container{
@@ -651,6 +875,13 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
}
runnerContainer.Env = append(runnerContainer.Env, env...)
if containerMode == "kubernetes" {
hookEnvs, err := runnerHookEnvs(&template)
if err != nil {
return corev1.Pod{}, err
}
runnerContainer.Env = append(runnerContainer.Env, hookEnvs...)
}
if runnerContainer.SecurityContext == nil {
runnerContainer.SecurityContext = &corev1.SecurityContext{}
@@ -841,6 +1072,74 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
},
}...)
// This lets dockerd create the container's network interface with the specified MTU.
// In other words, this is for setting com.docker.network.driver.mtu in the docker bridge options.
// You can see the options by running `docker network inspect bridge`, where you will see something like the below when spec.dockerMTU=1400:
//
// "Options": {
// "com.docker.network.bridge.default_bridge": "true",
// "com.docker.network.bridge.enable_icc": "true",
// "com.docker.network.bridge.enable_ip_masquerade": "true",
// "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
// "com.docker.network.bridge.name": "docker0",
// "com.docker.network.driver.mtu": "1400"
// },
//
// See e.g. https://forums.docker.com/t/changing-mtu-value/74114 and https://mlohr.com/docker-mtu/ for more details.
//
// Note though, this doesn't immediately affect docker0's MTU, nor the MTU of the docker network created with docker-create-network:
// You can verify that by running `ip link` within the containers:
//
// # ip link
// 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
// link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
// 2: eth0@if1118: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
// link/ether c2:dd:e6:66:8e:8b brd ff:ff:ff:ff:ff:ff
// 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
// link/ether 02:42:ab:1c:83:69 brd ff:ff:ff:ff:ff:ff
// 4: br-c5bf6c172bd7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
// link/ether 02:42:e2:91:13:1e brd ff:ff:ff:ff:ff:ff
//
// br-c5bf6c172bd7 is the interface that corresponds to the docker network created with docker-create-network.
// We have another ARC feature that lets the docker networks inherit the host's MTU:
// https://github.com/actions-runner-controller/actions-runner-controller/pull/1201
//
// docker0's MTU is updated to the specified MTU once any container is created.
// You can verify that by running a random container from within the runner or dockerd containers:
//
// / # docker run -d busybox sh -c 'sleep 10'
// e848e6acd6404ca0199e4d9c5ef485d88c974ddfb7aaf2359c66811f68cf5e42
//
// You'll now see that veth767f1a5@if7 got created with the MTU inherited from dockerd:
//
// / # ip link
// 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
// link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
// 2: eth0@if1118: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
// link/ether c2:dd:e6:66:8e:8b brd ff:ff:ff:ff:ff:ff
// 3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP
// link/ether 02:42:ab:1c:83:69 brd ff:ff:ff:ff:ff:ff
// 4: br-c5bf6c172bd7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
// link/ether 02:42:e2:91:13:1e brd ff:ff:ff:ff:ff:ff
// 8: veth767f1a5@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1400 qdisc noqueue master docker0 state UP
// link/ether 82:d5:08:28:d8:98 brd ff:ff:ff:ff:ff:ff
//
// # After the 10-second sleep, you can see that the container has stopped and the veth767f1a5@if7 interface got deleted:
//
// / # ip link
// 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
// link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
// 2: eth0@if1118: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
// link/ether c2:dd:e6:66:8e:8b brd ff:ff:ff:ff:ff:ff
// 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
// link/ether 02:42:ab:1c:83:69 brd ff:ff:ff:ff:ff:ff
// 4: br-c5bf6c172bd7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
// link/ether 02:42:e2:91:13:1e brd ff:ff:ff:ff:ff:ff
//
// See https://github.com/moby/moby/issues/26382#issuecomment-246906331 for reference.
//
// Probably we'd better infer DockerMTU from the host's primary interface's MTU and docker0's MTU?
// That's another story; if you want it, please start a thread in GitHub Discussions!
dockerdContainer.Args = append(dockerdContainer.Args,
"--mtu",
fmt.Sprintf("%d", *runnerSpec.DockerMTU),
@@ -875,6 +1174,10 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
return *pod, nil
}
func newRunnerPod(template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string, useRunnerStatusUpdateHookEphemeralRole bool) (corev1.Pod, error) {
return newRunnerPodWithContainerMode("", template, runnerSpec, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL, useRunnerStatusUpdateHookEphemeralRole)
}
func (r *RunnerReconciler) SetupWithManager(mgr ctrl.Manager) error {
name := "runner-controller"
if r.Name != "" {
@@ -937,3 +1240,61 @@ func workVolumeMountPresent(items []corev1.VolumeMount) (bool, int) {
}
return false, 0
}
func applyWorkVolumeClaimTemplateToPod(pod *corev1.Pod, workVolumeClaimTemplate *v1alpha1.WorkVolumeClaimTemplate, workDir string) error {
if workVolumeClaimTemplate == nil {
return errors.New("work volume claim template must be specified in container mode kubernetes")
}
for i := range pod.Spec.Volumes {
if pod.Spec.Volumes[i].Name == "work" {
return fmt.Errorf("Work volume should not be specified in container mode kubernetes. workVolumeClaimTemplate field should be used instead.")
}
}
pod.Spec.Volumes = append(pod.Spec.Volumes, workVolumeClaimTemplate.V1Volume())
var runnerContainer *corev1.Container
for i := range pod.Spec.Containers {
if pod.Spec.Containers[i].Name == "runner" {
runnerContainer = &pod.Spec.Containers[i]
break
}
}
if runnerContainer == nil {
return fmt.Errorf("runner container is not present when applying work volume claim template")
}
if isPresent, _ := workVolumeMountPresent(runnerContainer.VolumeMounts); isPresent {
return fmt.Errorf("volume mount \"work\" should not be present on the runner container in container mode kubernetes")
}
runnerContainer.VolumeMounts = append(runnerContainer.VolumeMounts, workVolumeClaimTemplate.V1VolumeMount(workDir))
return nil
}
// isRequireSameNode specifies, for a runner in kubernetes container mode, whether it should
// schedule jobs on the same node where the runner is
//
// This function should only be called in containerMode: kubernetes
func isRequireSameNode(pod *corev1.Pod) (bool, error) {
isPresent, index := workVolumePresent(pod.Spec.Volumes)
if !isPresent {
return true, errors.New("internal error: work volume mount must exist in containerMode: kubernetes")
}
if pod.Spec.Volumes[index].Ephemeral == nil || pod.Spec.Volumes[index].Ephemeral.VolumeClaimTemplate == nil {
return true, errors.New("containerMode: kubernetes should have pod.Spec.Volumes[].Ephemeral.VolumeClaimTemplate set")
}
for _, accessMode := range pod.Spec.Volumes[index].Ephemeral.VolumeClaimTemplate.Spec.AccessModes {
switch accessMode {
case corev1.ReadWriteOnce:
return true, nil
case corev1.ReadWriteMany:
default:
return true, errors.New("actions-runner-controller supports ReadWriteOnce and ReadWriteMany modes only")
}
}
return false, nil
}
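A minimal sketch, not part of this diff, of how the work volume's access mode drives isRequireSameNode: ReadWriteOnce pins jobs to the runner's node, while ReadWriteMany allows them to be scheduled elsewhere. It assumes it sits in the controllers package as a test; the pod below is illustrative only.

package controllers

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
)

func podWithWorkAccessMode(mode corev1.PersistentVolumeAccessMode) *corev1.Pod {
	return &corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "work",
				VolumeSource: corev1.VolumeSource{
					Ephemeral: &corev1.EphemeralVolumeSource{
						VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
							Spec: corev1.PersistentVolumeClaimSpec{
								AccessModes: []corev1.PersistentVolumeAccessMode{mode},
							},
						},
					},
				},
			}},
		},
	}
}

func TestIsRequireSameNodeSketch(t *testing.T) {
	if same, _ := isRequireSameNode(podWithWorkAccessMode(corev1.ReadWriteOnce)); !same {
		t.Error("ReadWriteOnce should keep jobs on the runner's node")
	}
	if same, _ := isRequireSameNode(podWithWorkAccessMode(corev1.ReadWriteMany)); same {
		t.Error("ReadWriteMany should allow jobs on other nodes")
	}
}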


@@ -9,7 +9,7 @@ import (
"github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/go-logr/logr"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v47/github"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
ctrl "sigs.k8s.io/controller-runtime"
@@ -98,11 +98,27 @@ func ensureRunnerUnregistration(ctx context.Context, retryDelay time.Duration, l
// If it's already unregistered in the previous reconciliation loop,
// you can safely assume that it won't get registered again so it's safe to delete the runner pod.
log.Info("Runner pod is marked as already unregistered.")
} else if runnerID == nil {
} else if runnerID == nil && !runnerPodOrContainerIsStopped(pod) && !podConditionTransitionTimeAfter(pod, corev1.PodReady, registrationTimeout) {
log.Info(
"Unregistration started before runner ID is assigned. " +
"Perhaps the runner pod was terminated by anyone other than ARC? Was it OOM killed? " +
"Unregistration started before runner obtains ID. Waiting for the regisration timeout to elapse, or the runner to obtain ID, or the runner pod to stop",
"registrationTimeout", registrationTimeout,
)
return &ctrl.Result{RequeueAfter: retryDelay}, nil
} else if runnerID == nil && runnerPodOrContainerIsStopped(pod) {
log.Info(
"Unregistration started before runner ID is assigned and the runner stopped before obtaining ID within registration timeout. "+
"Perhaps the runner successfully ran the job and stopped normally before the runner ID becomes visible via GitHub API? "+
"Perhaps the runner pod was terminated by anyone other than ARC? Was it OOM killed? "+
"Marking unregistration as completed anyway because there's nothing ARC can do.",
"registrationTimeout", registrationTimeout,
)
} else if runnerID == nil && podConditionTransitionTimeAfter(pod, corev1.PodReady, registrationTimeout) {
log.Info(
"Unregistration started before runner ID is assigned and the runner was unable to obtain ID within registration timeout. "+
"Perhaps the runner has communication issue, or a firewall egress rule is dropping traffic to GitHub API, or GitHub API is unavailable? "+
"Marking unregistration as completed anyway because there's nothing ARC can do. "+
"This may result in in cancelling the job depending on your terminationGracePeriodSeconds and RUNNER_GRACEFUL_STOP_TIMEOUT settings.",
"registrationTimeout", registrationTimeout,
)
} else if pod != nil && runnerPodOrContainerIsStopped(pod) {
// If it's an ephemeral runner with the actions/runner container exited with 0,
@@ -151,7 +167,10 @@ func ensureRunnerUnregistration(ctx context.Context, retryDelay time.Duration, l
log.V(1).Info("Failed to unregister runner before deleting the pod.", "error", err)
var runnerBusy bool
var (
runnerBusy bool
runnerUnregistrationFailureMessage string
)
errRes := &gogithub.ErrorResponse{}
if errors.As(err, &errRes) {
@@ -173,6 +192,7 @@ func ensureRunnerUnregistration(ctx context.Context, retryDelay time.Duration, l
}
runnerBusy = errRes.Response.StatusCode == 422
runnerUnregistrationFailureMessage = errRes.Message
if runnerBusy && code != nil {
log.V(2).Info("Runner container has already stopped but the unregistration attempt failed. "+
@@ -187,6 +207,11 @@ func ensureRunnerUnregistration(ctx context.Context, retryDelay time.Duration, l
}
if runnerBusy {
_, err := annotatePodOnce(ctx, c, log, pod, AnnotationKeyUnregistrationFailureMessage, runnerUnregistrationFailureMessage)
if err != nil {
return &ctrl.Result{}, err
}
// We want to prevent spamming the deletion attempts, but returning ctrl.Result with RequeueAfter doesn't
// work as the reconciliation can happen earlier due to a pod status update.
// For ephemeral runners, we can expect them to stop and unregister themselves on completion.
@@ -342,21 +367,22 @@ func setRunnerEnv(pod *corev1.Pod, key, value string) {
// Case 1. (true, nil) when it has successfully unregistered the runner.
// Case 2. (false, nil) when (2-1.) the runner has been already unregistered OR (2-2.) the runner will never be created OR (2-3.) the runner is not created yet and it is about to be registered (hence we couldn't see its existence from the GitHub Actions API yet)
// Case 3. (false, err) when it postponed unregistration due to the runner being busy, or it tried to unregister the runner but failed due to
// an error returned by GitHub API.
//
// When the returned value is "Case 2. (false, nil)", the caller must handle the three possible sub-cases appropriately.
// In other words, all those three sub-cases cannot be distinguished by this function alone.
//
// - Case "2-1." can happen when e.g. ARC has successfully unregistered in a previous reconcilation loop or it was an ephemeral runner that finished it's job run(an ephemeral runner is designed to stop after a job run).
// You'd need to maintain the runner state(i.e. if it's already unregistered or not) somewhere,
// so that you can either not call this function at all if the runner state says it's already unregistered, or determine that it's case "2-1." when you got (false, nil).
// - Case "2-1." can happen when e.g. ARC has successfully unregistered in a previous reconcilation loop or it was an ephemeral runner that finished it's job run(an ephemeral runner is designed to stop after a job run).
// You'd need to maintain the runner state(i.e. if it's already unregistered or not) somewhere,
// so that you can either not call this function at all if the runner state says it's already unregistered, or determine that it's case "2-1." when you got (false, nil).
//
// - Case "2-2." can happen when e.g. the runner registration token was somehow broken so that `config.sh` within the runner container was never meant to succeed.
// Waiting and retrying forever on this case is not a solution, because `config.sh` won't succeed with a wrong token hence the runner gets stuck in this state forever.
// There isn't a perfect solution to this, but a practical workaround would be implement a "grace period" in the caller side.
// - Case "2-2." can happen when e.g. the runner registration token was somehow broken so that `config.sh` within the runner container was never meant to succeed.
// Waiting and retrying forever on this case is not a solution, because `config.sh` won't succeed with a wrong token hence the runner gets stuck in this state forever.
// There isn't a perfect solution to this, but a practical workaround would be implement a "grace period" in the caller side.
//
// - Case "2-3." can happen when e.g. ARC recreated an ephemral runner pod in a previous reconcilation loop and then it was requested to delete the runner before the runner comes up.
// If handled inappropriately, this can cause a race condition betweeen a deletion of the runner pod and GitHub scheduling a workflow job onto the runner.
// - Case "2-3." can happen when e.g. ARC recreated an ephemral runner pod in a previous reconcilation loop and then it was requested to delete the runner before the runner comes up.
// If handled inappropriately, this can cause a race condition betweeen a deletion of the runner pod and GitHub scheduling a workflow job onto the runner.
//
// Once successfully detected case "2-1." or "2-2.", you can safely delete the runner pod because you know that the runner won't come back
// unless you recreate the runner pod.
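A minimal sketch, not part of this diff, of the "grace period" idea described above, expressed with the helpers used earlier in this file (runnerPodOrContainerIsStopped and podConditionTransitionTimeAfter); the *int64 type for the runner ID and the helper name registrationGaveUp are assumptions made only for illustration.

package controllers

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// registrationGaveUp reports whether ARC should stop waiting for a runner that never
// obtained an ID: either its pod already stopped, or it has been Ready for longer than
// the registration timeout without showing up on the GitHub API.
func registrationGaveUp(pod *corev1.Pod, runnerID *int64, registrationTimeout time.Duration) bool {
	if runnerID != nil {
		// The runner registered, so cases "2-2." and "2-3." no longer apply.
		return false
	}
	return runnerPodOrContainerIsStopped(pod) ||
		podConditionTransitionTimeAfter(pod, corev1.PodReady, registrationTimeout)
}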


@@ -20,19 +20,21 @@ import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/go-logr/logr"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
corev1 "k8s.io/api/core/v1"
arcv1alpha1 "github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/github"
corev1 "k8s.io/api/core/v1"
)
// RunnerPodReconciler reconciles a Runner object
@@ -41,7 +43,7 @@ type RunnerPodReconciler struct {
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *github.Client
GitHubClient *MultiGitHubClient
Name string
RegistrationRecheckInterval time.Duration
RegistrationRecheckJitter time.Duration
@@ -50,6 +52,7 @@ type RunnerPodReconciler struct {
}
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch
// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
@@ -60,8 +63,11 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, client.IgnoreNotFound(err)
}
_, isRunnerPod := runnerPod.Labels[LabelKeyRunnerSetName]
if !isRunnerPod {
_, isRunnerPod := runnerPod.Labels[LabelKeyRunner]
_, isRunnerSetPod := runnerPod.Labels[LabelKeyRunnerSetName]
_, isRunnerDeploymentPod := runnerPod.Labels[LabelKeyRunnerDeploymentName]
if !isRunnerPod && !isRunnerSetPod && !isRunnerDeploymentPod {
return ctrl.Result{}, nil
}
@@ -77,6 +83,7 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
}
var enterprise, org, repo string
var isContainerMode bool
for _, e := range envvars {
switch e.Name {
@@ -86,13 +93,25 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
org = e.Value
case EnvVarRepo:
repo = e.Value
case "ACTIONS_RUNNER_CONTAINER_HOOKS":
isContainerMode = true
}
}
ghc, err := r.GitHubClient.InitForRunnerPod(ctx, &runnerPod)
if err != nil {
return ctrl.Result{}, err
}
if runnerPod.ObjectMeta.DeletionTimestamp.IsZero() {
finalizers, added := addFinalizer(runnerPod.ObjectMeta.Finalizers, runnerPodFinalizerName)
if added {
var cleanupFinalizersAdded bool
if isContainerMode {
finalizers, cleanupFinalizersAdded = addFinalizer(finalizers, runnerLinkedResourcesFinalizerName)
}
if added || cleanupFinalizersAdded {
newRunner := runnerPod.DeepCopy()
newRunner.ObjectMeta.Finalizers = finalizers
@@ -108,13 +127,49 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
} else {
log.V(2).Info("Seen deletion-timestamp is already set")
// Mark the parent Runner resource for deletion before deleting this runner pod from the cluster.
// Otherwise the runner controller can recreate the runner pod thinking it has not created any runner pod yet.
var (
key = types.NamespacedName{Namespace: runnerPod.Namespace, Name: runnerPod.Name}
runner arcv1alpha1.Runner
)
if err := r.Get(ctx, key, &runner); err == nil {
if runner.Name != "" && runner.DeletionTimestamp == nil {
log.Info("This runner pod seems to have been deleted directly, bypassing the parent Runner resource. Marking the runner for deletion to not let it recreate this pod.")
if err := r.Delete(ctx, &runner); err != nil {
return ctrl.Result{}, err
}
}
}
if finalizers, removed := removeFinalizer(runnerPod.ObjectMeta.Finalizers, runnerLinkedResourcesFinalizerName); removed {
if err := r.cleanupRunnerLinkedPods(ctx, &runnerPod, log); err != nil {
log.Info("Runner-linked pods clean up that has failed due to an error. If this persists, please manually remove the runner-linked pods to unblock ARC", "err", err.Error())
return ctrl.Result{Requeue: true, RequeueAfter: 30 * time.Second}, nil
}
if err := r.cleanupRunnerLinkedSecrets(ctx, &runnerPod, log); err != nil {
log.Info("Runner-linked secrets clean up that has failed due to an error. If this persists, please manually remove the runner-linked secrets to unblock ARC", "err", err.Error())
return ctrl.Result{Requeue: true, RequeueAfter: 30 * time.Second}, nil
}
patchedPod := runnerPod.DeepCopy()
patchedPod.ObjectMeta.Finalizers = finalizers
if err := r.Patch(ctx, patchedPod, client.MergeFrom(&runnerPod)); err != nil {
log.Error(err, "Failed to update runner for finalizer linked resources removal")
return ctrl.Result{}, err
}
// Otherwise the subsequent patch request can revive the removed finalizer and it will trigger an unnecessary reconciliation
runnerPod = *patchedPod
}
finalizers, removed := removeFinalizer(runnerPod.ObjectMeta.Finalizers, runnerPodFinalizerName)
if removed {
// In a standard scenario, the upstream controller, like runnerset-controller, ensures that this runner is gracefully stopped before the deletion timestamp is set.
// But in case the user manually deleted it for whatever reason,
// we have to ensure it gracefully stops now.
updatedPod, res, err := tickRunnerGracefulStop(ctx, r.unregistrationRetryDelay(), log, r.GitHubClient, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
updatedPod, res, err := tickRunnerGracefulStop(ctx, r.unregistrationRetryDelay(), log, ghc, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
if res != nil {
return *res, err
}
@@ -130,6 +185,8 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
log.V(2).Info("Removed finalizer")
r.GitHubClient.DeinitForRunnerPod(updatedPod)
return ctrl.Result{}, nil
}
@@ -168,7 +225,7 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, nil
}
po, res, err := ensureRunnerPodRegistered(ctx, log, r.GitHubClient, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
po, res, err := ensureRunnerPodRegistered(ctx, log, ghc, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
if res != nil {
return *res, err
}
@@ -182,7 +239,7 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
//
// In a standard scenario, ARC starts the unregistration process before marking the pod for deletion at all,
// so that it isn't subject to terminationGracePeriod and can safely take hours to finish its work.
_, res, err := tickRunnerGracefulStop(ctx, r.unregistrationRetryDelay(), log, r.GitHubClient, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
_, res, err := tickRunnerGracefulStop(ctx, r.unregistrationRetryDelay(), log, ghc, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
if res != nil {
return *res, err
}
@@ -222,3 +279,93 @@ func (r *RunnerPodReconciler) SetupWithManager(mgr ctrl.Manager) error {
Named(name).
Complete(r)
}
func (r *RunnerPodReconciler) cleanupRunnerLinkedPods(ctx context.Context, pod *corev1.Pod, log logr.Logger) error {
var runnerLinkedPodList corev1.PodList
if err := r.List(ctx, &runnerLinkedPodList, client.InNamespace(pod.Namespace), client.MatchingLabels(
map[string]string{
"runner-pod": pod.ObjectMeta.Name,
},
)); err != nil {
return fmt.Errorf("failed to list runner-linked pods: %w", err)
}
var (
wg sync.WaitGroup
mu sync.Mutex
errs []error
)
for _, p := range runnerLinkedPodList.Items {
if !p.ObjectMeta.DeletionTimestamp.IsZero() {
continue
}
p := p
wg.Add(1)
go func() {
defer wg.Done()
if err := r.Delete(ctx, &p); err != nil {
if kerrors.IsNotFound(err) || kerrors.IsGone(err) {
return
}
errs = append(errs, fmt.Errorf("delete pod %q error: %v", p.ObjectMeta.Name, err))
}
}()
}
wg.Wait()
if len(errs) > 0 {
for _, err := range errs {
log.Error(err, "failed to remove runner-linked pod")
}
return errors.New("failed to remove some runner linked pods")
}
return nil
}
func (r *RunnerPodReconciler) cleanupRunnerLinkedSecrets(ctx context.Context, pod *corev1.Pod, log logr.Logger) error {
log.V(2).Info("Listing runner-linked secrets to be deleted", "ns", pod.Namespace)
var runnerLinkedSecretList corev1.SecretList
if err := r.List(ctx, &runnerLinkedSecretList, client.InNamespace(pod.Namespace), client.MatchingLabels(
map[string]string{
"runner-pod": pod.ObjectMeta.Name,
},
)); err != nil {
return fmt.Errorf("failed to list runner-linked secrets: %w", err)
}
var (
wg sync.WaitGroup
mu sync.Mutex
errs []error
)
for _, s := range runnerLinkedSecretList.Items {
if !s.ObjectMeta.DeletionTimestamp.IsZero() {
continue
}
s := s
wg.Add(1)
go func() {
defer wg.Done()
if err := r.Delete(ctx, &s); err != nil {
if kerrors.IsNotFound(err) || kerrors.IsGone(err) {
return
}
errs = append(errs, fmt.Errorf("delete secret %q error: %v", s.ObjectMeta.Name, err))
}
}()
}
wg.Wait()
if len(errs) > 0 {
for _, err := range errs {
log.Error(err, "failed to remove runner-linked secret")
}
return errors.New("failed to remove some runner linked secrets")
}
return nil
}
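A minimal sketch, not part of this diff, of the labeling convention the two cleanup helpers above rely on: they select objects in the runner pod's namespace whose "runner-pod" label equals the runner pod's name. The helper and the secret below are purely illustrative; whether a given hook actually creates such objects is outside this diff.

package controllers

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleRunnerLinkedSecret builds a secret that the cleanup above would find and delete
// together with the runner pod named runnerPodName.
func exampleRunnerLinkedSecret(runnerPodName, namespace string) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      runnerPodName + "-linked", // hypothetical name
			Namespace: namespace,
			Labels:    map[string]string{"runner-pod": runnerPodName},
		},
	}
}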


@@ -315,7 +315,7 @@ func syncRunnerPodsOwners(ctx context.Context, c client.Client, log logr.Logger,
numOwners := len(owners)
var hashes []string
for h, _ := range state.podsForOwners {
for h := range state.podsForOwners {
hashes = append(hashes, h)
}


@@ -165,6 +165,8 @@ func (r *RunnerDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Req
return ctrl.Result{}, err
}
log.V(1).Info("Updated runnerreplicaset due to selector change")
// At this point, we are already sure that there's no need to create a new replicaset
// as the runner template hash has not changed.
//
@@ -179,7 +181,17 @@ func (r *RunnerDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Req
newDesiredReplicas := getIntOrDefault(desiredRS.Spec.Replicas, defaultReplicas)
// Please add more conditions under which we can in-place update the newest runnerreplicaset without disruption
if currentDesiredReplicas != newDesiredReplicas {
//
// If we missed taking the EffectiveTime diff into account, you might end up experiencing scale-ups being delayed by scale-downs.
// See https://github.com/actions-runner-controller/actions-runner-controller/pull/1477#issuecomment-1164154496
var et1, et2 time.Time
if newestSet.Spec.EffectiveTime != nil {
et1 = newestSet.Spec.EffectiveTime.Time
}
if rd.Spec.EffectiveTime != nil {
et2 = rd.Spec.EffectiveTime.Time
}
if currentDesiredReplicas != newDesiredReplicas || et1 != et2 {
newestSet.Spec.Replicas = &newDesiredReplicas
newestSet.Spec.EffectiveTime = rd.Spec.EffectiveTime
@@ -189,6 +201,13 @@ func (r *RunnerDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Req
return ctrl.Result{}, err
}
log.V(1).Info("Updated runnerreplicaset due to spec change",
"currentDesiredReplicas", currentDesiredReplicas,
"newDesiredReplicas", newDesiredReplicas,
"currentEffectiveTime", newestSet.Spec.EffectiveTime,
"newEffectiveTime", rd.Spec.EffectiveTime,
)
return ctrl.Result{}, err
}


@@ -32,17 +32,15 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/github"
)
// RunnerReplicaSetReconciler reconciles a Runner object
type RunnerReplicaSetReconciler struct {
client.Client
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *github.Client
Name string
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
Name string
}
const (


@@ -52,15 +52,13 @@ func SetupTest(ctx2 context.Context) *corev1.Namespace {
runnersList = fake.NewRunnersList()
server = runnersList.GetServer()
ghClient := newGithubClient(server)
controller := &RunnerReplicaSetReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: ghClient,
Name: "runnerreplicaset-" + ns.Name,
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
Name: "runnerreplicaset-" + ns.Name,
}
err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")

Some files were not shown because too many files have changed in this diff.