Compare commits

...

168 Commits

Author SHA1 Message Date
Yusuke Kuoka
3e0bc3f7be Fix docker.sock permission error for non-dind Ubuntu 20.04 runners since v0.27.2 (#2499)
#2490 has been happening since v0.27.2 for non-dind runners based on Ubuntu 20.04 runner images. It does not affect Ubuntu 22.04 non-dind runners (i.e. runners with dockerd sidecars) or Ubuntu 20.04/22.04 dind runners (i.e. runners without dockerd sidecars). However, presuming many folks are still using Ubuntu 20.04 and non-dind runners, we should fix it.

This change tries to fix it by defaulting to the docker group id 1001 used by Ubuntu 20.04 runners, and using gid 121 for Ubuntu 22.04 runners. We use the image tag to detect which Ubuntu version the runner is based on. The algorithm is simple: we assume the runner is Ubuntu 22.04-based if the image tag contains "22.04".

This might be a breaking change for folks who have already upgraded to Ubuntu 22.04 runners using their own custom runner images. Note again: we rely on the image tag to detect Ubuntu 22.04 runner images and pick the proper docker gid, so folks using our official Ubuntu 22.04 runner images are not affected. Since it is a breaking change anyway, I have added a remedy:

ARC got a new flag, `--docker-gid`, which defaults to `1001` but can be set to `121` or whatever gid the operator/admin likes. Set it to `--docker-gid=121`, for example, if you are using your own custom runner image based on Ubuntu 22.04 whose image tag does not contain "22.04".
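As a hedged sketch of where the flag lands (the flag name comes from this change; the surrounding Deployment fields are generic Kubernetes boilerplate, not copied from ARC's manifests), the controller binary would receive it as a container argument:

    spec:
      containers:
        - name: manager
          args:
            # Assumed scenario for illustration: a custom runner image that is
            # Ubuntu 22.04-based but whose tag does not contain "22.04".
            - "--docker-gid=121"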

Fixes #2490
2023-04-17 21:30:41 +09:00
Nikola Jokic
ba1ac0990b Reordering methods and constants so they are easier to look up (#2501) 2023-04-12 09:50:23 +02:00
Nikola Jokic
76fe43e8e0 Update limit manager role permissions ADR (#2500)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-11 16:25:43 +02:00
Nikola Jokic
8869ad28bb Fix e2e tests infinite looping when waiting for resources (#2496)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-10 21:03:02 +02:00
Nikola Jokic
b86af190f7 Extend manager roles to accept ephemeralrunnerset/finalizers (#2493) 2023-04-10 08:49:32 +02:00
Bassem Dghaidi
1a491cbfe5 Fix the publish chart workflow (#2489)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-04-06 08:01:48 -04:00
Yusuke Kuoka
087f20fd5d Fix chart publishing workflow (#2487) 2023-04-05 12:20:12 -04:00
Hidetake Iwata
a880114e57 chart: Bump version to 0.23.1 (#2483) 2023-04-05 22:39:29 +09:00
Nikola Jokic
e80bc21fa5 gha-runner-scale-set 0.4.0 release (#2467)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-04-05 08:56:27 -04:00
Tingluo Huang
56754094ea Remove deprecated method. (#2481) 2023-04-04 15:15:11 -04:00
Tingluo Huang
8fa4520376 Treat .ghe.com domain as hosted environment (#2480)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-04-04 14:43:45 -04:00
Nikola Jokic
a804bf8b00 Add ImagePullPolicy to the AutoscalingListener, configurable through Manager env (#2477) 2023-04-04 19:07:20 +02:00
Nikola Jokic
5dea6db412 Fix helm uninstall cleanup by adding finalizers and cleaning them from the controller (#2433)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-03 21:06:12 +02:00
Bassem Dghaidi
2a0b770a63 Add troubleshooting advice (#2456) 2023-04-03 07:01:15 -04:00
Stewart Thomson
a7ef871248 Check if appID and instID are non-empty before attempting to parseInt (#2463) 2023-04-03 09:06:59 +09:00
Tingluo Huang
e45e4c53f1 Add E2E test to assert self-signed CA support. (#2458) 2023-03-31 10:31:25 -04:00
Yusuke Kuoka
a608abd124 actions-metrics: Do our best not to fail the whole event processing on no API creds (#2459) 2023-03-31 20:42:25 +09:00
Bassem Dghaidi
02d9add322 Fix bug preventing env variables from being specified (#2450)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-30 09:40:28 -04:00
Yusuke Kuoka
f5ac134787 Fix chart publishing workflow to not throw away releases between the latest and 0.21.0 (#2453)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-30 05:46:29 -04:00
Yusuke Kuoka
42abad5def chart: Bump version to 0.23.0 (#2449) 2023-03-30 10:10:18 +09:00
Milas Bowman
514b7da742 Install Docker Compose v2 as a Docker CLI plugin (#2326)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-29 10:40:10 +09:00
Francesco Renzi
c8e3bb5ec3 Remove containerMode from values (#2442) 2023-03-28 10:16:38 +01:00
Milas Bowman
878c9b8b49 runner: Use Docker socket via shared emptyDir instead of TCP/mTLS (#2324)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 11:29:16 +09:00
Jonathan Wiemers
4536707af6 chart: Allow webhook server env to be set individually (#2377)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 11:18:07 +09:00
Waldek Herka
13802c5a6d chart: Restricting the RBAC rules on secrets (#2265)
Co-authored-by: Waldek Herka <wherka-ama@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 08:43:33 +09:00
cskinfill
362fa5d52e crd: Add enterprise, organization, repository, and runner labels to runnerdeployments print columns (#2310)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 08:43:01 +09:00
Zane Hala
65184f1ed8 chart: Allow customization of admission webhook timeout (#2398)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 08:42:20 +09:00
Bassem Dghaidi
c23e31123c Housekeeping: move adrs/ to docs/ and update status (#2443)
Co-authored-by: Francesco Renzi <rentziass@github.com>
2023-03-27 10:38:27 -04:00
Nikola Jokic
56e1c62ac2 Add labels to autoscaling runner set subresources to allow easier inspection (#2391)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-27 11:19:34 +02:00
Bassem Dghaidi
64cedff2b4 Delete e2e-test-dispatch-workflow.yaml (#2441) 2023-03-24 07:11:57 -04:00
Bassem Dghaidi
37f93b794e Enhance quickstart troubleshooting guidelines (#2435) 2023-03-23 11:40:58 -04:00
Francesco Renzi
dc833e57a0 Add new workflows (#2423) 2023-03-23 14:39:37 +00:00
Tingluo Huang
5228aded87 Update e2e workflow (#2430) 2023-03-21 14:11:47 -04:00
Bassem Dghaidi
f49d08e4bc Update 2022-12-05-adding-labels-k8s-resources.md (#2420) 2023-03-17 06:39:56 -04:00
Tingluo Huang
064039afc0 Ignore extra dind container when containerMode.type=dind. (#2418) 2023-03-17 09:26:51 +01:00
Nikola Jokic
e5d8d65396 Introduce ADR change for adding labels to our resources (#2407)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-16 11:02:42 -04:00
Bassem Dghaidi
c465ace8fb Update the values.yaml sample for improved clarity (#2416) 2023-03-16 11:02:18 -04:00
Tingluo Huang
34f3878829 Fix helm chart rendering errors. (#2414) 2023-03-16 09:21:43 -04:00
Tingluo Huang
44c3931d8e Adding e2e workflows to test dind, kube mode and proxy (#2412) 2023-03-15 12:17:11 -04:00
Tingluo Huang
08acb1b831 Get RunnerScaleSet based on both RunnerGroupId and Name. (#2413) 2023-03-15 11:10:09 -04:00
Tingluo Huang
40811ebe0e Support watching a single namespace in the controller. (#2374) 2023-03-14 10:52:25 -04:00
github-actions[bot]
3417c5a3a8 Update runner to version 2.303.0 (#2411)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-03-14 15:41:03 +01:00
Bassem Dghaidi
172faa883c Fix GITHUB_TOKEN permissions (#2410) 2023-03-14 10:38:04 -04:00
Tingluo Huang
9e6c7d019f Delay role/rolebinding creation to gha-runner-scale-set installation time (#2363) 2023-03-14 09:45:44 -04:00
Bassem Dghaidi
9fbcafa703 Fix canary image tag name (#2409) 2023-03-14 09:29:10 -04:00
Tingluo Huang
2bf83d0d7f Remove list/watch secrets permission from the manager cluster role. (#2276) 2023-03-14 09:23:14 -04:00
Bassem Dghaidi
19d30dea5f Add docker buildx pre-requisites (#2408) 2023-03-14 09:22:38 -04:00
Bassem Dghaidi
6c66c1633f Prevent releases on wrong tag name (#2406) 2023-03-14 09:13:25 -04:00
Bassem Dghaidi
e55708588b Add gha-runner-scale-set-controller canary build (#2405) 2023-03-14 09:12:53 -04:00
Tingluo Huang
261d4371b5 Update E2E test workflow. (#2395) 2023-03-14 09:00:07 -04:00
Tingluo Huang
bd9f32e354 Create separate chart validation workflow for gha-* charts. (#2393)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-13 12:44:54 -04:00
Nikola Jokic
babbfc77d5 Surface EphemeralRunnerSet stats to AutoscalingRunnerSet (#2382) 2023-03-13 16:16:28 +01:00
Bassem Dghaidi
322df79617 Delete renovate.json5 (#2397) 2023-03-13 08:39:07 -04:00
Bassem Dghaidi
1c7c6639ed Fix wrong file name in the workflow (#2394) 2023-03-13 06:56:21 -04:00
Hamish Forbes
bcaac39a2e feat(actionsmetrics): Add owner and workflow_name labels to workflow job metrics (#2225) 2023-03-13 10:50:36 +09:00
Milas Bowman
af625dd1cb Upgrade to Docker Engine v20.10.23 (#2328)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-13 10:29:40 +09:00
Bassem Dghaidi
44969659df Add upgrade steps (#2392)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-10 12:14:00 -05:00
Nikola Jokic
a5f98dea75 Refactor main.go and introduce make run-scaleset to be able to run manager locally (#2337) 2023-03-10 18:05:51 +01:00
Francesco Renzi
1d24d3b00d Prepare 0.3.0 release (#2388)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-10 10:28:07 -05:00
Ava Stancu
9994d3aa60 Replaced nonexistent variable with the correct one for tag (#2390) 2023-03-10 16:57:35 +02:00
Bassem Dghaidi
a2ea12e93c Fix test's quotes issue (#2389)
Co-authored-by: Francesco Renzi <rentziass@gmail.com>
2023-03-10 09:22:19 -05:00
Tingluo Huang
d7b589bed5 Helm chart: react to changes for the new runner image. (#2348) 2023-03-10 11:18:21 +00:00
Ava Stancu
4f293c6f79 Build local image and load to kind cluster (#2378) 2023-03-10 13:16:07 +02:00
Francesco Renzi
c569304271 Add support for self-signed CA certificates (#2268)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-09 17:23:32 +00:00
Tingluo Huang
068f987238 Update permission ADR based on prototype. (#2383)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-09 12:18:53 -05:00
Tingluo Huang
a462ecbe79 Trim slash for configure URL. (#2381) 2023-03-09 09:02:05 -05:00
Nikola Jokic
c5d6842d5f Update gomega with new ginkgo version (#2373) 2023-03-07 12:05:25 +01:00
dependabot[bot]
947bc8ab5b chore(deps): bump github.com/onsi/ginkgo/v2 from 2.7.0 to 2.9.0 (#2369)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:27:54 +01:00
dependabot[bot]
9d5c6e85c5 chore(deps): bump k8s.io/client-go from 0.26.1 to 0.26.2 (#2370)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:27:23 +01:00
dependabot[bot]
2420a40c02 chore(deps): bump golang.org/x/net from 0.7.0 to 0.8.0 (#2368)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:24:20 +01:00
dependabot[bot]
b3e7c723d2 chore(deps): bump github.com/golang-jwt/jwt/v4 from 4.4.1 to 4.5.0 (#2367)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:23:37 +01:00
dependabot[bot]
2e36db52c3 chore(deps): bump github.com/gruntwork-io/terratest from 0.41.9 to 0.41.11 (#2335)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 07:39:31 +09:00
dependabot[bot]
5d41609bea chore(deps): bump github.com/teambition/rrule-go from 1.8.0 to 1.8.2 (#2230)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 07:38:53 +09:00
Francesco Renzi
e289fe43d4 Apply proxy settings from environment in listener (#2366)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-06 19:21:22 +00:00
Piotr Palka
91fddca3f7 Fix webhook server logging (#2320)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-06 14:20:46 -05:00
Tingluo Huang
befe4cee0a ADR for limiting cluster role permissions on Secrets. (#2275) 2023-03-03 13:05:51 -05:00
Yusuke Kuoka
548acdf05c Correct and simplify a sentence in the scheduled overrides doc (#2323) 2023-03-03 09:18:07 -05:00
Chris Patterson
41f2ca3ed9 Adding parameter to configure the runner set name. (#2279)
Co-authored-by: TingluoHuang <TingluoHuang@github.com>
2023-03-03 08:36:14 -05:00
Bassem Dghaidi
00996ec799 Upgrading & pinning action versions (#2346) 2023-03-03 06:00:18 -05:00
Ava Stancu
893833fdd5 Added e2e workflow trigger on master push and on PRs (#2356) 2023-03-03 05:55:02 -05:00
github-actions[bot]
7f3eef8761 Update runner to version 2.302.1 (#2294)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-03-03 05:43:03 -05:00
Francesco Renzi
40c905f25d Simplify the setup of controller tests (#2352) 2023-03-02 18:55:49 +00:00
Nikola Jokic
2984de912c Split listener pod label to avoid long names issue (#2341) 2023-03-02 17:25:50 +01:00
dependabot[bot]
1df06a69d7 bump golang.org/x/net from 0.5.0 to 0.7.0 (#2299)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 10:41:18 +01:00
Nikola Jokic
be47190d4c Chart naming validation on AutoscalingRunnerSet install (#2347)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
Co-authored-by: Bassem Dghaidi <Link-@github.com>
2023-03-02 10:35:55 +01:00
Tingluo Huang
e8d8c6f357 Make CT test install charts in the right order. (#2350) 2023-03-02 03:16:40 -05:00
Ava Stancu
0c091f59b6 Matrix jobs workflow path update (#2349) 2023-03-02 00:10:34 +02:00
Bassem Dghaidi
a4751b74e0 Update trigger events for validate-chart (#2342) 2023-03-01 10:55:08 -05:00
Bassem Dghaidi
adad3d5530 Rename actions-runner-controller-2 and auto-scaling-runner-set helm charts (#2333)
Co-authored-by: Ava S <avastancu@github.com>
2023-03-01 07:16:03 -05:00
Ava Stancu
70156e3fea Added space before backslash on the multi line command (#2340) 2023-03-01 11:43:17 +02:00
Alex Williams
69abd51f30 Ensure that EffectiveTime is updated on webhook scale down (#2258)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 08:27:37 +09:00
dhawalseth
73e35b1dc6 chart: Create actionsmetrics.secrets.yaml (#2208)
Co-authored-by: Dhawal Seth <dseth@linkedin.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 08:19:58 +09:00
dependabot[bot]
c4178d5633 chore(deps): bump github.com/stretchr/testify from 1.8.0 to 1.8.2 (#2336)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-01 07:21:24 +09:00
dependabot[bot]
edf924106b chore(deps): bump sigs.k8s.io/controller-runtime from 0.14.1 to 0.14.4 (#2261)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 07:19:47 +09:00
Milas Bowman
34ebbf74d1 Upgrade Docker Compose to v2.16.0 (#2327) 2023-03-01 07:18:13 +09:00
Ava Stancu
a9af82ec78 Change e2e config url (#2338) 2023-02-28 14:26:01 -05:00
Ava Stancu
b5e9e14244 Added org for getting the workflow token job as it errored without (#2334) 2023-02-27 23:30:40 +02:00
Ava Stancu
910269aa11 Avastancu/arc e2e test linux vm (#2285) 2023-02-27 16:36:15 +02:00
Yusuke Kuoka
149cf47c83 Fix actions-metrics-server segfault issue (#2325) 2023-02-27 07:34:29 +09:00
Kirill Bilchenko
ec3afef00d Add repository name and full name to Prometheus labels in actions metrics (#2218)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-25 16:02:22 +09:00
Dimitar
7d0918b6d5 Allow custom graceful termination and loadBalancerSourceRanges for the githubwebhook service (#2305)
Co-authored-by: Dimitar Hristov <dimitar.hristov@skyscanner.net>
2023-02-25 14:18:29 +09:00
João Carlos Ferra de Almeida
678eafcd67 [Docs] Fix typo (#2314) 2023-02-24 07:19:51 -05:00
Bassem Dghaidi
b6515fe25c Add release change log to quickstart guide (#2315) 2023-02-23 06:20:39 -05:00
Tingluo Huang
1c7b7f467d Bump arc-2 chart version and prepare 0.2.0 release (#2313) 2023-02-23 08:40:21 +00:00
Francesco Renzi
73e22a1756 Disable metrics serving in proxy tests (#2307) 2023-02-22 16:57:59 +00:00
ggreenwood
9b44f0051c Documentation corrections (#2116)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-21 13:40:23 -05:00
Francesco Renzi
6b4250ca90 Add support for proxy (#2286)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
Co-authored-by: Ferenc Hammerl <fhammerl@github.com>
2023-02-21 17:33:48 +00:00
Nathan Klick
ced88228fc Resolves the erroneous webhook scale down due to check runs (#2119)
Signed-off-by: Nathan Klick <nathan@swirldslabs.com>
2023-02-21 10:56:46 +09:00
Andrei Vydrin
44c06c21ce fix: case-insensitive webhook label matching (#2302)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-21 09:37:42 +09:00
Tingluo Huang
4103fe35df Use DOCKER_IMAGE_NAME instead of NAME to avoid conflict. (#2303) 2023-02-20 18:27:14 -05:00
Yusuke Kuoka
a44fe04bef Fix manager CrashLoopBackOff for ARC deployments without scaleset-related controllers (#2293) 2023-02-21 08:18:59 +09:00
Ava Stancu
274d0c874e Added ability to configure log level from chart values (#2252) 2023-02-17 14:16:20 +02:00
Tingluo Huang
256e08eb45 Ask runner to wait for docker daemon from DinD. (#2292) 2023-02-15 17:29:56 -05:00
Yusuke Kuoka
f677fd5872 doc: Fix chart name for helm commands in docs (#2287) 2023-02-16 07:09:23 +09:00
Tingluo Huang
d9627141dc Fix helm chart when containerMode.type=dind. (#2291) 2023-02-15 14:29:52 -05:00
Bassem Dghaidi
3886f285f8 Add EKS test environment Terraform templates (#2290)
Co-authored-by: Francesco Renzi <rentziass@gmail.com>
2023-02-15 10:29:49 -05:00
Ava Stancu
dab900462b Added workflow to be triggered via rest api dispatch in e2e test (#2283) 2023-02-14 16:06:46 +02:00
Francesco Renzi
dd8ec1a055 Add testserver package (#2281) 2023-02-14 12:11:46 +01:00
Nikola Jokic
8e52a6d2cf EphemeralRunner: On cleanup, if pod is pending, delete from service (#2255)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-02-11 19:55:12 -05:00
Nikola Jokic
9990243520 Early return if finalizer does not exist to make it more readable (#2262) 2023-02-08 15:21:13 +01:00
Ferenc Hammerl
08919814b1 Port ADRs from internal repo (#2267) 2023-02-08 14:42:45 +01:00
Tingluo Huang
facae69e0b Remove un-required permissions for the manager-role of the new AutoScalingRunnerSet (#2260) 2023-02-07 12:37:09 -05:00
Francesco Renzi
8f62e35f6b Add options to multi client (#2257) 2023-02-07 08:47:59 +01:00
Francesco Renzi
55951c2bdb Add new workflow to automate runner updates (#2247)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-02-06 10:22:58 +00:00
Nikola Jokic
c4297d25bb Avoid deleting scale set if annotation is not parsable or if it does not exist (#2239) 2023-02-03 17:27:31 +01:00
Francesco Renzi
0774f0680c ADR: automate runner updates (#2244) 2023-02-02 18:11:59 +01:00
Francesco Renzi
92ab11b4d2 Use UUID v5 for client identifiers (#2241) 2023-02-02 09:28:34 +01:00
Francesco Renzi
7414dc6568 Add Identifier to actions.Client (#2237) 2023-02-01 14:47:54 +01:00
dhawalseth
34efb9d585 Add documentation to update ARC with prometheus CRDs needed by actions metrics server (#2209)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-01 03:04:18 -05:00
Tingluo Huang
fbad56197f Allow providing a pre-defined Kubernetes secret when helm-installing AutoScalingRunnerSet (#2234) 2023-01-31 17:04:03 -05:00
Tingluo Huang
a5cef7e47b Resolve CI break due to bad merge. (#2236) 2023-01-31 22:00:26 +01:00
Tingluo Huang
1f4fe4681e Delete RunnerScaleSet on service when AutoScalingRunnerSet is deleted. (#2223) 2023-01-31 15:03:11 -05:00
Kirill Bilchenko
067686c684 Fix typos and markdown structure in troubleshooting guide (#2148) 2023-01-31 09:57:42 -05:00
Francesco Renzi
df12e00c9e Remove network requests from actions.NewClient (#2219)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-01-31 10:55:23 +00:00
Tingluo Huang
cc26593a9b Skip CT when list-changed=false. (#2228) 2023-01-30 14:03:30 -05:00
Tingluo Huang
835eac7835 Fix helm charts when pass values file. (#2222) 2023-01-30 08:37:26 -05:00
Francesco Renzi
01e9dd31a9 Update Validate ARC workflow to go 1.19 (#2220) 2023-01-27 15:17:28 +00:00
Tingluo Huang
803818162c Allow update runner group for AutoScalingRunnerSet (#2216) 2023-01-27 09:27:52 -05:00
dependabot[bot]
219ba5b477 chore(deps): bump sigs.k8s.io/controller-runtime from 0.13.1 to 0.14.1 (#2132)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-01-27 09:23:28 +09:00
Tingluo Huang
b09e3a2dc9 Return error for non-existing runner group. (#2215) 2023-01-26 12:19:52 -05:00
Bassem Dghaidi
7ea60e497c Fix intermittent image push failures to GHCR (#2214) 2023-01-26 05:52:21 -05:00
Francesco Renzi
c8918f5a7b Fix URL for authenticating using a GitHub app (#2206)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-01-24 18:02:23 +01:00
Francesco Renzi
d57d17f161 Add support for custom CA in actions.Client (#2199) 2023-01-23 17:36:57 -05:00
dependabot[bot]
6e69c75637 chore(deps): bump github.com/hashicorp/go-retryablehttp from 0.7.1 to 0.7.2 (#2203)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-23 17:36:05 -05:00
Nikola Jokic
882bfab569 Renaming autoScaling to autoscaling in tests to match the convention (#2201) 2023-01-23 17:03:01 +01:00
Francesco Renzi
3327f620fb Refactor actions.Client with options to help extensibility (#2193) 2023-01-23 11:50:14 +00:00
dependabot[bot]
282f2dd09c chore(deps): bump github.com/onsi/gomega from 1.20.2 to 1.25.0 (#2169)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-21 12:56:08 +09:00
Nikola Jokic
d67f80863e Include nikola-jokic in CODEOWNERS file (#2184) 2023-01-19 08:21:08 -05:00
Tingluo Huang
4932412cd6 Fix L0 test to make it more reliable. (#2178) 2023-01-19 07:33:04 -05:00
Bassem Dghaidi
6da1cde09c Update runner version to 2.301.1 (#2182)
Co-authored-by: TingluoHuang <TingluoHuang@github.com>
2023-01-19 05:36:05 -05:00
Bassem Dghaidi
f9bae708c2 Add distinct namespace best practice note (#2181) 2023-01-18 09:59:31 -05:00
Bassem Dghaidi
05a3908ba6 Add arc-2 quickstart guide (#2180) 2023-01-18 08:17:25 -05:00
Stephane Moser
606ed1b28e Add Repository information to Runner Status (#2093)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-01-18 09:09:45 +09:00
Tingluo Huang
de244a17be Update publish-arc2 workflow to use right path. (#2173) 2023-01-17 18:26:30 -05:00
Ritesh Khadgaray
5be307ec62 Update installing-arc.md (#2162) 2023-01-18 08:13:08 +09:00
Hyeonmin Park
ee71ff14bd Fix logFormat comment for each module in Helm chart (#2166) 2023-01-18 08:12:24 +09:00
xi2817-aajgaonkar
9e93c7ee54 Update quickstart.md (#2164) 2023-01-18 08:12:13 +09:00
Tingluo Huang
bb61bb1342 Include extra user-agent for runners created by actions-runner-controller. (#2177) 2023-01-18 07:38:59 +09:00
James Bradshaw
23fdca4786 Fix minor typos in 0.27.md (#2171) 2023-01-18 07:38:42 +09:00
Hyeonmin Park
211bacaf1e Fix typo in release note for ARC 0.27.0 (#2158) 2023-01-18 07:38:05 +09:00
Tingluo Huang
0324658a3f Introduce new helm charts for the preview auto-scaling mode for ARC. (#2168) 2023-01-17 14:36:04 -05:00
Tingluo Huang
c4d3cff3df Fix typo in workflow. (#2172) 2023-01-17 18:07:40 +00:00
Tingluo Huang
294bd75cf1 Populate the resolved ref when input.ref is empty. (#2170) 2023-01-17 12:58:37 -05:00
Bassem Dghaidi
068a427c52 Create publish-arc2.yaml (#2167) 2023-01-17 12:07:52 -05:00
Tingluo Huang
622eaa34f8 Introduce new preview auto-scaling mode for ARC. (#2153)
Co-authored-by: Cory Miller <cory-miller@github.com>
Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
Co-authored-by: Ava Stancu <AvaStancu@github.com>
Co-authored-by: Ferenc Hammerl <fhammerl@github.com>
Co-authored-by: Francesco Renzi <rentziass@github.com>
Co-authored-by: Bassem Dghaidi <Link-@github.com>
2023-01-17 12:06:20 -05:00
Tingluo Huang
619667fc3b Ignore the new helm charts path for now. (#2165) 2023-01-17 10:26:53 -05:00
Bassem Dghaidi
3e88ae2d38 fix: Update target branch from main to master (#2161) 2023-01-16 18:31:43 +09:00
Yusuke Kuoka
360957cfbc chart: Bump chart and app versions for ARC 0.27.0 (#2160) 2023-01-16 04:24:24 -05:00
240 changed files with 55145 additions and 1150 deletions

.github/actions/execute-assert-arc-e2e (new composite action)

@@ -0,0 +1,160 @@
name: 'Execute and Assert ARC E2E Test Action'
description: 'Queue an E2E test workflow and assert that the workflow run succeeds'
inputs:
  auth-token:
    description: 'GitHub access token used to queue the workflow run'
    required: true
  repo-owner:
    description: "The owner of the repository that has the test workflow file, ex: actions"
    required: true
  repo-name:
    description: "The name of the repository that has the test workflow file, ex: test"
    required: true
  workflow-file:
    description: 'The file name of the workflow yaml, ex: test.yml'
    required: true
  arc-name:
    description: 'The name of the configured gha-runner-scale-set'
    required: true
  arc-namespace:
    description: 'The namespace of the configured gha-runner-scale-set'
    required: true
  arc-controller-namespace:
    description: 'The namespace of the configured gha-runner-scale-set-controller'
    required: true
runs:
  using: "composite"
  steps:
    - name: Queue test workflow
      shell: bash
      id: queue_workflow
      run: |
        queue_time=`date +%FT%TZ`
        echo "queue_time=$queue_time" >> $GITHUB_OUTPUT
        curl -X POST https://api.github.com/repos/${{inputs.repo-owner}}/${{inputs.repo-name}}/actions/workflows/${{inputs.workflow-file}}/dispatches \
          -H "Accept: application/vnd.github.v3+json" \
          -H "Authorization: token ${{inputs.auth-token}}" \
          -d '{"ref": "main", "inputs": { "arc_name": "${{inputs.arc-name}}" } }'
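    # Note: the POST to the dispatches endpoint above returns 204 with no body,
    # so the id of the new workflow run is not known yet; the next step polls
    # for it and matches the run by its job labels.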
    - name: Fetch workflow run & job ids
      uses: actions/github-script@v6
      id: query_workflow
      with:
        script: |
          // Try to find the workflow run triggered by the previous step's workflow_dispatch event:
          // - Find recently created workflow runs in the test repository.
          // - For each workflow run, list its jobs and check whether a job's labels contain `inputs.arc-name`.
          // - Since inputs.arc-name should be unique per e2e workflow run, the job carrying that label
          //   identifies the workflow run we just triggered.
          function sleep(ms) {
            return new Promise(resolve => setTimeout(resolve, ms))
          }
          const owner = '${{inputs.repo-owner}}'
          const repo = '${{inputs.repo-name}}'
          const workflow_id = '${{inputs.workflow-file}}'
          let workflow_run_id = 0
          let workflow_job_id = 0
          let workflow_run_html_url = ""
          let count = 0
          while (count++ < 12) {
            await sleep(10 * 1000);
            let listRunResponse = await github.rest.actions.listWorkflowRuns({
              owner: owner,
              repo: repo,
              workflow_id: workflow_id,
              created: '>${{steps.queue_workflow.outputs.queue_time}}'
            })
            if (listRunResponse.data.total_count > 0) {
              console.log(`Found some new workflow runs for ${workflow_id}`)
              for (let i = 0; i < listRunResponse.data.total_count; i++) {
                let workflowRun = listRunResponse.data.workflow_runs[i]
                console.log(`Check if workflow run ${workflowRun.id} is triggered by us.`)
                let listJobResponse = await github.rest.actions.listJobsForWorkflowRun({
                  owner: owner,
                  repo: repo,
                  run_id: workflowRun.id
                })
                console.log(`Workflow run ${workflowRun.id} has ${listJobResponse.data.total_count} jobs.`)
                if (listJobResponse.data.total_count > 0) {
                  for (let j = 0; j < listJobResponse.data.total_count; j++) {
                    let workflowJob = listJobResponse.data.jobs[j]
                    console.log(`Check if workflow job ${workflowJob.id} is triggered by us.`)
                    console.log(JSON.stringify(workflowJob.labels));
                    if (workflowJob.labels.includes('${{inputs.arc-name}}')) {
                      console.log(`Workflow job ${workflowJob.id} (Run id: ${workflowJob.run_id}) is triggered by us.`)
                      workflow_run_id = workflowJob.run_id
                      workflow_job_id = workflowJob.id
                      workflow_run_html_url = workflowRun.html_url
                      break
                    }
                  }
                }
                if (workflow_job_id > 0) {
                  break;
                }
              }
            }
            if (workflow_job_id > 0) {
              break;
            }
          }
          if (workflow_job_id == 0) {
            core.setFailed(`Can't find a workflow run and workflow job triggered for 'runs-on: ${{inputs.arc-name}}'`)
          } else {
            core.setOutput('workflow_run', workflow_run_id);
            core.setOutput('workflow_job', workflow_job_id);
            core.setOutput('workflow_run_url', workflow_run_html_url);
          }
    - name: Generate summary about the triggered workflow run
      shell: bash
      run: |
        cat <<-EOF > $GITHUB_STEP_SUMMARY
        | **Triggered workflow run** |
        |:--------------------------:|
        | ${{steps.query_workflow.outputs.workflow_run_url}} |
        EOF
    - name: Wait for workflow to finish successfully
      uses: actions/github-script@v6
      with:
        script: |
          // Wait up to 5 minutes and make sure the workflow run we triggered completed with result 'success'.
          function sleep(ms) {
            return new Promise(resolve => setTimeout(resolve, ms))
          }
          const owner = '${{inputs.repo-owner}}'
          const repo = '${{inputs.repo-name}}'
          const workflow_run_id = ${{steps.query_workflow.outputs.workflow_run}}
          const workflow_job_id = ${{steps.query_workflow.outputs.workflow_job}}
          let count = 0
          while (count++ < 10) {
            await sleep(30 * 1000);
            let getRunResponse = await github.rest.actions.getWorkflowRun({
              owner: owner,
              repo: repo,
              run_id: workflow_run_id
            })
            console.log(`${getRunResponse.data.html_url}: ${getRunResponse.data.status} (${getRunResponse.data.conclusion})`);
            if (getRunResponse.data.status == 'completed') {
              if (getRunResponse.data.conclusion == 'success') {
                console.log(`Workflow run finished properly.`)
                return
              } else {
                core.setFailed(`The triggered workflow run finished with result ${getRunResponse.data.conclusion}`)
                return
              }
            }
          }
          core.setFailed(`The triggered workflow run didn't finish properly using ${{inputs.arc-name}}`)
    - name: Gather logs and cleanup
      shell: bash
      if: always()
      run: |
        helm uninstall ${{ inputs.arc-name }} --namespace ${{inputs.arc-namespace}} --debug
        kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n ${{inputs.arc-namespace}} -l app.kubernetes.io/instance=${{ inputs.arc-name }}
        kubectl logs deployment/arc-gha-runner-scale-set-controller -n ${{inputs.arc-controller-namespace}}

.github/actions/setup-arc-e2e (new composite action)

@@ -0,0 +1,63 @@
name: 'Setup ARC E2E Test Action'
description: 'Build the controller image, create a minikube cluster, load the image, and exchange the ARC configure token.'
inputs:
  app-id:
    description: 'GitHub App id used to exchange an access token'
    required: true
  app-pk:
    description: "GitHub App private key used to exchange an access token"
    required: true
  image-name:
    description: "Local Docker image name to build"
    required: true
  image-tag:
    description: "Tag of the ARC Docker image to build"
    required: true
  target-org:
    description: "The test organization for the ARC e2e test"
    required: true
outputs:
  token:
    description: 'Token to use for configuring ARC'
    value: ${{steps.config-token.outputs.token}}
runs:
  using: "composite"
  steps:
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
      with:
        # Pinning v0.9.1 for Buildx and BuildKit v0.10.6.
        # BuildKit v0.11 has a bug causing intermittent
        # failures when pushing images to GHCR.
        version: v0.9.1
        driver-opts: image=moby/buildkit:v0.10.6
    - name: Build controller image
      uses: docker/build-push-action@v3
      with:
        file: Dockerfile
        platforms: linux/amd64
        load: true
        build-args: |
          DOCKER_IMAGE_NAME=${{inputs.image-name}}
          VERSION=${{inputs.image-tag}}
        tags: |
          ${{inputs.image-name}}:${{inputs.image-tag}}
        no-cache: true
    - name: Create minikube cluster and load image
      shell: bash
      run: |
        minikube start
        minikube image load ${{inputs.image-name}}:${{inputs.image-tag}}
    - name: Get configure token
      id: config-token
      uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
      with:
        application_id: ${{ inputs.app-id }}
        application_private_key: ${{ inputs.app-pk }}
        organization: ${{ inputs.target-org}}
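This action is consumed by the e2e workflow below via uses: ./.github/actions/setup-arc-e2e, which supplies the GitHub App credentials and the image name/tag through its with: block and reads the exchanged token back from steps.setup.outputs.token.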

.github/renovate.json5 (deleted)

@@ -1,43 +0,0 @@
{
  "extends": ["config:base"],
  "labels": ["dependencies"],
  "packageRules": [
    {
      // automatically merge an update of runner
      "matchPackageNames": ["actions/runner"],
      "extractVersion": "^v(?<version>.*)$",
      "automerge": true
    }
  ],
  "regexManagers": [
    {
      // use https://github.com/actions/runner/releases
      "fileMatch": [
        ".github/workflows/runners.yaml"
      ],
      "matchStrings": ["RUNNER_VERSION: +(?<currentValue>.*?)\\n"],
      "depNameTemplate": "actions/runner",
      "datasourceTemplate": "github-releases"
    },
    {
      "fileMatch": [
        "runner/Makefile",
        "Makefile"
      ],
      "matchStrings": ["RUNNER_VERSION \\?= +(?<currentValue>.*?)\\n"],
      "depNameTemplate": "actions/runner",
      "datasourceTemplate": "github-releases"
    },
    {
      "fileMatch": [
        "runner/actions-runner.ubuntu-20.04.dockerfile",
        "runner/actions-runner.ubuntu-22.04.dockerfile",
        "runner/actions-runner-dind.ubuntu-20.04.dockerfile",
        "runner/actions-runner-dind-rootless.ubuntu-20.04.dockerfile"
      ],
      "matchStrings": ["RUNNER_VERSION=+(?<currentValue>.*?)\\n"],
      "depNameTemplate": "actions/runner",
      "datasourceTemplate": "github-releases"
    }
  ]
}

.github/workflows/e2e-test-linux-vm.yaml (new file, 705 additions)

@@ -0,0 +1,705 @@
name: CI ARC E2E Linux VM Test
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  workflow_dispatch:
permissions:
  contents: read
env:
  TARGET_ORG: actions-runner-controller
  TARGET_REPO: arc_e2e_test_dummy
  IMAGE_NAME: "arc-test-image"
  IMAGE_VERSION: "dev"
jobs:
  default-setup:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
    env:
      WORKFLOW_FILE: "arc-test-workflow.yaml"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{github.head_ref}}
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
          app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
          image-name: ${{env.IMAGE_NAME}}
          image-tag: ${{env.IMAGE_VERSION}}
          target-org: ${{env.TARGET_ORG}}
      - name: Install gha-runner-scale-set-controller
        id: install_arc_controller
        run: |
          helm install arc \
            --namespace "arc-systems" \
            --create-namespace \
            --set image.repository=${{ env.IMAGE_NAME }} \
            --set image.tag=${{ env.IMAGE_VERSION }} \
            ./charts/gha-runner-scale-set-controller \
            --debug
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
          kubectl get pod -n arc-systems
          kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
      - name: Install gha-runner-scale-set
        id: install_arc
        run: |
          ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
          helm install "$ARC_NAME" \
            --namespace "arc-runners" \
            --create-namespace \
            --set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
            --set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
            ./charts/gha-runner-scale-set \
            --debug
          echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
          kubectl get pod -n arc-systems
      - name: Test ARC E2E
        uses: ./.github/actions/execute-assert-arc-e2e
        timeout-minutes: 10
        with:
          auth-token: ${{ steps.setup.outputs.token }}
          repo-owner: ${{ env.TARGET_ORG }}
          repo-name: ${{env.TARGET_REPO}}
          workflow-file: ${{env.WORKFLOW_FILE}}
          arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
          arc-namespace: "arc-runners"
          arc-controller-namespace: "arc-systems"
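  # The single-namespace-setup job below exercises the controller's
  # single-namespace mode (#2374): flags.watchSingleNamespace=arc-runners
  # restricts the controller to watching the pre-created arc-runners
  # namespace instead of the whole cluster.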
  single-namespace-setup:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
    env:
      WORKFLOW_FILE: "arc-test-workflow.yaml"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{github.head_ref}}
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
          app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
          image-name: ${{env.IMAGE_NAME}}
          image-tag: ${{env.IMAGE_VERSION}}
          target-org: ${{env.TARGET_ORG}}
      - name: Install gha-runner-scale-set-controller
        id: install_arc_controller
        run: |
          kubectl create namespace arc-runners
          helm install arc \
            --namespace "arc-systems" \
            --create-namespace \
            --set image.repository=${{ env.IMAGE_NAME }} \
            --set image.tag=${{ env.IMAGE_VERSION }} \
            --set flags.watchSingleNamespace=arc-runners \
            ./charts/gha-runner-scale-set-controller \
            --debug
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
          kubectl get pod -n arc-systems
          kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
      - name: Install gha-runner-scale-set
        id: install_arc
        run: |
          ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
          helm install "$ARC_NAME" \
            --namespace "arc-runners" \
            --create-namespace \
            --set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
            --set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
            ./charts/gha-runner-scale-set \
            --debug
          echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
          kubectl get pod -n arc-systems
      - name: Test ARC E2E
        uses: ./.github/actions/execute-assert-arc-e2e
        timeout-minutes: 10
        with:
          auth-token: ${{ steps.setup.outputs.token }}
          repo-owner: ${{ env.TARGET_ORG }}
          repo-name: ${{env.TARGET_REPO}}
          workflow-file: ${{env.WORKFLOW_FILE}}
          arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
          arc-namespace: "arc-runners"
          arc-controller-namespace: "arc-systems"
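  # The dind-mode-setup job repeats the flow with containerMode.type=dind, so
  # runner pods bring up a Docker-in-Docker daemon for jobs that need Docker,
  # and runs a dedicated test workflow (arc-test-dind-workflow.yaml).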
  dind-mode-setup:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
    env:
      WORKFLOW_FILE: arc-test-dind-workflow.yaml
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{github.head_ref}}
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
          app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
          image-name: ${{env.IMAGE_NAME}}
          image-tag: ${{env.IMAGE_VERSION}}
          target-org: ${{env.TARGET_ORG}}
      - name: Install gha-runner-scale-set-controller
        id: install_arc_controller
        run: |
          helm install arc \
            --namespace "arc-systems" \
            --create-namespace \
            --set image.repository=${{ env.IMAGE_NAME }} \
            --set image.tag=${{ env.IMAGE_VERSION }} \
            ./charts/gha-runner-scale-set-controller \
            --debug
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
          kubectl get pod -n arc-systems
          kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
      - name: Install gha-runner-scale-set
        id: install_arc
        run: |
          ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
          helm install "$ARC_NAME" \
            --namespace "arc-runners" \
            --create-namespace \
            --set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
            --set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
            --set containerMode.type="dind" \
            ./charts/gha-runner-scale-set \
            --debug
          echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
          kubectl get pod -n arc-systems
      - name: Test ARC E2E
        uses: ./.github/actions/execute-assert-arc-e2e
        timeout-minutes: 10
        with:
          auth-token: ${{ steps.setup.outputs.token }}
          repo-owner: ${{ env.TARGET_ORG }}
          repo-name: ${{env.TARGET_REPO}}
          workflow-file: ${{env.WORKFLOW_FILE}}
          arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
          arc-namespace: "arc-runners"
          arc-controller-namespace: "arc-systems"
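  # The kubernetes-mode-setup job sets containerMode.type=kubernetes, in which
  # job containers run in separate pods backed by a work volume; openebs is
  # installed first so the openebs-hostpath storage class can satisfy the
  # kubernetesModeWorkVolumeClaim.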
  kubernetes-mode-setup:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
    env:
      WORKFLOW_FILE: "arc-test-kubernetes-workflow.yaml"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{github.head_ref}}
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
          app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
          image-name: ${{env.IMAGE_NAME}}
          image-tag: ${{env.IMAGE_VERSION}}
          target-org: ${{env.TARGET_ORG}}
      - name: Install gha-runner-scale-set-controller
        id: install_arc_controller
        run: |
          echo "Install openebs/dynamic-localpv-provisioner"
          helm repo add openebs https://openebs.github.io/charts
          helm repo update
          helm install openebs openebs/openebs -n openebs --create-namespace
          helm install arc \
            --namespace "arc-systems" \
            --create-namespace \
            --set image.repository=${{ env.IMAGE_NAME }} \
            --set image.tag=${{ env.IMAGE_VERSION }} \
            ./charts/gha-runner-scale-set-controller \
            --debug
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
          kubectl get pod -n arc-systems
          kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
          kubectl wait --timeout=30s --for=condition=ready pod -n openebs -l name=openebs-localpv-provisioner
      - name: Install gha-runner-scale-set
        id: install_arc
        run: |
          ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
          helm install "$ARC_NAME" \
            --namespace "arc-runners" \
            --create-namespace \
            --set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
            --set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
            --set containerMode.type="kubernetes" \
            --set containerMode.kubernetesModeWorkVolumeClaim.accessModes={"ReadWriteOnce"} \
            --set containerMode.kubernetesModeWorkVolumeClaim.storageClassName="openebs-hostpath" \
            --set containerMode.kubernetesModeWorkVolumeClaim.resources.requests.storage="1Gi" \
            ./charts/gha-runner-scale-set \
            --debug
          echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
          kubectl get pod -n arc-systems
      - name: Test ARC E2E
        uses: ./.github/actions/execute-assert-arc-e2e
        timeout-minutes: 10
        with:
          auth-token: ${{ steps.setup.outputs.token }}
          repo-owner: ${{ env.TARGET_ORG }}
          repo-name: ${{env.TARGET_REPO}}
          workflow-file: ${{env.WORKFLOW_FILE}}
          arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
          arc-namespace: "arc-runners"
          arc-controller-namespace: "arc-systems"
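  # The auth-proxy-setup job starts a Squid proxy that requires basic auth,
  # stores the credentials in the proxy-auth secret, and points the scale set
  # at the proxy via proxy.https.url and proxy.https.credentialSecretRef.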
  auth-proxy-setup:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
    env:
      WORKFLOW_FILE: "arc-test-workflow.yaml"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{github.head_ref}}
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
          app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
          image-name: ${{env.IMAGE_NAME}}
          image-tag: ${{env.IMAGE_VERSION}}
          target-org: ${{env.TARGET_ORG}}
      - name: Install gha-runner-scale-set-controller
        id: install_arc_controller
        run: |
          helm install arc \
            --namespace "arc-systems" \
            --create-namespace \
            --set image.repository=${{ env.IMAGE_NAME }} \
            --set image.tag=${{ env.IMAGE_VERSION }} \
            ./charts/gha-runner-scale-set-controller \
            --debug
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
          kubectl get pod -n arc-systems
          kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
      - name: Install gha-runner-scale-set
        id: install_arc
        run: |
          docker run -d \
            --name squid \
            --publish 3128:3128 \
            huangtingluo/squid-proxy:latest
          kubectl create namespace arc-runners
          kubectl create secret generic proxy-auth \
            --namespace=arc-runners \
            --from-literal=username=github \
            --from-literal=password='actions'
          ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
          helm install "$ARC_NAME" \
            --namespace "arc-runners" \
            --create-namespace \
            --set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
            --set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
            --set proxy.https.url="http://host.minikube.internal:3128" \
            --set proxy.https.credentialSecretRef="proxy-auth" \
            --set "proxy.noProxy[0]=10.96.0.1:443" \
            ./charts/gha-runner-scale-set \
            --debug
          echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
          kubectl get pod -n arc-systems
      - name: Test ARC E2E
        uses: ./.github/actions/execute-assert-arc-e2e
        timeout-minutes: 10
        with:
          auth-token: ${{ steps.setup.outputs.token }}
          repo-owner: ${{ env.TARGET_ORG }}
          repo-name: ${{env.TARGET_REPO}}
          workflow-file: ${{env.WORKFLOW_FILE}}
          arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
          arc-namespace: "arc-runners"
          arc-controller-namespace: "arc-systems"
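  # The anonymous-proxy-setup job covers the same proxy path with an
  # unauthenticated ubuntu/squid proxy, so no credential secret is set.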
  anonymous-proxy-setup:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
    env:
      WORKFLOW_FILE: "arc-test-workflow.yaml"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{github.head_ref}}
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
          app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
          image-name: ${{env.IMAGE_NAME}}
          image-tag: ${{env.IMAGE_VERSION}}
          target-org: ${{env.TARGET_ORG}}
      - name: Install gha-runner-scale-set-controller
        id: install_arc_controller
        run: |
          helm install arc \
            --namespace "arc-systems" \
            --create-namespace \
            --set image.repository=${{ env.IMAGE_NAME }} \
            --set image.tag=${{ env.IMAGE_VERSION }} \
            ./charts/gha-runner-scale-set-controller \
            --debug
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
          kubectl get pod -n arc-systems
          kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
      - name: Install gha-runner-scale-set
        id: install_arc
        run: |
          docker run -d \
            --name squid \
            --publish 3128:3128 \
            ubuntu/squid:latest
          ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
          helm install "$ARC_NAME" \
            --namespace "arc-runners" \
            --create-namespace \
            --set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
            --set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
            --set proxy.https.url="http://host.minikube.internal:3128" \
            --set "proxy.noProxy[0]=10.96.0.1:443" \
            ./charts/gha-runner-scale-set \
            --debug
          echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
          kubectl get pod -n arc-systems
      - name: Test ARC E2E
        uses: ./.github/actions/execute-assert-arc-e2e
        timeout-minutes: 10
        with:
          auth-token: ${{ steps.setup.outputs.token }}
          repo-owner: ${{ env.TARGET_ORG }}
          repo-name: ${{env.TARGET_REPO}}
          workflow-file: ${{env.WORKFLOW_FILE}}
          arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
          arc-namespace: "arc-runners"
          arc-controller-namespace: "arc-systems"
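  # The self-signed-ca-setup job routes traffic through mitmproxy, waits for
  # it to generate its CA certificate, ships that certificate to the runners
  # in a ConfigMap via githubServerTLS.certificateFrom, and mounts it at the
  # distro's ca-certificates path so the runners trust the proxy.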
  self-signed-ca-setup:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
    env:
      WORKFLOW_FILE: "arc-test-workflow.yaml"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{github.head_ref}}
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
          app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
          image-name: ${{env.IMAGE_NAME}}
          image-tag: ${{env.IMAGE_VERSION}}
          target-org: ${{env.TARGET_ORG}}
      - name: Install gha-runner-scale-set-controller
        id: install_arc_controller
        run: |
          helm install arc \
            --namespace "arc-systems" \
            --create-namespace \
            --set image.repository=${{ env.IMAGE_NAME }} \
            --set image.tag=${{ env.IMAGE_VERSION }} \
            ./charts/gha-runner-scale-set-controller \
            --debug
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
          kubectl get pod -n arc-systems
          kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
      - name: Install gha-runner-scale-set
        id: install_arc
        run: |
          docker run -d \
            --rm \
            --name mitmproxy \
            --publish 8080:8080 \
            -v ${{ github.workspace }}/mitmproxy:/home/mitmproxy/.mitmproxy \
            mitmproxy/mitmproxy:latest \
            mitmdump
          count=0
          while true; do
            if [ -f "${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem" ]; then
              echo "CA cert generated"
              cat ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for mitmproxy to generate its CA cert"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          sudo cp ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt
          sudo chown runner ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt
          kubectl create namespace arc-runners
          kubectl -n arc-runners create configmap ca-cert --from-file="${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt"
          kubectl -n arc-runners get configmap ca-cert -o yaml
          ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
          helm install "$ARC_NAME" \
            --namespace "arc-runners" \
            --create-namespace \
            --set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
            --set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
            --set proxy.https.url="http://host.minikube.internal:8080" \
            --set "proxy.noProxy[0]=10.96.0.1:443" \
            --set "githubServerTLS.certificateFrom.configMapKeyRef.name=ca-cert" \
            --set "githubServerTLS.certificateFrom.configMapKeyRef.key=mitmproxy-ca-cert.crt" \
            --set "githubServerTLS.runnerMountPath=/usr/local/share/ca-certificates/" \
            ./charts/gha-runner-scale-set \
            --debug
          echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
          count=0
          while true; do
            POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
            if [ -n "$POD_NAME" ]; then
              echo "Pod found: $POD_NAME"
              break
            fi
            if [ "$count" -ge 60 ]; then
              echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
              exit 1
            fi
            sleep 1
            count=$((count+1))
          done
          kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
          kubectl get pod -n arc-systems
      - name: Test ARC E2E
        uses: ./.github/actions/execute-assert-arc-e2e
        timeout-minutes: 10
        with:
          auth-token: ${{ steps.setup.outputs.token }}
          repo-owner: ${{ env.TARGET_ORG }}
          repo-name: ${{env.TARGET_REPO}}
          workflow-file: ${{env.WORKFLOW_FILE}}
          arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
          arc-namespace: "arc-runners"
          arc-controller-namespace: "arc-systems"

.github/workflows/go.yaml (new file, 80 additions)

@@ -0,0 +1,80 @@
name: Go
on:
push:
branches:
- master
paths:
- '.github/workflows/go.yaml'
- '**.go'
- 'go.mod'
- 'go.sum'
pull_request:
paths:
- '.github/workflows/go.yaml'
- '**.go'
- 'go.mod'
- 'go.sum'
permissions:
contents: read
jobs:
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: fmt
run: go fmt ./...
- name: Check diff
run: git diff --exit-code
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.51.1
generate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: Generate
run: make generate
- name: Check diff
run: git diff --exit-code
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
- run: make manifests
- name: Check diff
run: git diff --exit-code
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
- name: Run go tests
run: |
go test -short `go list ./... | grep -v ./test_e2e_arc`
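The backtick expression above filters the e2e suite out of the package list before testing; spelled out as a sketch:

```bash
# go list enumerates every package in the module; grep -v drops any whose
# import path contains "test_e2e_arc", so `go test -short` never runs the e2e suite.
go list ./... | grep -v ./test_e2e_arc
# e.g. keeps github.com/actions/actions-runner-controller/controllers
#      drops github.com/actions/actions-runner-controller/test_e2e_arc
```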


@@ -1,23 +0,0 @@
name: golangci-lint
on:
push:
branches:
- master
pull_request:
permissions:
contents: read
pull-requests: read
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: 1.19
- uses: actions/checkout@v3
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.49.0


@@ -29,6 +29,10 @@ jobs:
release-controller:
name: Release
runs-on: ubuntu-latest
# gha-runner-scale-set has its own release workflow.
# We don't want to publish a new actions-runner-controller image when
# we release gha-runner-scale-set.
if: ${{ !startsWith(github.event.inputs.release_tag_name, 'gha-runner-scale-set-') }}
steps:
- name: Checkout
uses: actions/checkout@v3


@@ -8,35 +8,47 @@ on:
- master
paths-ignore:
- '**.md'
- '.github/actions/**'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/e2e-test-dispatch-workflow.yaml'
- '.github/workflows/e2e-test-linux-vm.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/publish-runner-scale-set.yaml'
- '.github/workflows/release-runners.yaml'
- '.github/workflows/run-codeql.yaml'
- '.github/workflows/run-first-interaction.yaml'
- '.github/workflows/run-stale.yaml'
- '.github/workflows/update-runners.yaml'
- '.github/workflows/validate-arc.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/validate-gha-chart.yaml'
- '.github/workflows/validate-runners.yaml'
- '.github/dependabot.yml'
- '.github/RELEASE_NOTE_TEMPLATE.md'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: read
packages: write
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
jobs:
canary-build:
name: Build and Publish Canary Image
legacy-canary-build:
name: Build and Publish Legacy Canary Image
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -68,3 +80,50 @@ jobs:
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml)" >> $GITHUB_STEP_SUMMARY
canary-build:
name: Build and Publish gha-runner-scale-set-controller Canary Image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Normalization is needed because upper case characters are not allowed in the repository name
# and the short sha is needed for image tagging
- name: Resolve parameters
id: resolve_parameters
run: |
echo "INFO: Resolving short sha"
echo "short_sha=$(git rev-parse --short ${{ github.ref }})" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: latest
# Unstable builds - run at your own risk
- name: Build and Push
uses: docker/build-push-action@v3
with:
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=canary-"${{ github.ref }}"
push: ${{ env.PUSH_TO_REGISTRIES }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
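The Resolve parameters step above boils down to two shell one-liners; shown here on hypothetical inputs:

```bash
# GHCR rejects upper-case repository names, so the owner is lowercased;
# the short SHA provides a unique, human-readable image tag.
echo 'Octo-Org' | tr '[:upper:]' '[:lower:]'   # -> octo-org
git rev-parse --short HEAD                     # -> e.g. 3e0bc3f
```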


@@ -5,20 +5,28 @@ name: Publish Helm Chart
on:
push:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- 'charts/**'
- '.github/workflows/publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
- '!**.md'
workflow_dispatch:
inputs:
force:
description: 'Force publish even if the chart version is not bumped'
type: boolean
required: true
default: false
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.8.0
permissions:
contents: read
contents: write
jobs:
lint-chart:
@@ -27,91 +35,86 @@ jobs:
outputs:
publish-chart: ${{ steps.publish-chart-step.outputs.publish }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.4
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score - --ignore-test pod-networkpolicy --ignore-test deployment-has-poddisruptionbudget --ignore-test deployment-has-host-podantiaffinity --ignore-test container-security-context --ignore-test pod-probes --ignore-test container-image-tag --enable-optional-test container-security-context-privileged --enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
if: steps.list-changed.outputs.changed == 'true'
uses: helm/kind-action@v1.4.0
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
if: steps.list-changed.outputs.changed == 'true'
run: |
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: ct install --config charts/.ci/ct-config.yaml
# WARNING: This relies on the latest release being at the top of the JSON from GitHub and a clean chart.yaml
- name: Check if Chart Publish is Needed
id: publish-chart-step
run: |
CHART_TEXT=$(curl -fs https://raw.githubusercontent.com/${{ github.repository }}/master/charts/actions-runner-controller/Chart.yaml)
NEW_CHART_VERSION=$(echo "$CHART_TEXT" | grep version: | cut -d ' ' -f 2)
RELEASE_LIST=$(curl -fs https://api.github.com/repos/${{ github.repository }}/releases | jq .[].tag_name | grep actions-runner-controller | cut -d '"' -f 2 | cut -d '-' -f 4)
LATEST_RELEASED_CHART_VERSION=$(echo $RELEASE_LIST | cut -d ' ' -f 1)
echo "CHART_VERSION_IN_MASTER=$NEW_CHART_VERSION" >> $GITHUB_ENV
echo "LATEST_CHART_VERSION=$LATEST_RELEASED_CHART_VERSION" >> $GITHUB_ENV
if [[ $NEW_CHART_VERSION != $LATEST_RELEASED_CHART_VERSION ]]; then
echo "publish=true" >> $GITHUB_OUTPUT
else
echo "publish=false" >> $GITHUB_OUTPUT
fi
# WARNING: This relies on the latest release being at the top of the JSON from GitHub and a clean chart.yaml
- name: Check if Chart Publish is Needed
id: publish-chart-step
run: |
CHART_TEXT=$(curl -fs https://raw.githubusercontent.com/${{ github.repository }}/master/charts/actions-runner-controller/Chart.yaml)
NEW_CHART_VERSION=$(echo "$CHART_TEXT" | grep version: | cut -d ' ' -f 2)
RELEASE_LIST=$(curl -fs https://api.github.com/repos/${{ github.repository }}/releases | jq .[].tag_name | grep actions-runner-controller | cut -d '"' -f 2 | cut -d '-' -f 4)
LATEST_RELEASED_CHART_VERSION=$(echo $RELEASE_LIST | cut -d ' ' -f 1)
- name: Job summary
run: |
echo "Chart linting has been completed." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- chart version in master: ${{ env.CHART_VERSION_IN_MASTER }}" >> $GITHUB_STEP_SUMMARY
echo "- latest chart version: ${{ env.LATEST_CHART_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- publish new chart: ${{ steps.publish-chart-step.outputs.publish }}" >> $GITHUB_STEP_SUMMARY
echo "CHART_VERSION_IN_MASTER=$NEW_CHART_VERSION" >> $GITHUB_ENV
echo "LATEST_CHART_VERSION=$LATEST_RELEASED_CHART_VERSION" >> $GITHUB_ENV
# Always publish if force is true
if [[ $NEW_CHART_VERSION != $LATEST_RELEASED_CHART_VERSION || "${{ inputs.force }}" == "true" ]]; then
echo "publish=true" >> $GITHUB_OUTPUT
else
echo "publish=false" >> $GITHUB_OUTPUT
fi
- name: Job summary
run: |
echo "Chart linting has been completed." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- chart version in master: ${{ env.CHART_VERSION_IN_MASTER }}" >> $GITHUB_STEP_SUMMARY
echo "- latest chart version: ${{ env.LATEST_CHART_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- publish new chart: ${{ steps.publish-chart-step.outputs.publish }}" >> $GITHUB_STEP_SUMMARY
publish-chart:
if: needs.lint-chart.outputs.publish-chart == 'true'
@@ -119,85 +122,86 @@ jobs:
name: Publish Chart
runs-on: ubuntu-latest
permissions:
contents: write # for helm/chart-releaser-action to push chart release and create a release
env:
CHART_TARGET_ORG: actions-runner-controller
CHART_TARGET_REPO: actions-runner-controller.github.io
CHART_TARGET_BRANCH: main
CHART_TARGET_BRANCH: master
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.CHART_TARGET_ORG }}
- name: Install chart-releaser
uses: helm/chart-releaser-action@v1.4.1
with:
install_only: true
install_dir: ${{ github.workspace }}/bin
- name: Package and upload release assets
run: |
cr package \
${{ github.workspace }}/charts/actions-runner-controller/ \
--package-path .cr-release-packages
cr upload \
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--package-path .cr-release-packages \
--token ${{ secrets.GITHUB_TOKEN }}
- name: Generate updated index.yaml
run: |
cr index \
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--index-path ${{ github.workspace }}/index.yaml \
--pages-branch 'gh-pages' \
--pages-index-path 'index.yaml'
- name: Generate updated index.yaml
run: |
cr index \
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--index-path ${{ github.workspace }}/index.yaml \
--push \
--pages-branch 'gh-pages' \
--pages-index-path 'index.yaml'
# Chart Release was never intended to publish to a different repo
# this workaround is intended to move the index.yaml to the target repo
# where the github pages are hosted
- name: Checkout pages repository
uses: actions/checkout@v3
with:
repository: ${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}
path: ${{ env.CHART_TARGET_REPO }}
ref: ${{ env.CHART_TARGET_BRANCH }}
token: ${{ steps.get_workflow_token.outputs.token }}
# Chart Release was never intended to publish to a different repo
# this workaround is intended to move the index.yaml to the target repo
# where the github pages are hosted
- name: Checkout target repository
uses: actions/checkout@v3
with:
repository: ${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}
path: ${{ env.CHART_TARGET_REPO }}
ref: ${{ env.CHART_TARGET_BRANCH }}
token: ${{ steps.get_workflow_token.outputs.token }}
- name: Copy index.yaml
run: |
cp ${{ github.workspace }}/index.yaml ${{ env.CHART_TARGET_REPO }}/actions-runner-controller/index.yaml
- name: Commit and push
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
git add .
git commit -m "Update index.yaml"
git push
working-directory: ${{ github.workspace }}/${{ env.CHART_TARGET_REPO }}
- name: Copy index.yaml
run: |
cp ${{ github.workspace }}/index.yaml ${{ env.CHART_TARGET_REPO }}/actions-runner-controller/index.yaml
- name: Job summary
run: |
echo "New helm chart has been published" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- New [index.yaml](https://github.com/${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}/tree/main/actions-runner-controller) pushed" >> $GITHUB_STEP_SUMMARY
- name: Commit and push to target repository
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
git add .
git commit -m "Update index.yaml"
git push
working-directory: ${{ github.workspace }}/${{ env.CHART_TARGET_REPO }}
- name: Job summary
run: |
echo "New helm chart has been published" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- New [index.yaml](https://github.com/${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}/tree/main/actions-runner-controller) pushed" >> $GITHUB_STEP_SUMMARY


@@ -0,0 +1,201 @@
name: Publish Runner Scale Set Controller Charts
on:
workflow_dispatch:
inputs:
ref:
description: 'The branch, tag or SHA to cut a release from'
required: false
type: string
default: ''
release_tag_name:
description: 'The name to tag the controller image with'
required: true
type: string
default: 'canary'
push_to_registries:
description: 'Push images to registries'
required: true
type: boolean
default: false
publish_gha_runner_scale_set_controller_chart:
description: 'Publish new helm chart for gha-runner-scale-set-controller'
required: true
type: boolean
default: false
publish_gha_runner_scale_set_chart:
description: 'Publish new helm chart for gha-runner-scale-set'
required: true
type: boolean
default: false
env:
HELM_VERSION: v3.8.0
permissions:
packages: write
jobs:
build-push-image:
name: Build and push controller image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
# Pinning v0.9.1 for Buildx and BuildKit v0.10.6
# BuildKit v0.11 which has a bug causing intermittent
# failures pushing images to GHCR
version: v0.9.1
driver-opts: image=moby/buildkit:v0.10.6
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build & push controller image
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=${{ inputs.release_tag_name }}
push: ${{ inputs.push_to_registries }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Job summary
run: |
echo "The [publish-runner-scale-set.yaml](https://github.com/actions/actions-runner-controller/blob/main/.github/workflows/publish-runner-scale-set.yaml) workflow run was completed successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- Release tag: ${{ inputs.release_tag_name }}" >> $GITHUB_STEP_SUMMARY
echo "- Push to registries: ${{ inputs.push_to_registries }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-gha-runner-scale-set-controller:
if: ${{ inputs.publish_gha_runner_scale_set_controller_chart == true }}
needs: build-push-image
name: Publish Helm chart for gha-runner-scale-set-controller
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for gha-runner-scale-set-controller
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set-controller/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set-controller/ --version="${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-controller-"${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for gha-runner-scale-set-controller published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set-controller Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-gha-runner-scale-set:
if: ${{ inputs.publish_gha_runner_scale_set_chart == true }}
needs: build-push-image
name: Publish Helm chart for gha-runner-scale-set
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for gha-runner-scale-set
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set/ --version="${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-"${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for gha-runner-scale-set published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY


@@ -3,24 +3,21 @@ name: Runners
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
# We must do a trigger on a push: instead of a types: closed so GitHub Secrets
# are available to the workflow run
push:
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- '.github/workflows/runners.yaml'
- '!**.md'
- 'runner/VERSION'
- '.github/workflows/release-runners.yaml'
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
TARGET_ORG: actions-runner-controller
TARGET_WORKFLOW: release-runners.yaml
RUNNER_VERSION: 2.300.2
DOCKER_VERSION: 20.10.21
DOCKER_VERSION: 20.10.23
RUNNER_CONTAINER_HOOKS_VERSION: 0.2.0
jobs:
@@ -28,6 +25,13 @@ jobs:
name: Trigger Build and Push of Runner Images
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Get runner version
id: runner_version
run: |
version=$(echo -n $(cat runner/VERSION))
echo runner_version=$version >> $GITHUB_OUTPUT
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
@@ -37,6 +41,8 @@ jobs:
organization: ${{ env.TARGET_ORG }}
- name: Trigger Build And Push Runner Images To Registries
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
@@ -50,6 +56,8 @@ jobs:
-f push_to_registries=${{ env.PUSH_TO_REGISTRIES }}
- name: Job summary
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
run: |
echo "The [release-runners.yaml](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/release-runners.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY

.github/workflows/update-runners.yaml

@@ -0,0 +1,109 @@
# This workflow polls releases from actions/runner and, in case of a new one,
# updates files containing the runner version and opens a pull request.
name: Update runners
on:
schedule:
# run daily
- cron: "0 9 * * *"
workflow_dispatch:
jobs:
# check_versions compares our current version and the latest available runner
# version and sets them as outputs.
check_versions:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
current_version: ${{ steps.versions.outputs.current_version }}
latest_version: ${{ steps.versions.outputs.latest_version }}
steps:
- uses: actions/checkout@v3
- name: Get current and latest versions
id: versions
run: |
CURRENT_VERSION=$(echo -n $(cat runner/VERSION))
echo "Current version: $CURRENT_VERSION"
echo current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
# check_pr checks if a PR for the same update already exists. It only runs if
# runner latest version != our current version. If no existing PR is found,
# it sets a PR name as output.
check_pr:
runs-on: ubuntu-latest
needs: check_versions
if: needs.check_versions.outputs.current_version != needs.check_versions.outputs.latest_version
outputs:
pr_name: ${{ steps.pr_name.outputs.pr_name }}
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: debug
run: |
echo ${{ needs.check_versions.outputs.current_version }}
echo ${{ needs.check_versions.outputs.latest_version }}
- uses: actions/checkout@v3
- name: PR Name
id: pr_name
env:
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
run: |
PR_NAME="Update runner to version ${LATEST_VERSION}"
result=$(gh pr list --search "$PR_NAME" --json number --jq ".[].number" --limit 1)
if [ -z "$result" ]
then
echo "No existing PRs found, setting output with pr_name=$PR_NAME"
echo pr_name=$PR_NAME >> $GITHUB_OUTPUT
else
echo "Found a PR with title '$PR_NAME' already existing: ${{ github.server_url }}/${{ github.repository }}/pull/$result"
fi
# update_version updates runner version in the files listed below, commits
# the changes and opens a pull request as `github-actions` bot.
update_version:
runs-on: ubuntu-latest
needs:
- check_versions
- check_pr
if: needs.check_pr.outputs.pr_name
permissions:
pull-requests: write
contents: write
actions: write
env:
GH_TOKEN: ${{ github.token }}
CURRENT_VERSION: ${{ needs.check_versions.outputs.current_version }}
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
PR_NAME: ${{ needs.check_pr.outputs.pr_name }}
steps:
- uses: actions/checkout@v3
- name: New branch
run: git checkout -b update-runner-$LATEST_VERSION
- name: Update files
run: |
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/VERSION
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" .github/workflows/e2e-test-linux-vm.yaml
- name: Commit changes
run: |
# from https://github.com/orgs/community/discussions/26560
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
git add .
git commit -m "$PR_NAME"
git push -u origin HEAD
- name: Create pull request
run: gh pr create -f
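The version extraction in check_versions leans on a lookbehind regex; a sketch of how it behaves on a hypothetical `gh release list` row:

```bash
# grep -oP '(?<=v)[0-9.]+' prints every digits-and-dots run preceded by "v";
# head -1 keeps only the first match, i.e. the latest release version.
echo 'v2.303.0  Latest  v2.303.0  2023-03-01' | grep -oP '(?<=v)[0-9.]+' | head -1
# -> 2.303.0
```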


@@ -1,60 +0,0 @@
name: Validate ARC
on:
pull_request:
branches:
- master
paths-ignore:
- '**.md'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/publish-canary.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
permissions:
contents: read
jobs:
test-controller:
name: Test ARC
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set-up Go
uses: actions/setup-go@v3
with:
go-version: '1.18.2'
check-latest: false
- uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
- name: Run tests
run: |
make test
- name: Verify manifests are up-to-date
run: |
make manifests
git diff --exit-code


@@ -1,12 +1,24 @@
name: Validate Helm Chart
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
@@ -26,7 +38,8 @@ jobs:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.4
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
@@ -78,5 +91,6 @@ jobs:
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: |
ct install --config charts/.ci/ct-config.yaml


@@ -0,0 +1,134 @@
name: Validate Helm Chart (gha-runner-scale-set-controller and gha-runner-scale-set)
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.16.1
HELM_VERSION: v3.8.0
permissions:
contents: read
jobs:
validate-chart:
name: Lint Chart
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Set up latest version chart-testing
run: |
echo 'deb [trusted=yes] https://repo.goreleaser.com/apt/ /' | sudo tee /etc/apt/sources.list.d/goreleaser.list
sudo apt update
sudo apt install goreleaser
git clone https://github.com/helm/chart-testing
cd chart-testing
unset CT_CONFIG_DIR
goreleaser build --clean --skip-validate
./dist/chart-testing_linux_amd64_v1/ct version
echo 'Adding ct directory to PATH...'
echo "$RUNNER_TEMP/chart-testing/dist/chart-testing_linux_amd64_v1" >> "$GITHUB_PATH"
echo 'Setting CT_CONFIG_DIR...'
echo "CT_CONFIG_DIR=$RUNNER_TEMP/chart-testing/etc" >> "$GITHUB_ENV"
working-directory: ${{ runner.temp }}
- name: Run chart-testing (list-changed)
id: list-changed
run: |
ct version
changed=$(ct list-changed --config charts/.ci/ct-config-gha.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config-gha.yaml
- name: Set up docker buildx
uses: docker/setup-buildx-action@v2
if: steps.list-changed.outputs.changed == 'true'
with:
version: latest
- name: Build controller image
uses: docker/build-push-action@v3
if: steps.list-changed.outputs.changed == 'true'
with:
file: Dockerfile
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=test-arc
VERSION=dev
tags: |
test-arc:dev
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Create kind cluster
uses: helm/kind-action@v1.4.0
if: steps.list-changed.outputs.changed == 'true'
with:
cluster_name: chart-testing
- name: Load image into cluster
if: steps.list-changed.outputs.changed == 'true'
run: |
export DOCKER_IMAGE_NAME=test-arc
export VERSION=dev
export IMG_RESULT=load
make docker-buildx
kind load docker-image test-arc:dev --name chart-testing
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: |
ct install --config charts/.ci/ct-config-gha.yaml

.gitignore

@@ -29,6 +29,7 @@ bin
.env
.test.env
*.pem
!github/actions/testdata/*.pem
# OS
.DS_STORE


@@ -1,2 +1,2 @@
# actions-runner-controller maintainers
* @mumoshu @toast-gear @actions/actions-runtime
* @mumoshu @toast-gear @actions/actions-runtime @nikola-jokic


@@ -37,8 +37,10 @@ RUN --mount=target=. \
--mount=type=cache,mode=0777,target=${GOCACHE} \
export GOOS=${TARGETOS} GOARCH=${TARGETARCH} GOARM=${TARGETVARIANT#v} && \
go build -trimpath -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=${VERSION}'" -o /out/manager main.go && \
go build -trimpath -ldflags="-s -w" -o /out/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener && \
go build -trimpath -ldflags="-s -w" -o /out/github-webhook-server ./cmd/githubwebhookserver && \
go build -trimpath -ldflags="-s -w" -o /out/actions-metrics-server ./cmd/actionsmetricsserver
go build -trimpath -ldflags="-s -w" -o /out/actions-metrics-server ./cmd/actionsmetricsserver && \
go build -trimpath -ldflags="-s -w" -o /out/sleep ./cmd/sleep
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
@@ -49,6 +51,8 @@ WORKDIR /
COPY --from=builder /out/manager .
COPY --from=builder /out/github-webhook-server .
COPY --from=builder /out/actions-metrics-server .
COPY --from=builder /out/github-runnerscaleset-listener .
COPY --from=builder /out/sleep .
USER 65532:65532


@@ -1,11 +1,11 @@
ifdef DOCKER_USER
NAME ?= ${DOCKER_USER}/actions-runner-controller
DOCKER_IMAGE_NAME ?= ${DOCKER_USER}/actions-runner-controller
else
NAME ?= summerwind/actions-runner-controller
DOCKER_IMAGE_NAME ?= summerwind/actions-runner-controller
endif
DOCKER_USER ?= $(shell echo ${NAME} | cut -d / -f1)
DOCKER_USER ?= $(shell echo ${DOCKER_IMAGE_NAME} | cut -d / -f1)
VERSION ?= dev
RUNNER_VERSION ?= 2.300.2
RUNNER_VERSION ?= 2.303.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
@@ -73,7 +73,7 @@ GO_TEST_ARGS ?= -short
# Run tests
test: generate fmt vet manifests shellcheck
go test $(GO_TEST_ARGS) ./... -coverprofile cover.out
go test $(GO_TEST_ARGS) `go list ./... | grep -v ./test_e2e_arc` -coverprofile cover.out
go test -fuzz=Fuzz -fuzztime=10s -run=Fuzz* ./controllers/actions.summerwind.net
test-with-deps: kube-apiserver etcd kubectl
@@ -86,14 +86,20 @@ test-with-deps: kube-apiserver etcd kubectl
# Build manager binary
manager: generate fmt vet
go build -o bin/manager main.go
go build -o bin/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener
# Run against the configured Kubernetes cluster in ~/.kube/config
run: generate fmt vet manifests
go run ./main.go
run-scaleset: generate fmt vet
CONTROLLER_MANAGER_POD_NAMESPACE=default \
CONTROLLER_MANAGER_CONTAINER_IMAGE="${DOCKER_IMAGE_NAME}:${VERSION}" \
go run ./main.go --auto-scaling-runner-set-only
# Install CRDs into a cluster
install: manifests
kustomize build config/crd | kubectl apply -f -
kustomize build config/crd | kubectl apply --server-side -f -
# Uninstall CRDs from a cluster
uninstall: manifests
@@ -101,8 +107,8 @@ uninstall: manifests
# Deploy controller in the configured Kubernetes cluster in ~/.kube/config
deploy: manifests
cd config/manager && kustomize edit set image controller=${NAME}:${VERSION}
kustomize build config/default | kubectl apply -f -
cd config/manager && kustomize edit set image controller=${DOCKER_IMAGE_NAME}:${VERSION}
kustomize build config/default | kubectl apply --server-side -f -
# Generate manifests e.g. CRD, RBAC etc.
manifests: manifests-gen-crds chart-crds
@@ -112,9 +118,75 @@ manifests-gen-crds: controller-gen yq
for YAMLFILE in config/crd/bases/actions*.yaml; do \
$(YQ) '.spec.preserveUnknownFields = false' --inplace "$$YAMLFILE" ; \
done
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-type
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-map-keys
manifests-gen-crds-fix: DELETE_KEY ?=
manifests-gen-crds-fix:
#runners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
#runnerreplicasets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
#runnerdeployments
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
#runnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumeClaimTemplates.items.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
#autoscalingrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
#ephemeralrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
# ephemeralrunners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
chart-crds:
cp config/crd/bases/*.yaml charts/actions-runner-controller/crds/
cp config/crd/bases/actions.github.com_autoscalingrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_autoscalinglisteners.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunners.yaml charts/gha-runner-scale-set-controller/crds/
rm charts/actions-runner-controller/crds/actions.github.com_autoscalingrunnersets.yaml
rm charts/actions-runner-controller/crds/actions.github.com_autoscalinglisteners.yaml
rm charts/actions-runner-controller/crds/actions.github.com_ephemeralrunnersets.yaml
rm charts/actions-runner-controller/crds/actions.github.com_ephemeralrunners.yaml
# Run go fmt against code
fmt:
@@ -142,18 +214,18 @@ docker-buildx:
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
--build-arg VERSION=${VERSION} \
-t "${NAME}:${VERSION}" \
-t "${DOCKER_IMAGE_NAME}:${VERSION}" \
-f Dockerfile \
. ${PUSH_ARG}
# Push the docker image
docker-push:
docker push ${NAME}:${VERSION}
docker push ${DOCKER_IMAGE_NAME}:${VERSION}
docker push ${RUNNER_NAME}:${RUNNER_TAG}
# Generate the release manifest file
release: manifests
cd config/manager && kustomize edit set image controller=${NAME}:${VERSION}
cd config/manager && kustomize edit set image controller=${DOCKER_IMAGE_NAME}:${VERSION}
mkdir -p release
kustomize build config/default > release/actions-runner-controller.yaml
@@ -177,7 +249,7 @@ acceptance/kind:
# Otherwise `load docker-image` fail while running `docker save`.
# See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap
acceptance/load:
kind load docker-image ${NAME}:${VERSION} --name ${CLUSTER}
kind load docker-image ${DOCKER_IMAGE_NAME}:${VERSION} --name ${CLUSTER}
kind load docker-image quay.io/brancz/kube-rbac-proxy:$(KUBE_RBAC_PROXY_VERSION) --name ${CLUSTER}
kind load docker-image ${RUNNER_NAME}:${RUNNER_TAG} --name ${CLUSTER}
kind load docker-image docker:dind --name ${CLUSTER}
@@ -207,7 +279,7 @@ acceptance/teardown:
kind delete cluster --name ${CLUSTER}
acceptance/deploy:
NAME=${NAME} DOCKER_USER=${DOCKER_USER} VERSION=${VERSION} RUNNER_NAME=${RUNNER_NAME} RUNNER_TAG=${RUNNER_TAG} TEST_REPO=${TEST_REPO} \
DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME} DOCKER_USER=${DOCKER_USER} VERSION=${VERSION} RUNNER_NAME=${RUNNER_NAME} RUNNER_TAG=${RUNNER_TAG} TEST_REPO=${TEST_REPO} \
TEST_ORG=${TEST_ORG} TEST_ORG_REPO=${TEST_ORG_REPO} SYNC_PERIOD=${SYNC_PERIOD} \
USE_RUNNERSET=${USE_RUNNERSET} \
TEST_EPHEMERAL=${TEST_EPHEMERAL} \

PROJECT

@@ -10,4 +10,16 @@ resources:
- group: actions
kind: RunnerDeployment
version: v1alpha1
- group: actions
kind: AutoscalingRunnerSet
version: v1alpha1
- group: actions
kind: EphemeralRunnerSet
version: v1alpha1
- group: actions
kind: EphemeralRunner
version: v1alpha1
- group: actions
kind: AutoscalingListener
version: v1alpha1
version: "2"


@@ -17,8 +17,8 @@
A list of tools which are helpful for troubleshooting
* https://github.com/rewanthtammana/kubectl-fields Kubernetes resources hierarchy parsing tool
* https://github.com/stern/stern Multi pod and container log tailing for Kubernetes
* [Kubernetes resources hierarchy parsing tool `kubectl-fields`](https://github.com/rewanthtammana/kubectl-fields)
* [Multi pod and container log tailing for Kubernetes `stern`](https://github.com/stern/stern)
## Installation
@@ -30,7 +30,7 @@ Troubleshooting runbooks that relate to ARC installation problems
This issue can come up for various reasons, such as leftovers from previous installations, or the control plane being unable to reach the clusterIP of the K8s Service associated with ARC's admission webhook server.
```text
Internal error occurred: failed calling webhook "mutate.runnerdeployment.actions.summerwind.dev":
Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment?timeout=10s": context deadline exceeded
```
@@ -39,22 +39,24 @@ Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mu
First we will try the common solution of checking webhook leftovers from previous installations:
1. ```bash
   kubectl get validatingwebhookconfiguration -A
   kubectl get mutatingwebhookconfiguration -A
   ```
2. If you see any webhooks related to actions-runner-controller, delete them:
```bash
kubectl delete mutatingwebhookconfiguration actions-runner-controller-mutating-webhook-configuration
kubectl delete validatingwebhookconfiguration actions-runner-controller-validating-webhook-configuration
```
If that didn't work, then probably your K8s control plane is unable to reach the K8s service's clusterIP associated with the admission webhook server. Two common causes:
1. You're running the apiserver as a binary and you didn't make service cluster IPs available to the host network.
2. You're running the apiserver in a pod, but your pod network (i.e. CNI plugin installation and config) is broken, so pods on the K8s control-plane nodes (like kube-apiserver) can't reach ARC's admission webhook server pod(s), which typically run on data-plane nodes.
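A quick reachability check (a diagnostic sketch, assuming the webhook service name shown in the error above; note it probes from a regular pod, so the control plane's own network path may still differ):

```bash
# Probe the webhook service from a throwaway pod.
# A TLS/certificate error is fine here; a timeout suggests a broken network path.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  -k -m 5 https://actions-runner-controller-webhook.actions-runner-system.svc:443/
```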
Another reason could be GKE's firewall settings: you may run into the following errors when trying to deploy runners on a private GKE cluster:
To fix this, you may either:
@@ -65,7 +67,7 @@ To fix this, you may either:
# With helm, you'd set `webhookPort` to the port number of your choice
# See https://github.com/actions/actions-runner-controller/pull/1410/files for more information
helm upgrade --install --namespace actions-runner-system --create-namespace \
--wait actions-runner-controller actions/actions-runner-controller \
--wait actions-runner-controller actions-runner-controller/actions-runner-controller \
--set webhookPort=10250
```
@@ -93,7 +95,7 @@ To fix this, you may either:
**Problem**
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error
{
"controller": "runner",
"request": "actions-runner-system/runner-deployment-dk7q8-dk5c9",
@@ -104,6 +106,7 @@ To fix this, you may either:
**Solution**
Your base64'ed PAT token has a newline at the end; it needs to be created without a trailing `\n`. Either:
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs using the shell and documented flags
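The trailing newline is what corrupts the encoding; for example (a sketch, with `$TOKEN` standing in for your PAT):

```bash
# echo appends a newline, so the two encodings differ:
echo "$TOKEN" | base64      # encodes "TOKEN\n" -- wrong
echo -n "$TOKEN" | base64   # encodes "TOKEN" only -- correct
# Letting kubectl encode the value avoids the problem entirely
# (secret name/key as used in ARC's installation docs):
kubectl create secret generic controller-manager \
  -n actions-runner-system \
  --from-literal=github_token="$TOKEN"
```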
@@ -111,7 +114,7 @@ Your base64'ed PAT token has a new line at the end, it needs to be created witho
**Problem**
```
```text
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
```
@@ -119,7 +122,7 @@ Apparently, it's failing while `helm` is creating one of resources defined in th
You'd try to tail logs from the `cert-manager-cainjector` and see it's failing with an error like:
```
```text
$ kubectl -n cert-manager logs cert-manager-cainjector-7cdbb9c945-g6bt4
I0703 03:31:55.159339 1 start.go:91] "starting" version="v1.1.1" revision="3ac7418070e22c87fae4b22603a6b952f797ae96"
I0703 03:31:55.615061 1 leaderelection.go:243] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election...
@@ -137,7 +140,7 @@ Your cluster is based on a new enough Kubernetes of version 1.22 or greater whic
In many cases, it's not an option to downgrade Kubernetes. So, just upgrade `cert-manager` to a more recent version that does have support for the specific Kubernetes version you're using.
See https://cert-manager.io/docs/installation/supported-releases/ for the list of available cert-manager versions.
See <https://cert-manager.io/docs/installation/supported-releases/> for the list of available cert-manager versions.
## Operations
@@ -153,7 +156,7 @@ Sometimes either the runner kind (`kubectl get runners`) or its underlying pod
Remove the finalizer from the relevant runner kind or pod
```
```text
# Get all kind runners and remove the finalizer
$ kubectl get runners --no-headers | awk {'print $1'} | xargs kubectl patch runner --type merge -p '{"metadata":{"finalizers":null}}'
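# The same patch works for a single stuck resource; for example, using the pod
# name from the error log above (an illustration -- substitute your own stuck pod):
$ kubectl patch pod runner-deployment-dk7q8-dk5c9 --type merge -p '{"metadata":{"finalizers":null}}'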
@@ -195,7 +198,7 @@ spec:
If you're running your action runners on a service mesh like Istio, you might
have problems with runner configuration accompanied by logs like:
```
```text
....
runner Starting Runner listener with startup type: service
runner Started listener process
@@ -210,7 +213,7 @@ configuration script tries to communicate with the network.
More broadly, there are many other circumstances where the runner pod coming up first can cause issues.
**Solution**<br />
**Solution**
> Added originally to help users with older istio instances.
> Newer Istio instances can use Istio's `holdApplicationUntilProxyStarts` attribute ([istio/istio#11130](https://github.com/istio/istio/issues/11130)) to avoid having to delay starting up the runner.
@@ -232,7 +235,7 @@ spec:
value: "5"
```
## Outgoing network action hangs indefinitely
### Outgoing network action hangs indefinitely
**Problem**
@@ -278,9 +281,9 @@ spec:
```
You can read the discussion regarding this issue in
(#1406)[https://github.com/actions/actions-runner-controller/issues/1046].
[#1406](https://github.com/actions/actions-runner-controller/issues/1046).
## Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns
### Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns
**Problem**
@@ -292,7 +295,7 @@ You very likely have some dangling workflow jobs stuck in `queued` or `in_progre
Manually call [the "list workflow runs" API](https://docs.github.com/en/rest/actions/workflow-runs#list-workflow-runs-for-a-repository), and [remove the dangling workflow job(s)](https://docs.github.com/en/rest/actions/workflow-runs#delete-a-workflow-run).
## Slow / failure to boot dind sidecar (default runner)
### Slow / failure to boot dind sidecar (default runner)
**Problem**
@@ -300,4 +303,4 @@ If you noticed that it takes several minutes for sidecar dind container to be cr
**Solution**
The solution is to switch to using faster storage, if you are experiencing this issue you are probably using hdd, switch to ssh fixed the problem in my case. Most cloud providers have a list of storage options to use just pick something faster that your current disk, for on prem clusters you will need to invest in some ssds.
The solution is to switch to faster storage. If you are experiencing this issue, you are probably using HDD storage; switching to SSD storage fixed the problem in my case. Most cloud providers offer a range of storage options, so pick something faster than your current disk; for on-prem clusters you will need to invest in some SSDs.

View File

@@ -35,7 +35,7 @@ else
echo 'Skipped deploying secret "github-webhook-server". Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
fi
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ]; then
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ] && [ -z "${CREATE_SECRETS_USING_HELM}" ]; then
kubectl -n actions-runner-system delete secret \
actions-metrics-server || :
kubectl -n actions-runner-system create secret generic \
@@ -61,6 +61,9 @@ if [ "${tool}" == "helm" ]; then
flags+=( --set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set actionsMetricsServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
fi
if [ "${WATCH_NAMESPACE}" != "" ]; then
flags+=( --set watchNamespace=${WATCH_NAMESPACE} --set singleNamespace=true)
fi
if [ "${CHART_VERSION}" != "" ]; then
flags+=( --version ${CHART_VERSION})
fi
@@ -69,6 +72,21 @@ if [ "${tool}" == "helm" ]; then
flags+=( --set githubWebhookServer.logFormat=${LOG_FORMAT})
flags+=( --set actionsMetricsServer.logFormat=${LOG_FORMAT})
fi
if [ "${ADMISSION_WEBHOOKS_TIMEOUT}" != "" ]; then
flags+=( --set admissionWebHooks.timeoutSeconds=${ADMISSION_WEBHOOKS_TIMEOUT})
fi
if [ -n "${CREATE_SECRETS_USING_HELM}" ]; then
if [ -z "${WEBHOOK_GITHUB_TOKEN}" ]; then
echo 'Failed deploying secret "actions-metrics-server" using helm. Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
exit 1
fi
flags+=( --set actionsMetricsServer.secret.create=true)
flags+=( --set actionsMetricsServer.secret.github_token=${WEBHOOK_GITHUB_TOKEN})
fi
if [ -n "${GITHUB_WEBHOOK_SERVER_ENV_NAME}" ] && [ -n "${GITHUB_WEBHOOK_SERVER_ENV_VALUE}" ]; then
flags+=( --set githubWebhookServer.env[0].name=${GITHUB_WEBHOOK_SERVER_ENV_NAME})
flags+=( --set githubWebhookServer.env[0].value=${GITHUB_WEBHOOK_SERVER_ENV_VALUE})
fi
set -vx
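For reference, a hypothetical invocation that exercises the options added above (the variable names are the script's own; the `acceptance/deploy` target is assumed from the Makefile excerpt earlier):

```bash
WEBHOOK_GITHUB_TOKEN=<your-webhook-token> \
CREATE_SECRETS_USING_HELM=true \
WATCH_NAMESPACE=actions-runner-system \
ADMISSION_WEBHOOKS_TIMEOUT=30 \
  make acceptance/deploy
```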

View File

@@ -1,18 +0,0 @@
# Title
<!-- ADR titles should typically be imperative sentences. -->
**Status**: (Proposed|Accepted|Rejected|Superseded|Deprecated)
## Context
*What is the issue or background knowledge necessary for future readers
to understand why this ADR was written?*
## Decision
**What** is the change being proposed? / **How** will it be implemented?
## Consequences
*What becomes easier or more difficult to do because of this change?*

View File

@@ -0,0 +1,97 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// AutoscalingListenerSpec defines the desired state of AutoscalingListener
type AutoscalingListenerSpec struct {
// Required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// Required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// Required
RunnerScaleSetId int `json:"runnerScaleSetId,omitempty"`
// Required
AutoscalingRunnerSetNamespace string `json:"autoscalingRunnerSetNamespace,omitempty"`
// Required
AutoscalingRunnerSetName string `json:"autoscalingRunnerSetName,omitempty"`
// Required
EphemeralRunnerSetName string `json:"ephemeralRunnerSetName,omitempty"`
// Required
// +kubebuilder:validation:Minimum:=0
MaxRunners int `json:"maxRunners,omitempty"`
// Required
// +kubebuilder:validation:Minimum:=0
MinRunners int `json:"minRunners,omitempty"`
// Required
Image string `json:"image,omitempty"`
// Required
ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
// Required
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
}
// AutoscalingListenerStatus defines the observed state of AutoscalingListener
type AutoscalingListenerStatus struct{}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".spec.githubConfigUrl",name=GitHub Configure URL,type=string
//+kubebuilder:printcolumn:JSONPath=".spec.autoscalingRunnerSetNamespace",name=AutoscalingRunnerSet Namespace,type=string
//+kubebuilder:printcolumn:JSONPath=".spec.autoscalingRunnerSetName",name=AutoscalingRunnerSet Name,type=string
// AutoscalingListener is the Schema for the autoscalinglisteners API
type AutoscalingListener struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec AutoscalingListenerSpec `json:"spec,omitempty"`
Status AutoscalingListenerStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// AutoscalingListenerList contains a list of AutoscalingListener
type AutoscalingListenerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []AutoscalingListener `json:"items"`
}
func init() {
SchemeBuilder.Register(&AutoscalingListener{}, &AutoscalingListenerList{})
}

View File

@@ -0,0 +1,289 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"crypto/x509"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/actions/actions-runner-controller/hash"
"golang.org/x/net/http/httpproxy"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".spec.minRunners",name=Minimum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".spec.maxRunners",name=Maximum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.currentRunners",name=Current Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.state",name=State,type=string
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// AutoscalingRunnerSet is the Schema for the autoscalingrunnersets API
type AutoscalingRunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec AutoscalingRunnerSetSpec `json:"spec,omitempty"`
Status AutoscalingRunnerSetStatus `json:"status,omitempty"`
}
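The `printcolumn` markers above determine what `kubectl get` displays for this resource; illustratively (a sketch with made-up values, columns abbreviated):

```bash
kubectl get autoscalingrunnersets -A
# NAMESPACE     NAME      MINIMUM RUNNERS   MAXIMUM RUNNERS   CURRENT RUNNERS   STATE   PENDING RUNNERS   RUNNING RUNNERS   ...
# arc-runners   arc-set   0                 5                 1                         0                 1
```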
// AutoscalingRunnerSetSpec defines the desired state of AutoscalingRunnerSet
type AutoscalingRunnerSetSpec struct {
// Required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// Required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// +optional
RunnerGroup string `json:"runnerGroup,omitempty"`
// +optional
RunnerScaleSetName string `json:"runnerScaleSetName,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
// Required
Template corev1.PodTemplateSpec `json:"template,omitempty"`
// +optional
// +kubebuilder:validation:Minimum:=0
MaxRunners *int `json:"maxRunners,omitempty"`
// +optional
// +kubebuilder:validation:Minimum:=0
MinRunners *int `json:"minRunners,omitempty"`
}
type GitHubServerTLSConfig struct {
// Required
CertificateFrom *TLSCertificateSource `json:"certificateFrom,omitempty"`
}
func (c *GitHubServerTLSConfig) ToCertPool(keyFetcher func(name, key string) ([]byte, error)) (*x509.CertPool, error) {
if c.CertificateFrom == nil {
return nil, fmt.Errorf("certificateFrom not specified")
}
if c.CertificateFrom.ConfigMapKeyRef == nil {
return nil, fmt.Errorf("configMapKeyRef not specified")
}
cert, err := keyFetcher(c.CertificateFrom.ConfigMapKeyRef.Name, c.CertificateFrom.ConfigMapKeyRef.Key)
if err != nil {
return nil, fmt.Errorf(
"failed to fetch key %q in configmap %q: %w",
c.CertificateFrom.ConfigMapKeyRef.Key,
c.CertificateFrom.ConfigMapKeyRef.Name,
err,
)
}
systemPool, err := x509.SystemCertPool()
if err != nil {
return nil, fmt.Errorf("failed to get system cert pool: %w", err)
}
pool := systemPool.Clone()
if !pool.AppendCertsFromPEM(cert) {
return nil, fmt.Errorf("failed to parse certificate")
}
return pool, nil
}
type TLSCertificateSource struct {
// Required
ConfigMapKeyRef *corev1.ConfigMapKeySelector `json:"configMapKeyRef,omitempty"`
}
type ProxyConfig struct {
// +optional
HTTP *ProxyServerConfig `json:"http,omitempty"`
// +optional
HTTPS *ProxyServerConfig `json:"https,omitempty"`
// +optional
NoProxy []string `json:"noProxy,omitempty"`
}
func (c *ProxyConfig) toHTTPProxyConfig(secretFetcher func(string) (*corev1.Secret, error)) (*httpproxy.Config, error) {
config := &httpproxy.Config{
NoProxy: strings.Join(c.NoProxy, ","),
}
if c.HTTP != nil {
u, err := url.Parse(c.HTTP.Url)
if err != nil {
return nil, fmt.Errorf("failed to parse proxy http url %q: %w", c.HTTP.Url, err)
}
if c.HTTP.CredentialSecretRef != "" {
secret, err := secretFetcher(c.HTTP.CredentialSecretRef)
if err != nil {
return nil, fmt.Errorf(
"failed to get secret %s for http proxy: %w",
c.HTTP.CredentialSecretRef,
err,
)
}
u.User = url.UserPassword(
string(secret.Data["username"]),
string(secret.Data["password"]),
)
}
config.HTTPProxy = u.String()
}
if c.HTTPS != nil {
u, err := url.Parse(c.HTTPS.Url)
if err != nil {
return nil, fmt.Errorf("failed to parse proxy https url %q: %w", c.HTTPS.Url, err)
}
if c.HTTPS.CredentialSecretRef != "" {
secret, err := secretFetcher(c.HTTPS.CredentialSecretRef)
if err != nil {
return nil, fmt.Errorf(
"failed to get secret %s for https proxy: %w",
c.HTTPS.CredentialSecretRef,
err,
)
}
u.User = url.UserPassword(
string(secret.Data["username"]),
string(secret.Data["password"]),
)
}
config.HTTPSProxy = u.String()
}
return config, nil
}
func (c *ProxyConfig) ToSecretData(secretFetcher func(string) (*corev1.Secret, error)) (map[string][]byte, error) {
config, err := c.toHTTPProxyConfig(secretFetcher)
if err != nil {
return nil, err
}
data := map[string][]byte{}
data["http_proxy"] = []byte(config.HTTPProxy)
data["https_proxy"] = []byte(config.HTTPSProxy)
data["no_proxy"] = []byte(config.NoProxy)
return data, nil
}
func (c *ProxyConfig) ProxyFunc(secretFetcher func(string) (*corev1.Secret, error)) (func(*http.Request) (*url.URL, error), error) {
config, err := c.toHTTPProxyConfig(secretFetcher)
if err != nil {
return nil, err
}
proxyFunc := func(req *http.Request) (*url.URL, error) {
return config.ProxyFunc()(req.URL)
}
return proxyFunc, nil
}
type ProxyServerConfig struct {
// Required
Url string `json:"url,omitempty"`
// +optional
CredentialSecretRef string `json:"credentialSecretRef,omitempty"`
}
// AutoscalingRunnerSetStatus defines the observed state of AutoscalingRunnerSet
type AutoscalingRunnerSetStatus struct {
// +optional
CurrentRunners int `json:"currentRunners"`
// +optional
State string `json:"state"`
// EphemeralRunner counts separated by the stage ephemeral runners are in, taken from the EphemeralRunnerSet
//+optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
func (ars *AutoscalingRunnerSet) ListenerSpecHash() string {
arsSpec := ars.Spec.DeepCopy()
spec := arsSpec
return hash.ComputeTemplateHash(&spec)
}
func (ars *AutoscalingRunnerSet) RunnerSetSpecHash() string {
type runnerSetSpec struct {
GitHubConfigUrl string
GitHubConfigSecret string
RunnerGroup string
RunnerScaleSetName string
Proxy *ProxyConfig
GitHubServerTLS *GitHubServerTLSConfig
Template corev1.PodTemplateSpec
}
spec := &runnerSetSpec{
GitHubConfigUrl: ars.Spec.GitHubConfigUrl,
GitHubConfigSecret: ars.Spec.GitHubConfigSecret,
RunnerGroup: ars.Spec.RunnerGroup,
RunnerScaleSetName: ars.Spec.RunnerScaleSetName,
Proxy: ars.Spec.Proxy,
GitHubServerTLS: ars.Spec.GitHubServerTLS,
Template: ars.Spec.Template,
}
return hash.ComputeTemplateHash(&spec)
}
//+kubebuilder:object:root=true
// AutoscalingRunnerSetList contains a list of AutoscalingRunnerSet
type AutoscalingRunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []AutoscalingRunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&AutoscalingRunnerSet{}, &AutoscalingRunnerSetList{})
}

View File

@@ -0,0 +1,133 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.githubConfigUrl",name="GitHub Config URL",type=string
// +kubebuilder:printcolumn:JSONPath=".status.runnerId",name=RunnerId,type=number
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.jobRepositoryName",name=JobRepository,type=string
// +kubebuilder:printcolumn:JSONPath=".status.jobWorkflowRef",name=JobWorkflowRef,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflowRunId",name=WorkflowRunId,type=number
// +kubebuilder:printcolumn:JSONPath=".status.jobDisplayName",name=JobDisplayName,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// EphemeralRunner is the Schema for the ephemeralrunners API
type EphemeralRunner struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec EphemeralRunnerSpec `json:"spec,omitempty"`
Status EphemeralRunnerStatus `json:"status,omitempty"`
}
// EphemeralRunnerSpec defines the desired state of EphemeralRunner
type EphemeralRunnerSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// +required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// +required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// +required
RunnerScaleSetId int `json:"runnerScaleSetId,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
ProxySecretRef string `json:"proxySecretRef,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
// +required
corev1.PodTemplateSpec `json:",inline"`
}
// EphemeralRunnerStatus defines the observed state of EphemeralRunner
type EphemeralRunnerStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Turns true only if the runner is online.
// +optional
Ready bool `json:"ready"`
// Phase describes phases where EphemeralRunner can be in.
// The underlying type is a PodPhase, but the meaning is more restrictive
//
// The PodFailed phase should be set only when EphemeralRunner fails to start
// after multiple retries. That signals that this EphemeralRunner won't work,
// and manual inspection is required
//
// The PodSucceeded phase should be set only when it is confirmed that the EphemeralRunner
// actually executed the job and has been removed from the service.
// +optional
Phase corev1.PodPhase `json:"phase,omitempty"`
// +optional
Reason string `json:"reason,omitempty"`
// +optional
Message string `json:"message,omitempty"`
// +optional
RunnerId int `json:"runnerId,omitempty"`
// +optional
RunnerName string `json:"runnerName,omitempty"`
// +optional
RunnerJITConfig string `json:"runnerJITConfig,omitempty"`
// +optional
Failures map[string]bool `json:"failures,omitempty"`
// +optional
JobRequestId int64 `json:"jobRequestId,omitempty"`
// +optional
JobRepositoryName string `json:"jobRepositoryName,omitempty"`
// +optional
JobWorkflowRef string `json:"jobWorkflowRef,omitempty"`
// +optional
WorkflowRunId int64 `json:"workflowRunId,omitempty"`
// +optional
JobDisplayName string `json:"jobDisplayName,omitempty"`
}
//+kubebuilder:object:root=true
// EphemeralRunnerList contains a list of EphemeralRunner
type EphemeralRunnerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []EphemeralRunner `json:"items"`
}
func init() {
SchemeBuilder.Register(&EphemeralRunner{}, &EphemeralRunnerList{})
}

View File

@@ -0,0 +1,75 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EphemeralRunnerSetSpec defines the desired state of EphemeralRunnerSet
type EphemeralRunnerSetSpec struct {
// Replicas is the number of desired EphemeralRunner resources in the k8s namespace.
Replicas int `json:"replicas,omitempty"`
EphemeralRunnerSpec EphemeralRunnerSpec `json:"ephemeralRunnerSpec,omitempty"`
}
// EphemeralRunnerSetStatus defines the observed state of EphemeralRunnerSet
type EphemeralRunnerSetStatus struct {
// CurrentReplicas is the number of currently running EphemeralRunner resources being managed by this EphemeralRunnerSet.
CurrentReplicas int `json:"currentReplicas"`
// EphemeralRunner counts separated by the stage ephemeral runners are in
// +optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name="DesiredReplicas",type="integer"
// +kubebuilder:printcolumn:JSONPath=".status.currentReplicas", name="CurrentReplicas",type="integer"
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// EphemeralRunnerSet is the Schema for the ephemeralrunnersets API
type EphemeralRunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec EphemeralRunnerSetSpec `json:"spec,omitempty"`
Status EphemeralRunnerSetStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// EphemeralRunnerSetList contains a list of EphemeralRunnerSet
type EphemeralRunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []EphemeralRunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&EphemeralRunnerSet{}, &EphemeralRunnerSetList{})
}

View File

@@ -0,0 +1,36 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the actions.github.com v1alpha1 API group
// +kubebuilder:object:generate=true
// +groupName=actions.github.com
package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
var (
// GroupVersion is group version used to register these objects
GroupVersion = schema.GroupVersion{Group: "actions.github.com", Version: "v1alpha1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)

View File

@@ -0,0 +1,118 @@
package v1alpha1_test
import (
"net/http"
"testing"
corev1 "k8s.io/api/core/v1"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestProxyConfig_ToSecret(t *testing.T) {
config := &v1alpha1.ProxyConfig{
HTTP: &v1alpha1.ProxyServerConfig{
Url: "http://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
HTTPS: &v1alpha1.ProxyServerConfig{
Url: "https://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
NoProxy: []string{
"noproxy.example.com",
"noproxy2.example.com",
},
}
secretFetcher := func(string) (*corev1.Secret, error) {
return &corev1.Secret{
Data: map[string][]byte{
"username": []byte("username"),
"password": []byte("password"),
},
}, nil
}
result, err := config.ToSecretData(secretFetcher)
require.NoError(t, err)
require.NotNil(t, result)
assert.Equal(t, "http://username:password@proxy.example.com:8080", string(result["http_proxy"]))
assert.Equal(t, "https://username:password@proxy.example.com:8080", string(result["https_proxy"]))
assert.Equal(t, "noproxy.example.com,noproxy2.example.com", string(result["no_proxy"]))
}
func TestProxyConfig_ProxyFunc(t *testing.T) {
config := &v1alpha1.ProxyConfig{
HTTP: &v1alpha1.ProxyServerConfig{
Url: "http://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
HTTPS: &v1alpha1.ProxyServerConfig{
Url: "https://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
NoProxy: []string{
"noproxy.example.com",
"noproxy2.example.com",
},
}
secretFetcher := func(string) (*corev1.Secret, error) {
return &corev1.Secret{
Data: map[string][]byte{
"username": []byte("username"),
"password": []byte("password"),
},
}, nil
}
result, err := config.ProxyFunc(secretFetcher)
require.NoError(t, err)
tests := []struct {
name string
in string
out string
}{
{
name: "http target",
in: "http://target.com",
out: "http://username:password@proxy.example.com:8080",
},
{
name: "https target",
in: "https://target.com",
out: "https://username:password@proxy.example.com:8080",
},
{
name: "no proxy",
in: "https://noproxy.example.com",
out: "",
},
{
name: "no proxy 2",
in: "https://noproxy2.example.com",
out: "",
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", test.in, nil)
require.NoError(t, err)
u, err := result(req)
require.NoError(t, err)
if test.out == "" {
assert.Nil(t, u)
return
}
assert.Equal(t, test.out, u.String())
})
}
}

View File

@@ -0,0 +1,105 @@
package v1alpha1_test
import (
"crypto/tls"
"crypto/x509"
"net/http"
"os"
"path/filepath"
"testing"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/github/actions/testserver"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
)
func TestGitHubServerTLSConfig_ToCertPool(t *testing.T) {
t.Run("returns an error if CertificateFrom not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: nil,
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "certificateFrom not specified")
})
t.Run("returns an error if CertificateFrom.ConfigMapKeyRef not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{},
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "configMapKeyRef not specified")
})
t.Run("returns a valid cert pool with correct configuration", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{
ConfigMapKeyRef: &v1.ConfigMapKeySelector{
LocalObjectReference: v1.LocalObjectReference{
Name: "name",
},
Key: "key",
},
},
}
certsFolder := filepath.Join(
"../../../",
"github",
"actions",
"testdata",
)
fetcher := func(name, key string) ([]byte, error) {
cert, err := os.ReadFile(filepath.Join(certsFolder, "rootCA.crt"))
require.NoError(t, err)
pool := x509.NewCertPool()
ok := pool.AppendCertsFromPEM(cert)
assert.True(t, ok)
return cert, nil
}
pool, err := c.ToCertPool(fetcher)
require.NoError(t, err)
require.NotNil(t, pool)
// can be used to communicate with a server
serverSuccessfullyCalled := false
server := testserver.NewUnstarted(t, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
serverSuccessfullyCalled = true
w.WriteHeader(http.StatusOK)
}))
cert, err := tls.LoadX509KeyPair(
filepath.Join(certsFolder, "server.crt"),
filepath.Join(certsFolder, "server.key"),
)
require.NoError(t, err)
server.TLS = &tls.Config{Certificates: []tls.Certificate{cert}}
server.StartTLS()
client := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
},
},
}
_, err = client.Get(server.URL)
assert.NoError(t, err)
assert.True(t, serverSuccessfullyCalled)
})
}

View File

@@ -0,0 +1,523 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
"k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListener) DeepCopyInto(out *AutoscalingListener) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListener.
func (in *AutoscalingListener) DeepCopy() *AutoscalingListener {
if in == nil {
return nil
}
out := new(AutoscalingListener)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingListener) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerList) DeepCopyInto(out *AutoscalingListenerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]AutoscalingListener, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerList.
func (in *AutoscalingListenerList) DeepCopy() *AutoscalingListenerList {
if in == nil {
return nil
}
out := new(AutoscalingListenerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingListenerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerSpec) DeepCopyInto(out *AutoscalingListenerSpec) {
*out = *in
if in.ImagePullSecrets != nil {
in, out := &in.ImagePullSecrets, &out.ImagePullSecrets
*out = make([]v1.LocalObjectReference, len(*in))
copy(*out, *in)
}
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerSpec.
func (in *AutoscalingListenerSpec) DeepCopy() *AutoscalingListenerSpec {
if in == nil {
return nil
}
out := new(AutoscalingListenerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerStatus) DeepCopyInto(out *AutoscalingListenerStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerStatus.
func (in *AutoscalingListenerStatus) DeepCopy() *AutoscalingListenerStatus {
if in == nil {
return nil
}
out := new(AutoscalingListenerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSet) DeepCopyInto(out *AutoscalingRunnerSet) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSet.
func (in *AutoscalingRunnerSet) DeepCopy() *AutoscalingRunnerSet {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingRunnerSet) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetList) DeepCopyInto(out *AutoscalingRunnerSetList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]AutoscalingRunnerSet, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetList.
func (in *AutoscalingRunnerSetList) DeepCopy() *AutoscalingRunnerSetList {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingRunnerSetList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetSpec) DeepCopyInto(out *AutoscalingRunnerSetSpec) {
*out = *in
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
if in.MaxRunners != nil {
in, out := &in.MaxRunners, &out.MaxRunners
*out = new(int)
**out = **in
}
if in.MinRunners != nil {
in, out := &in.MinRunners, &out.MinRunners
*out = new(int)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetSpec.
func (in *AutoscalingRunnerSetSpec) DeepCopy() *AutoscalingRunnerSetSpec {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetStatus) DeepCopyInto(out *AutoscalingRunnerSetStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetStatus.
func (in *AutoscalingRunnerSetStatus) DeepCopy() *AutoscalingRunnerSetStatus {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunner) DeepCopyInto(out *EphemeralRunner) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunner.
func (in *EphemeralRunner) DeepCopy() *EphemeralRunner {
if in == nil {
return nil
}
out := new(EphemeralRunner)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunner) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerList) DeepCopyInto(out *EphemeralRunnerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]EphemeralRunner, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerList.
func (in *EphemeralRunnerList) DeepCopy() *EphemeralRunnerList {
if in == nil {
return nil
}
out := new(EphemeralRunnerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSet) DeepCopyInto(out *EphemeralRunnerSet) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSet.
func (in *EphemeralRunnerSet) DeepCopy() *EphemeralRunnerSet {
if in == nil {
return nil
}
out := new(EphemeralRunnerSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerSet) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetList) DeepCopyInto(out *EphemeralRunnerSetList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]EphemeralRunnerSet, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetList.
func (in *EphemeralRunnerSetList) DeepCopy() *EphemeralRunnerSetList {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerSetList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetSpec) DeepCopyInto(out *EphemeralRunnerSetSpec) {
*out = *in
in.EphemeralRunnerSpec.DeepCopyInto(&out.EphemeralRunnerSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetSpec.
func (in *EphemeralRunnerSetSpec) DeepCopy() *EphemeralRunnerSetSpec {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetStatus) DeepCopyInto(out *EphemeralRunnerSetStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetStatus.
func (in *EphemeralRunnerSetStatus) DeepCopy() *EphemeralRunnerSetStatus {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSpec) DeepCopyInto(out *EphemeralRunnerSpec) {
*out = *in
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
in.PodTemplateSpec.DeepCopyInto(&out.PodTemplateSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSpec.
func (in *EphemeralRunnerSpec) DeepCopy() *EphemeralRunnerSpec {
if in == nil {
return nil
}
out := new(EphemeralRunnerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerStatus) DeepCopyInto(out *EphemeralRunnerStatus) {
*out = *in
if in.Failures != nil {
in, out := &in.Failures, &out.Failures
*out = make(map[string]bool, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerStatus.
func (in *EphemeralRunnerStatus) DeepCopy() *EphemeralRunnerStatus {
if in == nil {
return nil
}
out := new(EphemeralRunnerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubServerTLSConfig) DeepCopyInto(out *GitHubServerTLSConfig) {
*out = *in
if in.CertificateFrom != nil {
in, out := &in.CertificateFrom, &out.CertificateFrom
*out = new(TLSCertificateSource)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubServerTLSConfig.
func (in *GitHubServerTLSConfig) DeepCopy() *GitHubServerTLSConfig {
if in == nil {
return nil
}
out := new(GitHubServerTLSConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProxyConfig) DeepCopyInto(out *ProxyConfig) {
*out = *in
if in.HTTP != nil {
in, out := &in.HTTP, &out.HTTP
*out = new(ProxyServerConfig)
**out = **in
}
if in.HTTPS != nil {
in, out := &in.HTTPS, &out.HTTPS
*out = new(ProxyServerConfig)
**out = **in
}
if in.NoProxy != nil {
in, out := &in.NoProxy, &out.NoProxy
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyConfig.
func (in *ProxyConfig) DeepCopy() *ProxyConfig {
if in == nil {
return nil
}
out := new(ProxyConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProxyServerConfig) DeepCopyInto(out *ProxyServerConfig) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyServerConfig.
func (in *ProxyServerConfig) DeepCopy() *ProxyServerConfig {
if in == nil {
return nil
}
out := new(ProxyServerConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TLSCertificateSource) DeepCopyInto(out *TLSCertificateSource) {
*out = *in
if in.ConfigMapKeyRef != nil {
in, out := &in.ConfigMapKeyRef, &out.ConfigMapKeyRef
*out = new(v1.ConfigMapKeySelector)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TLSCertificateSource.
func (in *TLSCertificateSource) DeepCopy() *TLSCertificateSource {
if in == nil {
return nil
}
out := new(TLSCertificateSource)
in.DeepCopyInto(out)
return out
}

View File

@@ -248,10 +248,60 @@ type RunnerStatus struct {
// +optional
Message string `json:"message,omitempty"`
// +optional
WorkflowStatus *WorkflowStatus `json:"workflow"`
// +optional
// +nullable
LastRegistrationCheckTime *metav1.Time `json:"lastRegistrationCheckTime,omitempty"`
}
// WorkflowStatus contains various information that is propagated
// from GitHub Actions workflow run environment variables to
// ease monitoring of the workflow runs/jobs/steps that are triggered on the runner.
type WorkflowStatus struct {
// +optional
// Name is the name of the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_WORKFLOW defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Name string `json:"name,omitempty"`
// +optional
// Repository is the owner and repository name of the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_REPOSITORY defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Repository string `json:"repository,omitempty"`
// +optional
// RepositoryOwner is the repository owner's name for the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_REPOSITORY_OWNER defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RepositoryOwner string `json:"repositoryOwner,omitempty"`
// +optional
// RunNumber is the unique number for the current workflow run
// that is triggered within the runner.
// It corresponds to GITHUB_RUN_NUMBER defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RunNumber string `json:"runNumber,omitempty"`
// +optional
// RunID is the unique ID for the current workflow run
// that is triggered within the runner.
// It corresponds to GITHUB_RUN_ID defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RunID string `json:"runID,omitempty"`
// +optional
// Job is the name of the current job
// that is triggered within the runner.
// It corresponds to GITHUB_JOB defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Job string `json:"job,omitempty"`
// +optional
// Action is the name of the current action or the step ID of the current step
// that is triggered within the runner.
// It corresponds to GITHUB_ACTION defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Action string `json:"action,omitempty"`
}
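Each field above mirrors a default GitHub Actions environment variable, so a workflow step can print them to cross-check what ARC records in the runner status (a sketch using only documented variables):

```bash
# Inside a job running on an ARC runner, Actions itself sets these:
echo "workflow=$GITHUB_WORKFLOW repo=$GITHUB_REPOSITORY owner=$GITHUB_REPOSITORY_OWNER"
echo "run_number=$GITHUB_RUN_NUMBER run_id=$GITHUB_RUN_ID job=$GITHUB_JOB action=$GITHUB_ACTION"
```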
// RunnerStatusRegistration contains runner registration status
type RunnerStatusRegistration struct {
Enterprise string `json:"enterprise,omitempty"`
@@ -316,6 +366,8 @@ func (w *WorkVolumeClaimTemplate) V1VolumeMount(mountPath string) corev1.VolumeM
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflow.repository",name=WF Repo,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflow.runID",name=WF Run,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// Runner is the Schema for the runners API

View File

@@ -77,6 +77,11 @@ type RunnerDeploymentStatus struct {
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=rdeploy
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.group",name=Group,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.updatedReplicas",name=Up-To-Date,type=number

View File

@@ -1049,6 +1049,11 @@ func (in *RunnerSpec) DeepCopy() *RunnerSpec {
func (in *RunnerStatus) DeepCopyInto(out *RunnerStatus) {
*out = *in
in.Registration.DeepCopyInto(&out.Registration)
if in.WorkflowStatus != nil {
in, out := &in.WorkflowStatus, &out.WorkflowStatus
*out = new(WorkflowStatus)
**out = **in
}
if in.LastRegistrationCheckTime != nil {
in, out := &in.LastRegistrationCheckTime, &out.LastRegistrationCheckTime
*out = (*in).DeepCopy()
@@ -1212,3 +1217,18 @@ func (in *WorkflowJobSpec) DeepCopy() *WorkflowJobSpec {
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkflowStatus) DeepCopyInto(out *WorkflowStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkflowStatus.
func (in *WorkflowStatus) DeepCopy() *WorkflowStatus {
if in == nil {
return nil
}
out := new(WorkflowStatus)
in.DeepCopyInto(out)
return out
}

View File

@@ -0,0 +1,9 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/gha-runner-scale-set-controller
- charts/gha-runner-scale-set
skip-clean-up: true

View File

@@ -3,3 +3,5 @@ lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/actions-runner-controller

View File

@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.21.1
version: 0.23.1
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.26.0
appVersion: 0.27.2
home: https://github.com/actions/actions-runner-controller

View File

@@ -17,6 +17,21 @@ spec:
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.template.spec.enterprise
name: Enterprise
type: string
- jsonPath: .spec.template.spec.organization
name: Organization
type: string
- jsonPath: .spec.template.spec.repository
name: Repository
type: string
- jsonPath: .spec.template.spec.group
name: Group
type: string
- jsonPath: .spec.template.spec.labels
name: Labels
type: string
- jsonPath: .spec.replicas
name: Desired
type: number
@@ -1081,6 +1096,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
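
The claims field added throughout these CRDs mirrors the Kubernetes 1.26 DynamicResourceAllocation alpha API: a container references, by name, a ResourceClaim declared at the pod level. A minimal sketch, assuming the feature gate is enabled and using hypothetical claim and image names:

apiVersion: v1
kind: Pod
metadata:
  name: claim-demo
spec:
  resourceClaims:
  - name: gpu                          # hypothetical claim name
    source:
      resourceClaimName: shared-gpu    # hypothetical ResourceClaim in the same namespace
  containers:
  - name: runner
    image: example/runner:latest       # placeholder image
    resources:
      claims:
      - name: gpu                      # must match an entry in spec.resourceClaims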
@@ -1500,6 +1527,18 @@ spec:
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2141,6 +2180,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2963,6 +3014,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3256,6 +3319,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3328,7 +3403,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3873,6 +3948,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4221,10 +4308,10 @@ spec:
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put a balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
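
Both policies hang off topologySpreadConstraints. A short sketch of how they might be set on a runner pod template; all values below are illustrative:

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  nodeAffinityPolicy: Honor   # only nodes matching nodeAffinity/nodeSelector are counted
  nodeTaintsPolicy: Ignore    # taints are not considered when computing skew
  labelSelector:
    matchLabels:
      app: example-runner     # hypothetical label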
@@ -4554,7 +4641,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4570,7 +4657,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4581,6 +4668,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
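
Putting the new namespace field together with dataSourceRef: a hedged sketch of a PVC populated from a VolumeSnapshot in another namespace, which additionally requires a ReferenceGrant in that namespace (names are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-work-volume
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: nightly-snapshot   # hypothetical snapshot
    namespace: backups       # alpha; needs the CrossNamespaceVolumeDataSource gate and a ReferenceGrant in "backups"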
@@ -4588,6 +4678,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5216,6 +5318,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:

View File

@@ -1078,6 +1078,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1497,6 +1509,18 @@ spec:
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2138,6 +2162,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2960,6 +2996,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3253,6 +3301,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3325,7 +3385,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3870,6 +3930,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4218,10 +4290,10 @@ spec:
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put a balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
@@ -4551,7 +4623,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4567,7 +4639,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4578,6 +4650,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
@@ -4585,6 +4660,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5213,6 +5300,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:

View File

@@ -36,6 +36,12 @@ spec:
- jsonPath: .status.message
name: Message
type: string
- jsonPath: .status.workflow.repository
name: WF Repo
type: string
- jsonPath: .status.workflow.runID
name: WF Run
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
@@ -1025,6 +1031,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1444,6 +1462,18 @@ spec:
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2085,6 +2115,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2907,6 +2949,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3200,6 +3254,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3272,7 +3338,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3817,6 +3883,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4165,10 +4243,10 @@ spec:
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put a balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
@@ -4498,7 +4576,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4514,7 +4592,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4525,6 +4603,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
@@ -4532,6 +4613,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5160,6 +5253,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5225,6 +5330,31 @@ spec:
- expiresAt
- token
type: object
workflow:
description: WorkflowStatus contains various information that is propagated from GitHub Actions workflow run environment variables, to ease monitoring of the workflow run/job/steps triggered on the runner.
properties:
action:
description: Action is the name of the current action or the step ID of the current step that is triggered within the runner. It corresponds to GITHUB_ACTION defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
job:
description: Job is the name of the current job that is triggered within the runner. It corresponds to GITHUB_JOB defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
name:
description: Name is the name of the workflow that is triggered within the runner. It corresponds to GITHUB_WORKFLOW defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
repository:
description: Repository is the owner and repository name of the workflow that is triggered within the runner. It corresponds to GITHUB_REPOSITORY defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
repositoryOwner:
description: RepositoryOwner is the repository owner's name for the workflow that is triggered within the runner. It corresponds to GITHUB_REPOSITORY_OWNER defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
runID:
description: RunID is the unique number for the current workflow run that is triggered within the runner. It corresponds to GITHUB_RUN_ID defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
runNumber:
description: RunNumber is the unique number for the current workflow run that is triggered within the runner. It corresponds to GITHUB_RUN_NUMBER defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
type: object
type: object
type: object
served: true
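
Taken together with the new WF Repo and WF Run printer columns above, a Runner's status might carry a workflow block like the following sketch; the values are illustrative, mirroring the GITHUB_* variables each field documents:

status:
  workflow:
    name: CI
    job: build
    action: __run                        # hypothetical step ID
    repository: example-org/example-repo
    repositoryOwner: example-org
    runID: "4567890123"
    runNumber: "42"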

View File

@@ -89,6 +89,14 @@ spec:
description: Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready)
format: int32
type: integer
ordinals:
description: ordinals controls the numbering of replica indices in a StatefulSet. The default ordinals behavior assigns a "0" index to the first replica and increments the index by one for each additional replica requested. Using the ordinals field requires the StatefulSetStartOrdinal feature gate to be enabled, which is alpha.
properties:
start:
description: 'start is the number representing the first replica''s index. It may be used to number replicas from an alternate index (eg: 1-indexed) over the default 0-indexed names, or to orchestrate progressive movement of replicas from one StatefulSet to another. If set, replica indices will be in the range: [.spec.ordinals.start, .spec.ordinals.start + .spec.replicas). If unset, defaults to 0. Replica indices will be in the range: [0, .spec.replicas).'
format: int32
type: integer
type: object
organization:
pattern: ^[^/]+$
type: string
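
The new ordinals field is gated behind the alpha StatefulSetStartOrdinal feature. A minimal sketch of shifting replica numbering to start at 1 (the start value is an illustrative choice):

spec:
  replicas: 3
  ordinals:
    start: 1   # pods are named <name>-1, <name>-2, <name>-3 instead of <name>-0..2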
@@ -152,7 +160,7 @@ spec:
description: 'serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller.'
type: string
template:
description: template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet.
description: template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet. Each pod will be named with the format <statefulsetname>-<podindex>. For example, a pod in a StatefulSet named "web" with index number "3" would be named "web-3".
properties:
metadata:
description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
@@ -1151,6 +1159,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1963,6 +1983,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2786,6 +2818,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3109,6 +3153,31 @@ spec:
- conditionType
type: object
type: array
resourceClaims:
description: "ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name.
properties:
name:
description: Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL.
type: string
source:
description: Source describes where to find the ResourceClaim.
properties:
resourceClaimName:
description: ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod.
type: string
resourceClaimTemplateName:
description: "ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. \n The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long). \n An existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed. \n This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim."
type: string
type: object
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
restartPolicy:
description: 'Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy'
type: string
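
The template-based variant of a pod-level resource claim can be sketched like this; the ResourceClaimTemplate name is hypothetical, and the generated ResourceClaim is deleted with the pod as described above:

spec:
  resourceClaims:
  - name: scratch
    source:
      resourceClaimTemplateName: scratch-template   # hypothetical ResourceClaimTemplate in the same namespace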
@@ -3118,6 +3187,21 @@ spec:
schedulerName:
description: If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler.
type: string
schedulingGates:
description: "SchedulingGates is an opaque list of values that if specified will block scheduling the pod. More info: https://git.k8s.io/enhancements/keps/sig-scheduling/3521-pod-scheduling-readiness. \n This is an alpha-level feature enabled by PodSchedulingReadiness feature gate."
items:
description: PodSchedulingGate is associated to a Pod to guard its scheduling.
properties:
name:
description: Name of the scheduling gate. Each scheduling gate must have a unique name field.
type: string
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
securityContext:
description: 'SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.'
properties:
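
A scheduling gate keeps a pod pending until every gate is removed. A minimal sketch, assuming the alpha PodSchedulingReadiness feature gate is enabled and using a hypothetical gate name:

apiVersion: v1
kind: Pod
metadata:
  name: gated-runner
spec:
  schedulingGates:
  - name: example.com/quota-check   # hypothetical; an external controller removes it when ready
  containers:
  - name: runner
    image: example/runner:latest    # placeholder image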
@@ -3168,7 +3252,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3298,10 +3382,10 @@ spec:
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
@@ -3601,7 +3685,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3617,7 +3701,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3628,6 +3712,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
@@ -3635,6 +3722,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4317,7 +4416,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4333,7 +4432,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4344,6 +4443,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
@@ -4351,6 +4453,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4493,6 +4607,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
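The hunks above regenerate this CRD against the Kubernetes 1.26 API types: pod-level `schedulingGates` and `resourceClaims` appear, the topology-spread `NodeInclusionPolicy*` wording moves from alpha to beta, and `dataSource`/`dataSourceRef` pick up the cross-namespace `namespace` field. As a minimal sketch of what the new schema admits (assuming the upstream `PodResourceClaim`/`ClaimSource` shape; the gate and claim names are hypothetical, and both features still need their alpha feature gates enabled on the cluster):

```yaml
spec:
  schedulingGates:
    - name: example.com/wait-for-capacity   # hypothetical gate; the pod stays unschedulable until it is removed
  resourceClaims:
    - name: gpu                             # containers reference this via resources.claims
      source:
        resourceClaimTemplateName: gpu-claim-template   # hypothetical ResourceClaimTemplate in the pod's namespace
```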

View File

@@ -29,6 +29,8 @@ curl -L https://github.com/actions/actions-runner-controller/releases/download/a
kubectl replace -f crds/
```
Note that in case you're going to create prometheus-operator `ServiceMonitor` resources via the chart, you'd need to deploy prometheus-operator-related CRDs as well.
2. Upgrade the Helm release
```shell

View File

@@ -15,7 +15,7 @@ spec:
metadata:
{{- with .Values.actionsMetricsServer.podAnnotations }}
annotations:
kubectl.kubernetes.io/default-logs-container: "github-webhook-server"
kubectl.kubernetes.io/default-container: "actions-metrics-server"
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
@@ -45,7 +45,7 @@ spec:
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.actionsMetricsServer.logFormat }}
- "--log-format={{ .Values.actionsMetricsServer.logFormat }}"
{{- end }}
command:
@@ -74,25 +74,25 @@ spec:
valueFrom:
secretKeyRef:
key: github_token
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_PRIVATE_KEY
valueFrom:
secretKeyRef:
key: github_app_private_key
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
{{- if .Values.authSecret.github_basicauth_username }}
- name: GITHUB_BASICAUTH_USERNAME

View File

@@ -0,0 +1,28 @@
{{- if .Values.actionsMetricsServer.enabled }}
{{- if .Values.actionsMetricsServer.secret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.actionsMetricsServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.actionsMetricsServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_id }}
github_app_id: {{ .Values.actionsMetricsServer.secret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_installation_id }}
github_app_installation_id: {{ .Values.actionsMetricsServer.secret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_private_key }}
github_app_private_key: {{ .Values.actionsMetricsServer.secret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_token }}
github_token: {{ .Values.actionsMetricsServer.secret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}
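This new template renders a dedicated Secret for the actions-metrics-server, which the deployment hunk above now points its `GITHUB_*` env `secretKeyRef`s at, instead of reusing the github-webhook-server secret. A minimal values sketch that would populate it, assuming the `secret.create` key the template checks; the token value is a placeholder:

```yaml
actionsMetricsServer:
  enabled: true
  secret:
    create: true                  # gates this template
    github_token: "<REDACTED>"    # placeholder; the template base64-encodes whatever is set
```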

View File

@@ -51,11 +51,17 @@ spec:
{{- if .Values.githubWebhookServer.queueLimit }}
- "--queue-limit={{ .Values.githubWebhookServer.queueLimit }}"
{{- end }}
{{- if .Values.githubWebhookServer.logFormat }}
- "--log-format={{ .Values.githubWebhookServer.logFormat }}"
{{- end }}
command:
- "/github-webhook-server"
{{- if .Values.githubWebhookServer.lifecycle }}
{{- with .Values.githubWebhookServer.lifecycle }}
lifecycle:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
@@ -111,10 +117,14 @@ spec:
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- if kindIs "slice" .Values.githubWebhookServer.env }}
{{- toYaml .Values.githubWebhookServer.env | nindent 8 }}
{{- else }}
{{- range $key, $val := .Values.githubWebhookServer.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: github-webhook-server
imagePullPolicy: {{ .Values.image.pullPolicy }}
@@ -148,7 +158,7 @@ spec:
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
terminationGracePeriodSeconds: {{ .Values.githubWebhookServer.terminationGracePeriodSeconds }}
{{- with .Values.githubWebhookServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
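Three things become configurable in this deployment: the `lifecycle` hook block, the termination grace period (previously hard-coded to 10 seconds), and `env`, which now accepts either a key/value map or a full list of env-var definitions. A hedged values sketch exercising all three, mirroring the example comment added to values.yaml below (the Secret name is hypothetical):

```yaml
githubWebhookServer:
  terminationGracePeriodSeconds: 30
  lifecycle:
    preStop:
      exec:
        command: ["sleep", "5"]   # hypothetical drain delay before SIGTERM
  env:                            # list form; a plain my_var: "value" map also still works
    - name: GITHUB_WEBHOOK_SECRET_TOKEN
      valueFrom:
        secretKeyRef:
          name: prod-gha-controller-webhook-token   # hypothetical Secret
          key: GITHUB_WEBHOOK_SECRET_TOKEN
          optional: true
```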

View File

@@ -23,4 +23,10 @@ spec:
{{- end }}
selector:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
{{- if .Values.githubWebhookServer.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- range $ip := .Values.githubWebhookServer.service.loadBalancerSourceRanges }}
- {{ $ip -}}
{{- end }}
{{- end }}
{{- end }}
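With the new `loadBalancerSourceRanges` block, traffic to the webhook Service can be restricted to known CIDRs. A sketch, assuming the service is exposed as `type: LoadBalancer` (the ranges have no effect otherwise):

```yaml
githubWebhookServer:
  service:
    type: LoadBalancer
    loadBalancerSourceRanges:
      - "192.0.2.0/24"   # example CIDR, e.g. the ranges GitHub sends webhooks from
```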

View File

@@ -250,14 +250,6 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
{{- if .Values.runner.statusUpdateHook.enabled }}
- apiGroups:
- ""
@@ -311,11 +303,4 @@ rules:
- list
- create
- delete
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
{{- end }}

View File

@@ -0,0 +1,21 @@
apiVersion: rbac.authorization.k8s.io/v1
{{- if .Values.scope.singleNamespace }}
kind: RoleBinding
{{- else }}
kind: ClusterRoleBinding
{{- end }}
metadata:
name: {{ include "actions-runner-controller.managerRoleName" . }}-secrets
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
{{- if .Values.scope.singleNamespace }}
kind: Role
{{- else }}
kind: ClusterRole
{{- end }}
name: {{ include "actions-runner-controller.managerRoleName" . }}-secrets
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
{{- if .Values.scope.singleNamespace }}
kind: Role
{{- else }}
kind: ClusterRole
{{- end }}
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.managerRoleName" . }}-secrets
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
{{- if .Values.rbac.allowGrantingKubernetesContainerModePermissions }}
{{/* These permissions are required by ARC to create RBAC resources for the runner pod to use the kubernetes container mode. */}}
{{/* See https://github.com/actions/actions-runner-controller/pull/1268/files#r917331632 */}}
- create
- delete
{{- end }}
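Together with the removals from the manager role above, this moves all Secret access into a dedicated `-secrets` (Cluster)Role: read-only by default, with `create`/`delete` appended only when the flag is set (required for the kubernetes container mode, per the template comment). A values sketch:

```yaml
rbac:
  allowGrantingKubernetesContainerModePermissions: true   # appends create/delete on secrets
```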

View File

@@ -44,6 +44,7 @@ webhooks:
resources:
- runners
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -74,6 +75,7 @@ webhooks:
resources:
- runnerdeployments
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -104,6 +106,7 @@ webhooks:
resources:
- runnerreplicasets
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -136,6 +139,7 @@ webhooks:
objectSelector:
matchLabels:
"actions-runner-controller/inject-registration-token": "true"
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
@@ -177,6 +181,7 @@ webhooks:
resources:
- runners
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -207,6 +212,7 @@ webhooks:
resources:
- runnerdeployments
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -238,6 +244,7 @@ webhooks:
- runnerreplicasets
sideEffects: None
{{ if not (or (hasKey .Values.admissionWebHooks "caBundle") .Values.certManagerEnabled) }}
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
---
apiVersion: v1
kind: Secret
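Every webhook entry gains a configurable timeout here. A values sketch; per the templates above, omitting the key falls back to 10 seconds:

```yaml
admissionWebHooks:
  timeoutSeconds: 30   # Kubernetes admission webhooks accept 1-30 seconds
```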

View File

@@ -195,7 +195,7 @@ githubWebhookServer:
enabled: false
replicaCount: 1
useRunnerGroupsVisibility: false
## specify log format for github webhook controller. Valid options are "text" and "json"
## specify log format for github webhook server. Valid options are "text" and "json"
logFormat: text
secret:
enabled: false
@@ -240,6 +240,7 @@ githubWebhookServer:
protocol: TCP
name: http
#nodePort: someFixedPortForUseWithTerraformCdkCfnEtc
loadBalancerSourceRanges: []
ingress:
enabled: false
ingressClassName: ""
@@ -276,6 +277,21 @@ githubWebhookServer:
# minAvailable: 1
# maxUnavailable: 3
# queueLimit: 100
terminationGracePeriodSeconds: 10
lifecycle: {}
# specify additional environment variables for the webhook server pod.
# It's possible to specify either key value pairs e.g.:
# my_env_var: "some value"
# my_other_env_var: "other value"
# or a list of complete environment variable definitions e.g.:
# - name: GITHUB_WEBHOOK_SECRET_TOKEN
# valueFrom:
# secretKeyRef:
# key: GITHUB_WEBHOOK_SECRET_TOKEN
# name: prod-gha-controller-webhook-token
# optional: true
env: {}
actionsMetrics:
serviceAnnotations: {}
@@ -298,7 +314,7 @@ actionsMetricsServer:
# See the thread below for more context.
# https://github.com/actions/actions-runner-controller/pull/1814#discussion_r974758924
replicaCount: 1
## specify log format for github webhook controller. Valid options are "text" and "json"
## specify log format for actions metrics server. Valid options are "text" and "json"
logFormat: text
secret:
enabled: false

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,33 @@
apiVersion: v2
name: gha-runner-scale-set-controller
description: A Helm chart for installing the actions-runner-controller CRD
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.4.0"
home: https://github.com/actions/actions-runner-controller
sources:
- "https://github.com/actions/actions-runner-controller"
maintainers:
- name: actions
url: https://github.com/actions

View File

@@ -0,0 +1,5 @@
# Set the following to dummy values.
# This is only useful in CI
image:
repository: test-arc
tag: dev

View File

@@ -0,0 +1,145 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
creationTimestamp: null
name: autoscalinglisteners.actions.github.com
spec:
group: actions.github.com
names:
kind: AutoscalingListener
listKind: AutoscalingListenerList
plural: autoscalinglisteners
singular: autoscalinglistener
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.githubConfigUrl
name: GitHub Configure URL
type: string
- jsonPath: .spec.autoscalingRunnerSetNamespace
name: AutoscalingRunnerSet Namespace
type: string
- jsonPath: .spec.autoscalingRunnerSetName
name: AutoscalingRunnerSet Name
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: AutoscalingListener is the Schema for the autoscalinglisteners API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: AutoscalingListenerSpec defines the desired state of AutoscalingListener
properties:
autoscalingRunnerSetName:
description: Required
type: string
autoscalingRunnerSetNamespace:
description: Required
type: string
ephemeralRunnerSetName:
description: Required
type: string
githubConfigSecret:
description: Required
type: string
githubConfigUrl:
description: Required
type: string
githubServerTLS:
properties:
certificateFrom:
description: Required
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
image:
description: Required
type: string
imagePullPolicy:
description: Required
type: string
imagePullSecrets:
description: Required
items:
description: LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
type: array
maxRunners:
description: Required
minimum: 0
type: integer
minRunners:
description: Required
minimum: 0
type: integer
proxy:
properties:
http:
properties:
credentialSecretRef:
type: string
url:
description: Required
type: string
type: object
https:
properties:
credentialSecretRef:
type: string
url:
description: Required
type: string
type: object
noProxy:
items:
type: string
type: array
type: object
runnerScaleSetId:
description: Required
type: integer
type: object
status:
description: AutoscalingListenerStatus defines the observed state of AutoscalingListener
type: object
type: object
served: true
storage: true
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
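AutoscalingListener resources are created by the controller-manager itself for each AutoscalingRunnerSet, so they are rarely written by hand; still, here is a minimal sketch of the required fields per the schema above (all values hypothetical):

```yaml
apiVersion: actions.github.com/v1alpha1
kind: AutoscalingListener
metadata:
  name: example-listener
  namespace: arc-system
spec:
  githubConfigUrl: https://github.com/my-org
  githubConfigSecret: my-org-github-config
  autoscalingRunnerSetNamespace: arc-runners
  autoscalingRunnerSetName: my-runner-set
  ephemeralRunnerSetName: my-runner-set-abcde
  runnerScaleSetId: 1
  minRunners: 0
  maxRunners: 5
  image: ghcr.io/actions/gha-runner-scale-set-controller:0.4.0
  imagePullPolicy: IfNotPresent
```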

View File

@@ -0,0 +1,3 @@
Thank you for installing {{ .Chart.Name }}.
Your release is named {{ .Release.Name }}.

View File

@@ -0,0 +1,113 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "gha-runner-scale-set-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gha-runner-scale-set-controller.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "gha-runner-scale-set-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "gha-runner-scale-set-controller.labels" -}}
helm.sh/chart: {{ include "gha-runner-scale-set-controller.chart" . }}
{{ include "gha-runner-scale-set-controller.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "gha-runner-scale-set-controller.selectorLabels" -}}
app.kubernetes.io/name: {{ include "gha-runner-scale-set-controller.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "gha-runner-scale-set-controller.serviceAccountName" -}}
{{- if eq .Values.serviceAccount.name "default"}}
{{- fail "serviceAccount.name cannot be set to 'default'" }}
{{- end }}
{{- if .Values.serviceAccount.create }}
{{- default (include "gha-runner-scale-set-controller.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- if not .Values.serviceAccount.name }}
{{- fail "serviceAccount.name must be set if serviceAccount.create is false" }}
{{- else }}
{{- .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set-controller.managerClusterRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-cluster-role
{{- end }}
{{- define "gha-runner-scale-set-controller.managerClusterRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-cluster-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-single-namespace-role
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-single-namespace-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.managerListenerRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-listener-role
{{- end }}
{{- define "gha-runner-scale-set-controller.managerListenerRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-listener-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.leaderElectionRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election-role
{{- end }}
{{- define "gha-runner-scale-set-controller.leaderElectionRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.imagePullSecretsNames" -}}
{{- $names := list }}
{{- range $k, $v := . }}
{{- $names = append $names $v.name }}
{{- end }}
{{- $names | join ","}}
{{- end }}
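The final helper flattens the `name` of each `imagePullSecrets` entry into a comma-separated string, which the deployment template (next file) passes to the controller as `--auto-scaler-image-pull-secrets`. A sketch with hypothetical secret names:

```yaml
imagePullSecrets:
  - name: regcred-internal
  - name: regcred-ghcr
# per the helper above, this renders to:
#   --auto-scaler-image-pull-secrets=regcred-internal,regcred-ghcr
```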

View File

@@ -0,0 +1,104 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "gha-runner-scale-set-controller.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set-controller.labels" . | nindent 4 }}
actions.github.com/controller-service-account-namespace: {{ .Release.Namespace }}
actions.github.com/controller-service-account-name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
{{- if .Values.flags.watchSingleNamespace }}
actions.github.com/controller-watch-single-namespace: {{ .Values.flags.watchSingleNamespace }}
{{- end }}
spec:
replicas: {{ default 1 .Values.replicaCount }}
selector:
matchLabels:
{{- include "gha-runner-scale-set-controller.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
kubectl.kubernetes.io/default-container: "manager"
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/component: controller-manager
app.kubernetes.io/version: {{ .Chart.Version }}
{{- include "gha-runner-scale-set-controller.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- name: manager
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- "--auto-scaling-runner-set-only"
{{- if gt (int (default 1 .Values.replicaCount)) 1 }}
- "--enable-leader-election"
- "--leader-election-id={{ include "gha-runner-scale-set-controller.fullname" . }}"
{{- end }}
{{- with .Values.imagePullSecrets }}
- "--auto-scaler-image-pull-secrets={{ include "gha-runner-scale-set-controller.imagePullSecretsNames" . }}"
{{- end }}
{{- with .Values.flags.logLevel }}
- "--log-level={{ . }}"
{{- end }}
{{- with .Values.flags.watchSingleNamespace }}
- "--watch-single-namespace={{ . }}"
{{- end }}
command:
- "/manager"
env:
- name: CONTROLLER_MANAGER_CONTAINER_IMAGE
value: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
- name: CONTROLLER_MANAGER_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY
value: "{{ .Values.image.pullPolicy | default "IfNotPresent" }}"
{{- with .Values.env }}
{{- if kindIs "slice" . }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- mountPath: /tmp
name: tmp
terminationGracePeriodSeconds: 10
volumes:
- name: tmp
emptyDir: {}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
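Note the conditional args: leader election is only wired up when more than one replica is requested, and the single-namespace label and flag appear only when set. A values sketch using the keys this template reads:

```yaml
replicaCount: 2            # >1 adds --enable-leader-election and --leader-election-id=<fullname>
flags:
  logLevel: debug          # rendered as --log-level=debug
  watchSingleNamespace: "" # empty keeps the cluster-wide roles; see the role templates below
```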

View File

@@ -0,0 +1,12 @@
{{- if gt (int (default 1 .Values.replicaCount)) 1 }}
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- if gt (int (default 1 .Values.replicaCount)) 1 }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.leaderElectionRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.leaderElectionRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,144 @@
{{- if empty .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "gha-runner-scale-set-controller.managerClusterRoleName" . }}
rules:
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- list
- watch
- patch
{{- end }}

View File

@@ -0,0 +1,14 @@
{{- if empty .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerClusterRoleBinding" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "gha-runner-scale-set-controller.managerClusterRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,40 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.managerListenerRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- apiGroups:
- ""
resources:
- pods/status
verbs:
- get
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- patch
- update
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- patch
- update

View File

@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerListenerRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.managerListenerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,84 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners/finalizers
verbs:
- patch
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- list
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets
verbs:
- list
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets
verbs:
- list
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners
verbs:
- list
- watch
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,125 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
namespace: {{ .Values.flags.watchSingleNamespace }}
rules:
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- list
- watch
- patch
{{- end }}

View File

@@ -0,0 +1,15 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" . }}
namespace: {{ .Values.flags.watchSingleNamespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
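When `flags.watchSingleNamespace` is set, the ClusterRole/ClusterRoleBinding pair is skipped and the same-named Role/RoleBinding is rendered twice: once in the release namespace, where the controller manages its AutoscalingListeners, and once in the watched namespace, where it manages the runner resources. A values sketch with a hypothetical namespace:

```yaml
flags:
  watchSingleNamespace: "arc-runners"   # hypothetical namespace holding the AutoscalingRunnerSets
```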

View File

@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set-controller.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,887 @@
package tests
import (
"os"
"path/filepath"
"strings"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v2"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
)
type Chart struct {
Version string `yaml:"version"`
AppVersion string `yaml:"appVersion"`
}
func TestTemplate_CreateServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", serviceAccount.Name)
assert.Equal(t, "bar", string(serviceAccount.Annotations["foo"]))
}
func TestTemplate_CreateServiceAccount_OverwriteName(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "overwritten-name",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "overwritten-name", serviceAccount.Name)
assert.Equal(t, "bar", string(serviceAccount.Annotations["foo"]))
}
func TestTemplate_CreateServiceAccount_CannotUseDefaultServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "default",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
assert.ErrorContains(t, err, "serviceAccount.name cannot be set to 'default'", "We should get an error because the default service account cannot be used")
}
func TestTemplate_NotCreateServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.name": "overwritten-name",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
assert.ErrorContains(t, err, "could not find template templates/serviceaccount.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_NotCreateServiceAccount_ServiceAccountNotSet(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
assert.ErrorContains(t, err, "serviceAccount.name must be set if serviceAccount.create is false", "We should get an error because the default service account cannot be used")
}
func TestTemplate_CreateManagerClusterRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role.yaml"})
var managerClusterRole rbacv1.ClusterRole
helm.UnmarshalK8SYaml(t, output, &managerClusterRole)
assert.Empty(t, managerClusterRole.Namespace, "ClusterRole should not have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-role", managerClusterRole.Name)
assert.Equal(t, 16, len(managerClusterRole.Rules))
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_controller_role.yaml in chart", "We should get an error because the template should be skipped")
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_watch_role.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_ManagerClusterRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role_binding.yaml"})
var managerClusterRoleBinding rbacv1.ClusterRoleBinding
helm.UnmarshalK8SYaml(t, output, &managerClusterRoleBinding)
assert.Empty(t, managerClusterRoleBinding.Namespace, "ClusterRoleBinding should not have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-rolebinding", managerClusterRoleBinding.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-role", managerClusterRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerClusterRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerClusterRoleBinding.Subjects[0].Namespace)
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role_binding.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_controller_role_binding.yaml in chart", "We should get an error because the template should be skipped")
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role_binding.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_watch_role_binding.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_CreateManagerListenerRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_listener_role.yaml"})
var managerListenerRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerListenerRole)
assert.Equal(t, namespaceName, managerListenerRole.Namespace, "Role should have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-role", managerListenerRole.Name)
assert.Equal(t, 4, len(managerListenerRole.Rules))
assert.Equal(t, "pods", managerListenerRole.Rules[0].Resources[0])
assert.Equal(t, "pods/status", managerListenerRole.Rules[1].Resources[0])
assert.Equal(t, "secrets", managerListenerRole.Rules[2].Resources[0])
assert.Equal(t, "serviceaccounts", managerListenerRole.Rules[3].Resources[0])
}
func TestTemplate_ManagerListenerRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_listener_role_binding.yaml"})
var managerListenerRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerListenerRoleBinding)
assert.Equal(t, namespaceName, managerListenerRoleBinding.Namespace, "RoleBinding should have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-rolebinding", managerListenerRoleBinding.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-role", managerListenerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerListenerRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerListenerRoleBinding.Subjects[0].Namespace)
}
func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"image.tag": "dev",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, namespaceName, deployment.Labels["actions.github.com/controller-service-account-namespace"])
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.NotContains(t, deployment.Labels, "actions.github.com/controller-watch-single-namespace")
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 0)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Nil(t, deployment.Spec.Template.Spec.SecurityContext)
assert.Empty(t, deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 0)
assert.Nil(t, deployment.Spec.Template.Spec.Affinity)
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 0)
managerImage := "ghcr.io/actions/gha-runner-scale-set-controller:dev"
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 2)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 3)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "IfNotPresent", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Resources)
assert.Nil(t, deployment.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"labels.foo": "bar",
"labels.github": "actions",
"replicaCount": "1",
"image.pullPolicy": "Always",
"image.tag": "dev",
"imagePullSecrets[0].name": "dockerhub",
"nameOverride": "gha-runner-scale-set-controller-override",
"fullnameOverride": "gha-runner-scale-set-controller-fullname-override",
"env[0].name": "ENV_VAR_NAME_1",
"env[0].value": "ENV_VAR_VALUE_1",
"serviceAccount.name": "gha-runner-scale-set-controller-sa",
"podAnnotations.foo": "bar",
"podSecurityContext.fsGroup": "1000",
"securityContext.runAsUser": "1000",
"securityContext.runAsNonRoot": "true",
"resources.limits.cpu": "500m",
"nodeSelector.foo": "bar",
"tolerations[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator": "bar",
"priorityClassName": "test-priority-class",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "gha-runner-scale-set-controller-fullname-override", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "bar", deployment.Labels["foo"])
assert.Equal(t, "actions", deployment.Labels["github"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "bar", deployment.Spec.Template.Annotations["foo"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "ENV_VAR_VALUE_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 1)
assert.Equal(t, "dockerhub", deployment.Spec.Template.Spec.ImagePullSecrets[0].Name)
assert.Equal(t, "gha-runner-scale-set-controller-sa", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.SecurityContext.FSGroup)
assert.Equal(t, "test-priority-class", deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 1)
assert.Equal(t, "bar", deployment.Spec.Template.Spec.NodeSelector["foo"])
assert.NotNil(t, deployment.Spec.Template.Spec.Affinity.NodeAffinity)
assert.Equal(t, "foo", deployment.Spec.Template.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Key)
assert.Equal(t, "bar", string(deployment.Spec.Template.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Operator))
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 1)
assert.Equal(t, "foo", deployment.Spec.Template.Spec.Tolerations[0].Key)
managerImage := "ghcr.io/actions/gha-runner-scale-set-controller:dev"
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullAlways, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 4)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "Always", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "ENV_VAR_VALUE_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "500m", deployment.Spec.Template.Spec.Containers[0].Resources.Limits.Cpu().String())
assert.True(t, *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsNonRoot)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsUser)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_EnableLeaderElectionRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/leader_election_role.yaml"})
var leaderRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &leaderRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-role", leaderRole.Name)
assert.Equal(t, namespaceName, leaderRole.Namespace)
}
func TestTemplate_EnableLeaderElectionRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/leader_election_role_binding.yaml"})
var leaderRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &leaderRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-rolebinding", leaderRoleBinding.Name)
assert.Equal(t, namespaceName, leaderRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-role", leaderRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", leaderRoleBinding.Subjects[0].Name)
}
func TestTemplate_EnableLeaderElection(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
"image.tag": "dev",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, int32(2), *deployment.Spec.Replicas)
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/gha-runner-scale-set-controller:dev", deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 4)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--enable-leader-election", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--leader-election-id=test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[3])
}
func TestTemplate_ControllerDeployment_ForwardImagePullSecrets(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"imagePullSecrets[0].name": "dockerhub",
"imagePullSecrets[1].name": "ghcr",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub,ghcr", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
}
func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"image.tag": "dev",
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, namespaceName, deployment.Labels["actions.github.com/controller-service-account-namespace"])
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.Equal(t, "demo", deployment.Labels["actions.github.com/controller-watch-single-namespace"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 0)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Nil(t, deployment.Spec.Template.Spec.SecurityContext)
assert.Empty(t, deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 0)
assert.Nil(t, deployment.Spec.Template.Spec.Affinity)
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 0)
managerImage := "ghcr.io/actions/gha-runner-scale-set-controller:dev"
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/gha-runner-scale-set-controller:dev", deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--watch-single-namespace=demo", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 3)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "IfNotPresent", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Resources)
assert.Nil(t, deployment.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_ControllerContainerEnvironmentVariables(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"env[0].Name": "ENV_VAR_NAME_1",
"env[0].Value": "ENV_VAR_VALUE_1",
"env[1].Name": "ENV_VAR_NAME_2",
"env[1].ValueFrom.SecretKeyRef.Key": "ENV_VAR_NAME_2",
"env[1].ValueFrom.SecretKeyRef.Name": "secret-name",
"env[1].ValueFrom.SecretKeyRef.Optional": "true",
"env[2].Name": "ENV_VAR_NAME_3",
"env[2].Value": "",
"env[3].Name": "ENV_VAR_NAME_4",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 7)
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "ENV_VAR_VALUE_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Equal(t, "ENV_VAR_NAME_2", deployment.Spec.Template.Spec.Containers[0].Env[4].Name)
assert.Equal(t, "secret-name", deployment.Spec.Template.Spec.Containers[0].Env[4].ValueFrom.SecretKeyRef.Name)
assert.Equal(t, "ENV_VAR_NAME_2", deployment.Spec.Template.Spec.Containers[0].Env[4].ValueFrom.SecretKeyRef.Key)
assert.True(t, *deployment.Spec.Template.Spec.Containers[0].Env[4].ValueFrom.SecretKeyRef.Optional)
assert.Equal(t, "ENV_VAR_NAME_3", deployment.Spec.Template.Spec.Containers[0].Env[5].Name)
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Env[5].Value)
assert.Equal(t, "ENV_VAR_NAME_4", deployment.Spec.Template.Spec.Containers[0].Env[6].Name)
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Env[6].ValueFrom)
}
func TestTemplate_WatchSingleNamespace_NotCreateManagerClusterRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_cluster_role.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_WatchSingleNamespace_NotManagerClusterRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role_binding.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_cluster_role_binding.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_CreateManagerSingleNamespaceRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role.yaml"})
var managerSingleNamespaceControllerRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceControllerRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceControllerRole.Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRole.Namespace)
assert.Equal(t, 10, len(managerSingleNamespaceControllerRole.Rules))
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role.yaml"})
var managerSingleNamespaceWatchRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceWatchRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceWatchRole.Name)
assert.Equal(t, "demo", managerSingleNamespaceWatchRole.Namespace)
assert.Equal(t, 14, len(managerSingleNamespaceWatchRole.Rules))
}
func TestTemplate_ManagerSingleNamespaceRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role_binding.yaml"})
var managerSingleNamespaceControllerRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceControllerRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-rolebinding", managerSingleNamespaceControllerRoleBinding.Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceControllerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerSingleNamespaceControllerRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRoleBinding.Subjects[0].Namespace)
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role_binding.yaml"})
var managerSingleNamespaceWatchRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceWatchRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-rolebinding", managerSingleNamespaceWatchRoleBinding.Name)
assert.Equal(t, "demo", managerSingleNamespaceWatchRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceWatchRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerSingleNamespaceWatchRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerSingleNamespaceWatchRoleBinding.Subjects[0].Namespace)
}
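// The deployment tests above unmarshal Chart.yaml into a Chart value via
// yaml.Unmarshal. A minimal sketch of that struct (an assumption; the real
// definition lives elsewhere in this test package) only needs the two fields
// the assertions read:
//
//	type Chart struct {
//		Version    string `yaml:"version"`
//		AppVersion string `yaml:"appVersion"`
//	}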

View File

@@ -0,0 +1,85 @@
# Default values for gha-runner-scale-set-controller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
labels: {}
# Leader election is enabled when replicaCount > 1,
# so only one replica is in charge of reconciliation at a given time.
# leaderElectionId is set to the value of the gha-runner-scale-set-controller.fullname template.
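# For illustration (matching the flags the chart renders, not an extra setting):
#   replicaCount: 2
# renders the manager container with --enable-leader-election and
# --leader-election-id=<fullname>.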
replicaCount: 1
image:
repository: "ghcr.io/actions/gha-runner-scale-set-controller"
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env:
## Define environment variables for the controller pod
# - name: "ENV_VAR_NAME_1"
# value: "ENV_VAR_VALUE_1"
# - name: "ENV_VAR_NAME_2"
# valueFrom:
# secretKeyRef:
# key: ENV_VAR_NAME_2
# name: secret-name
# optional: true
serviceAccount:
# Specifies whether a service account should be created for running the controller pod
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
# You cannot use the default service account for this.
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources: {}
## We usually recommend not specifying default resources and leaving this as a conscious
## choice for the user. This also increases the chance that the chart runs on environments
## with little resources, such as Minikube. If you do want to specify resources, uncomment
## the following lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
# Leverage a PriorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: system-cluster-critical
priorityClassName: ""
flags:
# Log level can be set here with one of the following values: "debug", "info", "warn", "error".
# Defaults to "debug".
logLevel: "debug"
## Restricts the controller to watching resources only in the specified namespace.
## Defaults to watching all namespaces when unset.
# watchSingleNamespace: ""

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,33 @@
apiVersion: v2
name: gha-runner-scale-set
description: A Helm chart for deploying an AutoScalingRunnerSet
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.4.0"
home: https://github.com/actions/dev-arc
sources:
- "https://github.com/actions/dev-arc"
maintainers:
- name: actions
url: https://github.com/actions

View File

@@ -0,0 +1,6 @@
# Set the following to dummy values.
# This is only useful in CI
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test

View File

@@ -0,0 +1,3 @@
Thank you for installing {{ .Chart.Name }}.
Your release is named {{ .Release.Name }}.

View File

@@ -0,0 +1,550 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "gha-runner-scale-set.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gha-runner-scale-set.fullname" -}}
{{- $name := default .Chart.Name }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "gha-runner-scale-set.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "gha-runner-scale-set.labels" -}}
helm.sh/chart: {{ include "gha-runner-scale-set.chart" . }}
{{ include "gha-runner-scale-set.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/part-of: gha-runner-scale-set
actions.github.com/scale-set-name: {{ .Release.Name }}
actions.github.com/scale-set-namespace: {{ .Release.Namespace }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "gha-runner-scale-set.selectorLabels" -}}
app.kubernetes.io/name: {{ include "gha-runner-scale-set.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- define "gha-runner-scale-set.githubsecret" -}}
{{- if kindIs "string" .Values.githubConfigSecret }}
{{- if not (empty .Values.githubConfigSecret) }}
{{- .Values.githubConfigSecret }}
{{- else}}
{{- fail "Values.githubConfigSecret is required for setting auth with GitHub server." }}
{{- end }}
{{- else }}
{{- include "gha-runner-scale-set.fullname" . }}-github-secret
{{- end }}
{{- end }}
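{{/*
Illustration of the two accepted shapes (derived from the helper above; the
secret name below is a placeholder): either reference a pre-created secret,
  githubConfigSecret: my-github-secret
or provide the credentials inline, in which case the chart creates
<fullname>-github-secret, e.g.
  githubConfigSecret:
    github_token: test
*/}}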
{{- define "gha-runner-scale-set.noPermissionServiceAccountName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-no-permission-service-account
{{- end }}
{{- define "gha-runner-scale-set.kubeModeRoleName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-role
{{- end }}
{{- define "gha-runner-scale-set.kubeModeRoleBindingName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-role-binding
{{- end }}
{{- define "gha-runner-scale-set.kubeModeServiceAccountName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-service-account
{{- end }}
{{- define "gha-runner-scale-set.dind-init-container" -}}
{{- range $i, $val := .Values.template.spec.containers }}
{{- if eq $val.name "runner" }}
image: {{ $val.image }}
command: ["cp"]
args: ["-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
volumeMounts:
- name: dind-externals
mountPath: /home/runner/tmpDir
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.dind-container" -}}
image: docker:dind
securityContext:
privileged: true
volumeMounts:
- name: work
mountPath: /home/runner/_work
- name: dind-cert
mountPath: /certs/client
- name: dind-externals
mountPath: /home/runner/externals
{{- end }}
{{- define "gha-runner-scale-set.dind-volume" -}}
- name: dind-cert
emptyDir: {}
- name: dind-externals
emptyDir: {}
{{- end }}
{{- define "gha-runner-scale-set.tls-volume" -}}
- name: github-server-tls-cert
configMap:
name: {{ .certificateFrom.configMapKeyRef.name }}
items:
- key: {{ .certificateFrom.configMapKeyRef.key }}
path: {{ .certificateFrom.configMapKeyRef.key }}
{{- end }}
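{{/*
Expected input shape for the tls-volume helper above (an illustration only;
the ConfigMap name, key, and mount path are placeholders):
  githubServerTLS:
    certificateFrom:
      configMapKeyRef:
        name: github-ca-configmap
        key: ca.crt
    runnerMountPath: /github/server/ca
*/}}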
{{- define "gha-runner-scale-set.dind-work-volume" -}}
{{- $createWorkVolume := 1 }}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if eq $volume.name "work" }}
{{- $createWorkVolume = 0 }}
- {{ $volume | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- if eq $createWorkVolume 1 }}
- name: work
emptyDir: {}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.kubernetes-mode-work-volume" -}}
{{- $createWorkVolume := 1 }}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if eq $volume.name "work" }}
{{- $createWorkVolume = 0 }}
- {{ $volume | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- if eq $createWorkVolume 1 }}
- name: work
ephemeral:
volumeClaimTemplate:
spec:
{{- .Values.containerMode.kubernetesModeWorkVolumeClaim | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.non-work-volumes" -}}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if ne $volume.name "work" }}
- {{ $volume | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.non-runner-containers" -}}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if ne $container.name "runner" }}
- {{ $container | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.non-runner-non-dind-containers" -}}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if and (ne $container.name "runner") (ne $container.name "dind") }}
- {{ $container | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.dind-runner-container" -}}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if eq $container.name "runner" }}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
{{ $key }}: {{ $val | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- $setDockerHost := 1 }}
{{- $setDockerTlsVerify := 1 }}
{{- $setDockerCertPath := 1 }}
{{- $setRunnerWaitDocker := 1 }}
{{- $setNodeExtraCaCerts := 0 }}
{{- $setRunnerUpdateCaCerts := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $setNodeExtraCaCerts = 1 }}
{{- $setRunnerUpdateCaCerts = 1 }}
{{- end }}
env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "DOCKER_HOST" }}
{{- $setDockerHost = 0 }}
{{- end }}
{{- if eq $env.name "DOCKER_TLS_VERIFY" }}
{{- $setDockerTlsVerify = 0 }}
{{- end }}
{{- if eq $env.name "DOCKER_CERT_PATH" }}
{{- $setDockerCertPath = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_WAIT_FOR_DOCKER_IN_SECONDS" }}
{{- $setRunnerWaitDocker = 0 }}
{{- end }}
{{- if eq $env.name "NODE_EXTRA_CA_CERTS" }}
{{- $setNodeExtraCaCerts = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_UPDATE_CA_CERTS" }}
{{- $setRunnerUpdateCaCerts = 0 }}
{{- end }}
- {{ $env | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $setDockerHost }}
- name: DOCKER_HOST
value: tcp://localhost:2376
{{- end }}
{{- if $setDockerTlsVerify }}
- name: DOCKER_TLS_VERIFY
value: "1"
{{- end }}
{{- if $setDockerCertPath }}
- name: DOCKER_CERT_PATH
value: /certs/client
{{- end }}
{{- if $setRunnerWaitDocker }}
- name: RUNNER_WAIT_FOR_DOCKER_IN_SECONDS
value: "120"
{{- end }}
{{- if $setNodeExtraCaCerts }}
- name: NODE_EXTRA_CA_CERTS
value: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
{{- end }}
{{- if $setRunnerUpdateCaCerts }}
- name: RUNNER_UPDATE_CA_CERTS
value: "1"
{{- end }}
{{- $mountWork := 1 }}
{{- $mountDindCert := 1 }}
{{- $mountGitHubServerTLS := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $mountGitHubServerTLS = 1 }}
{{- end }}
volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "work" }}
{{- $mountWork = 0 }}
{{- end }}
{{- if eq $volMount.name "dind-cert" }}
{{- $mountDindCert = 0 }}
{{- end }}
{{- if eq $volMount.name "github-server-tls-cert" }}
{{- $mountGitHubServerTLS = 0 }}
{{- end }}
- {{ $volMount | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $mountWork }}
- name: work
mountPath: /home/runner/_work
{{- end }}
{{- if $mountDindCert }}
- name: dind-cert
mountPath: /certs/client
readOnly: true
{{- end }}
{{- if $mountGitHubServerTLS }}
- name: github-server-tls-cert
mountPath: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
subPath: {{ $tlsConfig.certificateFrom.configMapKeyRef.key }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.kubernetes-mode-runner-container" -}}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if eq $container.name "runner" }}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
{{ $key }}: {{ $val | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- $setContainerHooks := 1 }}
{{- $setPodName := 1 }}
{{- $setRequireJobContainer := 1 }}
{{- $setNodeExtraCaCerts := 0 }}
{{- $setRunnerUpdateCaCerts := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $setNodeExtraCaCerts = 1 }}
{{- $setRunnerUpdateCaCerts = 1 }}
{{- end }}
env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "ACTIONS_RUNNER_CONTAINER_HOOKS" }}
{{- $setContainerHooks = 0 }}
{{- end }}
{{- if eq $env.name "ACTIONS_RUNNER_POD_NAME" }}
{{- $setPodName = 0 }}
{{- end }}
{{- if eq $env.name "ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER" }}
{{- $setRequireJobContainer = 0 }}
{{- end }}
{{- if eq $env.name "NODE_EXTRA_CA_CERTS" }}
{{- $setNodeExtraCaCerts = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_UPDATE_CA_CERTS" }}
{{- $setRunnerUpdateCaCerts = 0 }}
{{- end }}
- {{ $env | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $setContainerHooks }}
- name: ACTIONS_RUNNER_CONTAINER_HOOKS
value: /home/runner/k8s/index.js
{{- end }}
{{- if $setPodName }}
- name: ACTIONS_RUNNER_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
{{- end }}
{{- if $setRequireJobContainer }}
- name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
value: "true"
{{- end }}
{{- if $setNodeExtraCaCerts }}
- name: NODE_EXTRA_CA_CERTS
value: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
{{- end }}
{{- if $setRunnerUpdateCaCerts }}
- name: RUNNER_UPDATE_CA_CERTS
value: "1"
{{- end }}
{{- $mountWork := 1 }}
{{- $mountGitHubServerTLS := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $mountGitHubServerTLS = 1 }}
{{- end }}
volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "work" }}
{{- $mountWork = 0 }}
{{- end }}
{{- if eq $volMount.name "github-server-tls-cert" }}
{{- $mountGitHubServerTLS = 0 }}
{{- end }}
- {{ $volMount | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $mountWork }}
- name: work
mountPath: /home/runner/_work
{{- end }}
{{- if $mountGitHubServerTLS }}
- name: github-server-tls-cert
mountPath: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
subPath: {{ $tlsConfig.certificateFrom.configMapKeyRef.key }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.default-mode-runner-containers" -}}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if ne $container.name "runner" }}
- {{ $container | toYaml | nindent 2 }}
{{- else }}
- name: {{ $container.name }}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
{{ $key }}: {{ $val | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- $setNodeExtraCaCerts := 0 }}
{{- $setRunnerUpdateCaCerts := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $setNodeExtraCaCerts = 1 }}
{{- $setRunnerUpdateCaCerts = 1 }}
{{- end }}
env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "NODE_EXTRA_CA_CERTS" }}
{{- $setNodeExtraCaCerts = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_UPDATE_CA_CERTS" }}
{{- $setRunnerUpdateCaCerts = 0 }}
{{- end }}
- {{ $env | toYaml | nindent 6 }}
{{- end }}
{{- end }}
{{- if $setNodeExtraCaCerts }}
- name: NODE_EXTRA_CA_CERTS
value: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
{{- end }}
{{- if $setRunnerUpdateCaCerts }}
- name: RUNNER_UPDATE_CA_CERTS
value: "1"
{{- end }}
{{- $mountGitHubServerTLS := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $mountGitHubServerTLS = 1 }}
{{- end }}
volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "github-server-tls-cert" }}
{{- $mountGitHubServerTLS = 0 }}
{{- end }}
- {{ $volMount | toYaml | nindent 6 }}
{{- end }}
{{- end }}
{{- if $mountGitHubServerTLS }}
- name: github-server-tls-cert
mountPath: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
subPath: {{ $tlsConfig.certificateFrom.configMapKeyRef.key }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.managerRoleName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-manager-role
{{- end }}
{{- define "gha-runner-scale-set.managerRoleBindingName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-manager-role-binding
{{- end }}
{{- define "gha-runner-scale-set.managerServiceAccountName" -}}
{{- $searchControllerDeployment := 1 }}
{{- if .Values.controllerServiceAccount }}
{{- if .Values.controllerServiceAccount.name }}
{{- $searchControllerDeployment = 0 }}
{{- .Values.controllerServiceAccount.name }}
{{- end }}
{{- end }}
{{- if eq $searchControllerDeployment 1 }}
{{- $multiNamespacesCounter := 0 }}
{{- $singleNamespaceCounter := 0 }}
{{- $controllerDeployment := dict }}
{{- $singleNamespaceControllerDeployments := dict }}
{{- $managerServiceAccountName := "" }}
{{- range $index, $deployment := (lookup "apps/v1" "Deployment" "" "").items }}
{{- if kindIs "map" $deployment.metadata.labels }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-runner-scale-set-controller" }}
{{- if hasKey $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace" }}
{{- $singleNamespaceCounter = add $singleNamespaceCounter 1 }}
{{- $_ := set $singleNamespaceControllerDeployments (get $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace") $deployment}}
{{- else }}
{{- $multiNamespacesCounter = add $multiNamespacesCounter 1 }}
{{- $controllerDeployment = $deployment }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if and (eq $multiNamespacesCounter 0) (eq $singleNamespaceCounter 0) }}
{{- fail "No gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if and (gt $multiNamespacesCounter 0) (gt $singleNamespaceCounter 0) }}
{{- fail "Found both gha-runner-scale-set-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if gt $multiNamespacesCounter 1 }}
{{- fail "More than one gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if eq $multiNamespacesCounter 1 }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountName = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-name") }}
{{- end }}
{{- else if gt $singleNamespaceCounter 0 }}
{{- if hasKey $singleNamespaceControllerDeployments .Release.Namespace }}
{{- $controllerDeployment = get $singleNamespaceControllerDeployments .Release.Namespace }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountName = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-name") }}
{{- end }}
{{- else }}
{{- fail "No gha-runner-scale-set-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- end }}
{{- if eq $managerServiceAccountName "" }}
{{- fail "No service account name found for gha-runner-scale-set-controller deployment using label (actions.github.com/controller-service-account-name), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- $managerServiceAccountName }}
{{- end }}
{{- end }}
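{{/*
Discovery summary for the helper above (a restatement of its logic, not new
behavior): it lists Deployments labeled
app.kubernetes.io/part-of=gha-runner-scale-set-controller and expects either
exactly one cluster-wide controller, or a single-namespace controller whose
actions.github.com/controller-watch-single-namespace label matches this
release's namespace; the matching deployment's
actions.github.com/controller-service-account-name label is returned.
*/}}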
{{- define "gha-runner-scale-set.managerServiceAccountNamespace" -}}
{{- $searchControllerDeployment := 1 }}
{{- if .Values.controllerServiceAccount }}
{{- if .Values.controllerServiceAccount.namespace }}
{{- $searchControllerDeployment = 0 }}
{{- .Values.controllerServiceAccount.namespace }}
{{- end }}
{{- end }}
{{- if eq $searchControllerDeployment 1 }}
{{- $multiNamespacesCounter := 0 }}
{{- $singleNamespaceCounter := 0 }}
{{- $controllerDeployment := dict }}
{{- $singleNamespaceControllerDeployments := dict }}
{{- $managerServiceAccountNamespace := "" }}
{{- range $index, $deployment := (lookup "apps/v1" "Deployment" "" "").items }}
{{- if kindIs "map" $deployment.metadata.labels }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-runner-scale-set-controller" }}
{{- if hasKey $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace" }}
{{- $singleNamespaceCounter = add $singleNamespaceCounter 1 }}
{{- $_ := set $singleNamespaceControllerDeployments (get $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace") $deployment}}
{{- else }}
{{- $multiNamespacesCounter = add $multiNamespacesCounter 1 }}
{{- $controllerDeployment = $deployment }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if and (eq $multiNamespacesCounter 0) (eq $singleNamespaceCounter 0) }}
{{- fail "No gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if and (gt $multiNamespacesCounter 0) (gt $singleNamespaceCounter 0) }}
{{- fail "Found both gha-runner-scale-set-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if gt $multiNamespacesCounter 1 }}
{{- fail "More than one gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if eq $multiNamespacesCounter 1 }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountNamespace = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-namespace") }}
{{- end }}
{{- else if gt $singleNamespaceCounter 0 }}
{{- if hasKey $singleNamespaceControllerDeployments .Release.Namespace }}
{{- $controllerDeployment = get $singleNamespaceControllerDeployments .Release.Namespace }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountNamespace = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-namespace") }}
{{- end }}
{{- else }}
{{- fail "No gha-runner-scale-set-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- end }}
{{- if eq $managerServiceAccountNamespace "" }}
{{- fail "No service account namespace found for gha-runner-scale-set-controller deployment using label (actions.github.com/controller-service-account-namespace), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- $managerServiceAccountNamespace }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,157 @@
apiVersion: actions.github.com/v1alpha1
kind: AutoscalingRunnerSet
metadata:
{{- if or (not .Release.Name) (gt (len .Release.Name) 45) }}
{{ fail "Name must have up to 45 characters" }}
{{- end }}
{{- if gt (len .Release.Namespace) 63 }}
{{ fail "Namespace must have up to 63 characters" }}
{{- end }}
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/component: "autoscaling-runner-set"
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
annotations:
{{- $containerMode := .Values.containerMode }}
{{- if not (kindIs "string" .Values.githubConfigSecret) }}
actions.github.com/cleanup-github-secret-name: {{ include "gha-runner-scale-set.githubsecret" . }}
{{- end }}
actions.github.com/cleanup-manager-role-binding: {{ include "gha-runner-scale-set.managerRoleBindingName" . }}
actions.github.com/cleanup-manager-role-name: {{ include "gha-runner-scale-set.managerRoleName" . }}
{{- if and $containerMode (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
actions.github.com/cleanup-kubernetes-mode-role-binding-name: {{ include "gha-runner-scale-set.kubeModeRoleBindingName" . }}
actions.github.com/cleanup-kubernetes-mode-role-name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
actions.github.com/cleanup-kubernetes-mode-service-account-name: {{ include "gha-runner-scale-set.kubeModeServiceAccountName" . }}
{{- end }}
{{- if and (ne $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
actions.github.com/cleanup-no-permission-service-account-name: {{ include "gha-runner-scale-set.noPermissionServiceAccountName" . }}
{{- end }}
spec:
githubConfigUrl: {{ required ".Values.githubConfigUrl is required" (trimSuffix "/" .Values.githubConfigUrl) }}
githubConfigSecret: {{ include "gha-runner-scale-set.githubsecret" . }}
{{- with .Values.runnerGroup }}
runnerGroup: {{ . }}
{{- end }}
{{- with .Values.runnerScaleSetName }}
runnerScaleSetName: {{ . }}
{{- end }}
{{- if .Values.githubServerTLS }}
githubServerTLS:
{{- with .Values.githubServerTLS.certificateFrom }}
certificateFrom:
configMapKeyRef:
name: {{ .configMapKeyRef.name }}
key: {{ .configMapKeyRef.key }}
{{- end }}
{{- end }}
{{- if .Values.proxy }}
proxy:
{{- if .Values.proxy.http }}
http:
url: {{ .Values.proxy.http.url }}
{{- if .Values.proxy.http.credentialSecretRef }}
credentialSecretRef: {{ .Values.proxy.http.credentialSecretRef }}
{{- end }}
{{- end }}
{{- if .Values.proxy.https }}
https:
url: {{ .Values.proxy.https.url }}
{{- if .Values.proxy.https.credentialSecretRef }}
credentialSecretRef: {{ .Values.proxy.https.credentialSecretRef }}
{{- end }}
{{- end }}
{{- if and .Values.proxy.noProxy (kindIs "slice" .Values.proxy.noProxy) }}
noProxy: {{ .Values.proxy.noProxy | toYaml | nindent 6}}
{{- end }}
{{- end }}
{{- if and (or (kindIs "int64" .Values.minRunners) (kindIs "float64" .Values.minRunners)) (or (kindIs "int64" .Values.maxRunners) (kindIs "float64" .Values.maxRunners)) }}
{{- if gt .Values.minRunners .Values.maxRunners }}
{{- fail "maxRunners has to be greater or equal to minRunners" }}
{{- end }}
{{- end }}
{{- if or (kindIs "int64" .Values.maxRunners) (kindIs "float64" .Values.maxRunners) }}
{{- if lt (.Values.maxRunners | int) 0 }}
{{- fail "maxRunners has to be greater or equal to 0" }}
{{- end }}
maxRunners: {{ .Values.maxRunners | int }}
{{- end }}
{{- if or (kindIs "int64" .Values.minRunners) (kindIs "float64" .Values.minRunners) }}
{{- if lt (.Values.minRunners | int) 0 }}
{{- fail "minRunners has to be greater or equal to 0" }}
{{- end }}
minRunners: {{ .Values.minRunners | int }}
{{- end }}
template:
{{- with .Values.template.metadata }}
metadata:
{{- with .labels }}
labels:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .annotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
spec:
{{- range $key, $val := .Values.template.spec }}
{{- if and (ne $key "containers") (ne $key "volumes") (ne $key "initContainers") (ne $key "serviceAccountName") }}
{{ $key }}: {{ $val | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- $containerMode := (default (dict "type" "") .Values.containerMode) }}
{{- if eq $containerMode.type "kubernetes" }}
serviceAccountName: {{ default (include "gha-runner-scale-set.kubeModeServiceAccountName" .) .Values.template.spec.serviceAccountName }}
{{- else }}
serviceAccountName: {{ default (include "gha-runner-scale-set.noPermissionServiceAccountName" .) .Values.template.spec.serviceAccountName }}
{{- end }}
{{- if or .Values.template.spec.initContainers (eq $containerMode.type "dind") }}
initContainers:
{{- if eq $containerMode.type "dind" }}
- name: init-dind-externals
{{- include "gha-runner-scale-set.dind-init-container" . | nindent 8 }}
{{- end }}
{{- with .Values.template.spec.initContainers }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
containers:
{{- if eq $containerMode.type "dind" }}
- name: runner
{{- include "gha-runner-scale-set.dind-runner-container" . | nindent 8 }}
- name: dind
{{- include "gha-runner-scale-set.dind-container" . | nindent 8 }}
{{- include "gha-runner-scale-set.non-runner-non-dind-containers" . | nindent 6 }}
{{- else if eq $containerMode.type "kubernetes" }}
- name: runner
{{- include "gha-runner-scale-set.kubernetes-mode-runner-container" . | nindent 8 }}
{{- include "gha-runner-scale-set.non-runner-containers" . | nindent 6 }}
{{- else }}
{{- include "gha-runner-scale-set.default-mode-runner-containers" . | nindent 6 }}
{{- end }}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- if or .Values.template.spec.volumes (eq $containerMode.type "dind") (eq $containerMode.type "kubernetes") $tlsConfig.runnerMountPath }}
volumes:
{{- if $tlsConfig.runnerMountPath }}
{{- include "gha-runner-scale-set.tls-volume" $tlsConfig | nindent 6 }}
{{- end }}
{{- if eq $containerMode.type "dind" }}
{{- include "gha-runner-scale-set.dind-volume" . | nindent 6 }}
{{- include "gha-runner-scale-set.dind-work-volume" . | nindent 6 }}
{{- include "gha-runner-scale-set.non-work-volumes" . | nindent 6 }}
{{- else if eq $containerMode.type "kubernetes" }}
{{- include "gha-runner-scale-set.kubernetes-mode-work-volume" . | nindent 6 }}
{{- include "gha-runner-scale-set.non-work-volumes" . | nindent 6 }}
{{- else }}
{{- with .Values.template.spec.volumes }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
{{- end }}
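The guards above mean that invalid scaling bounds fail at render time rather than at runtime: negative values are rejected, as is any combination where minRunners exceeds maxRunners. A values sketch that passes these checks (the numbers are arbitrary examples):

minRunners: 0
maxRunners: 5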


@@ -0,0 +1,39 @@
{{- if not (kindIs "string" .Values.githubConfigSecret) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "gha-runner-scale-set.githubsecret" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
finalizers:
- actions.github.com/cleanup-protection
data:
{{- $hasToken := false }}
{{- $hasAppId := false }}
{{- $hasInstallationId := false }}
{{- $hasPrivateKey := false }}
{{- range $secretName, $secretValue := (required ".Values.githubConfigSecret is required to authenticate with the GitHub server." .Values.githubConfigSecret) }}
{{- if $secretValue }}
{{ $secretName }}: {{ $secretValue | toString | b64enc }}
{{- if eq $secretName "github_token" }}
{{- $hasToken = true }}
{{- end }}
{{- if eq $secretName "github_app_id" }}
{{- $hasAppId = true }}
{{- end }}
{{- if eq $secretName "github_app_installation_id" }}
{{- $hasInstallationId = true }}
{{- end }}
{{- if eq $secretName "github_app_private_key" }}
{{- $hasPrivateKey = true }}
{{- end }}
{{- end }}
{{- end }}
{{- if and (not $hasToken) (not $hasAppId) }}
{{- fail "A valid .Values.githubConfigSecret is required to authenticate with the GitHub server. Provide .Values.githubConfigSecret.github_token or .Values.githubConfigSecret.github_app_id." }}
{{- end }}
{{- if and $hasAppId (or (not $hasInstallationId) (not $hasPrivateKey)) }}
{{- fail "A valid .Values.githubConfigSecret is required to authenticate with the GitHub server. When using .Values.githubConfigSecret.github_app_id, also provide .Values.githubConfigSecret.github_app_installation_id and .Values.githubConfigSecret.github_app_private_key." }}
{{- end }}
{{- end }}
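The checks above accept either a PAT or a complete GitHub App triple. A values.yaml sketch for the App variant (the ids and key are placeholders):

githubConfigSecret:
  github_app_id: "123456"
  github_app_installation_id: "654321"
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...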


@@ -0,0 +1,27 @@
{{- $containerMode := (default (dict "type" "") .Values.containerMode) }}
{{- if and (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
# default permission for runner pod service account in kubernetes mode (container hook)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
namespace: {{ .Release.Namespace }}
finalizers:
- actions.github.com/cleanup-protection
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list", "watch",]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "delete"]
{{- end }}
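This Role, along with the matching RoleBinding and ServiceAccount below, is only rendered when the runner pod template does not name its own service account. A values sketch that opts out by supplying one (the account name is a placeholder):

template:
  spec:
    serviceAccountName: my-existing-runner-sa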


@@ -0,0 +1,18 @@
{{- $containerMode := (default (dict "type" "") .Values.containerMode) }}
{{- if and (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set.kubeModeRoleBindingName" . }}
namespace: {{ .Release.Namespace }}
finalizers:
- actions.github.com/cleanup-protection
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set.kubeModeServiceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}


@@ -0,0 +1,12 @@
{{- $containerMode := (default (dict "type" "") .Values.containerMode) }}
{{- if and (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "gha-runner-scale-set.kubeModeServiceAccountName" . }}
namespace: {{ .Release.Namespace }}
finalizers:
- actions.github.com/cleanup-protection
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,75 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set.managerRoleName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
app.kubernetes.io/component: manager-role
finalizers:
- actions.github.com/cleanup-protection
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- apiGroups:
- ""
resources:
- pods/status
verbs:
- get
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- list
- patch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- patch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- get
- patch
- update
{{- if .Values.githubServerTLS }}
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
{{- end }}


@@ -0,0 +1,18 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set.managerRoleBindingName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
app.kubernetes.io/component: manager-role-binding
finalizers:
- actions.github.com/cleanup-protection
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set.managerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set.managerServiceAccountName" . | nindent 4 }}
namespace: {{ include "gha-runner-scale-set.managerServiceAccountNamespace" . | nindent 4 }}


@@ -0,0 +1,12 @@
{{- $containerMode := (default (dict "type" "") .Values.containerMode) }}
{{- if and (ne $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "gha-runner-scale-set.noPermissionServiceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
finalizers:
- actions.github.com/cleanup-protection
{{- end }}

File diff suppressed because it is too large.


@@ -0,0 +1,8 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
maxRunners: 10
minRunners: 5
controllerServiceAccount:
name: "arc"
namespace: "arc-system"


@@ -0,0 +1,19 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: other
image: other-image:latest
volumes:
- name: foo
emptyDir: {}
- name: bar
emptyDir: {}
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: dind


@@ -0,0 +1,31 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
env:
- name: DOCKER_HOST
value: tcp://localhost:9999
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
volumes:
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: dind


@@ -0,0 +1,46 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
env:
- name: SOME_ENV
value: SOME_VALUE
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
- name: other
image: other-image:latest
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
volumes:
- name: work
hostPath:
path: /data
type: Directory
dnsPolicy: "None"
dnsConfig:
nameservers:
- 192.0.2.1
containerMode:
type: none


@@ -0,0 +1,12 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
dnsPolicy: "None"
dnsConfig:
nameservers:
- 192.0.2.1


@@ -0,0 +1,17 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: other
image: other-image:latest
volumes:
- name: foo
emptyDir: {}
- name: bar
emptyDir: {}
- name: work
hostPath:
path: /data
type: Directory

View File

@@ -0,0 +1,19 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: other
image: other-image:latest
volumes:
- name: foo
emptyDir: {}
- name: bar
emptyDir: {}
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: kubernetes


@@ -0,0 +1,31 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
env:
- name: ACTIONS_RUNNER_CONTAINER_HOOKS
value: /k8s/index.js
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
volumes:
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: kubernetes


@@ -0,0 +1,172 @@
## githubConfigUrl is the GitHub URL where you want to configure runners
## ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: ""
## githubConfigSecret is the Kubernetes secret to use when authenticating with the GitHub API.
## You can choose to use a GitHub App or a PAT (personal access token)
githubConfigSecret:
### GitHub Apps Configuration
## NOTE: IDs MUST be strings, use quotes
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
github_token: ""
## If you have a pre-defined Kubernetes secret in the same namespace where the gha-runner-scale-set is going to be deployed,
## you can also reference it via `githubConfigSecret: pre-defined-secret`.
## You need to make sure your pre-defined secret has all of the required secret data set properly.
## For a pre-defined secret using GitHub PAT, the secret needs to be created like this:
## > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_token='ghp_your_pat'
## For a pre-defined secret using GitHub App, the secret needs to be created like this:
## > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_app_id=123456 --from-literal=github_app_installation_id=654321 --from-literal=github_app_private_key='-----BEGIN CERTIFICATE-----*******'
# githubConfigSecret: pre-defined-secret
## proxy can be used to define proxy settings that will be used by the
## controller, the listener and the runner of this scale set.
#
# proxy:
# http:
# url: http://proxy.com:1234
# credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
# https:
# url: http://proxy.com:1234
# credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
# noProxy:
# - example.com
# - example.org
## maxRunners is the max number of runners the auto scaling runner set will scale up to.
# maxRunners: 5
## minRunners is the min number of runners the auto scaling runner set will scale down to.
# minRunners: 0
# runnerGroup: "default"
## name of the runner scale set to create. Defaults to the helm release name
# runnerScaleSetName: ""
## A self-signed CA certificate for communication with the GitHub server can be
## provided using a config map key selector. If `runnerMountPath` is set, for
## each runner pod ARC will:
## - create a `github-server-tls-cert` volume containing the certificate
## specified in `certificateFrom`
## - mount that volume on path `runnerMountPath`/{certificate name}
## - set NODE_EXTRA_CA_CERTS environment variable to that same path
## - set RUNNER_UPDATE_CA_CERTS environment variable to "1" (as of version
## 2.303.0 this will instruct the runner to reload certificates on the host)
##
## If any of the above have already been set by the user in the runner pod
## template, ARC will observe them and not overwrite them.
## Example configuration:
#
# githubServerTLS:
# certificateFrom:
# configMapKeyRef:
# name: config-map-name
# key: ca.crt
# runnerMountPath: /usr/local/share/ca-certificates/
# containerMode:
# type: "dind" ## type can be set to dind or kubernetes
# ## the following is required when containerMode.type=kubernetes
# kubernetesModeWorkVolumeClaim:
# accessModes: ["ReadWriteOnce"]
# # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
# storageClassName: "dynamic-blob-storage"
# resources:
# requests:
# storage: 1Gi
## template is the PodSpec for each runner Pod
template:
  ## template.spec will be modified if you change the container mode.
  ## With containerMode.type=dind, we will populate the template.spec with the following pod spec:
## template:
## spec:
## initContainers:
## - name: init-dind-externals
## image: ghcr.io/actions/actions-runner:latest
## command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
## volumeMounts:
## - name: dind-externals
## mountPath: /home/runner/tmpDir
## containers:
## - name: runner
## image: ghcr.io/actions/actions-runner:latest
## env:
## - name: DOCKER_HOST
## value: tcp://localhost:2376
## - name: DOCKER_TLS_VERIFY
## value: "1"
## - name: DOCKER_CERT_PATH
## value: /certs/client
## volumeMounts:
## - name: work
## mountPath: /home/runner/_work
## - name: dind-cert
## mountPath: /certs/client
## readOnly: true
## - name: dind
## image: docker:dind
## securityContext:
## privileged: true
## volumeMounts:
## - name: work
## mountPath: /home/runner/_work
## - name: dind-cert
## mountPath: /certs/client
## - name: dind-externals
## mountPath: /home/runner/externals
## volumes:
## - name: work
## emptyDir: {}
## - name: dind-cert
## emptyDir: {}
## - name: dind-externals
## emptyDir: {}
######################################################################################################
  ## With containerMode.type=kubernetes, we will populate the template.spec with the following pod spec:
## template:
## spec:
## containers:
## - name: runner
## image: ghcr.io/actions/actions-runner:latest
## env:
## - name: ACTIONS_RUNNER_CONTAINER_HOOKS
## value: /home/runner/k8s/index.js
## - name: ACTIONS_RUNNER_POD_NAME
## valueFrom:
## fieldRef:
## fieldPath: metadata.name
## - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
## value: "true"
## volumeMounts:
## - name: work
## mountPath: /home/runner/_work
## volumes:
## - name: work
## ephemeral:
## volumeClaimTemplate:
## spec:
## accessModes: [ "ReadWriteOnce" ]
## storageClassName: "local-path"
## resources:
## requests:
## storage: 1Gi
spec:
containers:
- name: runner
image: ghcr.io/actions/actions-runner:latest
command: ["/home/runner/run.sh"]
## Optional controller service account that needs to have the required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following values
## to help it create the RoleBinding with the right service account.
## Note: if your controller is installed to watch only a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
# namespace: arc-system
# name: test-arc-gha-runner-scale-set-controller
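Putting the documented options together, a minimal working configuration for dind mode might look like the sketch below (the URL and token are placeholders):

githubConfigUrl: https://github.com/my-org/my-repo
githubConfigSecret:
  github_token: ghp_replace_me
containerMode:
  type: dind
minRunners: 0
maxRunners: 5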


@@ -0,0 +1,129 @@
package main
import (
"context"
"encoding/json"
"fmt"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
jsonpatch "github.com/evanphx/json-patch"
"github.com/go-logr/logr"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
type AutoScalerKubernetesManager struct {
*kubernetes.Clientset
logger logr.Logger
}
func NewKubernetesManager(logger *logr.Logger) (*AutoScalerKubernetesManager, error) {
conf, err := rest.InClusterConfig()
if err != nil {
return nil, err
}
kubeClient, err := kubernetes.NewForConfig(conf)
if err != nil {
return nil, err
}
var manager = &AutoScalerKubernetesManager{
Clientset: kubeClient,
logger: logger.WithName("KubernetesManager"),
}
return manager, nil
}
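// ScaleEphemeralRunnerSet sets the replica count of an EphemeralRunnerSet by
// computing a JSON merge patch between a placeholder spec (replicas: -1) and the
// desired spec, then submitting it to the custom resource endpoint via the REST client.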
func (k *AutoScalerKubernetesManager) ScaleEphemeralRunnerSet(ctx context.Context, namespace, resourceName string, runnerCount int) error {
original := &v1alpha1.EphemeralRunnerSet{
Spec: v1alpha1.EphemeralRunnerSetSpec{
Replicas: -1,
},
}
originalJson, err := json.Marshal(original)
if err != nil {
return fmt.Errorf("could not marshal empty ephemeral runner set, error: %w", err)
}
patch := &v1alpha1.EphemeralRunnerSet{
Spec: v1alpha1.EphemeralRunnerSetSpec{
Replicas: runnerCount,
},
}
patchJson, err := json.Marshal(patch)
if err != nil {
return fmt.Errorf("could not marshal patch ephemeral runner set, error: %w", err)
}
mergePatch, err := jsonpatch.CreateMergePatch(originalJson, patchJson)
if err != nil {
return fmt.Errorf("could not create merge patch json for ephemeral runner set, error: %w", err)
}
k.logger.Info("Created merge patch json for EphemeralRunnerSet update", "json", string(mergePatch))
patchedEphemeralRunnerSet := &v1alpha1.EphemeralRunnerSet{}
err = k.RESTClient().
Patch(types.MergePatchType).
Prefix("apis", "actions.github.com", "v1alpha1").
Namespace(namespace).
Resource("EphemeralRunnerSets").
Name(resourceName).
Body([]byte(mergePatch)).
Do(ctx).
Into(patchedEphemeralRunnerSet)
if err != nil {
return fmt.Errorf("could not patch ephemeral runner set , patch JSON: %s, error: %w", string(mergePatch), err)
}
k.logger.Info("Ephemeral runner set scaled.", "namespace", namespace, "name", resourceName, "replicas", patchedEphemeralRunnerSet.Spec.Replicas)
return nil
}
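// UpdateEphemeralRunnerWithJobInfo patches the status subresource of an
// EphemeralRunner with the details of the job it is running (request id,
// owner/repository, workflow run id, workflow ref and display name).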
func (k *AutoScalerKubernetesManager) UpdateEphemeralRunnerWithJobInfo(ctx context.Context, namespace, resourceName, ownerName, repositoryName, jobWorkflowRef, jobDisplayName string, workflowRunId, jobRequestId int64) error {
original := &v1alpha1.EphemeralRunner{}
originalJson, err := json.Marshal(original)
if err != nil {
return fmt.Errorf("could not marshal empty ephemeral runner, error: %w", err)
}
patch := &v1alpha1.EphemeralRunner{
Status: v1alpha1.EphemeralRunnerStatus{
JobRequestId: jobRequestId,
JobRepositoryName: fmt.Sprintf("%s/%s", ownerName, repositoryName),
WorkflowRunId: workflowRunId,
JobWorkflowRef: jobWorkflowRef,
JobDisplayName: jobDisplayName,
},
}
patchedJson, err := json.Marshal(patch)
if err != nil {
return fmt.Errorf("could not marshal patched ephemeral runner, error: %w", err)
}
mergePatch, err := jsonpatch.CreateMergePatch(originalJson, patchedJson)
if err != nil {
return fmt.Errorf("could not create merge patch json for ephemeral runner, error: %w", err)
}
k.logger.Info("Created merge patch json for EphemeralRunner status update", "json", string(mergePatch))
patchedStatus := &v1alpha1.EphemeralRunner{}
err = k.RESTClient().
Patch(types.MergePatchType).
Prefix("apis", "actions.github.com", "v1alpha1").
Namespace(namespace).
Resource("EphemeralRunners").
Name(resourceName).
SubResource("status").
Body(mergePatch).
Do(ctx).
Into(patchedStatus)
if err != nil {
return fmt.Errorf("could not patch ephemeral runner status, patch JSON: %s, error: %w", string(mergePatch), err)
}
return nil
}


@@ -0,0 +1,184 @@
package main
import (
"context"
"encoding/json"
"fmt"
"math/rand"
"net/http"
"os"
"time"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/go-logr/logr"
"github.com/google/uuid"
"github.com/pkg/errors"
)
const (
sessionCreationMaxRetryCount = 10
)
type devContextKey bool
var testIgnoreSleep devContextKey = true
type AutoScalerClient struct {
client actions.SessionService
logger logr.Logger
lastMessageId int64
initialMessage *actions.RunnerScaleSetMessage
}
func NewAutoScalerClient(
ctx context.Context,
client actions.ActionsService,
logger *logr.Logger,
runnerScaleSetId int,
options ...func(*AutoScalerClient),
) (*AutoScalerClient, error) {
listener := AutoScalerClient{
logger: logger.WithName("auto_scaler"),
}
session, initialMessage, err := createSession(ctx, &listener.logger, client, runnerScaleSetId)
if err != nil {
return nil, fmt.Errorf("fail to create session. %w", err)
}
listener.lastMessageId = 0
listener.initialMessage = initialMessage
listener.client = newSessionClient(client, logger, session)
for _, option := range options {
option(&listener)
}
return &listener, nil
}
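// createSession opens a message queue session for the runner scale set. Conflicts
// (another session still exists) and transient errors are retried up to
// sessionCreationMaxRetryCount times with a randomized 30-45s sleep; other
// client-side errors abort immediately. If the scale set already has available or
// assigned jobs, an initial message is synthesized from the acquirable job list so
// those jobs are not lost between sessions.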
func createSession(ctx context.Context, logger *logr.Logger, client actions.ActionsService, runnerScaleSetId int) (*actions.RunnerScaleSetSession, *actions.RunnerScaleSetMessage, error) {
hostName, err := os.Hostname()
if err != nil {
hostName = uuid.New().String()
logger.Info("could not get hostname, fail back to a random string.", "fallback", hostName)
}
var runnerScaleSetSession *actions.RunnerScaleSetSession
var retryCount int
for {
runnerScaleSetSession, err = client.CreateMessageSession(ctx, runnerScaleSetId, hostName)
if err == nil {
break
}
clientSideError := &actions.HttpClientSideError{}
if errors.As(err, &clientSideError) && clientSideError.Code != http.StatusConflict {
logger.Info("unable to create message session. The error indicates something is wrong on the client side, won't make any retry.")
return nil, nil, fmt.Errorf("create message session http request failed. %w", err)
}
retryCount++
if retryCount >= sessionCreationMaxRetryCount {
return nil, nil, fmt.Errorf("create message session failed since it exceed %d retry limit. %w", sessionCreationMaxRetryCount, err)
}
logger.Info("unable to create message session. Will try again in 30 seconds", "error", err.Error())
if ok := ctx.Value(testIgnoreSleep); ok == nil {
time.Sleep(getRandomDuration(30, 45))
}
}
statistics, _ := json.Marshal(runnerScaleSetSession.Statistics)
logger.Info("current runner scale set statistics.", "statistics", string(statistics))
if runnerScaleSetSession.Statistics.TotalAvailableJobs > 0 || runnerScaleSetSession.Statistics.TotalAssignedJobs > 0 {
acquirableJobs, err := client.GetAcquirableJobs(ctx, runnerScaleSetId)
if err != nil {
return nil, nil, fmt.Errorf("get acquirable jobs failed. %w", err)
}
acquirableJobsJson, err := json.Marshal(acquirableJobs.Jobs)
if err != nil {
return nil, nil, fmt.Errorf("marshal acquirable jobs failed. %w", err)
}
initialMessage := &actions.RunnerScaleSetMessage{
MessageId: 0,
MessageType: "RunnerScaleSetJobMessages",
Statistics: runnerScaleSetSession.Statistics,
Body: string(acquirableJobsJson),
}
return runnerScaleSetSession, initialMessage, nil
}
return runnerScaleSetSession, nil, nil
}
func (m *AutoScalerClient) Close() error {
m.logger.Info("closing.")
return m.client.Close()
}
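// GetRunnerScaleSetMessage delivers exactly one message to the handler: the
// buffered initial message if one exists, otherwise it polls GetMessage until a
// message arrives, invokes the handler, records the message id, and deletes the
// message from the queue.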
func (m *AutoScalerClient) GetRunnerScaleSetMessage(ctx context.Context, handler func(msg *actions.RunnerScaleSetMessage) error) error {
if m.initialMessage != nil {
err := handler(m.initialMessage)
if err != nil {
return fmt.Errorf("fail to process initial message. %w", err)
}
m.initialMessage = nil
return nil
}
for {
message, err := m.client.GetMessage(ctx, m.lastMessageId)
if err != nil {
return fmt.Errorf("get message failed from refreshing client. %w", err)
}
if message == nil {
continue
}
err = handler(message)
if err != nil {
return fmt.Errorf("handle message failed. %w", err)
}
m.lastMessageId = message.MessageId
return m.deleteMessage(ctx, message.MessageId)
}
}
func (m *AutoScalerClient) deleteMessage(ctx context.Context, messageId int64) error {
err := m.client.DeleteMessage(ctx, messageId)
if err != nil {
return fmt.Errorf("delete message failed from refreshing client. %w", err)
}
m.logger.Info("deleted message.", "messageId", messageId)
return nil
}
func (m *AutoScalerClient) AcquireJobsForRunnerScaleSet(ctx context.Context, requestIds []int64) error {
m.logger.Info("acquiring jobs.", "request count", len(requestIds), "requestIds", fmt.Sprint(requestIds))
if len(requestIds) == 0 {
return nil
}
ids, err := m.client.AcquireJobs(ctx, requestIds)
if err != nil {
return fmt.Errorf("acquire jobs failed from refreshing client. %w", err)
}
m.logger.Info("acquired jobs.", "requested", len(requestIds), "acquired", len(ids))
return nil
}
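// getRandomDuration returns a random duration of [minSeconds, maxSeconds) seconds,
// used to jitter the session-creation retry sleep.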
func getRandomDuration(minSeconds, maxSeconds int) time.Duration {
return time.Duration(rand.Intn(maxSeconds-minSeconds)+minSeconds) * time.Second
}


@@ -0,0 +1,701 @@
package main
import (
"context"
"fmt"
"testing"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/actions/actions-runner-controller/logging"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestCreateSession(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
require.NoError(t, err, "Error creating autoscaler client")
assert.NotNil(t, asClient.client, "Session client should be set")
assert.Nil(t, asClient.initialMessage, "Initial message should be nil")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should be 0")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestCreateSession_CreateInitMessage(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{
TotalAvailableJobs: 1,
TotalAssignedJobs: 5,
},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockActionsClient.On("GetAcquirableJobs", ctx, 1).Return(&actions.AcquirableJobList{
Count: 1,
Jobs: []actions.AcquirableJob{
{
RunnerRequestId: 1,
OwnerName: "owner",
RepositoryName: "repo",
AcquireJobUrl: "https://github.com",
},
},
}, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
require.NoError(t, err, "Error creating autoscaler client")
assert.NotNil(t, asClient.client, "Session client should be set")
assert.NotNil(t, asClient.initialMessage, "Initial message should not be nil")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should be 0")
assert.Equal(t, int64(0), asClient.initialMessage.MessageId, "Initial message id should be 0")
assert.Equal(t, "RunnerScaleSetJobMessages", asClient.initialMessage.MessageType, "Initial message type should be RunnerScaleSetJobMessages")
assert.Equal(t, 5, asClient.initialMessage.Statistics.TotalAssignedJobs, "Initial message total assigned jobs should be 5")
assert.Equal(t, 1, asClient.initialMessage.Statistics.TotalAvailableJobs, "Initial message total available jobs should be 1")
assert.Equal(t, "[{\"acquireJobUrl\":\"https://github.com\",\"messageType\":\"\",\"runnerRequestId\":1,\"repositoryName\":\"repo\",\"ownerName\":\"owner\",\"jobWorkflowRef\":\"\",\"eventName\":\"\",\"requestLabels\":null}]", asClient.initialMessage.Body, "Initial message body is not correct")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestCreateSession_CreateInitMessageWithOnlyAssignedJobs(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{
TotalAssignedJobs: 5,
},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockActionsClient.On("GetAcquirableJobs", ctx, 1).Return(&actions.AcquirableJobList{
Count: 0,
Jobs: []actions.AcquirableJob{},
}, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
require.NoError(t, err, "Error creating autoscaler client")
assert.NotNil(t, asClient.client, "Session client should be set")
assert.NotNil(t, asClient.initialMessage, "Initial message should not be nil")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should be 0")
assert.Equal(t, int64(0), asClient.initialMessage.MessageId, "Initial message id should be 0")
assert.Equal(t, "RunnerScaleSetJobMessages", asClient.initialMessage.MessageType, "Initial message type should be RunnerScaleSetJobMessages")
assert.Equal(t, 5, asClient.initialMessage.Statistics.TotalAssignedJobs, "Initial message total assigned jobs should be 5")
assert.Equal(t, 0, asClient.initialMessage.Statistics.TotalAvailableJobs, "Initial message total available jobs should be 0")
assert.Equal(t, "[]", asClient.initialMessage.Body, "Initial message body is not correct")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestCreateSession_CreateInitMessageFailed(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{
TotalAvailableJobs: 1,
TotalAssignedJobs: 5,
},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockActionsClient.On("GetAcquirableJobs", ctx, 1).Return(nil, fmt.Errorf("error"))
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
assert.ErrorContains(t, err, "get acquirable jobs failed. error", "Unexpected error")
assert.Nil(t, asClient, "Client should be nil")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestCreateSession_RetrySessionConflict(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.WithValue(context.Background(), testIgnoreSleep, true)
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(nil, &actions.HttpClientSideError{
Code: 409,
}).Once()
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil).Once()
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
require.NoError(t, err, "Error creating autoscaler client")
assert.NotNil(t, asClient.client, "Session client should be set")
assert.Nil(t, asClient.initialMessage, "Initial message should be nil")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should be 0")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestCreateSession_RetrySessionConflict_RunOutOfRetry(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.WithValue(context.Background(), testIgnoreSleep, true)
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(nil, &actions.HttpClientSideError{
Code: 409,
})
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
assert.Error(t, err, "Error should be returned")
assert.Nil(t, asClient, "AutoScaler should be nil")
assert.True(t, mockActionsClient.AssertNumberOfCalls(t, "CreateMessageSession", sessionCreationMaxRetryCount), "CreateMessageSession should be called 10 times")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestCreateSession_NotRetryOnGeneralException(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.WithValue(context.Background(), testIgnoreSleep, true)
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(nil, &actions.HttpClientSideError{
Code: 403,
})
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
assert.Error(t, err, "Error should be returned")
assert.Nil(t, asClient, "AutoScaler should be nil")
assert.True(t, mockActionsClient.AssertNumberOfCalls(t, "CreateMessageSession", 1), "CreateMessageSession should be called 1 time and not retry on generic error")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestDeleteSession(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("Close").Return(nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.Close()
assert.NoError(t, err, "Error deleting session")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
func TestDeleteSession_Failed(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("Close").Return(fmt.Errorf("error"))
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.Close()
assert.Error(t, err, "Error should be returned")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
func TestGetRunnerScaleSetMessage(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("GetMessage", ctx, int64(0)).Return(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "test",
Body: "test",
}, nil)
mockSessionClient.On("DeleteMessage", ctx, int64(1)).Return(nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.NoError(t, err, "Error getting message")
assert.Equal(t, int64(1), asClient.lastMessageId, "Last message id should be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
func TestGetRunnerScaleSetMessage_HandleFailed(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("GetMessage", ctx, int64(0)).Return(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "test",
Body: "test",
}, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return fmt.Errorf("error")
})
assert.ErrorContains(t, err, "handle message failed. error", "Error getting message")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should not be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
func TestGetRunnerScaleSetMessage_HandleInitialMessage(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{
TotalAvailableJobs: 1,
TotalAssignedJobs: 2,
},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockActionsClient.On("GetAcquirableJobs", ctx, 1).Return(&actions.AcquirableJobList{
Count: 1,
Jobs: []actions.AcquirableJob{
{
RunnerRequestId: 1,
OwnerName: "owner",
RepositoryName: "repo",
AcquireJobUrl: "https://github.com",
},
},
}, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
require.NoError(t, err, "Error creating autoscaler client")
require.NotNil(t, asClient.initialMessage, "Initial message should be set")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.NoError(t, err, "Error getting message")
assert.Nil(t, asClient.initialMessage, "Initial message should be nil")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should not be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestGetRunnerScaleSetMessage_HandleInitialMessageFailed(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{
TotalAvailableJobs: 1,
TotalAssignedJobs: 2,
},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockActionsClient.On("GetAcquirableJobs", ctx, 1).Return(&actions.AcquirableJobList{
Count: 1,
Jobs: []actions.AcquirableJob{
{
RunnerRequestId: 1,
OwnerName: "owner",
RepositoryName: "repo",
AcquireJobUrl: "https://github.com",
},
},
}, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1)
require.NoError(t, err, "Error creating autoscaler client")
require.NotNil(t, asClient.initialMessage, "Initial message should be set")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return fmt.Errorf("error")
})
assert.ErrorContains(t, err, "fail to process initial message. error", "Error getting message")
assert.NotNil(t, asClient.initialMessage, "Initial message should still be set")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should not be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestGetRunnerScaleSetMessage_RetryUntilGetMessage(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("GetMessage", ctx, int64(0)).Return(nil, nil).Times(3)
mockSessionClient.On("GetMessage", ctx, int64(0)).Return(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "test",
Body: "test",
}, nil).Once()
mockSessionClient.On("DeleteMessage", ctx, int64(1)).Return(nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.NoError(t, err, "Error getting message")
assert.Equal(t, int64(1), asClient.lastMessageId, "Last message id should be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestGetRunnerScaleSetMessage_ErrorOnGetMessage(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("GetMessage", ctx, int64(0)).Return(nil, fmt.Errorf("error"))
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
return fmt.Errorf("Should not be called")
})
assert.ErrorContains(t, err, "get message failed from refreshing client. error", "Error should be returned")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should not be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
func TestDeleteRunnerScaleSetMessage_Error(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("GetMessage", ctx, int64(0)).Return(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "test",
Body: "test",
}, nil)
mockSessionClient.On("DeleteMessage", ctx, int64(1)).Return(fmt.Errorf("error"))
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.ErrorContains(t, err, "delete message failed from refreshing client. error", "Error getting message")
assert.Equal(t, int64(1), asClient.lastMessageId, "Last message id should be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
func TestAcquireJobsForRunnerScaleSet(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("AcquireJobs", ctx, mock.MatchedBy(func(ids []int64) bool { return ids[0] == 1 && ids[1] == 2 && ids[2] == 3 })).Return([]int64{1, 2, 3}, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.AcquireJobsForRunnerScaleSet(ctx, []int64{1, 2, 3})
assert.NoError(t, err, "Error acquiring jobs")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
func TestAcquireJobsForRunnerScaleSet_SkipEmptyList(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.AcquireJobsForRunnerScaleSet(ctx, []int64{})
assert.NoError(t, err, "Error acquiring jobs")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
func TestAcquireJobsForRunnerScaleSet_Failed(t *testing.T) {
mockActionsClient := &actions.MockActionsService{}
mockSessionClient := &actions.MockSessionService{}
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, err, "Error creating logger")
ctx := context.Background()
sessionId := uuid.New()
session := &actions.RunnerScaleSetSession{
SessionId: &sessionId,
OwnerName: "owner",
MessageQueueUrl: "https://github.com",
MessageQueueAccessToken: "token",
RunnerScaleSet: &actions.RunnerScaleSet{
Id: 1,
},
Statistics: &actions.RunnerScaleSetStatistic{},
}
mockActionsClient.On("CreateMessageSession", ctx, 1, mock.Anything).Return(session, nil)
mockSessionClient.On("AcquireJobs", ctx, mock.Anything).Return(nil, fmt.Errorf("error"))
asClient, err := NewAutoScalerClient(ctx, mockActionsClient, &logger, 1, func(asc *AutoScalerClient) {
asc.client = mockSessionClient
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.AcquireJobsForRunnerScaleSet(ctx, []int64{1, 2, 3})
assert.ErrorContains(t, err, "acquire jobs failed from refreshing client. error", "Expect error acquiring jobs")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}

Some files were not shown because too many files have changed in this diff.