Compare commits

...

402 Commits

Author SHA1 Message Date
Bassem Dghaidi
f49d08e4bc Update 2022-12-05-adding-labels-k8s-resources.md (#2420) 2023-03-17 06:39:56 -04:00
Tingluo Huang
064039afc0 Ignore extra dind container when containerMode.type=dind. (#2418) 2023-03-17 09:26:51 +01:00
Nikola Jokic
e5d8d65396 Introduce ADR change for adding labels to our resources (#2407)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-16 11:02:42 -04:00
Bassem Dghaidi
c465ace8fb Update the values.yaml sample for improved clarity (#2416) 2023-03-16 11:02:18 -04:00
Tingluo Huang
34f3878829 Fix helm chart rendering errors. (#2414) 2023-03-16 09:21:43 -04:00
Tingluo Huang
44c3931d8e Adding e2e workflows to test dind, kube mode and proxy (#2412) 2023-03-15 12:17:11 -04:00
Tingluo Huang
08acb1b831 Get RunnerScaleSet based on both RunnerGroupId and Name. (#2413) 2023-03-15 11:10:09 -04:00
Tingluo Huang
40811ebe0e Support the controller watching a single namespace. (#2374) 2023-03-14 10:52:25 -04:00
github-actions[bot]
3417c5a3a8 Update runner to version 2.303.0 (#2411)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-03-14 15:41:03 +01:00
Bassem Dghaidi
172faa883c Fix GITHUB_TOKEN permissions (#2410) 2023-03-14 10:38:04 -04:00
Tingluo Huang
9e6c7d019f Delay role/rolebinding creation to gha-runner-scale-set installation time (#2363) 2023-03-14 09:45:44 -04:00
Bassem Dghaidi
9fbcafa703 Fix canary image tag name (#2409) 2023-03-14 09:29:10 -04:00
Tingluo Huang
2bf83d0d7f Remove list/watch secrets permission from the manager cluster role. (#2276) 2023-03-14 09:23:14 -04:00
Bassem Dghaidi
19d30dea5f Add docker buildx pre-requisites (#2408) 2023-03-14 09:22:38 -04:00
Bassem Dghaidi
6c66c1633f Prevent releases on wrong tag name (#2406) 2023-03-14 09:13:25 -04:00
Bassem Dghaidi
e55708588b Add gha-runner-scale-set-controller canary build (#2405) 2023-03-14 09:12:53 -04:00
Tingluo Huang
261d4371b5 Update E2E test workflow. (#2395) 2023-03-14 09:00:07 -04:00
Tingluo Huang
bd9f32e354 Create separate chart validation workflow for gha-* charts. (#2393)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-13 12:44:54 -04:00
Nikola Jokic
babbfc77d5 Surface EphemeralRunnerSet stats to AutoscalingRunnerSet (#2382) 2023-03-13 16:16:28 +01:00
Bassem Dghaidi
322df79617 Delete renovate.json5 (#2397) 2023-03-13 08:39:07 -04:00
Bassem Dghaidi
1c7c6639ed Fix wrong file name in the workflow (#2394) 2023-03-13 06:56:21 -04:00
Hamish Forbes
bcaac39a2e feat(actionsmetrics): Add owner and workflow_name labels to workflow job metrics (#2225) 2023-03-13 10:50:36 +09:00
Milas Bowman
af625dd1cb Upgrade to Docker Engine v20.10.23 (#2328)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-13 10:29:40 +09:00
Bassem Dghaidi
44969659df Add upgrade steps (#2392)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-10 12:14:00 -05:00
Nikola Jokic
a5f98dea75 Refactor main.go and introduce make run-scaleset to be able to run manager locally (#2337) 2023-03-10 18:05:51 +01:00
Francesco Renzi
1d24d3b00d Prepare 0.3.0 release (#2388)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-10 10:28:07 -05:00
Ava Stancu
9994d3aa60 replaced nonexistent variable with correct one for tag (#2390) 2023-03-10 16:57:35 +02:00
Bassem Dghaidi
a2ea12e93c Fix test's quotes issue (#2389)
Co-authored-by: Francesco Renzi <rentziass@gmail.com>
2023-03-10 09:22:19 -05:00
Tingluo Huang
d7b589bed5 Helm chart react changes for the new runner image. (#2348) 2023-03-10 11:18:21 +00:00
Ava Stancu
4f293c6f79 Build local image and load to kind cluster (#2378) 2023-03-10 13:16:07 +02:00
Francesco Renzi
c569304271 Add support for self-signed CA certificates (#2268)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-09 17:23:32 +00:00
Tingluo Huang
068f987238 Update permission ADR based on prototype. (#2383)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-09 12:18:53 -05:00
Tingluo Huang
a462ecbe79 Trim slash for configure URL. (#2381) 2023-03-09 09:02:05 -05:00
Nikola Jokic
c5d6842d5f Update gomega with new ginkgo version (#2373) 2023-03-07 12:05:25 +01:00
dependabot[bot]
947bc8ab5b chore(deps): bump github.com/onsi/ginkgo/v2 from 2.7.0 to 2.9.0 (#2369)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:27:54 +01:00
dependabot[bot]
9d5c6e85c5 chore(deps): bump k8s.io/client-go from 0.26.1 to 0.26.2 (#2370)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:27:23 +01:00
dependabot[bot]
2420a40c02 chore(deps): bump golang.org/x/net from 0.7.0 to 0.8.0 (#2368)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:24:20 +01:00
dependabot[bot]
b3e7c723d2 chore(deps): bump github.com/golang-jwt/jwt/v4 from 4.4.1 to 4.5.0 (#2367)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:23:37 +01:00
dependabot[bot]
2e36db52c3 chore(deps): bump github.com/gruntwork-io/terratest from 0.41.9 to 0.41.11 (#2335)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 07:39:31 +09:00
dependabot[bot]
5d41609bea chore(deps): bump github.com/teambition/rrule-go from 1.8.0 to 1.8.2 (#2230)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 07:38:53 +09:00
Francesco Renzi
e289fe43d4 Apply proxy settings from environment in listener (#2366)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-06 19:21:22 +00:00
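For illustration, a minimal Go sketch of picking up proxy settings from the environment with the standard library; the wiring below is an assumption about the approach, not the actual listener code from #2366:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// ProxyFromEnvironment honors HTTP_PROXY, HTTPS_PROXY and NO_PROXY
	// (and their lowercase variants), returning the proxy URL to use
	// for each request, or nil when the request should go direct.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
	}

	resp, err := client.Get("https://api.github.com") // placeholder endpoint
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```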
Piotr Palka
91fddca3f7 Fix webhook server logging (#2320)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-06 14:20:46 -05:00
Tingluo Huang
befe4cee0a ADR for limiting cluster role permissions on Secrets. (#2275) 2023-03-03 13:05:51 -05:00
Yusuke Kuoka
548acdf05c Correct and simplify a sentence in the scheduled overrides doc (#2323) 2023-03-03 09:18:07 -05:00
Chris Patterson
41f2ca3ed9 Adding parameter to configure the runner set name. (#2279)
Co-authored-by: TingluoHuang <TingluoHuang@github.com>
2023-03-03 08:36:14 -05:00
Bassem Dghaidi
00996ec799 Upgrading & pinning action versions (#2346) 2023-03-03 06:00:18 -05:00
Ava Stancu
893833fdd5 Added e2e workflow trigger on master push and on PRs (#2356) 2023-03-03 05:55:02 -05:00
github-actions[bot]
7f3eef8761 Update runner to version 2.302.1 (#2294)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-03-03 05:43:03 -05:00
Francesco Renzi
40c905f25d Simplify the setup of controller tests (#2352) 2023-03-02 18:55:49 +00:00
Nikola Jokic
2984de912c Split listener pod label to avoid long names issue (#2341) 2023-03-02 17:25:50 +01:00
dependabot[bot]
1df06a69d7 bump golang.org/x/net from 0.5.0 to 0.7.0 (#2299)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 10:41:18 +01:00
Nikola Jokic
be47190d4c Chart naming validation on AutoscalingRunnerSet install (#2347)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
Co-authored-by: Bassem Dghaidi <Link-@github.com>
2023-03-02 10:35:55 +01:00
Tingluo Huang
e8d8c6f357 Make CT test to install charts in the right order. (#2350) 2023-03-02 03:16:40 -05:00
Ava Stancu
0c091f59b6 Matrix jobs workflow path update (#2349) 2023-03-02 00:10:34 +02:00
Bassem Dghaidi
a4751b74e0 Update trigger events for validate-chart (#2342) 2023-03-01 10:55:08 -05:00
Bassem Dghaidi
adad3d5530 Rename actions-runner-controller-2 and auto-scaling-runner-set helm charts (#2333)
Co-authored-by: Ava S <avastancu@github.com>
2023-03-01 07:16:03 -05:00
Ava Stancu
70156e3fea Added space before backslash on the multi-line command (#2340) 2023-03-01 11:43:17 +02:00
Alex Williams
69abd51f30 Ensure that EffectiveTime is updated on webhook scale down (#2258)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 08:27:37 +09:00
dhawalseth
73e35b1dc6 chart: Create actionsmetrics.secrets.yaml (#2208)
Co-authored-by: Dhawal Seth <dseth@linkedin.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 08:19:58 +09:00
dependabot[bot]
c4178d5633 chore(deps): bump github.com/stretchr/testify from 1.8.0 to 1.8.2 (#2336)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-01 07:21:24 +09:00
dependabot[bot]
edf924106b chore(deps): bump sigs.k8s.io/controller-runtime from 0.14.1 to 0.14.4 (#2261)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 07:19:47 +09:00
Milas Bowman
34ebbf74d1 Upgrade Docker Compose to v2.16.0 (#2327) 2023-03-01 07:18:13 +09:00
Ava Stancu
a9af82ec78 Change e2e config url (#2338) 2023-02-28 14:26:01 -05:00
Ava Stancu
b5e9e14244 Added org for getting the workflow token job as it errored without one (#2334) 2023-02-27 23:30:40 +02:00
Ava Stancu
910269aa11 Avastancu/arc e2e test linux vm (#2285) 2023-02-27 16:36:15 +02:00
Yusuke Kuoka
149cf47c83 Fix actions-metrics-server segfault issue (#2325) 2023-02-27 07:34:29 +09:00
Kirill Bilchenko
ec3afef00d Add repository name and full name for prometheus labels in actions metrics (#2218)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-25 16:02:22 +09:00
Dimitar
7d0918b6d5 Allow custom graceful termination and loadBalancerSourceRanges for the githubwebhook service (#2305)
Co-authored-by: Dimitar Hristov <dimitar.hristov@skyscanner.net>
2023-02-25 14:18:29 +09:00
João Carlos Ferra de Almeida
678eafcd67 [Docs] Fix typo (#2314) 2023-02-24 07:19:51 -05:00
Bassem Dghaidi
b6515fe25c Add release change log to quickstart guide (#2315) 2023-02-23 06:20:39 -05:00
Tingluo Huang
1c7b7f467d Bump arc-2 chart version and prepare 0.2.0 release (#2313) 2023-02-23 08:40:21 +00:00
Francesco Renzi
73e22a1756 Disable metrics serving in proxy tests (#2307) 2023-02-22 16:57:59 +00:00
ggreenwood
9b44f0051c Documentation corrections (#2116)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-21 13:40:23 -05:00
Francesco Renzi
6b4250ca90 Add support for proxy (#2286)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
Co-authored-by: Ferenc Hammerl <fhammerl@github.com>
2023-02-21 17:33:48 +00:00
Nathan Klick
ced88228fc Resolves the erroneous webhook scale down due to check runs (#2119)
Signed-off-by: Nathan Klick <nathan@swirldslabs.com>
2023-02-21 10:56:46 +09:00
Andrei Vydrin
44c06c21ce fix: case-insensitive webhook label matching (#2302)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-21 09:37:42 +09:00
Tingluo Huang
4103fe35df Use DOCKER_IMAGE_NAME instead of NAME to avoid conflict. (#2303) 2023-02-20 18:27:14 -05:00
Yusuke Kuoka
a44fe04bef Fix manager crashloopback for ARC deployments without scaleset-related controllers (#2293) 2023-02-21 08:18:59 +09:00
Ava Stancu
274d0c874e Added ability to configure log level from chart values (#2252) 2023-02-17 14:16:20 +02:00
Tingluo Huang
256e08eb45 Ask runner to wait for docker daemon from DinD. (#2292) 2023-02-15 17:29:56 -05:00
Yusuke Kuoka
f677fd5872 doc: Fix chart name for helm commands in docs (#2287) 2023-02-16 07:09:23 +09:00
Tingluo Huang
d9627141dc Fix helm chart when containerMode.type=dind. (#2291) 2023-02-15 14:29:52 -05:00
Bassem Dghaidi
3886f285f8 Add EKS test environment Terraform templates (#2290)
Co-authored-by: Francesco Renzi <rentziass@gmail.com>
2023-02-15 10:29:49 -05:00
Ava Stancu
dab900462b Added workflow to be triggered via rest api dispatch in e2e test (#2283) 2023-02-14 16:06:46 +02:00
Francesco Renzi
dd8ec1a055 Add testserver package (#2281) 2023-02-14 12:11:46 +01:00
Nikola Jokic
8e52a6d2cf EphemeralRunner: On cleanup, if pod is pending, delete from service (#2255)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-02-11 19:55:12 -05:00
Nikola Jokic
9990243520 Early return if finalizer does not exist to make it more readable (#2262) 2023-02-08 15:21:13 +01:00
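A minimal sketch of the early-return pattern this commit describes, using controller-runtime's controllerutil helpers; the finalizer name and function shape are placeholders, not the actual ARC reconciler:

```go
package main

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// finalizerName is a placeholder, not the finalizer ARC actually uses.
const finalizerName = "example.com/cleanup"

// cleanupIfFinalizerPresent returns immediately when the finalizer is
// absent, keeping the rest of the function un-nested and readable.
func cleanupIfFinalizerPresent(ctx context.Context, c client.Client, obj client.Object) (ctrl.Result, error) {
	if !controllerutil.ContainsFinalizer(obj, finalizerName) {
		// Early return: the finalizer was already removed, nothing to do.
		return ctrl.Result{}, nil
	}
	// ... perform external cleanup here ...
	controllerutil.RemoveFinalizer(obj, finalizerName)
	return ctrl.Result{}, c.Update(ctx, obj)
}

func main() {}
```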
Ferenc Hammerl
08919814b1 Port ADRs from internal repo (#2267) 2023-02-08 14:42:45 +01:00
Tingluo Huang
facae69e0b Remove un-required permissions for the manager-role of the new AutoScalingRunnerSet (#2260) 2023-02-07 12:37:09 -05:00
Francesco Renzi
8f62e35f6b Add options to multi client (#2257) 2023-02-07 08:47:59 +01:00
Francesco Renzi
55951c2bdb Add new workflow to automate runner updates (#2247)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-02-06 10:22:58 +00:00
Nikola Jokic
c4297d25bb Avoid deleting scale set if annotation is not parsable or if it does not exist (#2239) 2023-02-03 17:27:31 +01:00
Francesco Renzi
0774f0680c ADR: automate runner updates (#2244) 2023-02-02 18:11:59 +01:00
Francesco Renzi
92ab11b4d2 Use UUID v5 for client identifiers (#2241) 2023-02-02 09:28:34 +01:00
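For context, a version 5 UUID is name-based (SHA-1 over a namespace plus a name), so the same inputs always produce the same identifier. A small sketch with github.com/google/uuid; the namespace and name below are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	// NewSHA1 produces a name-based (version 5) UUID: hashing the same
	// namespace and name always yields the same ID, so a client
	// identifier derived this way stays stable across restarts.
	ns := uuid.NameSpaceURL // a standard RFC 4122 namespace
	id := uuid.NewSHA1(ns, []byte("https://example.com/client-config")) // placeholder name
	fmt.Println(id, "version:", id.Version())
}
```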
Francesco Renzi
7414dc6568 Add Identifier to actions.Client (#2237) 2023-02-01 14:47:54 +01:00
dhawalseth
34efb9d585 Add documentation to update ARC with prometheus CRDs needed by actions metrics server (#2209)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-01 03:04:18 -05:00
Tingluo Huang
fbad56197f Allow providing a pre-defined Kubernetes secret when helm-installing AutoScalingRunnerSet (#2234) 2023-01-31 17:04:03 -05:00
Tingluo Huang
a5cef7e47b Resolve CI break due to bad merge. (#2236) 2023-01-31 22:00:26 +01:00
Tingluo Huang
1f4fe4681e Delete RunnerScaleSet on service when AutoScalingRunnerSet is deleted. (#2223) 2023-01-31 15:03:11 -05:00
Kirill Bilchenko
067686c684 Fix typos and markdown structure in troubleshooting guide (#2148) 2023-01-31 09:57:42 -05:00
Francesco Renzi
df12e00c9e Remove network requests from actions.NewClient (#2219)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-01-31 10:55:23 +00:00
Tingluo Huang
cc26593a9b Skip CT when list-changed=false. (#2228) 2023-01-30 14:03:30 -05:00
Tingluo Huang
835eac7835 Fix helm charts when pass values file. (#2222) 2023-01-30 08:37:26 -05:00
Francesco Renzi
01e9dd31a9 Update Validate ARC workflow to go 1.19 (#2220) 2023-01-27 15:17:28 +00:00
Tingluo Huang
803818162c Allow update runner group for AutoScalingRunnerSet (#2216) 2023-01-27 09:27:52 -05:00
dependabot[bot]
219ba5b477 chore(deps): bump sigs.k8s.io/controller-runtime from 0.13.1 to 0.14.1 (#2132)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-01-27 09:23:28 +09:00
Tingluo Huang
b09e3a2dc9 Return error for non-existing runner group. (#2215) 2023-01-26 12:19:52 -05:00
Bassem Dghaidi
7ea60e497c Fix intermittent image push failures to GHCR (#2214) 2023-01-26 05:52:21 -05:00
Francesco Renzi
c8918f5a7b Fix URL for authenticating using a GitHub app (#2206)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-01-24 18:02:23 +01:00
Francesco Renzi
d57d17f161 Add support for custom CA in actions.Client (#2199) 2023-01-23 17:36:57 -05:00
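A generic Go sketch of trusting a custom CA when building an HTTP client; the file path is a placeholder, and this is not the actual option API added to actions.Client in #2199:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Read a PEM-encoded CA bundle and add it to a cert pool so that
	// TLS connections to servers signed by that CA are trusted.
	pem, err := os.ReadFile("/path/to/ca.pem") // placeholder path
	if err != nil {
		fmt.Println("read CA:", err)
		return
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		fmt.Println("no certificates parsed from PEM")
		return
	}
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	_ = client // use for requests to a host with a self-signed CA
}
```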
dependabot[bot]
6e69c75637 chore(deps): bump github.com/hashicorp/go-retryablehttp from 0.7.1 to 0.7.2 (#2203)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-23 17:36:05 -05:00
Nikola Jokic
882bfab569 Renaming autoScaling to autoscaling in tests matching the convention (#2201) 2023-01-23 17:03:01 +01:00
Francesco Renzi
3327f620fb Refactor actions.Client with options to help extensibility (#2193) 2023-01-23 11:50:14 +00:00
dependabot[bot]
282f2dd09c chore(deps): bump github.com/onsi/gomega from 1.20.2 to 1.25.0 (#2169)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-21 12:56:08 +09:00
Nikola Jokic
d67f80863e Include nikola-jokic in CODEOWNERS file (#2184) 2023-01-19 08:21:08 -05:00
Tingluo Huang
4932412cd6 Fix L0 test to make it more reliable. (#2178) 2023-01-19 07:33:04 -05:00
Bassem Dghaidi
6da1cde09c Update runner version to 2.301.1 (#2182)
Co-authored-by: TingluoHuang <TingluoHuang@github.com>
2023-01-19 05:36:05 -05:00
Bassem Dghaidi
f9bae708c2 Add distinct namespace best practice note (#2181) 2023-01-18 09:59:31 -05:00
Bassem Dghaidi
05a3908ba6 Add arc-2 quickstart guide (#2180) 2023-01-18 08:17:25 -05:00
Stephane Moser
606ed1b28e Add Repository information to Runner Status (#2093)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-01-18 09:09:45 +09:00
Tingluo Huang
de244a17be Update publish-arc2 workflow to use right path. (#2173) 2023-01-17 18:26:30 -05:00
Ritesh Khadgaray
5be307ec62 Update installing-arc.md (#2162) 2023-01-18 08:13:08 +09:00
Hyeonmin Park
ee71ff14bd Fix logFormat comment for each module in Helm chart (#2166) 2023-01-18 08:12:24 +09:00
xi2817-aajgaonkar
9e93c7ee54 Update quickstart.md (#2164) 2023-01-18 08:12:13 +09:00
Tingluo Huang
bb61bb1342 Include extra user-agent for runners created by actions-runner-controller. (#2177) 2023-01-18 07:38:59 +09:00
James Bradshaw
23fdca4786 Fix minor typos in 0.27.md (#2171) 2023-01-18 07:38:42 +09:00
Hyeonmin Park
211bacaf1e Fix typo in release note for ARC 0.27.0 (#2158) 2023-01-18 07:38:05 +09:00
Tingluo Huang
0324658a3f Introduce new helm charts for the preview auto-scaling mode for ARC. (#2168) 2023-01-17 14:36:04 -05:00
Tingluo Huang
c4d3cff3df Fix typo in workflow. (#2172) 2023-01-17 18:07:40 +00:00
Tingluo Huang
294bd75cf1 Populate resolve ref when input.ref is empty. (#2170) 2023-01-17 12:58:37 -05:00
Bassem Dghaidi
068a427c52 Create publish-arc2.yaml (#2167) 2023-01-17 12:07:52 -05:00
Tingluo Huang
622eaa34f8 Introduce new preview auto-scaling mode for ARC. (#2153)
Co-authored-by: Cory Miller <cory-miller@github.com>
Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
Co-authored-by: Ava Stancu <AvaStancu@github.com>
Co-authored-by: Ferenc Hammerl <fhammerl@github.com>
Co-authored-by: Francesco Renzi <rentziass@github.com>
Co-authored-by: Bassem Dghaidi <Link-@github.com>
2023-01-17 12:06:20 -05:00
Tingluo Huang
619667fc3b Ignore the new helm charts path for now. (#2165) 2023-01-17 10:26:53 -05:00
Bassem Dghaidi
3e88ae2d38 fix: Update target branch from main to master (#2161) 2023-01-16 18:31:43 +09:00
Yusuke Kuoka
360957cfbc chart: Bump chart and app versions for ARC 0.27.0 (#2160) 2023-01-16 04:24:24 -05:00
Bassem Dghaidi
e1fcd63f92 Fix the workflow by adding the version resolve step (#2159) 2023-01-16 18:04:41 +09:00
Bassem Dghaidi
461c016b98 Add resolve push to registries step (#2157) 2023-01-16 17:40:40 +09:00
Yusuke Kuoka
2e406e3aef Add release note for ARC 0.27.0 (#2068) 2023-01-15 10:56:13 +00:00
Tingluo Huang
044c8ad4d5 Include actions-runner-controller in runner's User-Agent for better telemetry in Actions service. (#2155) 2023-01-15 09:35:56 +09:00
Tingluo Huang
eaa451df32 Update controller package names to match the owning API group name (#2150)
* Update controller package names to match the owning API group name

* feedback.

Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-01-13 08:24:11 +09:00
Bassem Dghaidi
ab04a2b616 Add job summary to the runners release workflow (#2140)
* Add and update job summaries

* Fix workflow reference links

* Fix / deny push to registries on PR

* Rename the workflow to match the releases repo
2023-01-13 07:24:33 +09:00
Yusuke Kuoka
d32319be50 fix(e2e): Make runner graceful shutdown checker cancellable (#2145)
So that the whole test run can be stopped immediately with a failure, instead of waiting until the verify timeout to fail.
2023-01-13 07:15:37 +09:00
Yusuke Kuoka
057b04763f fix(e2e): Use the correct full chart name in test (#2146)
The whole E2E test breaks due to the invalid chart name without this fix.
2023-01-13 07:15:05 +09:00
Yusuke Kuoka
bc4f4fee12 Fix various golangci-lint errors (#2147)
that we introduced via controller-runtime upgrade and via the removal of legacy pull-based scale triggers (#2001).
2023-01-13 07:14:36 +09:00
Siara
a6c4d84234 Fix broken links in docs (#2144) 2023-01-12 05:37:58 -05:00
Bassem Dghaidi
e71c64683b Update runner version to 2.300.2 (#2141)
* Update runner version to 2.300.2

* Bump up runner and container hooks versions

* Bump up runner version

* Bump up runner and container hooks versions

* Update actions-runner-dind-rootless.ubuntu-22.04.dockerfile

* Update actions-runner-dind.ubuntu-20.04.dockerfile

* Update actions-runner-dind.ubuntu-22.04.dockerfile

* Update actions-runner.ubuntu-20.04.dockerfile

* Update actions-runner.ubuntu-22.04.dockerfile

* Bump up runner versions

* Bump up container hooks versions
2023-01-11 08:29:32 -05:00
Bassem Dghaidi
4aadc7d128 Update release workflows post-migration (#2120)
* Fix to trigger extracted release workflows

* Fix input descriptions

* Add tool installation steps

* Fix indentation

* Fix token passing

* Fix release tag name reference

* Fix release tag name reference

* Fix release tag name

* Update publish-canary workflow

* Update workflows

* Fix target org

* Add push to registries flag

* Update publish-chart

* Add job summary to publish-arc

* Enhance summary message

* Add publish canary workflow

* Remove backticks

* Fix variable

* Fix index.yaml location and add job summary

* Fix publish chart workflow

* Enhance job summary for publish-chart

* Enhance chart version identification and fix chart upload

* Fix cr index

* Fix cr index and add comments

* Fix comment

* Pin marketplace actions

* Remove 3rd party action

* Add comments, parametrise where needed

* Add release process brief

* Change target repo

* Removing failsafe

* Removing failsafe

* Replace DOCKER_USER with DOCKERHUB_USERNAME
2023-01-11 03:34:54 -05:00
Bassem Dghaidi
45ebcb1c0a Enable dependabot by creating dependabot.yml (#2128) 2023-01-09 07:51:41 -05:00
Siara
3ede9b5a01 Restructure documentation (#2114)
Breaks up the ARC documentation into several smaller articles. 

`@vijay-train` and `@martin389` put together the plan for this update, and I've just followed it here. 

In these updates:

- The README has been updated to include more general project information, and link to each new article.
- The `detailed-docs.md` file has been broken up into multiple articles, and then deleted.
- The Actions Runner Controller Overview doc has been renamed to `about-arc.md`.

Any edits to content beyond generally renaming headers or fixing typos are out of scope for this PR, but will be made in the future.

Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-01-05 04:47:52 -05:00
DongHo Jung
84104de74b remove redundant description (#2111) 2022-12-31 10:16:30 +09:00
Nikola Jokic
aa6dab5a9a Changes to folder structure to allow multigroups and changed go mod name (#2105)
* Changed folder structure to allow multi group registration

* included actions.github.com directory for resources and controllers

* updated go module to actions/actions-runner-controller

* publish arc packages under actions-runner-controller

* Update charts/actions-runner-controller/docs/UPGRADING.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-12-28 09:38:34 +09:00
Yusuke Kuoka
086f9fd2d6 Fix docker-shim.sh for rootless-dind-runner (#2100)
Fixes #2097
2022-12-22 23:00:17 +09:00
Yusuke Kuoka
2407e4f6c6 fix: Add missing actions-metrics-server command (#2099)
Fixes #2090
2022-12-22 23:00:02 +09:00
Bassem Dghaidi
fc23b7d08e chore: add toast-gear to the list of maintainers (#2095) 2022-12-15 11:00:08 +01:00
Bassem Dghaidi
85ec00a5a5 Merge pull request #2091 from actions/Link-/oss-process-changes
Add important OSS guidelines
2022-12-14 10:16:44 +01:00
Bassem Dghaidi
03d4784518 Update CODEOWNERS 2022-12-14 04:07:56 -05:00
Bassem Dghaidi
dcfacf4f1e Removing toast-gear until we resolve the access issue 2022-12-13 13:15:12 +00:00
Bassem Dghaidi
02d84575e2 Add needs triage label to issue templates 2022-12-13 13:03:21 +00:00
Bassem Dghaidi
3021be73c7 Add security guidelines and policy 2022-12-13 11:39:39 +00:00
Bassem Dghaidi
adb5bc9f66 Add code of conduct 2022-12-13 11:38:01 +00:00
Bassem Dghaidi
466be710ee Add the actions-runtime team to codeowners 2022-12-13 06:24:20 -05:00
Yusuke Kuoka
acbce4b70a runner: Expose dind runner dockerd logs via stdout/stderr (#2082)
* runner: Expose dind runner dockerd logs via stdout/stderr

We've been letting supervisord run dockerd within the dind runner container, presuming it would avoid producing zombie processes. However, we used dumb-init to wrap supervisord, which in turn wrapped dockerd. In this picture supervisord might be unnecessary, and dumb-init is actually a correct PID 1 for containers.

Removing supervisord removes this unnecessary complexity while saving a little memory, and more importantly, logs from dockerd are exposed via stdout/stderr of the container for easy access from kubectl logs, fluentd, and so on.
2022-12-12 08:39:35 +09:00
Callum Tait
418f719bdf chore: highlight watch namespace (#2087)
* chore: highlight watch namespace

* chore: wording

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-12-12 08:39:04 +09:00
Yusuke Kuoka
300e93c59d Expose workflow job metrics via new actions-metrics-server (#2057)
* Add workflow job metrics to Github webhook server

* Fix handling of workflow_job.Conclusion

* Make the prometheus metrics exporter for the workflow jobs a dedicated application

* chart: Add support for deploying actions-metrics-server

* A few improvements to make it easy to cover in E2E

* chart: Add missing actionsmetrics.service.yaml

* chart: Do not modify actionsMetricsServer.replicaCount

* chart: Add documentation for actionsMetrics and actionsMetricsServer

Co-authored-by: Colin Heathman <cheathman@benchsci.com>
2022-12-10 08:24:28 +09:00
renovate[bot]
0285da1a32 fix(deps): update kubernetes packages to v0.25.5 (#2083)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-12-09 07:03:21 +09:00
renovate[bot]
b8e5185fef fix(deps): update module golang.org/x/oauth2 to v0.3.0 (#2074)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-12-08 07:09:07 +09:00
renovate[bot]
187479f08c chore(deps): update golang docker tag to v1.19.4 (#2076)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-12-08 07:07:47 +09:00
Callum Tait
31244dd61b ci: add new runners to deploy (#2081) 2022-12-08 07:06:35 +09:00
Callum Tait
a8417ec67e feat: dind-rootless 22.04 runner (#2033)
* feat: dind-rootless 22.04 runner

* runner: Bring back packages needed by rootlesskit

* e2e: Update E2E buildvars with ubuntu 22.04 dockerfiles

* feat: use new uid for runner user

* e2e: Make it possible to inject ubuntu version via envvar for actions-runner-dind image

* doc: Use fsGroup=1001 for IRSA on Ubuntu 22.04 runner

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-12-07 19:02:35 +09:00
Callum Tait
775dc60c94 feat: dind 22.04 runner (#2030)
* feat: dind 22.04 runner

* chore: remove zstd

* chore: remove test

* chore: add missing make targets and bump

* runner: Add missing iptables package to dind ubuntu 22.04

* feat: use new ids

* feat: use new ids

* Revert "feat: use new ids"

This reverts commit 2e4e2bb6d9.

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-12-07 14:20:45 +09:00
Yusuke Kuoka
ecd7531917 feat: Set runner UID and docker GID to match github actions runner (#2077)
This is a successor to #1688

Co-authored-by: Suhas Gaddam <sgaddam@trueaccord.com>
2022-12-07 14:17:57 +09:00
Callum Tait
ad1989072e feat: use new uid for 22.04 images (#2079)
* feat: use new uid for 22.04 images

* feat: use new gid for docker group

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-12-07 14:14:31 +09:00
Callum Tait
fe05987eea ci: use single quotes (#2067)
* ci: use single quotes

* ci: add 22.04 image to renovate

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-12-02 09:14:29 +09:00
Callum Tait
bd392c3665 ci: fix runners workflow 2022-12-01 22:35:31 +00:00
renovate[bot]
58d80a7c12 fix(deps): update module go.uber.org/zap to v1.24.0 (#2059)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-12-02 07:30:58 +09:00
Callum Tait
212b9daec3 feat: 22.04 default runner image (#2050)
* feat: 22.04 default runner image

* docs: update bundled software

* chore: remove test in Dockerfile

* ci: add 22.04 runner build

* chore: remove build-essential

* chore: remove python path entry

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-12-02 07:29:59 +09:00
Callum Tait
28ea8d4e7b ci: align renovate config with new names (#2065) 2022-12-02 06:40:49 +09:00
Callum Tait
c1fb793773 feat: bump docker and hooks in 20.04 (#2063)
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-12-02 06:40:12 +09:00
Callum Tait
63d2cbfdaa ci: multiple ubuntu version (#2036)
* ci: prepare ci for multiple runners

* chore: rename dockerfiles

* chore: sup multiple os in makefile

* chore: changes to support multiple versions

* chore: remove test for TARGETPLATFORM

* chore: fixes and add individual targets

* ci: add latest tag back in

* ci: remove latest suffix tag

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-12-01 00:00:16 +09:00
Yusuke Kuoka
18077a1e83 docs: do not recommend combining pull-based autoscaling with webhook-based autoscaling (#2051)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1962
2022-11-27 19:31:12 +00:00
Yusuke Kuoka
3ae9f09532 e2e: Do honor the runner graceful stop timeout also in the dockerd sidecar prestop hook (#2044)
The runner graceful stop timeout has never been propagated to the dind sidecar due to a configuration error in E2E. This fixes it, so that we can verify that the dind sidecar prestop can respect the graceful stop timeout.

Related to #1759
2022-11-27 11:13:56 +09:00
Yusuke Kuoka
96a930bfd9 Fix runner pod to not stuck in Terminating when runner got deleted before pod scheduling (#2043)
This fixes the said issue that I found while running a series of E2E tests to test other features and pull requests I have recently contributed.
2022-11-27 11:13:38 +09:00
Alex Grand
877c93c5c3 Fix admissionWebHooks.caBundle template formatting (#2049)
* Use quote on caBundle values for the webhook deployment

* Drop unrecognized --log-format arg on the manager container

* Update custom cert docs with the default san/secret names

* Revert "Drop unrecognized --log-format arg on the manager container"

This reverts commit d76dd67317.
2022-11-27 09:46:33 +09:00
Igor Sarkisov
95c324b550 Add rootless runner to the Makefile and improve target platform handling. (#2005)
* Add rootless runner to the Makefile and improve target platform handling

* Add rootless image to docker-push-ubuntu target

* Update runner/Makefile

* Update runner/actions-runner-dind-rootless.dockerfile

* Update runner/actions-runner-dind.dockerfile

* Update runner/actions-runner.dockerfile

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-11-26 18:10:26 +09:00
renovate[bot]
5e8f576f65 fix(deps): update kubernetes packages to v0.25.4 (#2008)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-26 13:14:43 +09:00
Callum Tait
cc15ff0119 docs: remove caveat from useRunnerGroupsVisibility (#2034) 2022-11-26 13:09:20 +09:00
Gwyn
8318523627 Update Etcd To Make make test-with-deps Work On macOS (#2013)
* Fixes etcd for macOS.

The older version of etcd packaged in kubebuilder 2.3.2 for Darwin
throws a stack trace upon attempted startup.

This retrieves the latest version of etcd from coreos and installs
that instead; this works on all OSes.

I removed some redundancy in the Makefile around test dependency
retrieval, too.

* Capture further OS specific test command tweaks.
2022-11-26 13:08:24 +09:00
Callum Tait
fcb65b046b ci: fix multi-arch runner builds (#2048)
* ci: fix multi-arch runner builds
2022-11-25 15:48:18 +00:00
Callum Tait
87f566e1e6 feat: add docker-compose and clean up the default runner (#1924)
* feat: clean and add docker-compose

* feat: make docker compose download arch aware

* fix: use new ARG name

* fix: correct case in url

* ci: add some debug output to workflow

* ci: add ARG for docker

* fix: various fixes

* chore: more alignment changes

* chore: use /usr/bin over /usr/local/bin

* chore: more logical order

* fix: add recursive flag

* chore: actions/runner stuff with actions/runner

* ci: bump checkout to latest

* fix: rootless build

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-11-25 10:31:13 +09:00
Callum Tait
a786dae450 docs: disable runner log levels (#2042) 2022-11-25 08:48:58 +09:00
Callum Tait
666ce8f917 feat: add docker-compose and clean up the dind runner (#1925)
* feat: align runner and add docker compose

* feat: make docker compose download arch aware

* fix: use new ARG name

* chore: alignment stuff

* chore: use /usr/bin over /usr/local/bin

* chore: replicate default runner order

* feat: set-up actions container hooks

* chore: small flags

* fix: install all docker components

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-11-22 12:10:38 +09:00
Callum Tait
9ba4b6b96a chore: clean up the dind rootless dockerfile so it aligns with the other runners (#1926)
* chore: align dockerfile with other runners

* chore: superfluous comments

* feat: make docker compose download arch aware

* chore: stuff

* chore: align runner tool cache set-up

* fix: copy and paste error

* feat: add container hooks

* feat: add rootless into makefile

* feat: support all architectures and fix compose

* fix: export SKIP_IPTABLES correctly

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-11-22 12:10:28 +09:00
Yusuke Kuoka
ae86b1a011 Use the patch API instead to prevent unnecessary field updates (#1998)
Fixes #1916
2022-11-22 12:09:24 +09:00
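To illustrate why patching avoids unnecessary field updates: with controller-runtime, Patch with a MergeFrom base sends only the diff to the API server, whereas Update rewrites the whole object. A hedged sketch, in which the annotation key is a placeholder rather than ARC's actual field:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// annotatePod captures a deep-copied base, mutates the object, then
// sends only the diff, leaving all other fields untouched on the server.
func annotatePod(ctx context.Context, c client.Client, pod *corev1.Pod) error {
	base := pod.DeepCopy()
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations["example.com/processed"] = "true" // placeholder annotation
	// MergeFrom computes a merge patch against the copy, so fields we
	// did not modify are not rewritten (and not fought over by other
	// controllers), unlike a full Update.
	return c.Patch(ctx, pod, client.MergeFrom(base))
}

func main() {}
```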
Yusuke Kuoka
154fcde7d0 runner: Make WAIT_FOR_DOCKER_SECONDS configurable and working (#1999)
* runner: Make WAIT_FOR_DOCKER_SECONDS configurable and working

Ref #1830
Ref #1804

* Update acceptance/testdata/runnerdeploy.envsubst.yaml

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>

* Update docs/detailed-docs.md

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-11-22 12:08:54 +09:00
Yusuke Kuoka
86d7893d61 breaking: Make legacy webhook scale triggers no-op (#2001)
Ref #1607
2022-11-22 12:08:29 +09:00
Igor Sarkisov
8f374d561f Do not explicitly set Privileged to false. (#2009)
Setting the SecurityContext.Privileged bit to false, which is the default,
prevents GKE from admitting Windows pods. The Privileged bit is not
supported on Windows.
2022-11-15 11:29:37 +09:00
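Since SecurityContext.Privileged is a *bool in the Kubernetes API, the fix amounts to leaving the pointer nil rather than pointing it at false. A minimal sketch of the distinction, assuming nothing about the actual ARC pod-building code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	explicitFalse := false

	// Pointing Privileged at false serializes the field in the pod
	// spec, which GKE rejects for Windows pods.
	rejected := corev1.SecurityContext{Privileged: &explicitFalse}

	// Leaving the *bool nil omits the field entirely; the container is
	// still unprivileged by default, and Windows pods stay admissible.
	admitted := corev1.SecurityContext{}

	fmt.Println(rejected.Privileged != nil, admitted.Privileged == nil)
}
```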
renovate[bot]
40eec3c783 fix(deps): update module github.com/prometheus/client_golang to v1.14.0 (#1996)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-10 19:06:26 +09:00
renovate[bot]
0c4798b773 fix(deps): update module golang.org/x/oauth2 to v0.2.0 (#2004)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-10 07:42:29 +09:00
renovate[bot]
7680cfd371 fix(deps): update module github.com/onsi/gomega to v1.24.1 (#2003)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-10 07:42:14 +09:00
DongHo Jung
3b1771385f enhance webhook setting doc (#1995) 2022-11-09 11:39:04 +09:00
qube
cb288fc99b Fix typo in detailed-doc.md (#1997) 2022-11-09 11:38:48 +09:00
Vitalii Tverdokhlib
7c81c2eec1 fix doc detailed-docs.md (#1992)
helm params
2022-11-09 10:20:04 +09:00
Callum Tait
0908715786 docs: better wording and grammar 2022-11-07 20:24:00 +00:00
Yusuke Kuoka
186c98cf36 ci: Fix runner builds for pull requests coming from "master" branches of forks (#1983)
* ci: Fix runner builds but not pushes for forks

I noticed that our runners workflow is failing on docker-login because a pull request workflow job from a fork does not have access to repo secrets.

https://github.com/malachiobadeyi/actions-runner-controller/actions/runs/3390463793/jobs/5634638183

Can we try this, so that hopefully it suppresses docker-login for pull requests from forks?

* Update .github/workflows/runners.yaml

* fixup! Update .github/workflows/runners.yaml

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>

* fixup! fixup! Update .github/workflows/runners.yaml

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-11-07 19:01:03 +09:00
Callum Tait
d328c61fc3 docs: add the limitation to disabling updates (#1988)
* docs: add the limitation to disabling updates

* docs: better wording
2022-11-06 08:13:31 +09:00
Richard Fussenegger
61d1235d2a Added DEBIAN_FRONTEND=noninteractive to sudo (#1859)
By default `sudo` drops all environment variables and executes its commands with a clean environment. This is by design, but for the `DEBIAN_FRONTEND` environment variable it is not what we want, since it results in installers being interactive. This adds the `env_keep` instruction to `/etc/sudoers` to keep `DEBIAN_FRONTEND` with its `noninteractive` value, and thus pass it on to commands that care about it. Note that this makes no difference in our builds, because we are running them directly as `root`. However, for users of our image this is going to make a difference, since they start out as `runner` and have to use `sudo`.

Co-authored-by: Fleshgrinder <fleshgrinder@users.noreply.github.com>
2022-11-05 17:20:53 +09:00
Claudio Vellage
3b36a81db6 Allow to set docker default address pool (#1971)
* Allow to set docker default address pool

* fixup! Allow to set docker default address pool

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>

* Revert unnecessary chart ver bump

* Update docs for DOCKER_DEFAULT_ADDRESS_POOL_*

* Fix the dockerd default address pool scripts to actually work as probably intended

* Update the E2E testdata runnerdeployment to accomodate the new docker default addr pool options

* Correct default dockerd addr pool doc

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Claudio Vellage <claudio.vellage@pm.me>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-11-05 14:46:32 +09:00
malachiobadeyi
fbdfe0df8c 1770 update log format and add additional fields to webhook server logs (#1771)
* 1770 update log format and add runID and Id to workflow logs

update tests, change log format for controllers.HorizontalRunnerAutoscalerGitHubWebhook

use logging package

remove unused modules

add setup name to setuplog

add flag to change log format

change flag name to enableProdLogConfig

move log opts to logger package

remove empty else and reset timeEncoder

update flag description

use get function to handle nil

rename flag and update logger function

Update main.go

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

Update controllers/horizontal_runner_autoscaler_webhook.go

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

Update logging/logger.go

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

copy log opt per each NewLogger call

revert to use autoscaler.log

update flag descript and remove unused imports

add logFormat to readme

 rename setupLog to logger

make fmt

* Fix E2E along the way

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-11-04 10:46:58 +09:00
Yusuke Kuoka
63e8f32281 Fix permission issue when you use PV for rootless dind cache (#1977)
* Fix permission issue when you use PV for rootless dind cache

This fixes the said issue I have found while testing #1759.

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-11-04 06:46:21 +09:00
renovate[bot]
2ac48038c6 fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.1 (#1982)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-04 06:30:18 +09:00
Yusuke Kuoka
23c8fe4a8b Fix dead-lock when runner unregistration triggered before PV attachment (#1975)
This fixes an issue discovered while I was testing #1759. Please see the new comment in code for more information.
2022-11-04 06:29:19 +09:00
Yusuke Kuoka
8505c95719 runner: Fix rootless dind to respect specified MTU (#1976)
While testing #1759, I found an issue in the rootless dind entrypoint that it was not respecting the configured MTU for dind docker due to a permission issue. This fixes that.
2022-11-04 06:29:03 +09:00
Yusuke Kuoka
3de8085b87 Fix rootless dind to do write logs (#1978)
It turned out to be too hard to debug configuration issues on the rootless dind daemon as it was not writing any logs to stdout/stderr of the container. This fixes that, so that any rootless dind configuration or startup errors are visible in e.g. the kubectl-logs output.
2022-11-04 06:28:47 +09:00
Cristian Calin
828d51baf2 admissionWebHooks: fix checking for caBundle (#1968) 2022-11-03 22:48:39 +09:00
Jakub Bielawski
0b0219b88f docs: add missing index entry (#1981)
docs: add missing index entry for "Slow / failure to boot dind sidecar (default runner)"
2022-11-03 12:36:14 +00:00
Jakub Bielawski
362b6c10a3 Describe slow/failure to boot dind sidecar in TROUBLESHOOTING.md (#1972)
* Update TROUBLESHOOTING.md

Describe solution for slowly starting dind containers

* Fix typos

* Fix typos
2022-11-03 20:58:40 +09:00
Mark Woolley
2110fa5122 Add some information on Prometheus metrics (#1966) 2022-11-03 20:57:36 +09:00
Gwyn
9eae9ac75f Updates Makefile so shellcheck is cross-platform. (#1946)
Specifically, it was always downloading the Linux version
no matter the platform. So I moved the OS detection to be early
in the Makefile and verified that I can now download the Darwin
version of shellcheck.
2022-11-03 20:57:04 +09:00
Yusuke Kuoka
9bb416084b e2e: Fix continuous rolling updater to stop on test completion (#1979) 2022-11-03 11:55:36 +00:00
Yusuke Kuoka
fdb049ba1e e2e: Bump runner version to 2.299.1 (#1980) 2022-11-03 10:02:54 +00:00
Callum Tait
5f27548a73 ci: replace set-output (#1928)
* ci: replace set-output

* ci: use env var format

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-11-03 18:43:39 +09:00
renovate[bot]
f7fb1490dc chore(deps): update golang docker tag to v1.19.3 (#1969)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-03 18:42:02 +09:00
renovate[bot]
9d48c791f5 fix(deps): update module github.com/onsi/gomega to v1.24.0 (#1953)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-03 17:51:24 +09:00
renovate[bot]
371eff09ce chore(deps): update azure/setup-helm action to v3.4 (#1959)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-03 14:40:54 +09:00
renovate[bot]
cfad7a9b08 fix(deps): update module github.com/prometheus/client_golang to v1.13.1 (#1970)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-03 14:40:39 +09:00
renovate[bot]
6234c568bd chore(deps): update dependency actions/runner to v2.299.1 (#1973)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-03 14:40:06 +09:00
renovate[bot]
3ae3811944 fix(deps): update module github.com/stretchr/testify to v1.8.1 (#1947)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-11-01 20:30:44 +09:00
Yusuke Kuoka
c74ad6195f Fix runners to do their best to gracefully stop on pod eviction (#1759)
Ref #1535
Ref #1581

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-11-01 20:30:10 +09:00
Jesse Haka
332548093a feat: replace v1beta1 api with v1 (#1931)
* replace v1beta1 api with v1
2022-10-25 20:12:31 +01:00
renovate[bot]
b4e143dadc fix(deps): update module golang.org/x/oauth2 to v0.1.0 (#1938)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-23 09:20:49 +09:00
Cory Miller
93c4dd856e Add first interaction workflow to greet new contributors and users (#1918) 2022-10-21 22:58:06 +09:00
Cory Miller
93aea48c38 Revamp the contribution guide (#1917)
* Revamp the contribution guide

* Fix link

* Update CONTRIBUTING.md

Co-authored-by: Ava Stancu <avastancu@github.com>

* Update CONTRIBUTING.md

Co-authored-by: Ava Stancu <avastancu@github.com>

* add guidance on image tool requests

Co-authored-by: Ava Stancu <avastancu@github.com>
2022-10-21 22:56:42 +09:00
DongHo Jung
14b17cca73 docs: fix typo for syncPeriod in chart README (#1942) 2022-10-21 09:54:59 +01:00
Ayoola Ajebeku
5298c6ea29 docs: remove duplicate word (#1930) 2022-10-17 19:47:13 +01:00
renovate[bot]
8fa08d59b1 fix(deps): update golang.org/x/oauth2 digest to 6fdb5e3 (#1922)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-16 17:17:25 +09:00
renovate[bot]
003c552c34 fix(deps): update kubernetes packages to v0.25.3 (#1919)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-16 17:17:11 +09:00
Callum Tait
83370d7f95 ci: use github-pr-check reporter for shellcheck (#1927)
* ci: use github-pr-review reporter for shellcheck

* ci: use default reporter

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-10-16 17:15:56 +09:00
Connor Murphy
7da9d0ae19 Docs: fix broken link to readme (#1912) 2022-10-14 06:43:35 +09:00
Callum Tait
b56fa6a748 docs: minor grammar fix (#1915) 2022-10-13 09:08:09 +09:00
Callum Tait
a22ee8a5f1 chore: add new label to bug form (#1913) 2022-10-13 09:07:26 +09:00
Yusuke Kuoka
e1762ba746 Fix inability to configure MTU for rootless dind runner (#1856)
Follow-up for https://github.com/actions-runner-controller/actions-runner-controller/pull/1644
2022-10-13 09:04:56 +09:00
Yusuke Kuoka
710e2fbc3a Prevent runner controller from recreating runner pod when pod was terminated externally (#1851) 2022-10-13 09:04:50 +09:00
Yusuke Kuoka
433552770e Let it be a bug only when it's reproducible with official runner image (#1839)
* Let it be a bug only when it's reproducible with official runner image

A custom runner image can break runners and ARC in interesting ways. It's probably better to clearly state that ARC is not guaranteed to work with every custom runner image in the wild.

* Update .github/ISSUE_TEMPLATE/bug_report.yml
2022-10-13 09:04:40 +09:00
renovate[bot]
ba2a32eef6 fix(deps): update module github.com/onsi/gomega to v1.22.1 (#1911)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-12 10:54:15 +09:00
renovate[bot]
de0d7ad78c fix(deps): update module github.com/onsi/gomega to v1.22.0 (#1909)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-12 06:37:40 +09:00
renovate[bot]
0382f3bbd5 chore(deps): update quay.io/brancz/kube-rbac-proxy docker tag to v0.13.1 (#1899)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-09 18:09:24 +09:00
renovate[bot]
b6640a033c fix(deps): update module github.com/onsi/gomega to v1.21.1 (#1901)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-09 18:09:12 +09:00
renovate[bot]
998c028d90 fix(deps): update golang.org/x/oauth2 digest to b44042a (#1900)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-09 16:53:46 +09:00
Yusuke Kuoka
f8dffab19d Add workflow for validating runner scripts with shellcheck (#1853)
* Add workflow for validating runner scripts with shellcheck

I am about to revisit #1517, #1454, #1561, and #1560 as a part of our on-going effort toward a major enhancement to the runner entrypoints being made in #1759.

This change is a counterpart of #1852. #1852 enables you to easily run shellcheck locally. This enables you to automatically run shellcheck on every pull request.

Currently, any shellcheck error does not fail the workflow job. Once we have addressed all the shellcheck findings, we can flip the fail_on_error option to true and let jobs start failing on pull requests that introduce invalid or suspicious bash code.
2022-10-09 16:53:22 +09:00
Yusuke Kuoka
7ff5b7da8c Handle missing runner ID more gracefully (#1855)
so that ARC respects the registration timeout, terminationGracePeriodSeconds, and RUNNER_GRACEFUL_STOP_TIMEOUT (#1759) when the runner pod is terminated externally too early after its creation

While I was running E2E tests for #1759, I discovered a potential issue that ARC can terminate runner pods without waiting for the registration timeout of 10 minutes.

You won't be affected by this in normal circumstances, as this failure scenario can be triggered only when you or another K8s controller like cluster-autoscaler deleted the runner or the runner pod immediately after it was created. But it's probably worth fixing anyway, because it's not impossible to trigger.
2022-10-09 16:52:51 +09:00
renovate[bot]
6aaff4ecee chore(deps): update golang docker tag to v1.19.2 (#1894)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-05 08:30:09 +09:00
renovate[bot]
437d0173b0 chore(deps): update dependency actions/runner to v2.298.2 (#1891)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-10-05 08:16:38 +09:00
Nicholas Farley
a389292478 Allow RunnerDeployments to configure dnsPolicy for runners (#1892)
* Add DnsPolicy field to RunnerPodSpec struct

* Ensure the runnerSpec's DNSPolicy is mirrored to the pod.Spec

* Run `make manifests`
2022-10-05 08:16:11 +09:00
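In spirit, the change mirrors an optional field from the custom resource spec onto the generated pod spec; a simplified sketch in which runnerPodSpec is a stand-in for the real RunnerPodSpec struct:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// runnerPodSpec is a stand-in for the slice of the CRD spec involved.
type runnerPodSpec struct {
	DNSPolicy corev1.DNSPolicy
}

// applyDNSPolicy copies a user-provided policy onto the generated pod,
// leaving the cluster default in place when the field is unset.
func applyDNSPolicy(rs runnerPodSpec, pod *corev1.Pod) {
	if rs.DNSPolicy != "" {
		pod.Spec.DNSPolicy = rs.DNSPolicy
	}
}

func main() {
	pod := &corev1.Pod{}
	applyDNSPolicy(runnerPodSpec{DNSPolicy: corev1.DNSNone}, pod)
	fmt.Println(pod.Spec.DNSPolicy) // "None"
}
```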
Vijay-train
6eadb03669 Update links to QuickStartGuide.md (#1890)
* Update detailed-docs.md

Quick start instructions are now inline in README.md. Updating the link to point to the `Getting Started` section of README.md

* Update Actions-Runner-Controller-Overview.md

Quick start instructions are now inline in README.md. Updating the link to point to the `Getting Started` section of README.md

* Delete QuickStartGuide.md

This guide is no longer needed as the details of it are merged with README.md.
2022-10-04 20:32:16 +09:00
Yusuke Kuoka
aa60021ab0 e2e: Make docker build timeout longer (#1862)
Depending on the host machine's spec and load, it can take more than 5 minutes. This change increases the timeout by 1.5x as an ad-hoc measure to address that.
2022-10-04 20:31:10 +09:00
Yusuke Kuoka
50e26bd2f6 makefile: Add shellcheck installation and run targets (#1852)
I am about to revisit #1517, #1454, #1561, and #1560 as a part of our on-going effort for a major enhancement to the runner entrypoints being made in #1759.
This commit adds the Makefile target to run shellcheck locally, so that any contributor can use it before submitting a pull request.
2022-10-04 20:30:43 +09:00
Yusuke Kuoka
2dd13b4a19 runner: Address all shellcheck findings (#1854)
I am about to revisit #1517, #1454, #1561, and #1560 as a part of our on-going effort for a major enhancement to the runner entrypoints being made in #1759.

This change updates and reintroduces #1517 contributed by @CASABECI in a way it becomes applicable to today's code-base.
2022-10-04 20:30:27 +09:00
Yusuke Kuoka
35af24cf03 Enhance log-related fields in the bug report form (#1838)
Both fields can be useless when the reporter thought only one or two lines of the respective logs were relevant, and it later turned out we had to see other lines.
To avoid such a situation, I'd like to change the field labels to include `Whole`, so that they look like `Whole Runner Pod Logs`, and ask in the field description not to omit any logs.
2022-10-04 20:29:51 +09:00
Yusuke Kuoka
ca96b66fbe Update bug_report.yml 2022-10-04 20:28:34 +09:00
Yusuke Kuoka
4db5fbc7a1 Update bug_report.yml 2022-10-04 20:27:00 +09:00
Yusuke Kuoka
add83bc7bc Update bug_report.yml (#1837)
Honestly, I'm a bit tired of seeing issues filed with the default title "Bug" that are seemingly not related to real bugs!
I'm emptying it so that the reporter is more encouraged to write a sentence describing the problem, and hopefully they notice along the way that there are places to diagnose before considering it a bug.
2022-10-04 20:25:30 +09:00
Yusuke Kuoka
666bba784c e2e: Bump runner version to 2.297.0 (#1850)
* e2e: Bump runner version to 2.296.2

* Update e2e_test.go
2022-10-04 20:25:13 +09:00
Karthik KN
0672ff0ff9 FIx broken links in documentation (#1882)
* Fix links on Actions-Runner-Controller-Overview.md

* Fix link on QuickStartGuide.md

* Fix link on Contributing.md

* Fix links on detailed-docs.md

* Update CONTRIBUTING.md

* Update docs/Actions-Runner-Controller-Overview.md

* Update docs/Actions-Runner-Controller-Overview.md

* Update docs/QuickStartGuide.md

* Update docs/Actions-Runner-Controller-Overview.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-30 09:53:32 +09:00
pratikbin
f2f22827a6 Use -trimpath and ldflags -s -w build flags (#1880) 2022-09-30 09:19:52 +09:00
Vijay-train
a3a801757d Fix link to 'detailed docs' (#1879) 2022-09-30 08:42:59 +09:00
Vijay-train
0c003f20d4 Create Simplified README.md with only 'Getting Started' steps and with links to additional detailed documentation. (#1864)
* Update with recent changes from README.md

* Update README.md

This is the final phase of simplifying README.md
- include only `Getting Started` steps (these steps were earlier reviewed as part of /docs/QuickStartGuide.md)
- links to detailed documentation (the detailed documentation is a copy of the current README.md)
Once this is merged, any new detailed docs should be captured in /docs/detailed-docs.md

* Update detailed-docs.md

Redo the change made in #1873

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-29 21:13:29 +09:00
renovate[bot]
863760828a chore(deps): update helm/chart-releaser-action action to v1.4.1 (#1870)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-29 21:13:00 +09:00
renovate[bot]
517fae4119 chore(deps): update helm/chart-testing-action action to v2.3.1 (#1871)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-29 20:45:02 +09:00
gitulisca
d9e1e64dc6 Fix repositoryNames in workflow runs queued based HRA example (#1873) 2022-09-29 20:32:38 +09:00
Mike
3c4ab2d479 Add terraform deployment method to contrib/examples (#1559)
Co-authored-by: Mike Joseph <mike@Mikes-MacBook-Pro-5618.local>
2022-09-28 13:31:52 +09:00
Saravanan Palanisamy
3ca96557a6 helm charts for actions runner (#1375)
Fixes: #942 

Helm charts for the actions runner; currently it has only RunnerDeployment and Autoscaler resources.

It looks like deployment order is important here: if the Autoscaler is deployed first, we face the issue below and autoscaling does not work as expected.
```
2022-04-21T12:13:08Z    DEBUG    controllers.webhookbasedautoscaler    RunnerDeployment not found with scale target ref name test-actions-runner for hra test-actions-runner-autoscaler
```
Helm doesn't support [ordering](https://github.com/helm/helm/issues/8439) for custom resources, so we use a List to overcome this issue; we didn't use helm chart hooks for ordering since they are not [tracked](https://helm.sh/docs/topics/charts_hooks/#hook-resources-are-not-managed-with-corresponding-releases) after creation.

Co-authored-by: Josh Feierman <joshua.feierman@warnermedia.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-28 11:07:56 +09:00
renovate[bot]
5fd6ec4bc8 chore(deps): update dependency actions/runner to v2.297.0 (#1860)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-27 09:11:53 +09:00
renovate[bot]
6863bdb208 chore(deps): update helm/kind-action action to v1.4.0 2022-09-25 10:35:05 +09:00
Yusuke Kuoka
f3fcb428ae rootless-dind-dockerfile: Add comment about installation path 2022-09-25 07:50:12 +09:00
Yusuke Kuoka
41bae32a9f runner: Dump supervisor log on dockerd timeout 2022-09-25 07:50:12 +09:00
Yusuke Kuoka
e4879e7ae4 Tweak E2E and documentation about MTU configuration 2022-09-25 07:50:12 +09:00
Yusuke Kuoka
e5bb130fda Add MTU propagation docker-shim also to rootless dind runner images
Related to #1201
2022-09-25 07:50:12 +09:00
Tiago Melo
e7a21cfc53 feat: Add container to propagate host network MTU (#1201)
* feat: Add container to propagate host network MTU

Some network environments use non-standard MTU values. In these
situations, the `DockerMTU` setting might be used to specify the MTU
setting for the `bridge` network created by Docker. However, when the GitHub Actions workflow creates networks, it doesn't propagate the `bridge` network MTU, which can lead to `connection reset by peer` messages.

To overcome this, I've created a new docker image called `summerwind/actions-runner-mtu` that shims the docker binary in order to propagate the MTU setting to networks created by GitHub workflows.

This is a follow-up on the discussion in [#1046](https://github.com/actions-runner-controller/actions-runner-controller/issues/1046)
and uses a separate image since there might be some unintended
side-effects with this approach.

* fixup! feat: Add container to propagate host network MTU

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-23 17:08:28 +09:00
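As a hedged sketch of how the shim image might be consumed (the `dockerMTU` field exists in the Runner spec; the metadata name and repository below are placeholders):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: mtu-aware-runner          # placeholder
spec:
  template:
    spec:
      repository: example/repo    # placeholder
      image: summerwind/actions-runner-mtu:latest  # shim image from this commit
      dockerMTU: 1400             # match the host network's MTU
```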
Sebastian N
8f54644b08 Create app-version-mapping.md (#1820) 2022-09-23 10:36:28 +09:00
renovate[bot]
c56d6a6c85 chore(deps): update actions/stale action to v6 (#1827)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-23 10:25:36 +09:00
renovate[bot]
a96c3e1102 fix(deps): update kubernetes packages to v0.25.2 (#1829)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-23 10:25:24 +09:00
Cristian Calin
d29de8d454 feat: use helm genCA to generate a certificate for the mutating web hook if no cert-manager is available (#1780) 2022-09-23 10:21:00 +09:00
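A minimal sketch of the genCA fallback, assuming no cert-manager is installed (the secret and CA names here are illustrative, not the chart's actual template):

```
{{- $ca := genCA "arc-webhook-ca" 365 }}
{{- $cn := printf "%s-webhook.%s.svc" .Release.Name .Release.Namespace }}
{{- $cert := genSignedCert $cn nil (list $cn) 365 $ca }}
apiVersion: v1
kind: Secret
metadata:
  name: webhook-server-cert
type: kubernetes.io/tls
data:
  tls.crt: {{ $cert.Cert | b64enc }}
  tls.key: {{ $cert.Key | b64enc }}
  ca.crt: {{ $ca.Cert | b64enc }}
```

The same `$ca.Cert` would also be injected as the `caBundle` of the mutating webhook configuration so the API server trusts the generated serving certificate.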
renovate[bot]
12c4d96250 fix(deps): update kubernetes packages to v0.25.1 (#1729)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-21 11:50:52 +09:00
renovate[bot]
46a13c0626 fix(deps): update module github.com/onsi/gomega to v1.20.2 (#1757)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-21 11:22:27 +09:00
Frederic MARTIN
e32a8054d0 🍱 add git-lfs package as standard tool (#1821) 2022-09-21 11:04:43 +09:00
renovate[bot]
0deb6809b9 fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0 (#1775)
* fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

* fixup! fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

* fixup! fixup! fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

* fixup! fixup! fixup! fix(deps): update module sigs.k8s.io/controller-runtime to v0.13.0

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-21 11:04:07 +09:00
David Young
21af1ec19d Use numeric USER for nonroot:nonroot in Dockerfile (#1765)
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2022-09-21 10:50:27 +09:00
renovate[bot]
dfadb86d66 fix(deps): update module github.com/google/go-github/v47 to v47.1.0 (#1813)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-21 10:23:37 +09:00
Cory Miller
c91e76f169 Add golangci-lint to CI (#1794)
This introduces a linter to PRs to help with code reviews and code hygiene. I've also gone ahead and fixed (or ignored) the existing lints.

I've only set up the default linters right now. There are many more options, documented at https://golangci-lint.run/.

The GitHub Action should add appropriate annotations to the lint job for the PR. Contributors can also lint locally using `make lint`.
2022-09-21 09:08:22 +09:00
Yusuke Kuoka
718232f8f4 Add ArtifactHub badge (#1816)
Ref #1502
2022-09-20 18:49:50 +09:00
renovate[bot]
c7f5f7d161 fix(deps): update golang.org/x/oauth2 digest to f213421 (#1792)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-20 09:32:56 +09:00
renovate[bot]
33bb6902bc fix(deps): update module github.com/google/go-cmp to v0.5.9 (#1787)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-20 09:32:07 +09:00
Yusuke Kuoka
aeb0601147 Update bug_report.yml (#1812)
I think we should at least recommend the reporter read the troubleshooting guide before submitting a bug report. It would give them a better chance of realizing that, in some cases, the issue isn't a bug.
2022-09-20 09:03:36 +09:00
Yusuke Kuoka
991c0b3211 chore: disable blank issues (#1809)
Even though we have a bug report form, we've seen many issues written freestyle, and nearly every one required multiple rounds of follow-up to gather the information necessary for diagnosing the issue. I hope this makes every bug report more actionable. Blank issues are now disabled so that reporters choose either the bug report form or Discussions, depending on their situation.
2022-09-16 16:17:49 +01:00
Vijay-train
71da6d5271 [Docs] Move into [docs] folder and other minor fixes (#1769)
* [Docs] Move into docs folder and other minor fixes

* Move into [docs] folder

* Fix minor formatting

* Create detailed-docs.md

Moving the current README contents into a separate detailed-docs.md file. Added one additional section, "Getting Started", to hold the getting-started link. The rest of the file is as-is from the current Readme.md.

This file will be linked from a new simplified README (to be raised as a separate PR)
2022-09-16 10:37:22 +09:00
David Girón
e4fd4bc99c Update dependency docker/cli to v20.10.18 (#1803) 2022-09-16 10:25:12 +09:00
Yusuke Kuoka
d9a8dc7e84 chart: Bump chart and app versions for ARC 0.26.0 (#1799) 2022-09-13 09:09:29 +09:00
Yusuke Kuoka
795cf8b1de Add releasenote for 0.26.0 (#1796) 2022-09-13 08:43:28 +09:00
renovate[bot]
0615c2adb1 chore(deps): update dependency actions/runner to v2.296.2 (#1791)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-09-09 18:43:00 +09:00
Yusuke Kuoka
a918e56ece Merge pull request #1784 from actions-runner-controller/renovate/golang-1.x
chore(deps): update golang docker tag to v1.19.1
2022-09-09 18:42:44 +09:00
Yusuke Kuoka
546b5251ed Merge pull request #1781 from THG-Site-Reliability-Engineering/master
Fix bug with enterpriseURL for multi-tenancy
2022-09-09 18:40:46 +09:00
renovate[bot]
74dda4ea1b chore(deps): update golang docker tag to v1.19.1 2022-09-06 22:14:03 +00:00
Barun Mishra
b29816290a Merge branch 'actions-runner-controller:master' into master 2022-09-05 13:58:49 +01:00
Barun Mishra
921daff61b Add cmd line arg for enterprise url. Fix enterprise bug. (#1)
* Add cmd line arg for enterprise url. Fix enterprise bug.

* Fix package import order

* Fix comment
2022-09-05 13:50:17 +01:00
renovate[bot]
e233f7ad6a chore(deps): update dependency actions/runner to v2.296.1 2022-09-01 12:31:39 +00:00
Yusuke Kuoka
623c84fa52 Merge pull request #1758 from actions-runner-controller/fix-e2e
e2e: A bunch of fixes
2022-08-27 16:29:56 +09:00
Yusuke Kuoka
d4fb6204cb Add TODO comment to the PVC reconciler 2022-08-27 07:14:16 +00:00
Yusuke Kuoka
f8e07c7fe4 e2e: Update RunnerSet template for rootless-dind test 2022-08-27 07:12:55 +00:00
Yusuke Kuoka
f73713859c e2e: Fix workflow for rootless-dind test to actually pass 2022-08-27 07:12:06 +00:00
Yusuke Kuoka
e0a7be253e e2e: Change the default runner rolling-update interval from 10s to 60s to let the runners actually get jobs assigned by GitHub Actions 2022-08-27 07:11:17 +00:00
Yusuke Kuoka
915739b972 e2e: Fix broken token expiration checks 2022-08-27 07:10:10 +00:00
Yusuke Kuoka
4925880e5e e2e: Install workflow before starting continuous rolling-updates of runners 2022-08-27 07:08:56 +00:00
Yusuke Kuoka
c143fd50b5 e2e: Use newer version of actions/runner(0.296.0) 2022-08-27 07:07:56 +00:00
Yusuke Kuoka
dbd668ae2d e2e: Set ARC_E2E_SKIP_RUNNERDEPLOYMENT to skip RunnerDeployment test 2022-08-26 01:48:54 +00:00
Yusuke Kuoka
5c1be3265b e2e: Fix the token check to actually fail on expiration 2022-08-26 01:48:36 +00:00
Yusuke Kuoka
ebcd838501 e2e: Continuous rolling-update of runners while workflow jobs are running
This should help revealing issues like https://github.com/actions-runner-controller/actions-runner-controller/issues/1535 if any.
2022-08-26 01:28:08 +00:00
Yusuke Kuoka
6ef276b239 e2e: Custom RBAC resources for make test success reporting work when k8s container mode or runner update hook is enabled 2022-08-26 01:28:08 +00:00
Yusuke Kuoka
f70f325f48 e2e: Set ARC_E2E_DO_DOCKER_BUILD to verify docker-build 2022-08-26 01:28:08 +00:00
Yusuke Kuoka
f7c336f9dd e2e: Mention maintained versions of cert-manager for reference 2022-08-26 01:28:08 +00:00
renovate[bot]
ae380f5987 fix(deps): update module go.uber.org/zap to v1.23.0 (#1752)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-25 10:25:52 +09:00
Yusuke Kuoka
4bf1c12a98 e2e: Fix inability to install the stable version of ARC before the edge / Validate GH token on start (#1748)
This improves two things I found while E2E-testing ARC for the upcoming 0.26.0 release.

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-25 10:25:06 +09:00
Callum Tait
cb561d8db4 docs: webhook scaling (#1709)
* docs: remove legacy webhook scaling triggers

* docs: remove runnerset limitations

* docs: noddy whitespace

* docs: more technically correct wording

* docs: wording

* docs: correct EffectiveTime logic

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* docs: remove non workflow_job events

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* docs: stuff

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-24 22:23:44 +09:00
Callum Tait
eaf6d2f2e2 docs: bump the required min GHES version (#1749) 2022-08-24 21:38:30 +09:00
Vijay-train
5ae7ce16e0 Fixing typo to render link properly (#1750) 2022-08-24 21:38:14 +09:00
Yusuke Kuoka
bdcde44642 chore: Bump go-github and minimum GHES version to 3.6 (#1747)
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1574
2022-08-24 13:08:40 +09:00
renovate[bot]
5116e3800e fix(deps): update module go.uber.org/zap to v1.22.0 (#1704)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-24 11:23:34 +09:00
renovate[bot]
4e107a4e50 fix(deps): update module github.com/prometheus/client_golang to v1.13.0 (#1699)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-24 11:22:33 +09:00
renovate[bot]
93238697d9 fix(deps): update module github.com/onsi/gomega to v1.20.0 (#1661)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-24 10:56:23 +09:00
Evan Hines
48f62b4c89 Allow customization of ServiceMonitor namespace for helm-template (#1491)
* Allow users to customize which namespace they deploy their service monitors into

* Add missing metrics object reference

* Update charts/actions-runner-controller/templates/githubwebhook.serviceMonitor.yaml

* Update charts/actions-runner-controller/templates/controller.metrics.serviceMonitor.yaml

* Update charts/actions-runner-controller/values.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-24 10:55:44 +09:00
Yusuke Kuoka
ea94b3cc5b e2e: Add new option to test rootless docker (#1742)
Related to #1644

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>

2022-08-24 10:42:45 +09:00
Callum Tait
0cac005ab2 ci: include sha in canary version (#1744) 2022-08-24 10:21:46 +09:00
renovate[bot]
55ca7bfdf5 chore(deps): update dependency actions/runner to v2.296.0 2022-08-23 19:47:18 +00:00
Viktor Lindgren
ca97f39fcb Print Version Number on startup (#1659)
* Changed Dockerfile to get the environment variable from the GitHub Actions workflow and pass it to the main.go file

Added a function in main.go to fetch the environment variable, with a fallback if the env variable isn't there

Added a test for the version, to be used for this branch only

* Update test-version.yaml

* Update test-version.yaml

* Removed the test because it's not needed when we push upstream

* Moved the version print in main.go into the Log code block as requested by toast-gear

Added version as issue #1161 requests.

Decided to use a docker-tag structure for the userAgent string, with `:` separating the name and version

* Used ldflags instead like mumoshu recommended

Changed Dockerfile to use $VERSION from the workflow

Added version.go and the build package
Removed the getVersion function as we can just get the value directly

* * Removed the default from the go code (set it to N/A)
* Changed version from latest to dev inside the Makefile
* Added a build arg for the version to the Dockerfile in the Makefile
* Added VERSION with a default value of dev as an ARG inside the Dockerfile
* Cleaned up the Dockerfile

* Fix failing test

* Fix possible missing VERSION in the ARC UA suffix due to missing build arg in docker-build-push step

Co-authored-by: S8338C <viktor.lindgren@seb.se>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-23 13:40:16 +09:00
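A hedged sketch of the ldflags mechanism this commit settles on, as a workflow step (the module path is an assumption based on the repository name at the time):

```
- name: Build controller with embedded version
  run: |
    VERSION=${GITHUB_REF_NAME:-dev}
    go build \
      -ldflags "-X github.com/actions-runner-controller/actions-runner-controller/build.Version=${VERSION}" \
      -o bin/manager .
```

Setting the variable at link time avoids reading an environment variable at runtime, so the version is baked into the binary and survives being copied between images.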
renovate[bot]
f0c8c07428 fix(deps): update golang.org/x/oauth2 digest to 0ebed06 (#1678)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-23 13:33:49 +09:00
renovate[bot]
e54edea918 chore(deps): update golang docker tag to v1.19.0 (#1682)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-23 13:33:21 +09:00
Ian Flores Siaca
e58f82bfce Document how to add Windows self-hosted runners (#1608)
* adding windows docs

* adding windows docs

* Editing the explanations

* Update README.md

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-23 13:32:37 +09:00
Alex Dubov
244e0dd987 Fix Typos in Readme (#1741) 2022-08-23 13:04:16 +09:00
renovate[bot]
02009cef17 chore(deps): update module go to 1.19 (#1664)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-23 13:00:17 +09:00
Vijay-train
2b5af62184 [Doc] Create ARC Overview doc (#1707)
* [Doc] Create ARC overview doc

The purpose of this doc is to serve as a starting-point overview of ARC, with links to the Quick Start guide within.

* Update Actions-Runner-Controller-Overview.md

Fixed some formatting

* Update for minor formatting

Fixed links to include quotes, where missing. Added spaces after periods, where missing.

* Updated links to the QuickStart guide

* Updated Images and scaling sections

Updated the following based on PR feedback
- `The Runner container image` now calls out more explicitly the recommended way to install additional software
- `Scaling runners - dynamically with Pull Driven Scaling` - removed mentions of `TotalNumberOfQueuedAndInProgressWorkflowRuns` as it's not fully implemented

* Apply suggestions from code review

Incorporated review feedback from  @andyfeller, @sethrylan, @debuger24 and @mumoshu. Thank you all.

Co-authored-by: Andy Feller <andyfeller@github.com>
Co-authored-by: Rahul Kumar <rahulcomp24@gmail.com>
Co-authored-by: Seth Rylan Gainey <sethrylan@github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Apply suggestions from code review

Add more detailed config for PercentageRunnersBusy metric

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Updated link text to "Pull Driven Scaling"

Co-authored-by: Andy Feller <andyfeller@github.com>
Co-authored-by: Rahul Kumar <rahulcomp24@gmail.com>
Co-authored-by: Seth Rylan Gainey <sethrylan@github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-08-23 12:53:22 +09:00
Sajad Orouji
ec58ad19e0 feat: add queue size limit to github webhook server helm template (#1712)
* Update githubwebhook.deployment.yaml

* Update values.yaml

* Update README.md

* Update charts/actions-runner-controller/values.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Update values.yaml

* chore: comment out queuelimit setting

* docs: format cleanup

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-08-23 09:40:50 +09:00
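A hedged values.yaml sketch of the setting this adds (the key name is inferred from the commit's "queuelimit" wording and may differ in the released chart):

```
githubWebhookServer:
  enabled: true
  # queueLimit: 100   # cap on the in-memory event queue; commented out by default per this PR
```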
Adam Panzer
cc9fe33ef5 Change type: to kind: (#1740)
Per the CRD spec here https://github.com/actions-runner-controller/actions-runner-controller/blob/master/charts/actions-runner-controller/crds/actions.summerwind.dev_horizontalrunnerautoscalers.yaml#L115-L127

It's `kind:` not `type:`
2022-08-23 09:35:14 +09:00
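For reference, a minimal HRA sketch using the correct field (resource names are placeholders):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra               # placeholder
spec:
  scaleTargetRef:
    kind: RunnerDeployment        # `kind:`, not `type:`, per the CRD schema
    name: example-runnerdeploy    # placeholder
  minReplicas: 1
  maxReplicas: 5
```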
Matt Domko
4a5a85fd61 Replaced 'kubectl apply' with 'kubectl create' in README (#1728)
- Updated as per issue 1317
- Version bump so that folks copy/pasting get the latest version

https://github.com/actions-runner-controller/actions-runner-controller/issues/1317
2022-08-21 22:54:31 +09:00
David Young
56b26fd751 Fix minor spelling error (#1727)
Just a typo fix :)
2022-08-21 22:00:52 +09:00
João Carlos Ferra de Almeida
36e95dad47 Fix/multitenancy enterprise url (#1725)
* Fix #1714

* Add Comment
2022-08-16 20:20:06 +09:00
Callum Tait
3724b46033 chore(deps): update dependency actions/runner to v2.295.0 (#1723) 2022-08-16 20:11:46 +09:00
Rahul Kumar
538e2783d7 Update Metric Types and typos (#1719)
* Update valid options in metrics types

* FIX: Typos

* FIX: Update metric types in helm chart
2022-08-15 23:12:22 +09:00
Rahul Kumar
72ca998266 Add Additional Autoscaling Metrics to Prometheus (#1720)
* Add prometheus metrics for autoscaling

* Add desc for prometheus-metrics

* FIX: Typo

* Remove replicas_desired_before in metrics

* Remove Num prefix in metrics
2022-08-15 23:12:00 +09:00
Rahul Kumar
d439ed5c81 Update GHCR name to repo name in publish wf (#1721) 2022-08-15 09:46:50 +09:00
Vijay-train
58c2bdf2bb Create QuickStartGuide.md (#1691)
* Create QuickStartGuide.md

Creating a new Quickstart guide that captures simple onboarding instructions. The intent is for first-time users to be able to follow this guide, get their environment running, and try out ARC. A link to this guide will be added to the repo README once this PR gets merged.

* Update QuickStartGuide.md

Fixed a typo - removed "$" from codeblock "$ kubectl apply -f runnerdeployment.yaml"

* Update QuickStartGuide.md

Eliminated the need to specify the PAT in Custom_values.yaml; it is instead passed as a parameter while installing the Helm chart. This removes the need to store the PAT in a file and also eliminates a setup step.

* Fixed minor typos

Fixed typos identified by @nebuk89

* Minor formatting in links and periods.

Fixed formatting to include a space after periods and commas. Fixed formatting on some links to include quotes
2022-08-14 13:19:41 +09:00
Yusuke Kuoka
fe9164b025 doc: Encourage everyone to explicitly set HRA scaleTargetRef kind (#1633)
Ref #1343
2022-08-14 13:04:03 +09:00
renovate[bot]
06141b39b4 chore(deps): update helm/chart-testing-action action to v2.3.0 (#1710)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-13 14:30:59 +01:00
renovate[bot]
ac4c3fd365 chore(deps): update azure/setup-helm action to v3.3 (#1667)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-08-13 12:51:30 +01:00
Callum Tait
dc29e31bcc ci: add rootless dnd to renovate (#1711) 2022-08-12 10:41:30 +09:00
renovate[bot]
784019f3d7 chore(deps): update dependency actions/runner to v2.295.0 2022-08-11 11:36:27 +00:00
Natalie Somersall
fc55477c1c remove fuse-overlayfs (#1690) 2022-08-04 13:25:55 +09:00
Yusuke Kuoka
3f78f71137 Start publishing runner-dind-rootless image (#1689)
Follow-up for #1644
2022-08-04 10:37:12 +09:00
oreonl
e511401e51 fix: don't base64 decode secret strings (#1683) 2022-08-03 11:47:07 +09:00
Natalie Somersall
37aa1a0b8c Add rootless DinD runner (#1644)
* add rootless dind images

* add small blurb on rootless dind

* Add ToC entry for README section
2022-08-03 11:45:02 +09:00
João Carlos Ferra de Almeida
bea0775bec Fix small typo in README (#1687)
Changed from `Kuernetes` to `Kubernetes` in the **Multitenancy** chapter.

By the way why not use [the vale-action](https://github.com/errata-ai/vale-action) to automate linting in the markdown files? If you'd like I can probably find some time to do it. Just a small token of appreciation for an awesome project!
2022-08-03 11:28:40 +09:00
Yusuke Kuoka
79a494b2aa doc: Note to fully populate the pool of PVs before checking if the cache is effective (#1655) 2022-07-17 19:44:07 +09:00
Yusuke Kuoka
97404144eb Fix excessive runnerreplicaset update issue since 0.25.0 (#1650)
Fixes #1643
2022-07-17 19:43:24 +09:00
Yusuke Kuoka
b77489d098 Fix E2E to not fail due to missing storageclass for RunnerDeployment w/ kubernetes container mode (#1649) 2022-07-17 19:43:13 +09:00
Yusuke Kuoka
4152afbd30 Fix E2E against local cluster to not fail on helm-upgrade (#1648) 2022-07-17 19:43:01 +09:00
Yusuke Kuoka
29f621e1c8 chart: Remove support for extensions/v1beta1 and networking.k8s.io/v1beta1 (#1632)
* chart: Remove support for extensions/v1beta1 and networking.k8s.io/v1beta1

`networking.k8s.io/v1` has been available since v1.19.
As of today, AWS EKS supports v1.19+ and Oracle Cloud supports v1.20+. GKE and AKS support v1.21+. The upstream Kubernetes project maintains v1.22+.
So it should be safe to remove it now.

* fixup! chart: Remove support for extensions/v1beta1 and networking.k8s.io/v1beta1
2022-07-17 19:42:35 +09:00
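For context, the chart now renders only the stable API; a minimal `networking.k8s.io/v1` Ingress (all names are placeholders) looks like:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webhook-ingress           # placeholder
spec:
  rules:
    - host: arc-webhook.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webhook-server   # placeholder
                port:
                  number: 80
```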
renovate[bot]
5651ba6ead fix(deps): update kubernetes packages to v0.24.3 (#1647)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:59:44 +09:00
Cory Miller
759cc4b47f Update version of YQ in Makefile (#1634) 2022-07-15 10:59:13 +09:00
Yusuke Kuoka
4ede0c18d0 Fix the new ct chart lint error 2022-07-15 10:23:33 +09:00
Yusuke Kuoka
9091d9b756 chart: Bump version/appVersion to 0.20.2/0.25.2 2022-07-15 10:23:33 +09:00
renovate[bot]
a09c2564d9 fix(deps): update module github.com/bradleyfalzon/ghinstallation/v2 to v2.1.0 (#1637)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:20:42 +09:00
renovate[bot]
a555c90fd5 chore(deps): update dependency golang to v1.18.4 (#1639)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:20:29 +09:00
Yusuke Kuoka
38644cf4e8 Remove redundant flags from webhook-based autoscaler (#1630)
* Remove redundant flags from webhook-based autoscaler

Ref #623

* fixup! Remove redundant flags from webhook-based autoscaler
2022-07-15 09:58:30 +09:00
Jonathan Wiemers
23f357db10 Adds way to allow additional environment variables from secretKeyRef (#1565)
* adds additionalFullEnv to allow additional secret refs

* Update charts/actions-runner-controller/templates/deployment.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* adds examples into values.yaml

* fix

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-15 09:57:30 +09:00
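A hedged sketch of the values.yaml addition (the key name is taken from the commit title; the secret name and key are placeholders):

```
additionalFullEnv:
  - name: GITHUB_WEBHOOK_SECRET_TOKEN   # placeholder
    valueFrom:
      secretKeyRef:
        name: my-webhook-secret         # placeholder
        key: token
```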
Felipe Galindo Sanchez
584745b67d Minor improvements for runner groups
- Add group to the runner columns
- Add constant for runner group and labels
2022-07-15 09:47:25 +09:00
AJ Schmidt
df9592dc99 docs: Update README.md (#1645) 2022-07-13 18:13:11 +01:00
Yusuke Kuoka
8071ac7066 Remove github-api-cache-duration flag and code (#1631)
This removes the flag and code for the legacy GitHub API cache. We have already migrated to fully use the new HTTP-cache-based API cache functionality, which was added via #1127 and has been available since ARC 0.22.0. Since then, the legacy one has been a no-op, so removing it is safe.

Ref #1412
2022-07-12 20:37:24 +09:00
toast-gear
3c33eca501 docs: remove superfluous file names 2022-07-12 09:45:51 +09:00
toast-gear
aa827474b2 docs: clearer wording 2022-07-12 09:45:51 +09:00
toast-gear
c75c9f9226 docs: use consistent wording 2022-07-12 09:45:51 +09:00
toast-gear
c09a04ec01 docs: add default label considerations 2022-07-12 09:45:51 +09:00
Yusuke Kuoka
618276e3d3 Enhance support for multi-tenancy (#1371)
This enhances every ARC controller and the various K8s custom resources so that the user can now configure custom GitHub API credentials (distinct from the default ones configured per ARC instance).

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1067#issuecomment-1043716646
2022-07-12 09:45:00 +09:00
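A hedged sketch of the per-resource credentials this enables (field name as described in ARC's multi-tenancy docs; all other names are placeholders):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: tenant-b-runners            # placeholder
spec:
  template:
    spec:
      repository: tenant-b/repo     # placeholder
      githubAPICredentialsFrom:
        secretRef:
          name: tenant-b-github-app # placeholder secret holding that tenant's credentials
```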
renovate[bot]
18dd89c884 chore(deps): update azure/setup-helm action to v3.1 (#1628)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-12 09:19:02 +09:00
k.bigwheel (kazufumi nishida)
98b17dc0a5 Fix the dind image to work with the latest entrypoint.sh (#1624)
Fixes #1621
2022-07-12 09:11:04 +09:00
Giovanni Barillari
c658dcfa6d fix #1621: add missing COPY statements to dind docker image 2022-07-11 20:44:35 +09:00
renovate[bot]
c4996d4bbd fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.3 2022-07-11 10:52:14 +09:00
Callum Tait
7a3fa4f362 docs: correct the comparison
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-11 10:43:09 +09:00
toast-gear
1bfd743e69 docs: add pod example too 2022-07-11 10:43:09 +09:00
toast-gear
734f3bd63a docs: put shell k8s commands back 2022-07-11 10:43:09 +09:00
toast-gear
409dc4c114 docs: remove ephemeral and simplify 2022-07-11 10:43:09 +09:00
toast-gear
4b9a6c6700 docs: remove runner kind 2022-07-11 10:43:09 +09:00
Yusuke Kuoka
86e1a4a8f3 Fix helm lint error and the inability to install the chart with the default values 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
544d620bc3 e2e: Ensure ARC is roll-updated on deployment even if the container image tag name does not change 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
1cfe1974c4 Add missing job-related permissions to runner pods with k8s container mode 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
7e4b6ebd6d chart: Add rbac.allowGrantingKubernetesContainerModePermissions 2022-07-10 16:16:32 +09:00
Felipe Galindo Sanchez
11cb9b7882 feat: allow to discover runner statuses (#1268)
* feat: allow to discover runner statuses

* fix manifests

* Bump runner version to 2.289.1 which includes the hooks support

* Add feedback from review

* Update reference to newRunnerPod

* Fix TestNewRunnerPodFromRunnerController and make hooks file names job specific

* Fix additional TestNewRunnerPod test

* Cover additional feedback from review

* fix rbac manager role

* Add permissions to service account for container mode if not provided

* Rename flag to runner.statusUpdateHook.enabled and fix needsServiceAccount

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-10 15:11:29 +09:00
Tamás Kádár
10b88bf070 Fix typos in README (#1613) 2022-07-10 08:49:35 +09:00
Callum Tait
8b619e7c6f chore: bump helm chart (#1619) 2022-07-10 08:25:55 +09:00
everpcpc
fea1457f12 fix: annotate pod instead of label on UnregistrationFailureMessage (#1615)
The error message should go into an annotation instead of a pod label, since we only check annotations during autoscaling:

https://github.com/actions-runner-controller/actions-runner-controller/blob/master/controllers/autoscaling.go#L338-L342

Then there is no need to truncate or normalize the error message.
2022-07-09 11:45:05 +09:00
Yusuke Kuoka
473295e3fc Enhance the E2E test to be runnable against remote clusters on e.g. AWS EKS (#1610)
This contains enough changes to the current E2E test code to make it runnable against remote Kubernetes clusters. I was able to make the test pass against my AWS EKS-based test clusters with these changes. You still need to trigger it manually from a local checkout of the ARC repo today, but this might be the foundation for automated E2E tests against major cloud providers.
2022-07-07 20:48:07 +09:00
Yusuke Kuoka
9f6f962fc7 Add troubleshooting for cert-manager CA error (#1598)
I encountered this once while E2E-testing ARC with K8s 1.22 and cert-manager 1.1.1. Either the K8s version is too high or the cert-manager version is too low, so you generally need to fix one of them. In a standard scenario, it is more feasible and meaningful to upgrade cert-manager to a version recent enough to support the new Kubernetes version.
2022-07-07 11:27:49 +09:00
Yusuke Kuoka
2a475f25c7 Use Argo Tunnel for exposing the autoscaler's webhook server (#1595)
I've been manually setting up Argo Tunnel to expose the webhook server while running E2E tests so that I can cover webhook-based autoscaling. This automates the setup process so that we can automatically bring cloudflared up and down before/after the test run, making it part of our upcoming automated E2E test.
2022-07-07 11:27:27 +09:00
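A minimal cloudflared config sketch for exposing a local webhook server, assuming a pre-created tunnel (the hostname, port, and file paths are placeholders):

```
tunnel: arc-e2e-webhook
credentials-file: /etc/cloudflared/creds.json
ingress:
  - hostname: arc-webhook.example.com
    service: http://localhost:8000
  - service: http_status:404        # catch-all required as the last rule
```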
Viktor Lindgren
dd9f25ea78 Update README.md (#1606) 2022-07-06 08:57:54 +09:00
Yusuke Kuoka
b8e4eee904 Make it easier to E2E test on various K8s versions (#1599) 2022-07-06 08:57:21 +09:00
386 changed files with 62736 additions and 6270 deletions


@@ -1,8 +1,18 @@
name: Bug Report
description: File a bug report
title: "Bug"
labels: ["bug"]
title: "<Please write what didn't work for you here>"
labels: ["bug", "needs triage"]
body:
- type: checkboxes
id: read-troubleshooting-guide
attributes:
label: Checks
description: Please check all the boxes below before submitting
options:
- label: I've already read https://github.com/actions/actions-runner-controller/blob/master/TROUBLESHOOTING.md and I'm sure my issue is not covered in the troubleshooting guide.
required: true
- label: I'm not using a custom entrypoint in my runner image
required: true
- type: input
id: controller-version
attributes:
@@ -41,7 +51,7 @@ body:
label: cert-manager installation
description: Confirm that you've installed cert-manager correctly by answering a few questions
placeholder: |
- Did you follow https://github.com/actions-runner-controller/actions-runner-controller#installation? If not, describe the installation process so that we can reproduce your environment.
- Did you follow https://github.com/actions/actions-runner-controller#installation? If not, describe the installation process so that we can reproduce your environment.
- Are you sure you've installed cert-manager from an official source?
(Note that we won't provide user support for cert-manager itself. Make sure cert-manager is fully working before testing ARC or reporting a bug
validations:
@@ -50,16 +60,18 @@ body:
id: checks
attributes:
label: Checks
description: Please check the boxes below before submitting
description: Please check all the boxes below before submitting
options:
- label: This isn't a question or user support case (For Q&A and community support, go to [Discussions](https://github.com/actions-runner-controller/actions-runner-controller/discussions). It might also be a good idea to contract with any of contributors and maintainers if your business is so critical and therefore you need priority support
- label: This isn't a question or user support case (For Q&A and community support, go to [Discussions](https://github.com/actions/actions-runner-controller/discussions). It might also be a good idea to contract with any of contributors and maintainers if your business is so critical and therefore you need priority support
required: true
- label: I've read [releasenotes](https://github.com/actions-runner-controller/actions-runner-controller/tree/master/docs/releasenotes) before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes
- label: I've read [releasenotes](https://github.com/actions/actions-runner-controller/tree/master/docs/releasenotes) before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes
required: true
- label: My actions-runner-controller version (v0.x.y) does support the feature
required: true
- label: I've already upgraded ARC (including the CRDs, see charts/actions-runner-controller/docs/UPGRADING.md for details) to the latest and it didn't fix the issue
required: true
- label: I've migrated to the workflow job webhook event (if you using webhook driven scaling)
required: true
- type: textarea
id: resource-definitions
attributes:
@@ -129,8 +141,8 @@ body:
- type: textarea
id: controller-logs
attributes:
label: Controller Logs
description: "NEVER EVER OMIT THIS! Include logs from `actions-runner-controller`'s controller-manager pod"
label: Whole Controller Logs
description: "NEVER EVER OMIT THIS! Include logs from `actions-runner-controller`'s controller-manager pod. Don't omit the parts you think irrelevant!"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
@@ -149,11 +161,11 @@ body:
- type: textarea
id: runner-pod-logs
attributes:
label: Runner Pod Logs
description: "Include logs from runner pod(s)"
label: Whole Runner Pod Logs
description: "Include logs from runner pod(s). Please don't omit the parts you think irrelevant!"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
PROVIDE THE WHOLE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab the runner pod logs:
@@ -165,6 +177,8 @@ body:
kubectl -n $NS logs $POD_NAME -c runner > runnerpod_runner.log
kubectl -n $NS logs $POD_NAME -c docker > runnerpod_docker.log
If any of the containers are getting terminated immediately, try adding `--previous` to the kubectl-logs command to obtain logs emitted before the termination.
validations:
required: true
- type: textarea


@@ -1,15 +1,14 @@
# Blank issues are mainly for maintainers who are known to write complete issue descriptions without need to following a form
blank_issues_enabled: true
blank_issues_enabled: false
contact_links:
- name: Sponsor ARC Maintainers
about: If your business relies on the continued maintainance of actions-runner-controller, please consider sponsoring the project and the maintainers.
url: https://github.com/actions-runner-controller/actions-runner-controller/tree/master/CODEOWNERS
url: https://github.com/actions/actions-runner-controller/tree/master/CODEOWNERS
- name: Ideas and Feature Requests
about: Wanna request a feature? Create a discussion and collect :+1:s first.
url: https://github.com/actions-runner-controller/actions-runner-controller/discussions/new?category=ideas
url: https://github.com/actions/actions-runner-controller/discussions/new?category=ideas
- name: Questions and User Support
about: Need support using ARC? We use Discussions as the place to provide community support.
url: https://github.com/actions-runner-controller/actions-runner-controller/discussions/new?category=questions
url: https://github.com/actions/actions-runner-controller/discussions/new?category=questions
- name: Need Paid Support?
about: Consider contracting with any of the actions-runner-controller maintainers and contributors.
url: https://github.com/actions-runner-controller/actions-runner-controller/tree/master/CODEOWNERS
url: https://github.com/actions/actions-runner-controller/tree/master/CODEOWNERS


@@ -1,19 +1,21 @@
---
name: Feature request
about: Suggest an idea for this project
labels: ["enhancement", "needs triage"]
title: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
### What would you like added?
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
*A clear and concise description of what you want to happen.*
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Note: Feature requests to integrate vendor specific cloud tools (e.g. `awscli`, `gcloud-sdk`, `azure-cli`) will likely be rejected as the Runner image aims to be vendor agnostic.
**Additional context**
Add any other context or screenshots about the feature request here.
### Why is this needed?
*A clear and concise description of any alternative solutions or features you've considered.*
### Additional context
*Add any other context or screenshots about the feature request here.*


@@ -0,0 +1,64 @@
name: 'Setup ARC E2E Test Action'
description: 'Build controller image, create kind cluster, load the image, and exchange ARC configure token.'
inputs:
github-app-id:
description: 'GitHub App Id for exchange access token'
required: true
github-app-pk:
description: "GitHub App private key for exchange access token"
required: true
github-app-org:
description: 'The organization the GitHub App has installed on'
required: true
docker-image-name:
description: "Local docker image name for building"
required: true
docker-image-tag:
description: "Tag of ARC Docker image for building"
required: true
outputs:
token:
description: 'Token to use for configure ARC'
value: ${{steps.config-token.outputs.token}}
runs:
using: "composite"
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
# Pinning v0.9.1 for Buildx and BuildKit v0.10.6
# BuildKit v0.11 which has a bug causing intermittent
# failures pushing images to GHCR
version: v0.9.1
driver-opts: image=moby/buildkit:v0.10.6
- name: Build controller image
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=${{inputs.docker-image-name}}
VERSION=${{inputs.docker-image-tag}}
tags: |
${{inputs.docker-image-name}}:${{inputs.docker-image-tag}}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Create minikube cluster and load image
shell: bash
run: |
minikube start
minikube image load ${{inputs.docker-image-name}}:${{inputs.docker-image-tag}}
- name: Get configure token
id: config-token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ inputs.github-app-id }}
application_private_key: ${{ inputs.github-app-pk }}
organization: ${{ inputs.github-app-org }}


@@ -14,18 +14,13 @@ inputs:
description: "GHCR password. Usually set from the secrets.GITHUB_TOKEN variable"
required: true
outputs:
sha_short:
description: "The short SHA used for image builds"
value: ${{ steps.vars.outputs.sha_short }}
runs:
using: "composite"
steps:
- name: Get Short SHA
id: vars
run: |
echo ::set-output name=sha_short::${GITHUB_SHA::7}
echo "sha_short=${GITHUB_SHA::7}" >> $GITHUB_ENV
shell: bash
- name: Set up QEMU
@@ -37,14 +32,14 @@ runs:
version: latest
- name: Login to DockerHub
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' && inputs.password != '' }}
uses: docker/login-action@v2
with:
username: ${{ inputs.username }}
password: ${{ inputs.password }}
- name: Login to GitHub Container Registry
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' && inputs.ghcr_password != '' }}
uses: docker/login-action@v2
with:
registry: ghcr.io

.github/dependabot.yml (new file, 11 lines)

@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "gomod" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"


@@ -1,41 +0,0 @@
{
"extends": ["config:base"],
"labels": ["dependencies"],
"packageRules": [
{
// automatically merge an update of runner
"matchPackageNames": ["actions/runner"],
"extractVersion": "^v(?<version>.*)$",
"automerge": true
}
],
"regexManagers": [
{
// use https://github.com/actions/runner/releases
"fileMatch": [
".github/workflows/runners.yaml"
],
"matchStrings": ["RUNNER_VERSION: +(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
},
{
"fileMatch": [
"runner/Makefile",
"Makefile"
],
"matchStrings": ["RUNNER_VERSION \\?= +(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
},
{
"fileMatch": [
"runner/actions-runner.dockerfile",
"runner/actions-runner-dind.dockerfile"
],
"matchStrings": ["RUNNER_VERSION=+(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
}
]
}


@@ -0,0 +1,16 @@
name: ARC Reusable Workflow
on:
workflow_dispatch:
inputs:
date_time:
description: 'Datetime for runner name uniqueness, format: %Y-%m-%d-%H-%M-%S-%3N, example: 2023-02-14-13-00-16-791'
required: true
jobs:
arc-runner-job:
strategy:
fail-fast: false
matrix:
job: [1, 2, 3]
runs-on: arc-runner-${{ inputs.date_time }}
steps:
- run: echo "Hello World!" >> $GITHUB_STEP_SUMMARY

.github/workflows/e2e-test-linux-vm.yaml (new file, 734 lines)

@@ -0,0 +1,734 @@
name: CI ARC E2E Linux VM Test
on:
push:
branches:
- master
pull_request:
branches:
- master
workflow_dispatch:
inputs:
target_org:
description: The org of the test repository.
required: true
default: actions-runner-controller
target_repo:
description: The repository to install the ARC.
required: true
default: arc_e2e_test_dummy
env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: arc_e2e_test_dummy
IMAGE_NAME: "arc-test-image"
IMAGE_VERSION: "dev"
jobs:
default-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))  # increment so the 10-attempt timeout can actually fire
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
single-namespace-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
kubectl create namespace arc-runners
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
--set flags.watchSingleNamespace=arc-runners \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
dind-mode-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: arc-test-dind-workflow.yaml
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="dind" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
kubernetes-mode-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-kubernetes-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
echo "Install openebs/dynamic-localpv-provisioner"
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="kubernetes" \
--set containerMode.kubernetesModeWorkVolumeClaim.accessModes={"ReadWriteOnce"} \
--set containerMode.kubernetesModeWorkVolumeClaim.storageClassName="openebs-hostpath" \
--set containerMode.kubernetesModeWorkVolumeClaim.resources.requests.storage="1Gi" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
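# For reference, a minimal values.yaml sketch equivalent to the kubernetes
# containerMode --set flags used in the job above (field names assume the
# in-repo gha-runner-scale-set chart; illustrative only, not part of the test):
#
#   containerMode:
#     type: "kubernetes"
#     kubernetesModeWorkVolumeClaim:
#       accessModes: ["ReadWriteOnce"]
#       storageClassName: "openebs-hostpath"
#       resources:
#         requests:
#           storage: "1Gi"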
auth-proxy-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--name squid \
--publish 3128:3128 \
huangtingluo/squid-proxy:latest
kubectl create namespace arc-runners
kubectl create secret generic proxy-auth \
--namespace=arc-runners \
--from-literal=username=github \
--from-literal=password='actions'
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set proxy.https.credentialSecretRef="proxy-auth" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
anonymous-proxy-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--name squid \
--publish 3128:3128 \
ubuntu/squid:latest
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{ env.WORKFLOW_FILE }}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
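
# The install steps above repeat the same ten-iteration pod poll six times; a
# minimal bash sketch of that pattern as a reusable helper (the function name
# and arguments are illustrative, not part of the workflow):
wait_for_pod() {
  # wait_for_pod NAMESPACE LABEL_SELECTOR [RETRIES]
  # Polls once per second until a matching pod exists; fails after RETRIES tries.
  local ns="$1" selector="$2" retries="${3:-10}" count=0 pod
  while true; do
    pod=$(kubectl get pods -n "$ns" -l "$selector" -o name)
    if [ -n "$pod" ]; then
      echo "Pod found: $pod"
      return 0
    fi
    if [ "$count" -ge "$retries" ]; then
      echo "Timeout waiting for pod with label $selector" >&2
      return 1
    fi
    sleep 1
    count=$((count+1))
  done
}
# Example: wait_for_pod arc-systems app.kubernetes.io/name=gha-runner-scale-set-controller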

.github/workflows/golangci-lint.yaml

@@ -0,0 +1,23 @@
name: golangci-lint
on:
push:
branches:
- master
pull_request:
permissions:
contents: read
pull-requests: read
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: 1.19
- uses: actions/checkout@v3
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.51.1


@@ -1,21 +1,38 @@
name: Publish ARC
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
release:
types:
- published
workflow_dispatch:
inputs:
release_tag_name:
description: 'Tag name of the release to publish'
required: true
push_to_registries:
description: 'Push images to registries'
required: true
type: boolean
default: false
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: write
packages: write
env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
jobs:
release-controller:
name: Release
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
# gha-runner-scale-set has its own release workflow.
# We don't want to publish a new actions-runner-controller image
# when we release gha-runner-scale-set.
if: ${{ !startsWith(github.event.inputs.release_tag_name, 'gha-runner-scale-set-') }}
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -35,8 +52,14 @@ jobs:
tar zxvf ghr_v0.13.0_linux_amd64.tar.gz
sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
- name: Set version
run: echo "VERSION=$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')" >> $GITHUB_ENV
- name: Set version env variable
run: |
# Define the release tag name based on the event type
if [[ "${{ github.event_name }}" == "release" ]]; then
echo "VERSION=$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')" >> $GITHUB_ENV
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "VERSION=${{ inputs.release_tag_name }}" >> $GITHUB_ENV
fi
- name: Upload artifacts
env:
@@ -44,27 +67,39 @@ jobs:
run: |
make github-release
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.TARGET_ORG }}
- name: Build and Push
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
ghcr.io/actions-runner-controller/actions-runner-controller:latest
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Resolve push to registries
run: |
# Define the push to registries based on the event type
if [[ "${{ github.event_name }}" == "release" ]]; then
echo "PUSH_TO_REGISTRIES=true" >> $GITHUB_ENV
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "PUSH_TO_REGISTRIES=${{ inputs.push_to_registries }}" >> $GITHUB_ENV
fi
- name: Trigger Build And Push Images To Registries
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
# Trigger the workflow run
jq -n '{"event_type": "arc", "client_payload": {"release_tag_name": "${{ env.VERSION }}", "push_to_registries": "${{ env.PUSH_TO_REGISTRIES }}" }}' \
| gh api -X POST /repos/actions-runner-controller/releases/dispatches --input -
- name: Job summary
run: |
echo "The [publish-arc](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/publish-arc.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Release tag: ${{ env.VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- Push to registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/publish-arc.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/publish-arc.yaml)" >> $GITHUB_STEP_SUMMARY


@@ -1,18 +1,31 @@
name: Publish Canary Image
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
push:
branches:
- master
paths-ignore:
- '**.md'
- '.github/actions/**'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/e2e-test-dispatch-workflow.yaml'
- '.github/workflows/e2e-test-linux-vm.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/publish-runner-scale-set.yaml'
- '.github/workflows/release-runners.yaml'
- '.github/workflows/run-codeql.yaml'
- '.github/workflows/run-first-interaction.yaml'
- '.github/workflows/run-stale.yaml'
- '.github/workflows/update-runners.yaml'
- '.github/workflows/validate-arc.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/validate-gha-chart.yaml'
- '.github/workflows/validate-runners.yaml'
- '.github/dependabot.yml'
- '.github/RELEASE_NOTE_TEMPLATE.md'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
@@ -22,37 +35,95 @@ on:
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: read
packages: write
packages: write
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
jobs:
canary-build:
name: Build and Publish Canary Image
legacy-canary-build:
name: Build and Publish Legacy Canary Image
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.TARGET_ORG }}
# Considered unstable builds
# See Issue #285, PR #286, and PR #323 for more information
- name: Trigger Build And Push Images To Registries
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
# Trigger the workflow run
jq -n '{"event_type": "canary", "client_payload": {"sha": "${{ github.sha }}", "push_to_registries": ${{ env.PUSH_TO_REGISTRIES }}}}' \
| gh api -X POST /repos/actions-runner-controller/releases/dispatches --input -
- name: Job summary
run: |
echo "The [publish-canary](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/publish-canary.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- sha: ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
echo "- Push to registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml)" >> $GITHUB_STEP_SUMMARY
canary-build:
name: Build and Publish gha-runner-scale-set-controller Canary Image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Normalization is needed because upper case characters are not allowed in the repository name
# and the short sha is needed for image tagging
- name: Resolve parameters
id: resolve_parameters
run: |
echo "INFO: Resolving short sha"
echo "short_sha=$(git rev-parse --short ${{ github.ref }})" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: latest
# Unstable builds - run at your own risk
- name: Build and Push
uses: docker/build-push-action@v3
with:
file: Dockerfile
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm64
push: true
build-args: VERSION=canary-"${{ github.ref }}"
push: ${{ env.PUSH_TO_REGISTRIES }}
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:canary
ghcr.io/actions-runner-controller/actions-runner-controller:canary
cache-from: type=gha,scope=arc-canary
cache-to: type=gha,mode=max,scope=arc-canary
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
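# Worked example of the tag template above: assuming the repository owner
# normalizes to "actions" and the short SHA resolves to "f49d08e" (both
# placeholders), the pushed canary tags would be:
#
#   ghcr.io/actions/gha-runner-scale-set-controller:canary
#   ghcr.io/actions/gha-runner-scale-set-controller:canary-f49d08e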


@@ -1,5 +1,7 @@
name: Publish Helm Chart
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
push:
branches:
@@ -8,6 +10,8 @@ on:
- 'charts/**'
- '.github/workflows/publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
- '!**.md'
workflow_dispatch:
@@ -31,7 +35,7 @@ jobs:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.0
uses: azure/setup-helm@v3.4
with:
version: ${{ env.HELM_VERSION }}
@@ -57,7 +61,7 @@ jobs:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
uses: helm/chart-testing-action@v2.3.1
- name: Run chart-testing (list-changed)
id: list-changed
@@ -73,7 +77,7 @@ jobs:
- name: Create kind cluster
if: steps.list-changed.outputs.changed == 'true'
uses: helm/kind-action@v1.3.0
uses: helm/kind-action@v1.4.0
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
@@ -86,20 +90,31 @@ jobs:
if: steps.list-changed.outputs.changed == 'true'
run: ct install --config charts/.ci/ct-config.yaml
# WARNING: This relies on the latest release being inat the top of the JSON from GitHub and a clean chart.yaml
# WARNING: This relies on the latest release being at the top of the JSON from GitHub and a clean chart.yaml
- name: Check if Chart Publish is Needed
id: publish-chart-step
run: |
CHART_TEXT=$(curl -fs https://raw.githubusercontent.com/actions-runner-controller/actions-runner-controller/master/charts/actions-runner-controller/Chart.yaml)
CHART_TEXT=$(curl -fs https://raw.githubusercontent.com/${{ github.repository }}/master/charts/actions-runner-controller/Chart.yaml)
NEW_CHART_VERSION=$(echo "$CHART_TEXT" | grep version: | cut -d ' ' -f 2)
RELEASE_LIST=$(curl -fs https://api.github.com/repos/actions-runner-controller/actions-runner-controller/releases | jq .[].tag_name | grep actions-runner-controller | cut -d '"' -f 2 | cut -d '-' -f 4)
RELEASE_LIST=$(curl -fs https://api.github.com/repos/${{ github.repository }}/releases | jq .[].tag_name | grep actions-runner-controller | cut -d '"' -f 2 | cut -d '-' -f 4)
LATEST_RELEASED_CHART_VERSION=$(echo $RELEASE_LIST | cut -d ' ' -f 1)
echo "Chart version in master : $NEW_CHART_VERSION"
echo "Latest release chart version : $LATEST_RELEASED_CHART_VERSION"
echo "CHART_VERSION_IN_MASTER=$NEW_CHART_VERSION" >> $GITHUB_ENV
echo "LATEST_CHART_VERSION=$LATEST_RELEASED_CHART_VERSION" >> $GITHUB_ENV
if [[ $NEW_CHART_VERSION != $LATEST_RELEASED_CHART_VERSION ]]; then
echo "::set-output name=publish::true"
echo "publish=true" >> $GITHUB_OUTPUT
else
echo "publish=false" >> $GITHUB_OUTPUT
fi
- name: Job summary
run: |
echo "Chart linting has been completed." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- chart version in master: ${{ env.CHART_VERSION_IN_MASTER }}" >> $GITHUB_STEP_SUMMARY
echo "- latest chart version: ${{ env.LATEST_CHART_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- publish new chart: ${{ steps.publish-chart-step.outputs.publish }}" >> $GITHUB_STEP_SUMMARY
publish-chart:
if: needs.lint-chart.outputs.publish-chart == 'true'
needs: lint-chart
@@ -107,8 +122,11 @@ jobs:
runs-on: ubuntu-latest
permissions:
contents: write # for helm/chart-releaser-action to push chart release and create a release
env:
CHART_TARGET_ORG: actions-runner-controller
CHART_TARGET_REPO: actions-runner-controller.github.io
CHART_TARGET_BRANCH: master
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -120,8 +138,68 @@ jobs:
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.4.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.CHART_TARGET_ORG }}
- name: Install chart-releaser
uses: helm/chart-releaser-action@v1.4.1
with:
install_only: true
install_dir: ${{ github.workspace }}/bin
- name: Package and upload release assets
run: |
cr package \
${{ github.workspace }}/charts/actions-runner-controller/ \
--package-path .cr-release-packages
cr upload \
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--package-path .cr-release-packages \
--token ${{ secrets.GITHUB_TOKEN }}
- name: Generate updated index.yaml
run: |
cr index \
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--index-path ${{ github.workspace }}/index.yaml \
--pages-branch 'gh-pages' \
--pages-index-path 'index.yaml'
# Chart Releaser was never intended to publish to a different repo;
# this workaround is intended to move the index.yaml to the target repo
# where the github pages are hosted
- name: Checkout pages repository
uses: actions/checkout@v3
with:
repository: ${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}
path: ${{ env.CHART_TARGET_REPO }}
ref: ${{ env.CHART_TARGET_BRANCH }}
token: ${{ steps.get_workflow_token.outputs.token }}
- name: Copy index.yaml
run: |
cp ${{ github.workspace }}/index.yaml ${{ env.CHART_TARGET_REPO }}/actions-runner-controller/index.yaml
- name: Commit and push
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
git add .
git commit -m "Update index.yaml"
git push
working-directory: ${{ github.workspace }}/${{ env.CHART_TARGET_REPO }}
- name: Job summary
run: |
echo "New helm chart has been published" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- New [index.yaml](https://github.com/${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}/tree/main/actions-runner-controller) pushed" >> $GITHUB_STEP_SUMMARY


@@ -0,0 +1,201 @@
name: Publish Runner Scale Set Controller Charts
on:
workflow_dispatch:
inputs:
ref:
description: 'The branch, tag or SHA to cut a release from'
required: false
type: string
default: ''
release_tag_name:
description: 'The name to tag the controller image with'
required: true
type: string
default: 'canary'
push_to_registries:
description: 'Push images to registries'
required: true
type: boolean
default: false
publish_gha_runner_scale_set_controller_chart:
description: 'Publish new helm chart for gha-runner-scale-set-controller'
required: true
type: boolean
default: false
publish_gha_runner_scale_set_chart:
description: 'Publish new helm chart for gha-runner-scale-set'
required: true
type: boolean
default: false
env:
HELM_VERSION: v3.8.0
permissions:
packages: write
jobs:
build-push-image:
name: Build and push controller image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
# Pinning v0.9.1 for Buildx and BuildKit v0.10.6
# to avoid BuildKit v0.11, which has a bug causing intermittent
# failures pushing images to GHCR
version: v0.9.1
driver-opts: image=moby/buildkit:v0.10.6
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build & push controller image
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=${{ inputs.release_tag_name }}
push: ${{ inputs.push_to_registries }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Job summary
run: |
echo "The [publish-runner-scale-set.yaml](https://github.com/actions/actions-runner-controller/blob/main/.github/workflows/publish-runner-scale-set.yaml) workflow run was completed successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- Release tag: ${{ inputs.release_tag_name }}" >> $GITHUB_STEP_SUMMARY
echo "- Push to registries: ${{ inputs.push_to_registries }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-gha-runner-scale-set-controller:
if: ${{ inputs.publish_gha_runner_scale_set_controller_chart == true }}
needs: build-push-image
name: Publish Helm chart for gha-runner-scale-set-controller
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for gha-runner-scale-set-controller
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set-controller/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set-controller/ --version="${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-controller-"${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for gha-runner-scale-set-controller published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set-controller Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-gha-runner-scale-set:
if: ${{ inputs.publish_gha_runner_scale_set_chart == true }}
needs: build-push-image
name: Publish Helm chart for gha-runner-scale-set
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for gha-runner-scale-set
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set/ --version="${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-"${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for gha-runner-scale-set published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY

.github/workflows/release-runners.yaml

@@ -0,0 +1,72 @@
name: Runners
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
# We must do a trigger on a push: instead of a types: closed so GitHub Secrets
# are available to the workflow run
push:
branches:
- 'master'
paths:
- 'runner/VERSION'
- '.github/workflows/release-runners.yaml'
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
TARGET_ORG: actions-runner-controller
TARGET_WORKFLOW: release-runners.yaml
DOCKER_VERSION: 20.10.23
RUNNER_CONTAINER_HOOKS_VERSION: 0.2.0
jobs:
build-runners:
name: Trigger Build and Push of Runner Images
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Get runner version
id: runner_version
run: |
version=$(echo -n $(cat runner/VERSION))
echo runner_version=$version >> $GITHUB_OUTPUT
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.TARGET_ORG }}
- name: Trigger Build And Push Runner Images To Registries
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
# Trigger the workflow run
gh workflow run ${{ env.TARGET_WORKFLOW }} -R ${{ env.TARGET_ORG }}/releases \
-f runner_version=${{ env.RUNNER_VERSION }} \
-f docker_version=${{ env.DOCKER_VERSION }} \
-f runner_container_hooks_version=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }} \
-f sha='${{ github.sha }}' \
-f push_to_registries=${{ env.PUSH_TO_REGISTRIES }}
- name: Job summary
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
run: |
echo "The [release-runners.yaml](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/release-runners.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- runner_version: ${{ env.RUNNER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- docker_version: ${{ env.DOCKER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- runner_container_hooks_version: ${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- sha: ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
echo "- push_to_registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/release-runners.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/release-runners.yaml)" >> $GITHUB_STEP_SUMMARY


@@ -0,0 +1,29 @@
name: first-interaction
on:
issues:
types: [opened]
pull_request:
branches: [master]
types: [opened]
jobs:
check_for_first_interaction:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/first-interaction@main
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-message: |
Hello! Thank you for filing an issue.
The maintainers will triage your issue shortly.
In the meantime, please take a look at the [troubleshooting guide](https://github.com/actions/actions-runner-controller/blob/master/TROUBLESHOOTING.md) for bug reports.
If this is a feature request, please review our [contribution guidelines](https://github.com/actions/actions-runner-controller/blob/master/CONTRIBUTING.md).
pr-message: |
Hello! Thank you for your contribution.
Please review our [contribution guidelines](https://github.com/actions/actions-runner-controller/blob/master/CONTRIBUTING.md) to understand the project's testing and code conventions.


@@ -14,7 +14,7 @@ jobs:
issues: write # for actions/stale to close stale issues
pull-requests: write # for actions/stale to close stale PRs
steps:
- uses: actions/stale@v5
- uses: actions/stale@v6
with:
stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
# turn off stale for both issues and PRs


@@ -1,83 +0,0 @@
name: Runners
on:
pull_request:
types:
- opened
- synchronize
- reopened
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- '.github/workflows/runners.yaml'
- '!**.md'
# We must do a trigger on a push: instead of a types: closed so GitHub Secrets
# are available to the workflow run
push:
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- '.github/workflows/runners.yaml'
- '!**.md'
env:
RUNNER_VERSION: 2.294.0
DOCKER_VERSION: 20.10.12
RUNNER_CONTAINER_HOOKS_VERSION: 0.1.2
DOCKERHUB_USERNAME: summerwind
jobs:
build-runners:
name: Build ${{ matrix.name }}-${{ matrix.os-name }}-${{ matrix.os-version }}
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
strategy:
fail-fast: false
matrix:
include:
- name: actions-runner
os-name: ubuntu
os-version: 20.04
- name: actions-runner-dind
os-name: ubuntu
os-version: 20.04
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push Versioned Tags
uses: docker/build-push-action@v3
with:
context: ./runner
file: ./runner/${{ matrix.name }}.dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
build-args: |
RUNNER_VERSION=${{ env.RUNNER_VERSION }}
DOCKER_VERSION=${{ env.DOCKER_VERSION }}
RUNNER_CONTAINER_HOOKS_VERSION=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}
tags: |
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}-${{ steps.vars.outputs.sha_short }}
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:latest
ghcr.io/${{ github.repository }}/${{ matrix.name }}:latest
ghcr.io/${{ github.repository }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}
ghcr.io/${{ github.repository }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}-${{ steps.vars.outputs.sha_short }}
cache-from: type=gha,scope=build-${{ matrix.name }}
cache-to: type=gha,mode=max,scope=build-${{ matrix.name }}

.github/workflows/update-runners.yaml

@@ -0,0 +1,109 @@
# This workflow polls releases from actions/runner and, in case of a new one,
# updates files containing the runner version and opens a pull request.
name: Update runners
on:
schedule:
# run daily
- cron: "0 9 * * *"
workflow_dispatch:
jobs:
# check_versions compares our current version and the latest available runner
# version and sets them as outputs.
check_versions:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
current_version: ${{ steps.versions.outputs.current_version }}
latest_version: ${{ steps.versions.outputs.latest_version }}
steps:
- uses: actions/checkout@v3
- name: Get current and latest versions
id: versions
run: |
CURRENT_VERSION=$(echo -n $(cat runner/VERSION))
echo "Current version: $CURRENT_VERSION"
echo current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
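# For illustration (sample row; exact column layout assumed): the
# `gh release list --limit 1 -R actions/runner` call above prints a line like
#   "v2.303.0  Latest  v2.303.0  2023-03-09"
# from which the grep lookbehind extracts "2.303.0".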
# check_pr checks if a PR for the same update already exists. It only runs if
# runner latest version != our current version. If no existing PR is found,
# it sets a PR name as output.
check_pr:
runs-on: ubuntu-latest
needs: check_versions
if: needs.check_versions.outputs.current_version != needs.check_versions.outputs.latest_version
outputs:
pr_name: ${{ steps.pr_name.outputs.pr_name }}
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: debug
run: |
echo ${{ needs.check_versions.outputs.current_version }}
echo ${{ needs.check_versions.outputs.latest_version }}
- uses: actions/checkout@v3
- name: PR Name
id: pr_name
env:
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
run: |
PR_NAME="Update runner to version ${LATEST_VERSION}"
result=$(gh pr list --search "$PR_NAME" --json number --jq ".[].number" --limit 1)
if [ -z "$result" ]
then
echo "No existing PRs found, setting output with pr_name=$PR_NAME"
echo pr_name=$PR_NAME >> $GITHUB_OUTPUT
else
echo "Found a PR with title '$PR_NAME' already existing: ${{ github.server_url }}/${{ github.repository }}/pull/$result"
fi
# update_version updates runner version in the files listed below, commits
# the changes and opens a pull request as `github-actions` bot.
update_version:
runs-on: ubuntu-latest
needs:
- check_versions
- check_pr
if: needs.check_pr.outputs.pr_name
permissions:
pull-requests: write
contents: write
actions: write
env:
GH_TOKEN: ${{ github.token }}
CURRENT_VERSION: ${{ needs.check_versions.outputs.current_version }}
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
PR_NAME: ${{ needs.check_pr.outputs.pr_name }}
steps:
- uses: actions/checkout@v3
- name: New branch
run: git checkout -b update-runner-$LATEST_VERSION
- name: Update files
run: |
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/VERSION
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" .github/workflows/e2e-test-linux-vm.yaml
- name: Commit changes
run: |
# from https://github.com/orgs/community/discussions/26560
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
git add .
git commit -m "$PR_NAME"
git push -u origin HEAD
- name: Create pull request
run: gh pr create -f


@@ -34,9 +34,9 @@ jobs:
- name: Set-up Go
uses: actions/setup-go@v3
with:
go-version: '1.18.2'
go-version: '1.19'
check-latest: false
- uses: actions/cache@v3
with:
path: ~/go/pkg/mod


@@ -1,12 +1,24 @@
name: Validate Helm Chart
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
@@ -26,7 +38,8 @@ jobs:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.0
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
@@ -52,7 +65,7 @@ jobs:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
uses: helm/chart-testing-action@v2.3.1
- name: Run chart-testing (list-changed)
id: list-changed
@@ -67,7 +80,7 @@ jobs:
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
uses: helm/kind-action@v1.3.0
uses: helm/kind-action@v1.4.0
if: steps.list-changed.outputs.changed == 'true'
# We need cert-manager already installed in the cluster because we assume the CRDs exist
@@ -78,5 +91,6 @@ jobs:
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: |
ct install --config charts/.ci/ct-config.yaml


@@ -0,0 +1,134 @@
name: Validate Helm Chart (gha-runner-scale-set-controller and gha-runner-scale-set)
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.16.1
HELM_VERSION: v3.8.0
permissions:
contents: read
jobs:
validate-chart:
name: Lint Chart
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Set up latest version chart-testing
run: |
echo 'deb [trusted=yes] https://repo.goreleaser.com/apt/ /' | sudo tee /etc/apt/sources.list.d/goreleaser.list
sudo apt update
sudo apt install goreleaser
git clone https://github.com/helm/chart-testing
cd chart-testing
unset CT_CONFIG_DIR
goreleaser build --clean --skip-validate
./dist/chart-testing_linux_amd64_v1/ct version
echo 'Adding ct directory to PATH...'
echo "$RUNNER_TEMP/chart-testing/dist/chart-testing_linux_amd64_v1" >> "$GITHUB_PATH"
echo 'Setting CT_CONFIG_DIR...'
echo "CT_CONFIG_DIR=$RUNNER_TEMP/chart-testing/etc" >> "$GITHUB_ENV"
working-directory: ${{ runner.temp }}
- name: Run chart-testing (list-changed)
id: list-changed
run: |
ct version
changed=$(ct list-changed --config charts/.ci/ct-config-gha.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config-gha.yaml
- name: Set up docker buildx
uses: docker/setup-buildx-action@v2
if: steps.list-changed.outputs.changed == 'true'
with:
version: latest
- name: Build controller image
uses: docker/build-push-action@v3
if: steps.list-changed.outputs.changed == 'true'
with:
file: Dockerfile
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=test-arc
VERSION=dev
tags: |
test-arc:dev
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Create kind cluster
uses: helm/kind-action@v1.4.0
if: steps.list-changed.outputs.changed == 'true'
with:
cluster_name: chart-testing
- name: Load image into cluster
if: steps.list-changed.outputs.changed == 'true'
run: |
export DOCKER_IMAGE_NAME=test-arc
export VERSION=dev
export IMG_RESULT=load
make docker-buildx
kind load docker-image test-arc:dev --name chart-testing
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: |
ct install --config charts/.ci/ct-config-gha.yaml


@@ -6,13 +6,33 @@ on:
- '**'
paths:
- 'runner/**'
- 'test/entrypoint/**'
- 'test/startup/**'
- '!**.md'
permissions:
contents: read
jobs:
shellcheck:
name: runner / shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: shellcheck
uses: reviewdog/action-shellcheck@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
path: "./runner"
pattern: |
*.sh
*.bash
update-status
# Make this consistent with `make shellcheck`
shellcheck_flags: "--shell bash --source-path runner"
exclude: "./.git/*"
check_all_files_with_shebangs: "false"
# Set this to "true" once we addressed all the shellcheck findings
fail_on_error: "false"
test-runner-entrypoint:
name: Test entrypoint
runs-on: ubuntu-latest
@@ -22,4 +42,4 @@ jobs:
- name: Run tests
run: |
make acceptance/runner/entrypoint
make acceptance/runner/startup

.gitignore

@@ -29,6 +29,7 @@ bin
.env
.test.env
*.pem
!github/actions/testdata/*.pem
# OS
.DS_STORE

.golangci.yaml

@@ -0,0 +1,17 @@
run:
timeout: 3m
output:
format: github-actions
linters-settings:
errcheck:
exclude-functions:
- (net/http.ResponseWriter).Write
- (*net/http.Server).Shutdown
- (*github.com/actions/actions-runner-controller/simulator.VisibleRunnerGroups).Add
- (*github.com/actions/actions-runner-controller/testing.Kind).Stop
issues:
exclude-rules:
- path: controllers/suite_test.go
linters:
- staticcheck
text: "SA1019"


@@ -1,2 +1,2 @@
# actions-runner-controller maintainers
* @mumoshu @toast-gear
* @mumoshu @toast-gear @actions/actions-runtime @nikola-jokic

CODE_OF_CONDUCT.md

@@ -0,0 +1,74 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at opensource@github.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/


@@ -1,85 +1,53 @@
## Contributing
# Contribution Guide
### Testing Controller Built from a Pull Request
- [Contribution Guide](#contribution-guide)
- [Welcome](#welcome)
- [Before contributing code](#before-contributing-code)
- [How to Contribute a Patch](#how-to-contribute-a-patch)
- [Developing the Controller](#developing-the-controller)
- [Developing the Runners](#developing-the-runners)
- [Tests](#tests)
- [Running Ginkgo Tests](#running-ginkgo-tests)
- [Running End to End Tests](#running-end-to-end-tests)
- [Rerunning a failed test](#rerunning-a-failed-test)
- [Testing in a non-kind cluster](#testing-in-a-non-kind-cluster)
- [Code conventions](#code-conventions)
- [Opening the Pull Request](#opening-the-pull-request)
- [Helm Version Changes](#helm-version-changes)
- [Testing Controller Built from a Pull Request](#testing-controller-built-from-a-pull-request)
We always appreciate your help in testing open pull requests by deploying custom builds of actions-runner-controller onto your own environment, so that we are extra sure we didn't break anything.
## Welcome
This is especially true when the pull request is about GitHub Enterprise, both GHEC and GHES, as [maintainers don't have GitHub Enterprise environments for testing](/README.md#github-enterprise-support).
This document is the single source of truth for how to contribute to the code base.
Feel free to browse the [open issues](https://github.com/actions/actions-runner-controller/issues) or file a new one, all feedback is welcome!
By reading this guide, we hope to give you all of the information you need to be able to pick up issues, contribute new features, and get your work
reviewed and merged.
The process looks like the following:
## Before contributing code
- Clone this repository locally
- Checkout the branch. If you use the `gh` command, run `gh pr checkout $PR_NUMBER`
- Run `NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push` for a custom container image build
- Update your actions-runner-controller's controller-manager deployment to use the new image, `$DOCKER_USER/actions-runner-controller:canary`
We welcome code patches, but to make sure things are well coordinated you should discuss any significant change before starting the work.
The maintainers ask that you signal your intention to contribute to the project using the issue tracker.
If there is an existing issue that you want to work on, please let us know so we can get it assigned to you.
If you noticed a bug or want to add a new feature, there are issue templates you can fill out.
Please also note that you need to replace `$DOCKER_USER` with your own DockerHub account name.
When filing a feature request, the maintainers will review the change and give you a decision on whether we are willing to accept the feature into the project.
For significantly large and/or complex features, we may request that you write up an architectural decision record ([ADR](https://github.blog/2020-08-13-why-write-adrs/)) detailing the change.
Please use the [template](/adrs/0000-TEMPLATE.md) as guidance.
### How to Contribute a Patch
<!--
TODO: Add a pre-requisite section describing what developers should
install in order get started on ARC.
-->
How you should go about it depends on what you are patching. Below are some guides on how to test patches locally as well as develop the controller and runners.
## How to Contribute a Patch
When submitting a PR for a change, please provide evidence that your change works, as we still need to work on improving the CI of the project. Some resources are provided to help achieve this; see this guide for details.
How you should go about it depends on what you are patching.
Below are some guides on how to test patches locally as well as develop the controller and runners.
#### Running an End to End Test
When submitting a PR for a change, please provide evidence that your change works, as we still need to work on improving the CI of the project.
Some resources are provided to help achieve this; see this guide for details.
> **Notes for Ubuntu 20.04+ users**
>
> If you're using Ubuntu 20.04 or greater, you might have installed `docker` with `snap`.
>
> If you want to stick with `snap`-provided `docker`, do not forget to set `TMPDIR` to
> somewhere under `$HOME`.
> Otherwise `kind load docker-image` fails while running `docker save`.
> See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap for more information.
To test your local changes against both PAT and App based authentication, please run the `acceptance` make target with the authentication configuration details provided:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make acceptance
```
**Rerunning a failed test**
When one of the tests run by `make acceptance` fails, you'll probably want to rerun only the failed one.
You can do so with `make acceptance/run`, setting the combination of `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm|kubectl` and `ACCEPTANCE_TEST_SECRET_TYPE=token|app` values that failed (note that you only need to set the corresponding authentication configuration in this case).
In the example below, we rerun the test for the combination `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token` only:
```shell
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm
ACCEPTANCE_TEST_SECRET_TYPE=token \
make acceptance/run
```
**Testing in a non-kind cluster**
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build acceptance/setup \
acceptance/deploy \
acceptance/tests
```
#### Developing the Controller
### Developing the Controller
Rerunning the whole acceptance test suite from scratch on every little change to the controller, the runner, and the chart would be counter-productive.
@@ -119,13 +87,14 @@ NAME=$DOCKER_USER/actions-runner make \
(kubectl get po -ojsonpath={.items[*].metadata.name} | xargs -n1 kubectl delete po)
```
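For controller changes, a faster iteration loop might look like the sketch below, reusing the Makefile's acceptance targets; `your-dockerhub-user` is a placeholder, and this assumes the kind cluster and secrets from `acceptance/setup` already exist:

```shell
# Rebuild the controller image, load it into the kind cluster, and redeploy
DOCKER_USER=your-dockerhub-user VERSION=dev \
  make docker-build acceptance/load acceptance/deploy
# Recreate the pods so they pick up the new image
kubectl get po -o jsonpath='{.items[*].metadata.name}' | xargs -n1 kubectl delete po
```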
#### Developing the Runners
### Developing the Runners
**Tests**
#### Tests
A set of example pipelines (./acceptance/pipelines) is provided in this repository which you can use to validate your runners are working as expected. When raising a PR, please run the relevant suites to prove your change hasn't broken anything.
A set of example pipelines (./acceptance/pipelines) is provided in this repository; you can use them to validate that your runners are working as expected.
When raising a PR, please run the relevant suites to prove your change hasn't broken anything.
**Running Ginkgo Tests**
#### Running Ginkgo Tests
You can run the integration test suite that is written in Ginkgo with:
@@ -135,13 +104,14 @@ make test-with-deps
This will first install a few binaries required to set up the integration test environment and then run `go test` to start the Ginkgo tests.
If you don't want to use `make`, like when you're running tests from your IDE, install the required binaries to `/usr/local/kubebuilder/bin`. That's the directory in which controller-runtime's `envtest` framework locates the binaries.
If you don't want to use `make`, like when you're running tests from your IDE, install the required binaries to `/usr/local/kubebuilder/bin`.
That's the directory in which controller-runtime's `envtest` framework locates the binaries.
```shell
sudo mkdir -p /usr/local/kubebuilder/bin
make kube-apiserver etcd
sudo mv test-assets/{etcd,kube-apiserver} /usr/local/kubebuilder/bin/
go test -v -run TestAPIs github.com/actions-runner-controller/actions-runner-controller/controllers
go test -v -run TestAPIs github.com/actions/actions-runner-controller/controllers/actions.summerwind.net
```
To run Ginkgo tests selectively, set the pattern of target test names to `GINKGO_FOCUS`.
@@ -149,9 +119,101 @@ All the Ginkgo tests that match `GINKGO_FOCUS` will be run.
```shell
GINKGO_FOCUS='[It] should create a new Runner resource from the specified template, add a another Runner on replicas increased, and removes all the replicas when set to 0' \
go test -v -run TestAPIs github.com/actions-runner-controller/actions-runner-controller/controllers
go test -v -run TestAPIs github.com/actions/actions-runner-controller/controllers/actions.summerwind.net
```
#### Helm Version Bumps
### Running End to End Tests
In general we ask you not to bump the version in your PR; the maintainers manage the publishing of new charts.
> **Notes for Ubuntu 20.04+ users**
>
> If you're using Ubuntu 20.04 or greater, you might have installed `docker` with `snap`.
>
> If you want to stick with `snap`-provided `docker`, do not forget to set `TMPDIR` to somewhere under `$HOME`.
> Otherwise `kind load docker-image` fails while running `docker save`.
> See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap for more information.
To test your local changes against both PAT and App based authentication, please run the `acceptance` make target with the authentication configuration details provided:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make acceptance
```
#### Rerunning a failed test
When one of the tests run by `make acceptance` fails, you'll probably want to rerun only the failed one.
You can do so with `make acceptance/run`, setting the combination of `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm|kubectl` and `ACCEPTANCE_TEST_SECRET_TYPE=token|app` values that failed (note that you only need to set the corresponding authentication configuration in this case).
In the example below, we rerun the test for the combination `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token` only:
```shell
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make acceptance/run
```
#### Testing in a non-kind cluster
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build acceptance/setup \
acceptance/deploy \
acceptance/tests
```
### Code conventions
Before shipping your PR, please check the following items to make sure CI passes; a consolidated sketch follows the list.
- Run `go mod tidy` if you made changes to dependencies.
- Format the code using `gofmt`
- Run the `golangci-lint` tool locally.
- We recommend you use `make lint` to run the tool using a Docker container matching the CI version.
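Putting those checks together, a minimal pre-flight run might look like this sketch (only `make lint` is an official target here):

```shell
go mod tidy    # sync go.mod/go.sum after dependency changes
go fmt ./...   # format the code
make lint      # run golangci-lint in a Docker container matching the CI version
```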
### Opening the Pull Request
Open a pull request and include the relevant issue number in the description.
## Helm Version Changes
In general we ask you not to bump the version in your PR.
The maintainers will manage releases and publishing new charts.
## Testing Controller Built from a Pull Request
We always appreciate your help in testing open pull requests by deploying custom builds of actions-runner-controller onto your own environment, so that we are extra sure we didn't break anything.
It is especially true when the pull request is about GitHub Enterprise, both GHEC and GHES, as [maintainers don't have GitHub Enterprise environments for testing](docs/about-arc.md#github-enterprise-support).
The process looks like this:
- Clone this repository locally
- Check out the branch. If you use the `gh` command, run `gh pr checkout $PR_NUMBER`
- Run `NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push` for a custom container image build
- Update your actions-runner-controller's controller-manager deployment to use the new image, `$DOCKER_USER/actions-runner-controller:canary`
Please also note that you need to replace `$DOCKER_USER` with your own DockerHub account name.
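Putting the steps above together, a sketch of the whole flow; the namespace, deployment, and container names below assume a default ARC installation and may differ in yours:

```shell
gh pr checkout $PR_NUMBER
NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push
# Point the controller-manager deployment at the custom build
kubectl -n actions-runner-system set image deployment/actions-runner-controller \
  manager=$DOCKER_USER/actions-runner-controller:canary
```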
## Release process
Only the maintainers can release a new version of actions-runner-controller and publish new versions of the helm charts and runner images.
All release workflows have been moved to [actions-runner-controller/releases](https://github.com/actions-runner-controller/releases) since the packages are owned by the former organization.

Dockerfile

@@ -1,11 +1,10 @@
# Build the manager binary
FROM --platform=$BUILDPLATFORM golang:1.18.3 as builder
FROM --platform=$BUILDPLATFORM golang:1.19.4 as builder
WORKDIR /workspace
# Make it runnable on a distroless image/without libc
ENV CGO_ENABLED=0
# Copy the Go Modules manifests
COPY go.mod go.sum ./
@@ -25,20 +24,23 @@ RUN go mod download
# With the above command,
# TARGETOS can be "linux", TARGETARCH can be "amd64", "arm64", and "arm", TARGETVARIANT can be "v7".
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT VERSION=dev
# We intentionally avoid `--mount=type=cache,mode=0777,target=/go/pkg/mod` in the `go mod download` and the `go build` runs
# to avoid https://github.com/moby/buildkit/issues/2334
# We can use docker layer cache so the build is fast enough anyway
# We also use per-platform GOCACHE for the same reason.
env GOCACHE /build/${TARGETPLATFORM}/root/.cache/go-build
ENV GOCACHE /build/${TARGETPLATFORM}/root/.cache/go-build
# Build
RUN --mount=target=. \
--mount=type=cache,mode=0777,target=${GOCACHE} \
export GOOS=${TARGETOS} GOARCH=${TARGETARCH} GOARM=${TARGETVARIANT#v} && \
go build -o /out/manager main.go && \
go build -o /out/github-webhook-server ./cmd/githubwebhookserver
go build -trimpath -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=${VERSION}'" -o /out/manager main.go && \
go build -trimpath -ldflags="-s -w" -o /out/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener && \
go build -trimpath -ldflags="-s -w" -o /out/github-webhook-server ./cmd/githubwebhookserver && \
go build -trimpath -ldflags="-s -w" -o /out/actions-metrics-server ./cmd/actionsmetricsserver && \
go build -trimpath -ldflags="-s -w" -o /out/sleep ./cmd/sleep
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
@@ -48,7 +50,10 @@ WORKDIR /
COPY --from=builder /out/manager .
COPY --from=builder /out/github-webhook-server .
COPY --from=builder /out/actions-metrics-server .
COPY --from=builder /out/github-runnerscaleset-listener .
COPY --from=builder /out/sleep .
USER nonroot:nonroot
USER 65532:65532
ENTRYPOINT ["/manager"]
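As an aside, the `TARGETPLATFORM`/`TARGETOS`/`TARGETARCH`/`TARGETVARIANT` build args above are populated automatically by BuildKit during multi-platform builds; a minimal sketch of such a build (the image name is a placeholder):

```shell
docker buildx build --platform linux/amd64,linux/arm64 \
  --build-arg VERSION=dev \
  -t example/actions-runner-controller:dev .
```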

Makefile

@@ -1,11 +1,11 @@
ifdef DOCKER_USER
NAME ?= ${DOCKER_USER}/actions-runner-controller
DOCKER_IMAGE_NAME ?= ${DOCKER_USER}/actions-runner-controller
else
NAME ?= summerwind/actions-runner-controller
DOCKER_IMAGE_NAME ?= summerwind/actions-runner-controller
endif
DOCKER_USER ?= $(shell echo ${NAME} | cut -d / -f1)
VERSION ?= latest
RUNNER_VERSION ?= 2.294.0
DOCKER_USER ?= $(shell echo ${DOCKER_IMAGE_NAME} | cut -d / -f1)
VERSION ?= dev
RUNNER_VERSION ?= 2.303.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
@@ -19,6 +19,7 @@ KUBECONTEXT ?= kind-acceptance
CLUSTER ?= acceptance
CERT_MANAGER_VERSION ?= v1.1.1
KUBE_RBAC_PROXY_VERSION ?= v0.11.0
SHELLCHECK_VERSION ?= 0.8.0
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:generateEmbeddedObjectMeta=true"
@@ -31,6 +32,20 @@ GOBIN=$(shell go env GOBIN)
endif
TEST_ASSETS=$(PWD)/test-assets
TOOLS_PATH=$(PWD)/.tools
OS_NAME := $(shell uname -s | tr A-Z a-z)
# The etcd packages that coreos maintain use different extensions for each *nix OS on their github release page.
# ETCD_EXTENSION: the storage format file extension listed on the release page.
# EXTRACT_COMMAND: the appropriate CLI command for extracting this file format.
ifeq ($(OS_NAME), darwin)
ETCD_EXTENSION:=zip
EXTRACT_COMMAND:=unzip
else
ETCD_EXTENSION:=tar.gz
EXTRACT_COMMAND:=tar -xzf
endif
# default list of platforms for which multiarch image is built
ifeq (${PLATFORMS}, )
@@ -51,12 +66,15 @@ endif
all: manager
lint:
docker run --rm -v $(PWD):/app -w /app golangci/golangci-lint:v1.49.0 golangci-lint run
GO_TEST_ARGS ?= -short
# Run tests
test: generate fmt vet manifests
go test $(GO_TEST_ARGS) ./... -coverprofile cover.out
go test -fuzz=Fuzz -fuzztime=10s -run=Fuzz* ./controllers
test: generate fmt vet manifests shellcheck
go test $(GO_TEST_ARGS) `go list ./... | grep -v ./test_e2e_arc` -coverprofile cover.out
go test -fuzz=Fuzz -fuzztime=10s -run=Fuzz* ./controllers/actions.summerwind.net
test-with-deps: kube-apiserver etcd kubectl
# See https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest#pkg-constants
@@ -68,14 +86,20 @@ test-with-deps: kube-apiserver etcd kubectl
# Build manager binary
manager: generate fmt vet
go build -o bin/manager main.go
go build -o bin/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener
# Run against the configured Kubernetes cluster in ~/.kube/config
run: generate fmt vet manifests
go run ./main.go
run-scaleset: generate fmt vet
CONTROLLER_MANAGER_POD_NAMESPACE=default \
CONTROLLER_MANAGER_CONTAINER_IMAGE="${DOCKER_IMAGE_NAME}:${VERSION}" \
go run ./main.go --auto-scaling-runner-set-only
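# For local iteration on the scale-set mode, the run-scaleset target above runs the
# manager out-of-cluster against your current kubeconfig, e.g. (a sketch; the
# DOCKER_USER value only feeds the listener image name):
#   DOCKER_USER=example make run-scaleset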
# Install CRDs into a cluster
install: manifests
kustomize build config/crd | kubectl apply -f -
kustomize build config/crd | kubectl apply --server-side -f -
# Uninstall CRDs from a cluster
uninstall: manifests
@@ -83,8 +107,8 @@ uninstall: manifests
# Deploy controller in the configured Kubernetes cluster in ~/.kube/config
deploy: manifests
cd config/manager && kustomize edit set image controller=${NAME}:${VERSION}
kustomize build config/default | kubectl apply -f -
cd config/manager && kustomize edit set image controller=${DOCKER_IMAGE_NAME}:${VERSION}
kustomize build config/default | kubectl apply --server-side -f -
# Generate manifests e.g. CRD, RBAC etc.
manifests: manifests-gen-crds chart-crds
@@ -92,11 +116,77 @@ manifests: manifests-gen-crds chart-crds
manifests-gen-crds: controller-gen yq
$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
for YAMLFILE in config/crd/bases/actions*.yaml; do \
$(YQ) write --inplace "$$YAMLFILE" spec.preserveUnknownFields false; \
$(YQ) '.spec.preserveUnknownFields = false' --inplace "$$YAMLFILE" ; \
done
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-type
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-map-keys
manifests-gen-crds-fix: DELETE_KEY ?=
manifests-gen-crds-fix:
#runners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
#runnerreplicasets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
#runnerdeployments
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
#runnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumeClaimTemplates.items.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
#autoscalingrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
#ephemeralrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
# ephemeralrunners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
chart-crds:
cp config/crd/bases/*.yaml charts/actions-runner-controller/crds/
cp config/crd/bases/actions.github.com_autoscalingrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_autoscalinglisteners.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunners.yaml charts/gha-runner-scale-set-controller/crds/
rm charts/actions-runner-controller/crds/actions.github.com_autoscalingrunnersets.yaml
rm charts/actions-runner-controller/crds/actions.github.com_autoscalinglisteners.yaml
rm charts/actions-runner-controller/crds/actions.github.com_ephemeralrunnersets.yaml
rm charts/actions-runner-controller/crds/actions.github.com_ephemeralrunners.yaml
# Run go fmt against code
fmt:
@@ -110,6 +200,10 @@ vet:
generate: controller-gen
$(CONTROLLER_GEN) object:headerFile=./hack/boilerplate.go.txt paths="./..."
# Run shellcheck on runner scripts
shellcheck: shellcheck-install
$(TOOLS_PATH)/shellcheck --shell bash --source-path runner runner/*.sh
docker-buildx:
export DOCKER_CLI_EXPERIMENTAL=enabled ;\
export DOCKER_BUILDKIT=1
@@ -119,18 +213,19 @@ docker-buildx:
docker buildx build --platform ${PLATFORMS} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-t "${NAME}:${VERSION}" \
--build-arg VERSION=${VERSION} \
-t "${DOCKER_IMAGE_NAME}:${VERSION}" \
-f Dockerfile \
. ${PUSH_ARG}
# Push the docker image
docker-push:
docker push ${NAME}:${VERSION}
docker push ${DOCKER_IMAGE_NAME}:${VERSION}
docker push ${RUNNER_NAME}:${RUNNER_TAG}
# Generate the release manifest file
release: manifests
cd config/manager && kustomize edit set image controller=${NAME}:${VERSION}
cd config/manager && kustomize edit set image controller=${DOCKER_IMAGE_NAME}:${VERSION}
mkdir -p release
kustomize build config/default > release/actions-runner-controller.yaml
@@ -154,7 +249,7 @@ acceptance/kind:
# Otherwise `load docker-image` fail while running `docker save`.
# See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap
acceptance/load:
kind load docker-image ${NAME}:${VERSION} --name ${CLUSTER}
kind load docker-image ${DOCKER_IMAGE_NAME}:${VERSION} --name ${CLUSTER}
kind load docker-image quay.io/brancz/kube-rbac-proxy:$(KUBE_RBAC_PROXY_VERSION) --name ${CLUSTER}
kind load docker-image ${RUNNER_NAME}:${RUNNER_TAG} --name ${CLUSTER}
kind load docker-image docker:dind --name ${CLUSTER}
@@ -184,7 +279,7 @@ acceptance/teardown:
kind delete cluster --name ${CLUSTER}
acceptance/deploy:
NAME=${NAME} DOCKER_USER=${DOCKER_USER} VERSION=${VERSION} RUNNER_NAME=${RUNNER_NAME} RUNNER_TAG=${RUNNER_TAG} TEST_REPO=${TEST_REPO} \
DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME} DOCKER_USER=${DOCKER_USER} VERSION=${VERSION} RUNNER_NAME=${RUNNER_NAME} RUNNER_TAG=${RUNNER_TAG} TEST_REPO=${TEST_REPO} \
TEST_ORG=${TEST_ORG} TEST_ORG_REPO=${TEST_ORG_REPO} SYNC_PERIOD=${SYNC_PERIOD} \
USE_RUNNERSET=${USE_RUNNERSET} \
TEST_EPHEMERAL=${TEST_EPHEMERAL} \
@@ -193,8 +288,8 @@ acceptance/deploy:
acceptance/tests:
acceptance/checks.sh
acceptance/runner/entrypoint:
cd test/entrypoint/ && bash test.sh
acceptance/runner/startup:
cd test/startup/ && bash test.sh
# We use -count=1 instead of `go clean -testcache`
# See https://terratest.gruntwork.io/docs/testing-best-practices/avoid-test-caching/
@@ -242,13 +337,30 @@ ifeq (, $(wildcard $(GOBIN)/yq))
YQ_TMP_DIR=$$(mktemp -d) ;\
cd $$YQ_TMP_DIR ;\
go mod init tmp ;\
go install github.com/mikefarah/yq/v3@3.4.0 ;\
go install github.com/mikefarah/yq/v4@v4.25.3 ;\
rm -rf $$YQ_TMP_DIR ;\
}
endif
YQ=$(GOBIN)/yq
OS_NAME := $(shell uname -s | tr A-Z a-z)
# find or download shellcheck
# download shellcheck if necessary
shellcheck-install:
ifeq (, $(wildcard $(TOOLS_PATH)/shellcheck))
echo "Downloading shellcheck"
@{ \
set -e ;\
SHELLCHECK_TMP_DIR=$$(mktemp -d) ;\
cd $$SHELLCHECK_TMP_DIR ;\
curl -LO https://github.com/koalaman/shellcheck/releases/download/v$(SHELLCHECK_VERSION)/shellcheck-v$(SHELLCHECK_VERSION).$(OS_NAME).x86_64.tar.xz ;\
tar Jxvf shellcheck-v$(SHELLCHECK_VERSION).$(OS_NAME).x86_64.tar.xz ;\
cd $(CURDIR) ;\
mkdir -p $(TOOLS_PATH) ;\
mv $$SHELLCHECK_TMP_DIR/shellcheck-v$(SHELLCHECK_VERSION)/shellcheck $(TOOLS_PATH)/ ;\
rm -rf $$SHELLCHECK_TMP_DIR ;\
}
endif
SHELLCHECK=$(TOOLS_PATH)/shellcheck
# find or download etcd
etcd:
@@ -258,12 +370,10 @@ ifeq (, $(wildcard $(TEST_ASSETS)/etcd))
set -xe ;\
INSTALL_TMP_DIR=$$(mktemp -d) ;\
cd $$INSTALL_TMP_DIR ;\
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
wget https://github.com/coreos/etcd/releases/download/v3.4.22/etcd-v3.4.22-$(OS_NAME)-amd64.$(ETCD_EXTENSION);\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/etcd $(TEST_ASSETS)/etcd ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kube-apiserver $(TEST_ASSETS)/kube-apiserver ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kubectl $(TEST_ASSETS)/kubectl ;\
$(EXTRACT_COMMAND) etcd-v3.4.22-$(OS_NAME)-amd64.$(ETCD_EXTENSION) ;\
mv etcd-v3.4.22-$(OS_NAME)-amd64/etcd $(TEST_ASSETS)/etcd ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
ETCD_BIN=$(TEST_ASSETS)/etcd
@@ -285,9 +395,7 @@ ifeq (, $(wildcard $(TEST_ASSETS)/kube-apiserver))
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/etcd $(TEST_ASSETS)/etcd ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kube-apiserver $(TEST_ASSETS)/kube-apiserver ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kubectl $(TEST_ASSETS)/kubectl ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
KUBE_APISERVER_BIN=$(TEST_ASSETS)/kube-apiserver
@@ -309,8 +417,6 @@ ifeq (, $(wildcard $(TEST_ASSETS)/kubectl))
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/etcd $(TEST_ASSETS)/etcd ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kube-apiserver $(TEST_ASSETS)/kube-apiserver ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kubectl $(TEST_ASSETS)/kubectl ;\
rm -rf $$INSTALL_TMP_DIR ;\
}

PROJECT

@@ -1,5 +1,5 @@
domain: summerwind.dev
repo: github.com/actions-runner-controller/actions-runner-controller
repo: github.com/actions/actions-runner-controller
resources:
- group: actions
kind: Runner
@@ -10,4 +10,16 @@ resources:
- group: actions
kind: RunnerDeployment
version: v1alpha1
- group: actions
kind: AutoscalingRunnerSet
version: v1alpha1
- group: actions
kind: EphemeralRunnerSet
version: v1alpha1
- group: actions
kind: EphemeralRunner
version: v1alpha1
- group: actions
kind: AutoscalingListener
version: v1alpha1
version: "2"

README.md

File diff suppressed because it is too large

SECURITY.md

@@ -1,22 +1,31 @@
# Security Policy
Thanks for helping make GitHub safe for everyone.
## Sponsoring the project
## Security
This project is maintained by a small team of two and therefore lacks the resources to provide security fixes in a timely manner.
GitHub takes the security of our software products and services seriously, including all of the open source code repositories managed through our GitHub organizations, such as [GitHub](https://github.com/GitHub).
If you have important business(es) relying on this project, please consider sponsoring it so that the maintainer(s) can commit to providing such a service.
Even though [open source repositories are outside of the scope of our bug bounty program](https://bounty.github.com/index.html#scope) and therefore not eligible for bounty rewards, we will ensure that your finding gets passed along to the appropriate maintainers for remediation.
Please refer to https://github.com/sponsors/actions-runner-controller for available tiers.
## Reporting Security Issues
## Supported Versions
If you believe you have found a security vulnerability in any GitHub-owned repository, please report it to us through coordinated disclosure.
| Version | Supported |
| ------- | ------------------ |
| 0.23.0 | :white_check_mark: |
| < 0.23.0 | :x: |
**Please do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.**
## Reporting a Vulnerability
Instead, please send an email to opensource-security[@]github.com.
To report a security issue, please email ykuoka+arcsecurity(at)gmail.com with a description of the issue, the steps you took to create the issue, affected versions, and, if known, mitigations for the issue.
Please include as much of the information listed below as you can to help us better understand and resolve the issue:
A maintainer will try to respond within 5 working days. If the issue is confirmed as a vulnerability, a Security Advisory will be opened. This project tries to follow a 90 day disclosure timeline.
* The type of issue (e.g., buffer overflow, SQL injection, or cross-site scripting)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
## Policy
See [GitHub's Safe Harbor Policy](https://docs.github.com/en/github/site-policy/github-bug-bounty-program-legal-safe-harbor#1-safe-harbor-terms)

TROUBLESHOOTING.md

@@ -4,19 +4,21 @@
* [Installation](#installation)
* [InternalError when calling webhook: context deadline exceeded](#internalerror-when-calling-webhook-context-deadline-exceeded)
* [Invalid header field value](#invalid-header-field-value)
* [Helm chart install failure: certificate signed by unknown authority](#helm-chart-install-failure-certificate-signed-by-unknown-authority)
* [Operations](#operations)
* [Stuck runner kind or backing pod](#stuck-runner-kind-or-backing-pod)
* [Delay in jobs being allocated to runners](#delay-in-jobs-being-allocated-to-runners)
* [Runner coming up before network available](#runner-coming-up-before-network-available)
* [Outgoing network action hangs indefinitely](#outgoing-network-action-hangs-indefinitely)
* [Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns](#unable-to-scale-to-zero-with-totalnumberofqueuedandinprogressworkflowruns)
* [Slow / failure to boot dind sidecar (default runner)](#slow--failure-to-boot-dind-sidecar-default-runner)
## Tools
A list of tools which are helpful for troubleshooting:
* https://github.com/rewanthtammana/kubectl-fields Kubernetes resources hierarchy parsing tool
* https://github.com/stern/stern Multi pod and container log tailing for Kubernetes
* [Kubernetes resources hierarchy parsing tool `kubectl-fields`](https://github.com/rewanthtammana/kubectl-fields)
* [Multi pod and container log tailing for Kubernetes `stern`](https://github.com/stern/stern)
## Installation
@@ -28,7 +30,7 @@ Troubleshooting runbooks that relate to ARC installation problems
This issue can come up for various reasons like leftovers from previous installations or not being able to access the K8s service's clusterIP associated with the admission webhook server (of ARC).
```
```text
Internal error occurred: failed calling webhook "mutate.runnerdeployment.actions.summerwind.dev":
Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment?timeout=10s": context deadline exceeded
```
@@ -37,22 +39,24 @@ Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mu
First we will try the common solution of checking webhook leftovers from previous installations:
1. ```bash
   kubectl get validatingwebhookconfiguration -A
   kubectl get mutatingwebhookconfiguration -A
   ```
2. If you see any webhooks related to actions-runner-controller, delete them:
```bash
kubectl delete mutatingwebhookconfiguration actions-runner-controller-mutating-webhook-configuration
kubectl delete validatingwebhookconfiguration actions-runner-controller-validating-webhook-configuration
```
If that didn't work, then your K8s control-plane is probably unable to access the clusterIP of the K8s service associated with the admission webhook server:
1. You're running the apiserver as a binary and you didn't make service cluster IPs available to the host network.
2. You're running the apiserver in a pod, but your pod network (i.e. the CNI plugin installation and config) is broken, so pods on the K8s control-plane nodes (like kube-apiserver) can't access ARC's admission webhook server pod(s), which probably run on data-plane nodes.
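To narrow down the second case, you can first verify that the webhook Service resolves to healthy endpoints (a sketch; the resource names assume a default ARC installation and may differ in yours):

```shell
kubectl -n actions-runner-system get service actions-runner-controller-webhook
kubectl -n actions-runner-system get endpoints actions-runner-controller-webhook
```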
Another reason could be GKE's firewall settings: you may run into the following errors when trying to deploy runners on a private GKE cluster:
To fix this, you may either:
@@ -61,7 +65,7 @@ To fix this, you may either:
```sh
# With helm, you'd set `webhookPort` to the port number of your choice
# See https://github.com/actions-runner-controller/actions-runner-controller/pull/1410/files for more information
# See https://github.com/actions/actions-runner-controller/pull/1410/files for more information
helm upgrade --install --namespace actions-runner-system --create-namespace \
--wait actions-runner-controller actions-runner-controller/actions-runner-controller \
--set webhookPort=10250
@@ -91,7 +95,7 @@ To fix this, you may either:
**Problem**
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error
{
"controller": "runner",
"request": "actions-runner-system/runner-deployment-dk7q8-dk5c9",
@@ -102,9 +106,41 @@ To fix this, you may either:
**Solution**
Your base64'ed PAT token has a newline at the end; it needs to be created without a trailing `\n`. Either:
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs using the shell and documented flags
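For reference, a sketch of the latter approach; the `actions-runner-system` namespace and `controller-manager` secret name assume a default installation, and `--from-literal` avoids the trailing-newline pitfall entirely:

```shell
kubectl -n actions-runner-system create secret generic controller-manager \
  --from-literal=github_token=${GITHUB_TOKEN}
```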
### Helm chart install failure: certificate signed by unknown authority
**Problem**
```text
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
```
Apparently, it fails while `helm` is creating one of the resources defined in the ARC chart, and the cause is that cert-manager's webhook is not working correctly due to a missing or invalid CA certificate.
If you tail logs from the `cert-manager-cainjector`, you'll see it failing with an error like:
```text
$ kubectl -n cert-manager logs cert-manager-cainjector-7cdbb9c945-g6bt4
I0703 03:31:55.159339 1 start.go:91] "starting" version="v1.1.1" revision="3ac7418070e22c87fae4b22603a6b952f797ae96"
I0703 03:31:55.615061 1 leaderelection.go:243] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election...
I0703 03:32:10.738039 1 leaderelection.go:253] successfully acquired lease kube-system/cert-manager-cainjector-leader-election
I0703 03:32:10.739941 1 recorder.go:52] cert-manager/controller-runtime/manager/events "msg"="Normal" "message"="cert-manager-cainjector-7cdbb9c945-g6bt4_88e4bc70-eded-4343-a6fb-0ddd6434eb55 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"942a021e-364c-461a-978c-f54a95723cdc","apiVersion":"v1","resourceVersion":"1576"} "reason"="LeaderElection"
E0703 03:32:11.192128 1 start.go:119] cert-manager/ca-injector "msg"="manager goroutine exited" "error"=null
I0703 03:32:12.339197 1 request.go:645] Throttling request took 1.047437675s, request: GET:https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
E0703 03:32:13.143790 1 start.go:151] cert-manager/ca-injector "msg"="Error registering certificate based controllers. Retrying after 5 seconds." "error"="no matches for kind \"MutatingWebhookConfiguration\" in version \"admissionregistration.k8s.io/v1beta1\""
Error: error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
```
**Solution**
Your cluster is running Kubernetes 1.22 or greater, which no longer supports the legacy `admissionregistration.k8s.io/v1beta1` API, and your `cert-manager` is not up-to-date, hence it's still trying to use the legacy Kubernetes API.
In many cases, downgrading Kubernetes is not an option. So, just upgrade `cert-manager` to a more recent version that has support for the specific Kubernetes version you're using.
See <https://cert-manager.io/docs/installation/supported-releases/> for the list of available cert-manager versions.
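For example, if cert-manager was installed via its official Helm chart, the upgrade might look like this sketch (`vX.Y.Z` is a placeholder; pick a release that supports your Kubernetes version):

```shell
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --set installCRDs=true --version vX.Y.Z
```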
## Operations
@@ -120,7 +156,7 @@ Sometimes either the runner kind (`kubectl get runners`) or its underlying pod
Remove the finalizer from the relevant runner kind or pod
```
```text
# Get all kind runners and remove the finalizer
$ kubectl get runners --no-headers | awk {'print $1'} | xargs kubectl patch runner --type merge -p '{"metadata":{"finalizers":null}}'
@@ -135,7 +171,7 @@ are in a namespace not shared with anything else_
**Problem**
ARC isn't involved in jobs actually getting allocated to a runner. ARC is responsible for orchestrating runners and the runner lifecycle. Why some people see large delays in job allocation is not clear however it has been https://github.com/actions-runner-controller/actions-runner-controller/issues/1387#issuecomment-1122593984 that this is caused from the self-update process somehow.
ARC isn't involved in jobs actually getting allocated to a runner; ARC is responsible for orchestrating runners and the runner lifecycle. Why some people see large delays in job allocation is not clear, however it has been [confirmed](https://github.com/actions/actions-runner-controller/issues/1387#issuecomment-1122593984) that this is somehow caused by the self-update process.
**Solution**
@@ -162,7 +198,7 @@ spec:
If you're running your action runners on a service mesh like Istio, you might
have problems with runner configuration accompanied by logs like:
```
```text
....
runner Starting Runner listener with startup type: service
runner Started listener process
@@ -177,11 +213,11 @@ configuration script tries to communicate with the network.
More broadly, there are many other circumstances where the runner pod coming up first can cause issues.
**Solution**<br />
**Solution**
> Added originally to help users with older istio instances.
> Newer Istio instances can use Istio's `holdApplicationUntilProxyStarts` attribute ([istio/istio#11130](https://github.com/istio/istio/issues/11130)) to avoid having to delay starting up the runner.
> Please read the discussion in [#592](https://github.com/actions-runner-controller/actions-runner-controller/pull/592) for more information.
> Please read the discussion in [#592](https://github.com/actions/actions-runner-controller/pull/592) for more information.
You can add a delay to the runner's entrypoint script by setting the `STARTUP_DELAY_IN_SECONDS` environment variable for the runner pod. This will cause the script to sleep X seconds; it works with any runner kind.
@@ -199,7 +235,7 @@ spec:
value: "5"
```
## Outgoing network action hangs indefinitely
### Outgoing network action hangs indefinitely
**Problem**
@@ -224,10 +260,30 @@ spec:
env: []
```
There may be more places you need to tweak for MTU.
Please consult issues like #651 for more information.
If the issue still persists, you can set the `ARC_DOCKER_MTU_PROPAGATION` environment variable to propagate the host MTU to networks created
by the GitHub Runner. For instance:
## Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: github-runner
namespace: github-system
spec:
replicas: 6
template:
spec:
dockerMTU: 1400
repository: $username/$repo
env:
- name: ARC_DOCKER_MTU_PROPAGATION
value: "true"
```
You can read the discussion regarding this issue in
[#1406](https://github.com/actions/actions-runner-controller/issues/1046).
### Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns
**Problem**
@@ -235,6 +291,16 @@ HRA doesn't scale the RunnerDeployment to zero, even though you did configure HR
**Solution**
You very likely have some dangling workflow jobs stuck in `queued` or `in_progress` as seen in [#1057](https://github.com/actions-runner-controller/actions-runner-controller/issues/1057#issuecomment-1133439061).
You very likely have some dangling workflow jobs stuck in `queued` or `in_progress` as seen in [#1057](https://github.com/actions/actions-runner-controller/issues/1057#issuecomment-1133439061).
Manually call [the "list workflow runs" API](https://docs.github.com/en/rest/actions/workflow-runs#list-workflow-runs-for-a-repository), and [remove the dangling workflow job(s)](https://docs.github.com/en/rest/actions/workflow-runs#delete-a-workflow-run).
### Slow / failure to boot dind sidecar (default runner)
**Problem**
If you notice that it takes several minutes for the sidecar dind container to be created, or that it exits with an error just after being created, it might indicate that you are experiencing a disk performance issue. You might see the message `failed to reserve container name` when scaling up multiple runners at once. When you SSH onto the Kubernetes node that the problematic pods were scheduled on, you can use tools like `atop`, `htop` or `iotop` to check IO usage and the percentage of CPU time spent in iowait. If you see that disk usage is high (80-100%) and iowait is taking a significant chunk of your CPU time (normally it should not be higher than 10%), it means that performance is being bottlenecked by the slow disk.
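To confirm the diagnosis, a quick check on the affected node might look like this sketch (`iotop` may need to be installed and typically requires root):

```shell
sudo iotop -o   # show only processes currently doing disk I/O
atop 2          # watch disk busy time and the CPU iowait percentage
```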
**Solution**
The solution is to switch to faster storage; if you are experiencing this issue, you are probably using HDD storage, and switching to SSD storage has fixed the problem in reported cases. Most cloud providers offer a list of storage options; just pick something faster than your current disk. For on-prem clusters you will need to invest in some SSDs.

acceptance/argotunnel.sh (new executable file)

@@ -0,0 +1,100 @@
#!/usr/bin/env bash
# See https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel/
kubectl create ns tunnel || :
kubectl -n tunnel delete secret tunnel-credentials || :
kubectl -n tunnel create secret generic tunnel-credentials \
--from-file=credentials.json=$HOME/.cloudflared/${TUNNEL_ID}.json || :
cat <<MANIFEST | kubectl -n tunnel ${OP} -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloudflared
spec:
selector:
matchLabels:
app: cloudflared
replicas: 2 # You could also consider elastic scaling for this deployment
template:
metadata:
labels:
app: cloudflared
spec:
containers:
- name: cloudflared
image: cloudflare/cloudflared:latest
args:
- tunnel
# Points cloudflared to the config file, which configures what
# cloudflared will actually do. This file is created by a ConfigMap
# below.
- --config
- /etc/cloudflared/config/config.yaml
- run
livenessProbe:
httpGet:
# Cloudflared has a /ready endpoint which returns 200 if and only if
# it has an active connection to the edge.
path: /ready
port: 2000
failureThreshold: 1
initialDelaySeconds: 10
periodSeconds: 10
volumeMounts:
- name: config
mountPath: /etc/cloudflared/config
readOnly: true
# Each tunnel has an associated "credentials file" which authorizes machines
# to run the tunnel. cloudflared will read this file from its local filesystem,
# and it'll be stored in a k8s secret.
- name: creds
mountPath: /etc/cloudflared/creds
readOnly: true
volumes:
- name: creds
secret:
secretName: tunnel-credentials
# Create a config.yaml file from the ConfigMap below.
- name: config
configMap:
name: cloudflared
items:
- key: config.yaml
path: config.yaml
---
# This ConfigMap is just a way to define the cloudflared config.yaml file in k8s.
# It's useful to define it in k8s, rather than as a stand-alone .yaml file, because
# this lets you use various k8s templating solutions (e.g. Helm charts) to
# parameterize your config, instead of just using string literals.
apiVersion: v1
kind: ConfigMap
metadata:
name: cloudflared
data:
config.yaml: |
# Name of the tunnel you want to run
tunnel: ${TUNNEL_NAME}
credentials-file: /etc/cloudflared/creds/credentials.json
# Serves the metrics server under /metrics and the readiness server under /ready
metrics: 0.0.0.0:2000
# Autoupdates applied in a k8s pod will be lost when the pod is removed or restarted, so
# autoupdate doesn't make sense in Kubernetes. However, outside of Kubernetes, we strongly
# recommend using autoupdate.
no-autoupdate: true
ingress:
# The first rule proxies traffic to the httpbin sample Service defined in app.yaml
- hostname: ${TUNNEL_HOSTNAME}
service: http://actions-runner-controller-actions-metrics-server.actions-runner-system:80
path: /metrics$
- hostname: ${TUNNEL_HOSTNAME}
service: http://actions-runner-controller-github-webhook-server.actions-runner-system:80
# This rule matches any traffic which didn't match a previous rule, and responds with HTTP 404.
- service: http_status:404
MANIFEST
kubectl -n tunnel delete po -l app=cloudflared || :


@@ -35,14 +35,53 @@ else
echo 'Skipped deploying secret "github-webhook-server". Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
fi
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ] && [ -z "${CREATE_SECRETS_USING_HELM}" ]; then
kubectl -n actions-runner-system delete secret \
actions-metrics-server || :
kubectl -n actions-runner-system create secret generic \
actions-metrics-server \
--from-literal=github_token=${WEBHOOK_GITHUB_TOKEN:?WEBHOOK_GITHUB_TOKEN must not be empty}
else
echo 'Skipped deploying secret "actions-metrics-server". Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
fi
tool=${ACCEPTANCE_TEST_DEPLOYMENT_TOOL}
TEST_ID=${TEST_ID:-default}
if [ "${tool}" == "helm" ]; then
set -v
CHART=${CHART:-charts/actions-runner-controller}
flags=()
if [ "${IMAGE_PULL_SECRET}" != "" ]; then
flags+=( --set imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set image.actionsRunnerImagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set actionsMetricsServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
fi
if [ "${CHART_VERSION}" != "" ]; then
flags+=( --version ${CHART_VERSION})
fi
if [ "${LOG_FORMAT}" != "" ]; then
flags+=( --set logFormat=${LOG_FORMAT})
flags+=( --set githubWebhookServer.logFormat=${LOG_FORMAT})
flags+=( --set actionsMetricsServer.logFormat=${LOG_FORMAT})
fi
if [ -n "${CREATE_SECRETS_USING_HELM}" ]; then
if [ -z "${WEBHOOK_GITHUB_TOKEN}" ]; then
echo 'Failed deploying secret "actions-metrics-server" using helm. Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
exit 1
fi
flags+=( --set actionsMetricsServer.secret.create=true)
flags+=( --set actionsMetricsServer.secret.github_token=${WEBHOOK_GITHUB_TOKEN})
fi
set -vx
helm upgrade --install actions-runner-controller \
charts/actions-runner-controller \
${CHART} \
-n actions-runner-system \
--create-namespace \
--set syncPeriod=${SYNC_PERIOD} \
@@ -51,6 +90,8 @@ if [ "${tool}" == "helm" ]; then
--set image.tag=${VERSION} \
--set podAnnotations.test-id=${TEST_ID} \
--set githubWebhookServer.podAnnotations.test-id=${TEST_ID} \
--set actionsMetricsServer.podAnnotations.test-id=${TEST_ID} \
${flags[@]} --set image.imagePullPolicy=${IMAGE_PULL_POLICY} \
-f ${VALUES_FILE}
set +v
# To prevent `CustomResourceDefinition.apiextensions.k8s.io "runners.actions.summerwind.dev" is invalid: metadata.annotations: Too long: must have at most 262144 bytes`


@@ -6,6 +6,8 @@ OP=${OP:-apply}
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=${RUNNER_NAMESPACE} envsubst | kubectl apply -f -
if [ -n "${TEST_REPO}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerset envsubst | kubectl ${OP} -f -

acceptance/testdata/kubernetes_container_mode.envsubst.yaml

@@ -0,0 +1,86 @@
# USAGE:
# cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=default envsubst | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-mode-runner
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list", "watch",]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "delete"]
# Needed to report test success by creating a ConfigMap from within a workflow job step
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: runner-status-updater
rules:
- apiGroups: ["actions.summerwind.dev"]
resources: ["runners/status"]
verbs: ["get", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
---
# To verify it's working, try:
# kubectl auth can-i --as system:serviceaccount:default:runner get pod
# If incomplete, workflows and jobs would fail with an error message like:
# Error: Error: The Service account needs the following permissions [{"group":"","verbs":["get","list","create","delete"],"resource":"pods","subresource":""},{"group":"","verbs":["get","create"],"resource":"pods","subresource":"exec"},{"group":"","verbs":["get","list","watch"],"resource":"pods","subresource":"log"},{"group":"batch","verbs":["get","list","create","delete"],"resource":"jobs","subresource":""},{"group":"","verbs":["create","delete","get","list"],"resource":"secrets","subresource":""}] on the pod resource in the 'default' namespace. Please contact your self hosted runner administrator.
# Error: Process completed with exit code 1.
apiVersion: rbac.authorization.k8s.io/v1
# This RoleBinding grants the k8s-mode-runner ClusterRole to the runner
# service account, scoped to the runner namespace.
kind: RoleBinding
metadata:
name: runner-k8s-mode-runner
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: k8s-mode-runner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: runner-runner-status-updater
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: runner-status-updater
apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: org-runnerdeploy-runner-work-dir
labels:
content: org-runnerdeploy-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer


@@ -1,3 +1,23 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-runner-work-dir
labels:
content: ${NAME}-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-rootless-dind-work-dir
labels:
content: ${NAME}-rootless-dind-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
@@ -39,10 +59,68 @@ spec:
labels:
- "${RUNNER_LABEL}"
serviceAccountName: ${RUNNER_SERVICE_ACCOUNT_NAME}
terminationGracePeriodSeconds: ${RUNNER_TERMINATION_GRACE_PERIOD_SECONDS}
env:
- name: RUNNER_GRACEFUL_STOP_TIMEOUT
value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
- name: ROLLING_UPDATE_PHASE
value: "${ROLLING_UPDATE_PHASE}"
- name: ARC_DOCKER_MTU_PROPAGATION
value: "true"
# https://github.com/docker/docs/issues/8663
- name: DOCKER_DEFAULT_ADDRESS_POOL_BASE
value: "172.17.0.0/12"
- name: DOCKER_DEFAULT_ADDRESS_POOL_SIZE
value: "24"
- name: WAIT_FOR_DOCKER_SECONDS
value: "3"
dockerMTU: 1400
dockerEnv:
- name: RUNNER_GRACEFUL_STOP_TIMEOUT
value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
# Fix the following no space left errors with rootless-dind runners that can happen while running buildx build:
# ------
# > [4/5] RUN go mod download:
# ------
# ERROR: failed to solve: failed to prepare yxsw8lv9hqnuafzlfta244l0z: mkdir /home/runner/.local/share/docker/vfs/dir/yxsw8lv9hqnuafzlfta244l0z/usr/local/go/src/cmd/compile/internal/types2/testdata: no space left on device
# Error: Process completed with exit code 1.
#
volumeMounts:
- name: rootless-dind-work-dir
# Omit the /share/docker part of the /home/runner/.local/share/docker as
# that part is created by dockerd.
mountPath: /home/runner/.local
readOnly: false
volumes:
- name: rootless-dind-work-dir
ephemeral:
volumeClaimTemplate:
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "${NAME}-rootless-dind-work-dir"
resources:
requests:
storage: 3Gi
#
# Non-standard working directory
#
# workDir: "/"
# # Uncomment the below to enable the kubernetes container mode
# # See https://github.com/actions/actions-runner-controller#runner-with-k8s-jobs
containerMode: ${RUNNER_CONTAINER_MODE}
workVolumeClaimTemplate:
accessModes:
- ReadWriteOnce
storageClassName: "${NAME}-runner-work-dir"
resources:
requests:
storage: 10Gi
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler


@@ -54,6 +54,16 @@ provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-rootless-dind-work-dir
labels:
content: ${NAME}-rootless-dind-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
@@ -112,14 +122,27 @@ spec:
labels:
app: ${NAME}
spec:
serviceAccountName: ${RUNNER_SERVICE_ACCOUNT_NAME}
terminationGracePeriodSeconds: ${RUNNER_TERMINATION_GRACE_PERIOD_SECONDS}
containers:
# # Uncomment only when non-dind-runner / you're using docker sidecar
# - name: docker
# # Image is required for the dind sidecar definition within RunnerSet spec
# image: "docker:dind"
# env:
# - name: RUNNER_GRACEFUL_STOP_TIMEOUT
# value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
- name: runner
imagePullPolicy: IfNotPresent
env:
- name: RUNNER_GRACEFUL_STOP_TIMEOUT
value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
- name: RUNNER_FEATURE_FLAG_EPHEMERAL
value: "${RUNNER_FEATURE_FLAG_EPHEMERAL}"
- name: GOMODCACHE
value: "/home/runner/.cache/go-mod"
- name: ROLLING_UPDATE_PHASE
value: "${ROLLING_UPDATE_PHASE}"
# PV-backed runner work dir
volumeMounts:
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
@@ -152,20 +175,28 @@ spec:
# https://github.com/actions/setup-go/blob/56a61c9834b4a4950dbbf4740af0b8a98c73b768/src/installer.ts#L144
mountPath: "/opt/hostedtoolcache"
# Valid only when dockerdWithinRunnerContainer=false
- name: docker
# PV-backed runner work dir
volumeMounts:
- name: work
mountPath: /runner/_work
# Cache docker image layers, in case dockerdWithinRunnerContainer=false
- name: var-lib-docker
mountPath: /var/lib/docker
# image: mumoshu/actions-runner-dind:dev
# - name: docker
# # PV-backed runner work dir
# volumeMounts:
# - name: work
# mountPath: /runner/_work
# # Cache docker image layers, in case dockerdWithinRunnerContainer=false
# - name: var-lib-docker
# mountPath: /var/lib/docker
# # image: mumoshu/actions-runner-dind:dev
# For buildx cache
- name: cache
mountPath: "/home/runner/.cache"
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# # For buildx cache
# - name: cache
# mountPath: "/home/runner/.cache"
# For fixing no space left error on rootless dind runner
- name: rootless-dind-work-dir
# Omit the /share/docker part of the /home/runner/.local/share/docker as
# that part is created by dockerd.
mountPath: /home/runner/.local
readOnly: false
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# volumes:
# - name: work
# ephemeral:
@@ -177,6 +208,24 @@ spec:
# resources:
# requests:
# storage: 10Gi
# Fix the following no space left errors with rootless-dind runners that can happen while running buildx build:
# ------
# > [4/5] RUN go mod download:
# ------
# ERROR: failed to solve: failed to prepare yxsw8lv9hqnuafzlfta244l0z: mkdir /home/runner/.local/share/docker/vfs/dir/yxsw8lv9hqnuafzlfta244l0z/usr/local/go/src/cmd/compile/internal/types2/testdata: no space left on device
# Error: Process completed with exit code 1.
#
volumes:
- name: rootless-dind-work-dir
ephemeral:
volumeClaimTemplate:
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "${NAME}-rootless-dind-work-dir"
resources:
requests:
storage: 3Gi
volumeClaimTemplates:
- metadata:
name: vol1


@@ -1,6 +1,18 @@
# Set actions-runner-controller settings for testing
logLevel: "-4"
imagePullSecrets: []
image:
# This needs to be an empty array rather than a single-item array with empty name.
# Otherwise you end up with the following error on helm-upgrade:
# Error: UPGRADE FAILED: failed to create patch: map: map[] does not contain declared merge key: name && failed to create patch: map: map[] does not contain declared merge key: name
actionsRunnerImagePullSecrets: []
runner:
statusUpdateHook:
enabled: true
rbac:
allowGrantingKubernetesContainerModePermissions: true
githubWebhookServer:
imagePullSecrets: []
logLevel: "-4"
enabled: true
labels: {}
@@ -21,3 +33,23 @@ githubWebhookServer:
protocol: TCP
name: http
nodePort: 31000
actionsMetricsServer:
imagePullSecrets: []
logLevel: "-4"
enabled: true
labels: {}
replicaCount: 1
secret:
enabled: true
# create: true
name: "actions-metrics-server"
### GitHub Webhook Configuration
#github_webhook_secret_token: ""
service:
type: NodePort
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
nodePort: 31001


@@ -0,0 +1,109 @@
# ADR 2022-10-17: Produce the runner image for the scaleset client
**Date**: 2022-10-17
**Status**: Done
# Breaking Changes
We aim to provide a similar experience (as close as possible) between self-hosted and GitHub-hosted runners. To achieve this, we are making the following changes to align our self-hosted runner container image with the Ubuntu runners managed by GitHub.
Here are the changes:
- We created a USER `runner(1001)` and a GROUP `docker(123)`
- `sudo` has been installed on the image and the `runner` user will be a passwordless sudoer.
- The runner binary is placed under `/home/runner/` and launched using `/home/runner/run.sh`
- The runner's work directory is `/home/runner/_work`
- `$HOME` will point to `/home/runner`
- The container image user will be the `runner(1001)`
The latest Dockerfile can be found at: https://github.com/actions/runner/blob/main/images/Dockerfile
# Context
Users can bring their own runner images; the contract we require is:
- It must have a runner binary under `/actions-runner` i.e. `/actions-runner/run.sh` exists
- The `WORKDIR` is set to `/actions-runner`
- If the user inside the container is root, the environment variable `RUNNER_ALLOW_RUNASROOT` should be set to `1`
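If it helps, here is a quick way to sanity-check a custom image against this contract. This is only a sketch: the image name is a placeholder and the checks are derived from the contract above, not an official validator.
```bash
IMAGE=my-org/custom-runner:latest  # placeholder image name
# run.sh must exist and be executable under /actions-runner
docker run --rm --entrypoint sh "$IMAGE" -c 'test -x /actions-runner/run.sh && echo "run.sh: ok"'
# The WORKDIR should print as /actions-runner
docker run --rm --entrypoint pwd "$IMAGE"
```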
The existing [ARC runner images](https://github.com/orgs/actions-runner-controller/packages?tab=packages&q=actions-runner) will not work with the new ARC mode out-of-the-box for the following reasons:
- The current runner image requires the caller to pass runner configuration info, ex: URL and Config Token
- The current runner image has the runner binary under `/runner` which violates the contract described above
- The current runner image requires a special entrypoint script in order to work around some volume mount limitation for setting up DinD.
Since we expose the raw runner PodSpec to our end users, they can modify the helm `values.yaml` to adjust the runner container to their needs.
# Guiding Principles
- Build image is separated in two stages.
## The first stage (build)
- Reuses the same base image, so it is faster to build.
- Installs utilities needed to download assets (`runner` and `runner-container-hooks`).
- Downloads the runner and stores it into `/actions-runner` directory.
- Downloads the runner-container-hooks and stores it into `/actions-runner/k8s` directory.
- You can use build arguments to control the runner version, the target platform and runner container hooks version.
Preview (the published runner image might vary):
```Dockerfile
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0 as build
ARG RUNNER_ARCH="x64"
ARG RUNNER_VERSION=2.298.2
ARG RUNNER_CONTAINER_HOOKS_VERSION=0.1.3
RUN apt update -y && apt install curl unzip -y
WORKDIR /actions-runner
RUN curl -f -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${RUNNER_ARCH}-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./runner.tar.gz \
&& rm runner.tar.gz
RUN curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER_CONTAINER_HOOKS_VERSION}/actions-runner-hooks-k8s-${RUNNER_CONTAINER_HOOKS_VERSION}.zip \
&& unzip ./runner-container-hooks.zip -d ./k8s \
&& rm runner-container-hooks.zip
```
## The main image
- Copies assets from the build stage to `/actions-runner`
- Does not provide an entrypoint. The entrypoint should be set within the container definition.
Preview:
```Dockerfile
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0
WORKDIR /actions-runner
COPY --from=build /actions-runner .
```
## Example of pod spec with the init container copying assets
```yaml
apiVersion: v1
kind: Pod
metadata:
name: <name>
spec:
containers:
- name: runner
image: <image>
command: ["/runner/run.sh"]
volumeMounts:
- name: runner
mountPath: /runner
initContainers:
- name: setup
image: <image>
command: ["sh", "-c", "cp -r /actions-runner/* /runner/"]
volumeMounts:
- name: runner
mountPath: /runner
volumes:
- name: runner
emptyDir: {}
```


@@ -0,0 +1,56 @@
# ADR 2022-10-27: Lifetime of RunnerScaleSet on Service
**Date**: 2022-10-27
**Status**: Done
## Context
We have created the RunnerScaleSet object and APIs around it on the GitHub Actions service for better support of any self-hosted runner auto-scale solution, like [actions-runner-controller](https://github.com/actions-runner-controller/actions-runner-controller).
The `RunnerScaleSet` object will represent a set of homogeneous self-hosted runners to the Actions service job routing system.
A `RunnerScaleSet` client (ARC) needs to communicate with the Actions service via HTTP long-poll in a certain protocol to get a workflow job successfully landed on one of its homogeneous self-hosted runners.
In this ADR, we discuss the following within the context of actions-runner-controller's new scaling mode:
- Who creates a RunnerScaleSet on the service, and how?
- Who deletes a RunnerScaleSet on the service, and how?
- What will happen to all the runners and jobs when the deletion happens?
## RunnerScaleSet creation
- `AutoScalingRunnerSet` custom resource controller will create the `RunnerScaleSet` object in the Actions service on any `AutoScalingRunnerSet` resource deployment.
- The creation is via REST API on Actions service `POST _apis/runtime/runnerscalesets`
- The creation needs to use the runner registration token (admin).
- `RunnerScaleSet.Name` == `AutoScalingRunnerSet.metadata.Name`
- The created `RunnerScaleSet` will only have 1 label and it's the `RunnerScaleSet`'s name
- `AutoScalingRunnerSet` controller will store the `RunnerScaleSet.Id` as an annotation on the k8s resource for future lookup.
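As a hedged illustration, the creation call could look roughly like the following from a plain REST client. Only the route comes from this ADR; the host, `api-version` query string, and payload fields are assumptions:
```bash
curl -X POST \
  "https://pipelines.actions.githubusercontent.com/<tenant>/_apis/runtime/runnerscalesets?api-version=6.0-preview" \
  -H "Authorization: Bearer ${RUNNER_REGISTRATION_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-autoscaling-runner-set"}'
```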
## RunnerScaleSet modification
- When the user patches an existing `AutoScalingRunnerSet`'s RunnerScaleSet-related property, ex: `runnerGroupName`, `runnerWorkDir`, the controller needs to make an HTTP PATCH call to the `_apis/runtime/runnerscalesets/2` endpoint in order to update the object on the service.
- We will put the deployed `AutoScalingRunnerSet` resource in an error state when the user tries to patch the resource with a different `githubConfigUrl`
> Basically, you can't move a deployed `AutoScalingRunnerSet` across GitHub entity, repoA->repoB, repoA->OrgC, etc.
> We evaluated blocking the change upfront instead of erroring at runtime, but we decided not to go down this route because it would force us to re-introduce admission webhooks (which require cert-manager).
## RunnerScaleSet deletion
- `AutoScalingRunnerSet` custom resource controller will delete the `RunnerScaleSet` object in the Actions service on any `AutoScalingRunnerSet` resource deletion.
> `AutoScalingRunnerSet` deletion will contain several steps:
>
> - Stop the listener app so that no new jobs come in and no more scaling up/down happens.
> - Request scale down to 0
> - Force stop all runners
> - Wait for the scale down to 0
> - Delete the `RunnerScaleSet` object from service via REST API
- The deletion is via REST API on Actions service `DELETE _apis/runtime/runnerscalesets/1`
- The deletion needs to use the runner registration token (admin).
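Sketched the same way (host, auth header, and `api-version` assumed as above), the deletion call would be:
```bash
curl -X DELETE \
  "https://pipelines.actions.githubusercontent.com/<tenant>/_apis/runtime/runnerscalesets/1?api-version=6.0-preview" \
  -H "Authorization: Bearer ${RUNNER_REGISTRATION_TOKEN}"
```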
The user's `RunnerScaleSet` will be deleted from the service by `DormantRunnerScaleSetCleanupJob` if the particular `AutoScalingRunnerSet` has not connected to the service for the past 7 days. We have a similar rule for self-hosted runners.
## Jobs and Runners on deletion
- `RunnerScaleSet` deletion will be blocked if there is any job assigned to a runner within the `RunnerScaleSet`; the `RunnerScaleSet` has to scale down to 0 before deletion.
- Any job that has been assigned to the `RunnerScaleSet` but hasn't been assigned to a runner within the `RunnerScaleSet` will get thrown back to the queue and wait for assignment again.
- Any offline runners within the `RunnerScaleSet` will be deleted from the service side.


@@ -0,0 +1,54 @@
# ADR 2022-11-04: Technical detail about actions-runner-controller repository transfer
**Date**: 2022-11-04
**Status**: Done
# Context
As part of ARC Private Beta: Repository Migration & Open Sourcing Process, we have decided to transfer the current [actions-runner-controller repository](https://github.com/actions-runner-controller/actions-runner-controller) into the [Actions org](https://github.com/actions).
**Goals:**
- A clear signal that GitHub will start taking over ARC and provide support.
- Since we are going to deprecate the existing auto-scale mode in ARC at some point, we want to have a clear separation between the legacy mode (not supported) and the new mode (supported).
- Avoid disrupting users as much as we can: existing ARC users will not notice any difference after the repository transfer; they can keep upgrading to newer versions of ARC and keep using the legacy mode.
**Challenges:**
- The original creator's name (`summerwind`) is all over the place, including some critical parts of ARC:
- The k8s user resource API's full name is `actions.summerwind.dev/v1alpha1/RunnerDeployment`; renaming it to `actions.github.com` is a breaking change and will force the user to rebuild their entire k8s cluster.
- All docker images around ARC (controller + default runner) are published to [dockerhub/summerwind](https://hub.docker.com/u/summerwind)
- The helm chart for ARC is currently hosted on [GitHub pages](https://actions-runner-controller.github.io/actions-runner-controller) for https://github.com/actions-runner-controller/actions-runner-controller; moving the repository means we will break users who install ARC via the helm chart
# Decisions
## APIs group names for k8s custom resources, `actions.summerwind` or `actions.github`
- We will not rename any existing ARC resources API name after moving the repository under Actions org. (keep `summerwind` for old stuff)
- For any new resource API we are going to add, those will be named properly under GitHub, ex: `actions.github.com/v1alpha1/AutoScalingRunnerSet`
Benefits:
- A clear separation from existing ARC:
- Easy for the support engineer to triage incoming tickets and figure out whether we need to support the use case from the user
- We won't break existing users when they upgrade to a newer version of ARC after the repository transfer
Based on the spike done by `@nikola-jokic`, we have confidence that we can host multiple resources with different API names under the same repository, and the published ARC controller can handle both resources properly.
## ARC Docker images
We will not start using the GitHub container registry for hosting ARC images (controller + runner images) right after the repository transfer.
But over time, we will start using GHCR for hosting those images along with our deprecation story.
## Helm chart
We will recreate the https://github.com/actions-runner-controller/actions-runner-controller repository after the repository transfer.
The recreated repository will only contain the helm chart assets which keep powering the https://actions-runner-controller.github.io/actions-runner-controller for users to install ARC via Helm.
Long term, we will switch to hosting the helm chart on GHCR (OCI) instead of using GitHub Pages.
This will require a one-time change to our users by running
`helm repo remove actions-runner-controller` and `helm repo add actions-runner-controller oci://ghcr.io/actions`


@@ -0,0 +1,89 @@
# ADR 2022-12-05: Adding labels to our resources
**Date**: 2022-12-05
**Status**: Superseded [^1]
## Context
Users need to provide us with logs so that we can help support and troubleshoot their issues. We need a way for our users to filter and retrieve the logs we need.
## Proposal
A good start would be a catch-all label to get all logs that are
ARC-related: one of the [recommended labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
is `app.kubernetes.io/part-of` and we can set that for all ARC components
to be `actions-runner-controller`.
Assuming standard logging, that would allow us to get all ARC logs by running
```bash
kubectl logs -l 'app.kubernetes.io/part-of=actions-runner-controller'
```
which would be very useful for development to begin with.
The proposal is to add these sets of labels to the pods ARC creates:
#### controller-manager
Labels to be set by the Helm chart:
```yaml
metadata:
labels:
app.kubernetes.io/part-of: actions-runner-controller
app.kubernetes.io/component: controller-manager
app.kubernetes.io/version: "x.x.x"
```
#### Listener
Labels to be set by controller at creation:
```yaml
metadata:
labels:
app.kubernetes.io/part-of: actions-runner-controller
app.kubernetes.io/component: runner-scale-set-listener
app.kubernetes.io/version: "x.x.x"
actions.github.com/scale-set-name: scale-set-name # this corresponds to metadata.name as set for AutoscalingRunnerSet
# the following labels are to be extracted from the config URL
actions.github.com/enterprise: enterprise
actions.github.com/organization: organization
actions.github.com/repository: repository
```
#### Runner
Labels to be set by controller at creation:
```yaml
metadata:
labels:
app.kubernetes.io/part-of: actions-runner-controller
app.kubernetes.io/component: runner
app.kubernetes.io/version: "x.x.x"
actions.github.com/scale-set-name: scale-set-name # this corresponds to metadata.name as set for AutoscalingRunnerSet
actions.github.com/runner-name: runner-name
actions.github.com/runner-group-name: runner-group-name
# the following labels are to be extracted from the config URL
actions.github.com/enterprise: enterprise
actions.github.com/organization: organization
actions.github.com/repository: repository
```
This would allow us to ask users:
> Can you please send us the logs coming from pods labelled 'app.kubernetes.io/part-of=actions-runner-controller'?
Or for example if they're having problems specifically with runners:
> Can you please send us the logs coming from pods labelled 'app.kubernetes.io/component=runner'?
This way users don't have to understand ARC moving parts but we still have a
way to target them specifically if we need to.
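For example, the runner-focused request above could be answered with something like this (a sketch; the namespace is a placeholder):
```bash
kubectl logs -n <runners-namespace> -l 'app.kubernetes.io/component=runner' --tail=100
```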
[^1]: Superseded by [ADR 2023-04-14](2023-04-14-adding-labels-k8s-resources.md)


@@ -0,0 +1,94 @@
# ADR 2022-12-27: Pick the right runner to scale down
**Date**: 2022-12-27
**Status**: Done
## Context
- A custom resource `EphemeralRunnerSet` manages a set of `EphemeralRunner` custom resources
- The `EphemeralRunnerSet` has `Replicas` in its `Spec`, and the responsibility of the `EphemeralRunnerSet_controller` is to reconcile a given `EphemeralRunnerSet` to have
the same number of `EphemeralRunners` as `Spec.Replicas` defines.
- This means the `EphemeralRunnerSet_controller` will scale up the `EphemeralRunnerSet` by creating more `EphemeralRunners` when `Spec.Replicas` is higher than
the current number of `EphemeralRunners`.
- It also means the `EphemeralRunnerSet_controller` will scale down the `EphemeralRunnerSet` by finding existing `EphemeralRunners` to delete when
`Spec.Replicas` is less than the current number of `EphemeralRunners`.
This ADR is about how we can find the right existing `EphemeralRunner` to delete when we need to scale down.
## Current approach
1. `EphemeralRunnerSet_controller` figures out how many `EphemeralRunners` it needs to delete, ex: scaling down from 10 to 2 means we need to delete 8 `EphemeralRunners`
2. `EphemeralRunnerSet_controller` finds all `EphemeralRunners` that are in the `Running` or `Pending` phase.
> `Pending` means the `EphemeralRunner` is probably still being created and a runner has not yet been configured with the Actions service.
> `Running` means the `EphemeralRunner` is created and a runner has probably been configured with the Actions service; the runner may sit there idle,
> or may be actively running a workflow job. We don't have a clear answer for it from the ARC side. (Actions service knows it for sure)
3. `EphemeralRunnerSet_controller` makes an HTTP DELETE request to the Actions service for each `EphemeralRunner` from the previous step and asks the Actions service to delete the runner via `RunnerId`.
(The `RunnerId` is generated after the runner registers with the Actions service, and stored on the `EphemeralRunner.Status.RunnerId`)
> - The HTTP DELETE request looks like the following:
> `DELETE https://pipelines.actions.githubusercontent.com/WoxlUxJHrKEzIp4Nz3YmrmLlZBonrmj9xCJ1lrzcJ9ZsD1Tnw7/_apis/distributedtask/pools/0/agents/1024`
> The Actions service will return 2 types of responses:
>
> 1. 204 (No Content): The runner with Id 1024 has been successfully removed from the service or the runner with Id 1024 doesn't exist.
> 2. 400 (Bad Request) with a JSON body that contains an error message like `JobStillRunningException`: The service can't remove this runner at this point since it has been
> assigned to a job request; the client won't be able to remove the runner until the runner finishes its currently assigned job request.
4. `EphemeralRunnerSet_controller` will ignore any deletion error from runners that are still running a job, and keeps retrying deletion until the number of `204` responses equals the number of
`EphemeralRunners` it needs to delete.
## The problem with the current approach
In a busy `AutoScalingRunnerSet`, scale up and down may happen all the time as jobs are queued and finished.
We will make way too many HTTP requests to the Actions service asking it to try to delete a certain runner, and rely on the exception from the service to figure out what to do next.
The runner deletion request is not cheap for the service: for synchronization, the `JobStillRunningException` is raised from the DB call that handles the request.
So we are wasting resources on both the Actions service (extra load to the database) and the actions-runner-controller (useless outgoing HTTP requests).
In the test ARC that I deployed to Azure, the ARC controller tried to delete RunnerId 12408 for `bbq-beets/ting-test` a total of 35 times within 10 minutes.
## Root cause
The `EphemeralRunnerSet_controller` doesn't know whether a given `EphemeralRunner` is actually running a workflow job or not
(it only knows the runner is configured at the service), so it can't filter out the busy `EphemeralRunners`.
## Additional context
The legacy ARC's custom resource allows the runner image to leverage the RunnerJobHook feature to update the status of the runner custom resource in K8S (Mark the runner as running workflow run Id XXX).
This brings good value to users, as it provides some insight into which runner is running which job across the cluster, and it looks pretty close to what we want in order to fix the [root cause](#root-cause).
However, the legacy ARC approach means the service account for running the runner pod needs to have elevated permission to update the custom resource;
this would be a big `NO` from a security point of view since we may not trust the code running inside the runner pod.
## Possible Solution
The nature of the k8s controller-runtime means we might reconcile the resource based on stale cache data.
I think our goal for the solution should be:
- Reduce wasteful HTTP requests on a scale-down as much as we can.
- We can accept that we might make 1 or 2 wasteful requests to Actions service, but we can't accept making 5/10+ of them.
- See if we can reach feature parity with what the RunnerJobHook supports without compromising on any security concerns.
Since the root cause of why the reconciliation can't skip an `EphemeralRunner` is that we don't know whether an `EphemeralRunner` is running a job,
a simple thought is: how about we attach some info to the `EphemeralRunner` to indicate it's currently running a job?
How about we send this info from the service to the auto-scaling-listener via the existing HTTP long-poll
and let the listener patch the `EphemeralRunner.Status` to indicate it's running a job?
> The listener is normally in a separate namespace with elevated permission and it's something we can trust.
Changes:
- Introduce a new message type `JobStarted` (in addition to the existing `JobAvailable/JobAssigned/JobCompleted`) on the service side; the message is sent when a runner of the `RunnerScaleSet` gets assigned to a job.
`RequestId`, `RunnerId`, and `RunnerName` will be included in the message.
- Add `RequestId (int)` to `EphemeralRunner.Status`, this will indicate which job the runner is running.
- The `AutoScalingListener` will use the payload of this new message to patch `EphemeralRunners/RunnerName/Status` with the `RequestId`
- When `EphemeralRunnerSet_controller` tries to find `EphemeralRunner` to delete on a scale down, it will skip any `EphemeralRunner` that has `EphemeralRunner.Status.RequestId` set.
- In the future, we can expose more info to this `JobStarted` message and introduce more property under `EphemeralRunner.Status` to reach feature parity with legacy ARC's RunnerJobHook
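To illustrate the listener-side patch, a hedged sketch follows; the namespace and runner names are placeholders, and the serialized field name is an assumption based on the ADR's `RequestId` proposal:
```bash
# Requires kubectl >= 1.24 for --subresource support.
# The serialized field name (jobRequestId) is an assumption.
kubectl -n <runners-namespace> patch ephemeralrunner <runner-name> \
  --subresource=status --type=merge \
  -p '{"status":{"jobRequestId":12345}}'
```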


@@ -0,0 +1,42 @@
# ADR 2023-02-02: Automate updating runner version
**Date**: 2023-02-02
**Status**: Proposed
## Context
When a new [runner](https://github.com/actions/runner) version is released, new
images need to be built in
[actions-runner-controller/releases](https://github.com/actions-runner-controller/releases).
This is currently started by the
[release-runners](https://github.com/actions/actions-runner-controller/blob/master/.github/workflows/release-runners.yaml)
workflow, although this only starts when the set of files containing the runner
version is updated (and this is currently done manually).
## Decision
We can have another workflow running on a cadence (hourly seems sensible) and checking for new runner
releases, creating a PR updating `RUNNER_VERSION` in:
- `.github/workflows/release-runners.yaml`
- `Makefile`
- `runner/Makefile`
- `test/e2e/e2e_test.go`
Once that PR is merged, the existing workflow will pick things up.
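A minimal sketch of the check such a scheduled workflow could run (the `gh` usage and the grep pattern are assumptions, not the final implementation):
```bash
#!/usr/bin/env bash
set -euo pipefail
# Latest released runner tag, e.g. v2.303.0 -> 2.303.0
latest=$(gh release view --repo actions/runner --json tagName --jq '.tagName' | sed 's/^v//')
# Version currently pinned in the repo (the pattern is an assumption)
current=$(grep -m1 -oE 'RUNNER_VERSION.*[0-9]+\.[0-9]+\.[0-9]+' Makefile | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')
if [ "${latest}" != "${current}" ]; then
  echo "Runner ${latest} released (pinned: ${current}); open the RUNNER_VERSION bump PR."
fi
```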
## Consequences
We don't have to add an extra step to the runner release process or a direct
dependency on ARC. Since images won't be built until the generated PR is merged,
we still have room to wait before triggering a build should there be any
problems with the runner release.
## Considered alternatives
We also considered firing the workflow to create the PR via
`repository_dispatch` as part of the release process of runner itself, but we
discarded it because that would have required a PAT or a GitHub app with `repo`
scope within the Actions org and would have added a new direct dependency on the
runner side.


@@ -0,0 +1,138 @@
# ADR 2023-02-10: Limit Permissions for Service Accounts in Actions-Runner-Controller
**Date**: 2023-02-10
**Status**: Pending
## Context
- `actions-runner-controller` is a Kubernetes CRD (with controller) built using https://github.com/kubernetes-sigs/controller-runtime
- [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) has a default cache-based k8s API `client.Reader` to make querying the k8s API server more efficient.
- The cache-based API client requires cluster scope `list` and `watch` permission for any resource the controller may query.
- This document is scoped to the AutoscalingRunnerSet CRD and its controller.
## Service accounts and their role binding in actions-runner-controller
There are 3 service accounts involved in a working `AutoscalingRunnerSet`-based `actions-runner-controller`:
1. Service account for each Ephemeral runner Pod
This should have the lowest privilege (no `RoleBinding` nor `ClusterRoleBinding`) by default; in the case of `containerMode=kubernetes`, it will get certain write permissions via a `RoleBinding` that limits the permissions to a single namespace.
> References:
>
> - ./charts/gha-runner-scale-set/templates/no_permission_serviceaccount.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_role.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_role_binding.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_serviceaccount.yaml
2. Service account for AutoScalingListener Pod
This has a `RoleBinding` to a single namespace with a `Role` that has permission to `PATCH` `EphemeralRunnerSet` and `EphemeralRunner`.
3. Service account for the controller manager
Since the CRD controller is a singleton installed in the cluster that manages the CRD across multiple namespaces by default, the service account of the controller manager pod has a `ClusterRoleBinding` to a `ClusterRole` with broader permissions.
The current `ClusterRole` has the following permissions:
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingListeners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `Pods` (with `Status` sub-resource)
- **Get/List/Create/Delete/Update/Patch/Watch on `Secrets`**
- Get/List/Create/Delete/Update/Patch/Watch on `Roles`
- Get/List/Create/Delete/Update/Patch/Watch on `RoleBindings`
- Get/List/Create/Delete/Update/Patch/Watch on `ServiceAccounts`
> Full list can be found at: https://github.com/actions/actions-runner-controller/blob/facae69e0b189d3b5dd659f36df8a829516d2896/charts/actions-runner-controller-2/templates/manager_role.yaml
## Limit cluster role permission on Secrets
The cluster scope `List` `Secrets` permission might be a blocker for adopting `actions-runner-controller` for certain customers, as they may have restrictions in their cluster that simply don't allow any service account to have the cluster scope `List Secrets` permission.
To help these customers and improve security for `actions-runner-controller` in general, we will try to limit the `ClusterRole` permission of the controller manager's service account down to the following:
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingListeners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunners` (with `Status` and `Finalizer` sub-resource)
- List/Watch on `Pods`
- List/Watch on `Roles`
- List/Watch on `RoleBindings`
- List/Watch on `ServiceAccounts`
> We will change the default cache-based client to bypass cache on reading `Secrets` and `ConfigMaps`(ConfigMap is used when you configure `githubServerTLS`), so we can eliminate the need for `List` and `Watch` `Secrets` permission in cluster scope.
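One way to verify the reduced scope once this change lands (a sketch; the namespace and service account name are assumptions):
```bash
# Expected to print "no" once the cluster-scope permission is dropped.
kubectl auth can-i list secrets --all-namespaces \
  --as=system:serviceaccount:arc-system:arc-gha-runner-scale-set-controller
```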
Introduce a new `Role` for the controller and a `RoleBinding` that binds the `Role` to the controller's `ServiceAccount` in the namespace the controller is deployed to. This role will grant the controller's service account the permissions required to work with `AutoScalingListeners` in the controller namespace.
- Get/Create/Delete on `Pods`
- Get on `Pods/status`
- Get/Create/Delete/Update/Patch on `Secrets`
- Get/Create/Delete/Update/Patch on `ServiceAccounts`
The `Role` and `RoleBinding` creation will happen during the `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller`
During `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller`, we will store the controller's service account info as labels on the controller `Deployment`.
Ex:
```yaml
actions.github.com/controller-service-account-namespace: {{ .Release.Namespace }}
actions.github.com/controller-service-account-name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
```
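To see what the lookup will find, the labels can be read back from the controller `Deployment` (a sketch; the deployment name and namespace are assumptions based on the install examples above):
```bash
kubectl -n arc-system get deployment demo-gha-runner-scale-set-controller \
  -o jsonpath='{.metadata.labels.actions\.github\.com/controller-service-account-name}'
```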
Introduce a new `Role` per `AutoScalingRunnerSet` installation and a `RoleBinding` that binds the `Role` to the controller's `ServiceAccount` in the namespace each `AutoScalingRunnerSet` is deployed to, with the following permissions:
- Get/Create/Delete/Update/Patch/List on `Secrets`
- Create/Delete on `Pods`
- Get on `Pods/status`
- Get/Create/Delete/Update/Patch on `Roles`
- Get/Create/Delete/Update/Patch on `RoleBindings`
- Get on `ConfigMaps`
The `Role` and `RoleBinding` creation will happen during `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set` to grant the controller's service account the permissions required to operate in the namespace the `AutoScalingRunnerSet` is deployed to.
The `gha-runner-scale-set` helm chart will try to find the `Deployment` of the controller using `helm lookup`, and get the service account info from the labels of the controller `Deployment` (`actions.github.com/controller-service-account-namespace` and `actions.github.com/controller-service-account-name`).
The `gha-runner-scale-set` helm chart will use this service account to properly render the `RoleBinding` template.
The `gha-runner-scale-set` helm chart will also allow customers to explicitly provide the controller service account info, in case the `helm lookup` couldn't locate the right controller `Deployment`.
New sections in `values.yaml` of `gha-runner-scale-set`:
```yaml
## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
controllerServiceAccount:
namespace: arc-system
name: test-arc-gha-runner-scale-set-controller
```
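When passing these values explicitly, the installation could look like the following (a sketch reusing the names from the snippet above):
```bash
helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
  --set controllerServiceAccount.namespace=arc-system \
  --set controllerServiceAccount.name=test-arc-gha-runner-scale-set-controller
```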
## Install ARC to only watch/react resources in a single namespace
In case the user doesn't want to have any `ClusterRole`, they can choose to install the `actions-runner-controller` in a mode that only requires a `Role` with `RoleBinding` in a particular namespace.
In this mode, the `actions-runner-controller` will only be able to watch the `AutoScalingRunnerSet` resource in a single namespace.
If you want to deploy multiple `AutoScalingRunnerSets` into different namespaces, you will need to install `actions-runner-controller` in this mode multiple times as well and have each installation watch the namespace you want to deploy an `AutoScalingRunnerSet` to.
You will install `actions-runner-controller` with something like `helm install arc --namespace arc-system --set watchSingleNamespace=test-namespace oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller` (the `test-namespace` namespace needs to be created first).
You will deploy the `AutoScalingRunnerSet` with something like `helm install demo --namespace TestNamespace oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set`
In this mode, you will end up with a manager `Role` that has all Get/List/Create/Delete/Update/Patch/Watch permissions on resources we need, and a `RoleBinding` to bind the `Role` with the controller `ServiceAccount` in the watched single namespace and the controller namespace, ex: `test-namespace` and `arc-system` in the above example.
The downside of this mode:
- When you have multiple controllers deployed, they will still use the same version of the CRD, so you will need to make sure every controller you deploy is the same version as the others.
- You can't mix installing `actions-runner-controller` in this mode (watchSingleNamespace) with the regular installation mode (watchAllClusterNamespaces) in your cluster.


@@ -0,0 +1,89 @@
# ADR 2023-04-14: Adding labels to our resources
**Date**: 2023-04-14
**Status**: Done [^1]
## Context
Users need to provide us with logs so that we can help support and troubleshoot their issues. We need a way for our users to filter and retrieve the logs we need.
## Proposal
A good start would be a catch-all label to get all logs that are
ARC-related: one of the [recommended labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
is `app.kubernetes.io/part-of` and we can set that for all ARC components
to be `gha-runner-scale-set-controller`.
Assuming standard logging, that would allow us to get all ARC logs by running
```bash
kubectl logs -l 'app.kubernetes.io/part-of=gha-runner-scale-set-controller'
```
which would be very useful for development to begin with.
The proposal is to add these sets of labels to the pods ARC creates:
#### controller-manager
Labels to be set by the Helm chart:
```yaml
metadata:
labels:
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/component: controller-manager
app.kubernetes.io/version: "x.x.x"
```
#### Listener
Labels to be set by controller at creation:
```yaml
metadata:
labels:
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/component: runner-scale-set-listener
app.kubernetes.io/version: "x.x.x"
actions.github.com/scale-set-name: scale-set-name # this corresponds to metadata.name as set for AutoscalingRunnerSet
# the following labels are to be extracted from the config URL
actions.github.com/enterprise: enterprise
actions.github.com/organization: organization
actions.github.com/repository: repository
```
#### Runner
Labels to be set by controller at creation:
```yaml
metadata:
labels:
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/component: runner
app.kubernetes.io/version: "x.x.x"
actions.github.com/scale-set-name: scale-set-name # this corresponds to metadata.name as set for AutoscalingRunnerSet
actions.github.com/runner-name: runner-name
actions.github.com/runner-group-name: runner-group-name
# the following labels are to be extracted from the config URL
actions.github.com/enterprise: enterprise
actions.github.com/organization: organization
actions.github.com/repository: repository
```
This would allow us to ask users:
> Can you please send us the logs coming from pods labelled 'app.kubernetes.io/part-of=gha-runner-scale-set-controller'?
Or for example if they're having problems specifically with runners:
> Can you please send us the logs coming from pods labelled 'app.kubernetes.io/component=runner'?
This way users don't have to understand ARC moving parts but we still have a
way to target them specifically if we need to.
[^1]: Supersedes [ADR 2022-12-05](2022-12-05-adding-labels-k8s-resources.md)


@@ -0,0 +1,18 @@
# Title
<!-- ADR titles should typically be imperative sentences. -->
**Status**: (Proposed|Accepted|Rejected|Superseded|Deprecated)
## Context
_What is the issue or background knowledge necessary for future readers
to understand why this ADR was written?_
## Decision
_**What** is the change being proposed? **How** will it be implemented?_
## Consequences
_What becomes easier or more difficult to do because of this change?_


@@ -0,0 +1,94 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// AutoscalingListenerSpec defines the desired state of AutoscalingListener
type AutoscalingListenerSpec struct {
// Required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// Required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// Required
RunnerScaleSetId int `json:"runnerScaleSetId,omitempty"`
// Required
AutoscalingRunnerSetNamespace string `json:"autoscalingRunnerSetNamespace,omitempty"`
// Required
AutoscalingRunnerSetName string `json:"autoscalingRunnerSetName,omitempty"`
// Required
EphemeralRunnerSetName string `json:"ephemeralRunnerSetName,omitempty"`
// Required
// +kubebuilder:validation:Minimum:=0
MaxRunners int `json:"maxRunners,omitempty"`
// Required
// +kubebuilder:validation:Minimum:=0
MinRunners int `json:"minRunners,omitempty"`
// Required
Image string `json:"image,omitempty"`
// Required
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
}
// AutoscalingListenerStatus defines the observed state of AutoscalingListener
type AutoscalingListenerStatus struct{}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".spec.githubConfigUrl",name=GitHub Configure URL,type=string
//+kubebuilder:printcolumn:JSONPath=".spec.autoscalingRunnerSetNamespace",name=AutoscalingRunnerSet Namespace,type=string
//+kubebuilder:printcolumn:JSONPath=".spec.autoscalingRunnerSetName",name=AutoscalingRunnerSet Name,type=string
// AutoscalingListener is the Schema for the autoscalinglisteners API
type AutoscalingListener struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec AutoscalingListenerSpec `json:"spec,omitempty"`
Status AutoscalingListenerStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// AutoscalingListenerList contains a list of AutoscalingListener
type AutoscalingListenerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []AutoscalingListener `json:"items"`
}
func init() {
SchemeBuilder.Register(&AutoscalingListener{}, &AutoscalingListenerList{})
}


@@ -0,0 +1,289 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"crypto/x509"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/actions/actions-runner-controller/hash"
"golang.org/x/net/http/httpproxy"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".spec.minRunners",name=Minimum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".spec.maxRunners",name=Maximum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.currentRunners",name=Current Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.state",name=State,type=string
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// AutoscalingRunnerSet is the Schema for the autoscalingrunnersets API
type AutoscalingRunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec AutoscalingRunnerSetSpec `json:"spec,omitempty"`
Status AutoscalingRunnerSetStatus `json:"status,omitempty"`
}
// AutoscalingRunnerSetSpec defines the desired state of AutoscalingRunnerSet
type AutoscalingRunnerSetSpec struct {
// Required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// Required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// +optional
RunnerGroup string `json:"runnerGroup,omitempty"`
// +optional
RunnerScaleSetName string `json:"runnerScaleSetName,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
// Required
Template corev1.PodTemplateSpec `json:"template,omitempty"`
// +optional
// +kubebuilder:validation:Minimum:=0
MaxRunners *int `json:"maxRunners,omitempty"`
// +optional
// +kubebuilder:validation:Minimum:=0
MinRunners *int `json:"minRunners,omitempty"`
}
type GitHubServerTLSConfig struct {
// Required
CertificateFrom *TLSCertificateSource `json:"certificateFrom,omitempty"`
}
func (c *GitHubServerTLSConfig) ToCertPool(keyFetcher func(name, key string) ([]byte, error)) (*x509.CertPool, error) {
if c.CertificateFrom == nil {
return nil, fmt.Errorf("certificateFrom not specified")
}
if c.CertificateFrom.ConfigMapKeyRef == nil {
return nil, fmt.Errorf("configMapKeyRef not specified")
}
cert, err := keyFetcher(c.CertificateFrom.ConfigMapKeyRef.Name, c.CertificateFrom.ConfigMapKeyRef.Key)
if err != nil {
return nil, fmt.Errorf(
"failed to fetch key %q in configmap %q: %w",
c.CertificateFrom.ConfigMapKeyRef.Key,
c.CertificateFrom.ConfigMapKeyRef.Name,
err,
)
}
systemPool, err := x509.SystemCertPool()
if err != nil {
return nil, fmt.Errorf("failed to get system cert pool: %w", err)
}
pool := systemPool.Clone()
if !pool.AppendCertsFromPEM(cert) {
return nil, fmt.Errorf("failed to parse certificate")
}
return pool, nil
}
type TLSCertificateSource struct {
// Required
ConfigMapKeyRef *corev1.ConfigMapKeySelector `json:"configMapKeyRef,omitempty"`
}
type ProxyConfig struct {
// +optional
HTTP *ProxyServerConfig `json:"http,omitempty"`
// +optional
HTTPS *ProxyServerConfig `json:"https,omitempty"`
// +optional
NoProxy []string `json:"noProxy,omitempty"`
}
func (c *ProxyConfig) toHTTPProxyConfig(secretFetcher func(string) (*corev1.Secret, error)) (*httpproxy.Config, error) {
config := &httpproxy.Config{
NoProxy: strings.Join(c.NoProxy, ","),
}
if c.HTTP != nil {
u, err := url.Parse(c.HTTP.Url)
if err != nil {
return nil, fmt.Errorf("failed to parse proxy http url %q: %w", c.HTTP.Url, err)
}
if c.HTTP.CredentialSecretRef != "" {
secret, err := secretFetcher(c.HTTP.CredentialSecretRef)
if err != nil {
return nil, fmt.Errorf(
"failed to get secret %s for http proxy: %w",
c.HTTP.CredentialSecretRef,
err,
)
}
u.User = url.UserPassword(
string(secret.Data["username"]),
string(secret.Data["password"]),
)
}
config.HTTPProxy = u.String()
}
if c.HTTPS != nil {
u, err := url.Parse(c.HTTPS.Url)
if err != nil {
return nil, fmt.Errorf("failed to parse proxy https url %q: %w", c.HTTPS.Url, err)
}
if c.HTTPS.CredentialSecretRef != "" {
secret, err := secretFetcher(c.HTTPS.CredentialSecretRef)
if err != nil {
return nil, fmt.Errorf(
"failed to get secret %s for https proxy: %w",
c.HTTPS.CredentialSecretRef,
err,
)
}
u.User = url.UserPassword(
string(secret.Data["username"]),
string(secret.Data["password"]),
)
}
config.HTTPSProxy = u.String()
}
return config, nil
}
func (c *ProxyConfig) ToSecretData(secretFetcher func(string) (*corev1.Secret, error)) (map[string][]byte, error) {
config, err := c.toHTTPProxyConfig(secretFetcher)
if err != nil {
return nil, err
}
data := map[string][]byte{}
data["http_proxy"] = []byte(config.HTTPProxy)
data["https_proxy"] = []byte(config.HTTPSProxy)
data["no_proxy"] = []byte(config.NoProxy)
return data, nil
}
func (c *ProxyConfig) ProxyFunc(secretFetcher func(string) (*corev1.Secret, error)) (func(*http.Request) (*url.URL, error), error) {
config, err := c.toHTTPProxyConfig(secretFetcher)
if err != nil {
return nil, err
}
proxyFunc := func(req *http.Request) (*url.URL, error) {
return config.ProxyFunc()(req.URL)
}
return proxyFunc, nil
}
type ProxyServerConfig struct {
// Required
Url string `json:"url,omitempty"`
// +optional
CredentialSecretRef string `json:"credentialSecretRef,omitempty"`
}
// AutoscalingRunnerSetStatus defines the observed state of AutoscalingRunnerSet
type AutoscalingRunnerSetStatus struct {
// +optional
CurrentRunners int `json:"currentRunners"`
// +optional
State string `json:"state"`
// EphemeralRunner counts separated by the stage ephemeral runners are in, taken from the EphemeralRunnerSet
//+optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
func (ars *AutoscalingRunnerSet) ListenerSpecHash() string {
arsSpec := ars.Spec.DeepCopy()
spec := arsSpec
return hash.ComputeTemplateHash(&spec)
}
func (ars *AutoscalingRunnerSet) RunnerSetSpecHash() string {
type runnerSetSpec struct {
GitHubConfigUrl string
GitHubConfigSecret string
RunnerGroup string
RunnerScaleSetName string
Proxy *ProxyConfig
GitHubServerTLS *GitHubServerTLSConfig
Template corev1.PodTemplateSpec
}
spec := &runnerSetSpec{
GitHubConfigUrl: ars.Spec.GitHubConfigUrl,
GitHubConfigSecret: ars.Spec.GitHubConfigSecret,
RunnerGroup: ars.Spec.RunnerGroup,
RunnerScaleSetName: ars.Spec.RunnerScaleSetName,
Proxy: ars.Spec.Proxy,
GitHubServerTLS: ars.Spec.GitHubServerTLS,
Template: ars.Spec.Template,
}
return hash.ComputeTemplateHash(&spec)
}
//+kubebuilder:object:root=true
// AutoscalingRunnerSetList contains a list of AutoscalingRunnerSet
type AutoscalingRunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []AutoscalingRunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&AutoscalingRunnerSet{}, &AutoscalingRunnerSetList{})
}


@@ -0,0 +1,133 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.githubConfigUrl",name="GitHub Config URL",type=string
// +kubebuilder:printcolumn:JSONPath=".status.runnerId",name=RunnerId,type=number
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.jobRepositoryName",name=JobRepository,type=string
// +kubebuilder:printcolumn:JSONPath=".status.jobWorkflowRef",name=JobWorkflowRef,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflowRunId",name=WorkflowRunId,type=number
// +kubebuilder:printcolumn:JSONPath=".status.jobDisplayName",name=JobDisplayName,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// EphemeralRunner is the Schema for the ephemeralrunners API
type EphemeralRunner struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec EphemeralRunnerSpec `json:"spec,omitempty"`
Status EphemeralRunnerStatus `json:"status,omitempty"`
}
// EphemeralRunnerSpec defines the desired state of EphemeralRunner
type EphemeralRunnerSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// +required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// +required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// +required
RunnerScaleSetId int `json:"runnerScaleSetId,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
ProxySecretRef string `json:"proxySecretRef,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
// +required
corev1.PodTemplateSpec `json:",inline"`
}
// EphemeralRunnerStatus defines the observed state of EphemeralRunner
type EphemeralRunnerStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Ready turns true only when the runner is online.
// +optional
Ready bool `json:"ready"`
// Phase describes the phase the EphemeralRunner is in.
// The underlying type is a PodPhase, but the meaning is more restrictive.
//
// The PodFailed phase should be set only when the EphemeralRunner fails to start
// after multiple retries. That signals that this EphemeralRunner won't work,
// and manual inspection is required.
//
// The PodSucceeded phase should be set only when it is confirmed that the EphemeralRunner
// actually executed the job and has been removed from the service.
// +optional
Phase corev1.PodPhase `json:"phase,omitempty"`
// +optional
Reason string `json:"reason,omitempty"`
// +optional
Message string `json:"message,omitempty"`
// +optional
RunnerId int `json:"runnerId,omitempty"`
// +optional
RunnerName string `json:"runnerName,omitempty"`
// +optional
RunnerJITConfig string `json:"runnerJITConfig,omitempty"`
// +optional
Failures map[string]bool `json:"failures,omitempty"`
// +optional
JobRequestId int64 `json:"jobRequestId,omitempty"`
// +optional
JobRepositoryName string `json:"jobRepositoryName,omitempty"`
// +optional
JobWorkflowRef string `json:"jobWorkflowRef,omitempty"`
// +optional
WorkflowRunId int64 `json:"workflowRunId,omitempty"`
// +optional
JobDisplayName string `json:"jobDisplayName,omitempty"`
}
//+kubebuilder:object:root=true
// EphemeralRunnerList contains a list of EphemeralRunner
type EphemeralRunnerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []EphemeralRunner `json:"items"`
}
func init() {
SchemeBuilder.Register(&EphemeralRunner{}, &EphemeralRunnerList{})
}
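
Per the Phase comments above, only PodSucceeded and PodFailed are terminal for an EphemeralRunner. A minimal sketch of that rule (isTerminal is a hypothetical helper, not part of this API):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isTerminal reports whether a phase needs no further reconciliation:
// PodSucceeded means the job ran and the runner was removed from the
// service; PodFailed means startup failed repeatedly and manual
// inspection is required.
func isTerminal(phase corev1.PodPhase) bool {
	return phase == corev1.PodSucceeded || phase == corev1.PodFailed
}

func main() {
	fmt.Println(isTerminal(corev1.PodSucceeded)) // true
	fmt.Println(isTerminal(corev1.PodRunning))   // false
}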

View File

@@ -0,0 +1,75 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EphemeralRunnerSetSpec defines the desired state of EphemeralRunnerSet
type EphemeralRunnerSetSpec struct {
// Replicas is the number of desired EphemeralRunner resources in the k8s namespace.
Replicas int `json:"replicas,omitempty"`
EphemeralRunnerSpec EphemeralRunnerSpec `json:"ephemeralRunnerSpec,omitempty"`
}
// EphemeralRunnerSetStatus defines the observed state of EphemeralRunnerSet
type EphemeralRunnerSetStatus struct {
// CurrentReplicas is the number of currently running EphemeralRunner resources being managed by this EphemeralRunnerSet.
CurrentReplicas int `json:"currentReplicas"`
// EphemeralRunner counts, grouped by the stage the ephemeral runners are in
// +optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name="DesiredReplicas",type="integer"
// +kubebuilder:printcolumn:JSONPath=".status.currentReplicas", name="CurrentReplicas",type="integer"
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// EphemeralRunnerSet is the Schema for the ephemeralrunnersets API
type EphemeralRunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec EphemeralRunnerSetSpec `json:"spec,omitempty"`
Status EphemeralRunnerSetStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// EphemeralRunnerSetList contains a list of EphemeralRunnerSet
type EphemeralRunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []EphemeralRunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&EphemeralRunnerSet{}, &EphemeralRunnerSetList{})
}
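
Replicas is the desired count and CurrentReplicas the observed one, so the reconciler's core decision is their difference. A hedged sketch (scaleDelta is a hypothetical helper, not part of this API):

package main

import (
	"fmt"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

// scaleDelta: a positive result means the reconciler should create that
// many EphemeralRunner resources; a negative one means it should delete.
func scaleDelta(spec v1alpha1.EphemeralRunnerSetSpec, status v1alpha1.EphemeralRunnerSetStatus) int {
	return spec.Replicas - status.CurrentReplicas
}

func main() {
	spec := v1alpha1.EphemeralRunnerSetSpec{Replicas: 5}
	status := v1alpha1.EphemeralRunnerSetStatus{CurrentReplicas: 3}
	fmt.Println(scaleDelta(spec, status)) // 2: two more runners needed
}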

View File

@@ -0,0 +1,36 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the actions.github.com v1alpha1 API group
// +kubebuilder:object:generate=true
// +groupName=actions.github.com
package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
var (
// GroupVersion is group version used to register these objects
GroupVersion = schema.GroupVersion{Group: "actions.github.com", Version: "v1alpha1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)
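
Any client or manager has to register this group-version before it can decode these kinds. A minimal sketch using controller-runtime (kubeconfig resolution left to GetConfigOrDie):

package main

import (
	"fmt"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	scheme := runtime.NewScheme()
	// Register actions.github.com/v1alpha1 so the client can encode and
	// decode AutoscalingRunnerSet, EphemeralRunner, and friends.
	if err := v1alpha1.AddToScheme(scheme); err != nil {
		panic(err)
	}
	c, err := client.New(config.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}
	fmt.Println(c != nil)
}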

View File

@@ -0,0 +1,118 @@
package v1alpha1_test
import (
"net/http"
"testing"
corev1 "k8s.io/api/core/v1"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestProxyConfig_ToSecret(t *testing.T) {
config := &v1alpha1.ProxyConfig{
HTTP: &v1alpha1.ProxyServerConfig{
Url: "http://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
HTTPS: &v1alpha1.ProxyServerConfig{
Url: "https://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
NoProxy: []string{
"noproxy.example.com",
"noproxy2.example.com",
},
}
secretFetcher := func(string) (*corev1.Secret, error) {
return &corev1.Secret{
Data: map[string][]byte{
"username": []byte("username"),
"password": []byte("password"),
},
}, nil
}
result, err := config.ToSecretData(secretFetcher)
require.NoError(t, err)
require.NotNil(t, result)
assert.Equal(t, "http://username:password@proxy.example.com:8080", string(result["http_proxy"]))
assert.Equal(t, "https://username:password@proxy.example.com:8080", string(result["https_proxy"]))
assert.Equal(t, "noproxy.example.com,noproxy2.example.com", string(result["no_proxy"]))
}
func TestProxyConfig_ProxyFunc(t *testing.T) {
config := &v1alpha1.ProxyConfig{
HTTP: &v1alpha1.ProxyServerConfig{
Url: "http://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
HTTPS: &v1alpha1.ProxyServerConfig{
Url: "https://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
NoProxy: []string{
"noproxy.example.com",
"noproxy2.example.com",
},
}
secretFetcher := func(string) (*corev1.Secret, error) {
return &corev1.Secret{
Data: map[string][]byte{
"username": []byte("username"),
"password": []byte("password"),
},
}, nil
}
result, err := config.ProxyFunc(secretFetcher)
require.NoError(t, err)
tests := []struct {
name string
in string
out string
}{
{
name: "http target",
in: "http://target.com",
out: "http://username:password@proxy.example.com:8080",
},
{
name: "https target",
in: "https://target.com",
out: "https://username:password@proxy.example.com:8080",
},
{
name: "no proxy",
in: "https://noproxy.example.com",
out: "",
},
{
name: "no proxy 2",
in: "https://noproxy2.example.com",
out: "",
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", test.in, nil)
require.NoError(t, err)
u, err := result(req)
require.NoError(t, err)
if test.out == "" {
assert.Nil(t, u)
return
}
assert.Equal(t, test.out, u.String())
})
}
}
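
The http_proxy/https_proxy/no_proxy keys that ToSecretData produces map naturally onto the conventional proxy environment variables. A hedged sketch of how a pod could reference them (the helper and the secret name are illustrative, not the controller's actual wiring):

package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// proxyEnv builds container env vars that read each proxy key from a
// secret populated via ToSecretData.
func proxyEnv(secretName string) []corev1.EnvVar {
	var env []corev1.EnvVar
	for _, key := range []string{"http_proxy", "https_proxy", "no_proxy"} {
		env = append(env, corev1.EnvVar{
			Name: strings.ToUpper(key),
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					Key:                  key,
				},
			},
		})
	}
	return env
}

func main() {
	for _, e := range proxyEnv("runner-proxy") {
		fmt.Println(e.Name) // HTTP_PROXY, HTTPS_PROXY, NO_PROXY
	}
}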

View File

@@ -0,0 +1,105 @@
package v1alpha1_test
import (
"crypto/tls"
"crypto/x509"
"net/http"
"os"
"path/filepath"
"testing"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/github/actions/testserver"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
)
func TestGitHubServerTLSConfig_ToCertPool(t *testing.T) {
t.Run("returns an error if CertificateFrom not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: nil,
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "certificateFrom not specified")
})
t.Run("returns an error if CertificateFrom.ConfigMapKeyRef not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{},
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "configMapKeyRef not specified")
})
t.Run("returns a valid cert pool with correct configuration", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{
ConfigMapKeyRef: &v1.ConfigMapKeySelector{
LocalObjectReference: v1.LocalObjectReference{
Name: "name",
},
Key: "key",
},
},
}
certsFolder := filepath.Join(
"../../../",
"github",
"actions",
"testdata",
)
fetcher := func(name, key string) ([]byte, error) {
cert, err := os.ReadFile(filepath.Join(certsFolder, "rootCA.crt"))
require.NoError(t, err)
pool := x509.NewCertPool()
ok := pool.AppendCertsFromPEM(cert)
assert.True(t, ok)
return cert, nil
}
pool, err := c.ToCertPool(fetcher)
require.NoError(t, err)
require.NotNil(t, pool)
// can be used to communicate with a server
serverSuccessfullyCalled := false
server := testserver.NewUnstarted(t, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
serverSuccessfullyCalled = true
w.WriteHeader(http.StatusOK)
}))
cert, err := tls.LoadX509KeyPair(
filepath.Join(certsFolder, "server.crt"),
filepath.Join(certsFolder, "server.key"),
)
require.NoError(t, err)
server.TLS = &tls.Config{Certificates: []tls.Certificate{cert}}
server.StartTLS()
client := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
},
},
}
_, err = client.Get(server.URL)
assert.NoError(t, err)
assert.True(t, serverSuccessfullyCalled)
})
}
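
The test fakes the fetcher by reading certificates from disk; in a cluster the same signature can be satisfied by reading the ConfigMap through client-go. A hedged sketch (client construction and namespace wiring elided; this may differ from the controller's real implementation):

package certs

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// configMapFetcher returns a fetcher compatible with ToCertPool that
// reads the certificate bytes from a ConfigMap key in namespace ns.
func configMapFetcher(ctx context.Context, clientset kubernetes.Interface, ns string) func(name, key string) ([]byte, error) {
	return func(name, key string) ([]byte, error) {
		cm, err := clientset.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		v, ok := cm.Data[key]
		if !ok {
			return nil, fmt.Errorf("key %q not found in configmap %q", key, name)
		}
		return []byte(v), nil
	}
}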

View File

@@ -0,0 +1,523 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
"k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListener) DeepCopyInto(out *AutoscalingListener) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListener.
func (in *AutoscalingListener) DeepCopy() *AutoscalingListener {
if in == nil {
return nil
}
out := new(AutoscalingListener)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingListener) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerList) DeepCopyInto(out *AutoscalingListenerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]AutoscalingListener, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerList.
func (in *AutoscalingListenerList) DeepCopy() *AutoscalingListenerList {
if in == nil {
return nil
}
out := new(AutoscalingListenerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingListenerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerSpec) DeepCopyInto(out *AutoscalingListenerSpec) {
*out = *in
if in.ImagePullSecrets != nil {
in, out := &in.ImagePullSecrets, &out.ImagePullSecrets
*out = make([]v1.LocalObjectReference, len(*in))
copy(*out, *in)
}
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerSpec.
func (in *AutoscalingListenerSpec) DeepCopy() *AutoscalingListenerSpec {
if in == nil {
return nil
}
out := new(AutoscalingListenerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerStatus) DeepCopyInto(out *AutoscalingListenerStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerStatus.
func (in *AutoscalingListenerStatus) DeepCopy() *AutoscalingListenerStatus {
if in == nil {
return nil
}
out := new(AutoscalingListenerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSet) DeepCopyInto(out *AutoscalingRunnerSet) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSet.
func (in *AutoscalingRunnerSet) DeepCopy() *AutoscalingRunnerSet {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingRunnerSet) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetList) DeepCopyInto(out *AutoscalingRunnerSetList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]AutoscalingRunnerSet, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetList.
func (in *AutoscalingRunnerSetList) DeepCopy() *AutoscalingRunnerSetList {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingRunnerSetList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetSpec) DeepCopyInto(out *AutoscalingRunnerSetSpec) {
*out = *in
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
if in.MaxRunners != nil {
in, out := &in.MaxRunners, &out.MaxRunners
*out = new(int)
**out = **in
}
if in.MinRunners != nil {
in, out := &in.MinRunners, &out.MinRunners
*out = new(int)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetSpec.
func (in *AutoscalingRunnerSetSpec) DeepCopy() *AutoscalingRunnerSetSpec {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetStatus) DeepCopyInto(out *AutoscalingRunnerSetStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetStatus.
func (in *AutoscalingRunnerSetStatus) DeepCopy() *AutoscalingRunnerSetStatus {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunner) DeepCopyInto(out *EphemeralRunner) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunner.
func (in *EphemeralRunner) DeepCopy() *EphemeralRunner {
if in == nil {
return nil
}
out := new(EphemeralRunner)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunner) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerList) DeepCopyInto(out *EphemeralRunnerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]EphemeralRunner, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerList.
func (in *EphemeralRunnerList) DeepCopy() *EphemeralRunnerList {
if in == nil {
return nil
}
out := new(EphemeralRunnerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSet) DeepCopyInto(out *EphemeralRunnerSet) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSet.
func (in *EphemeralRunnerSet) DeepCopy() *EphemeralRunnerSet {
if in == nil {
return nil
}
out := new(EphemeralRunnerSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerSet) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetList) DeepCopyInto(out *EphemeralRunnerSetList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]EphemeralRunnerSet, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetList.
func (in *EphemeralRunnerSetList) DeepCopy() *EphemeralRunnerSetList {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerSetList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetSpec) DeepCopyInto(out *EphemeralRunnerSetSpec) {
*out = *in
in.EphemeralRunnerSpec.DeepCopyInto(&out.EphemeralRunnerSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetSpec.
func (in *EphemeralRunnerSetSpec) DeepCopy() *EphemeralRunnerSetSpec {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetStatus) DeepCopyInto(out *EphemeralRunnerSetStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetStatus.
func (in *EphemeralRunnerSetStatus) DeepCopy() *EphemeralRunnerSetStatus {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSpec) DeepCopyInto(out *EphemeralRunnerSpec) {
*out = *in
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
in.PodTemplateSpec.DeepCopyInto(&out.PodTemplateSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSpec.
func (in *EphemeralRunnerSpec) DeepCopy() *EphemeralRunnerSpec {
if in == nil {
return nil
}
out := new(EphemeralRunnerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerStatus) DeepCopyInto(out *EphemeralRunnerStatus) {
*out = *in
if in.Failures != nil {
in, out := &in.Failures, &out.Failures
*out = make(map[string]bool, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerStatus.
func (in *EphemeralRunnerStatus) DeepCopy() *EphemeralRunnerStatus {
if in == nil {
return nil
}
out := new(EphemeralRunnerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubServerTLSConfig) DeepCopyInto(out *GitHubServerTLSConfig) {
*out = *in
if in.CertificateFrom != nil {
in, out := &in.CertificateFrom, &out.CertificateFrom
*out = new(TLSCertificateSource)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubServerTLSConfig.
func (in *GitHubServerTLSConfig) DeepCopy() *GitHubServerTLSConfig {
if in == nil {
return nil
}
out := new(GitHubServerTLSConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProxyConfig) DeepCopyInto(out *ProxyConfig) {
*out = *in
if in.HTTP != nil {
in, out := &in.HTTP, &out.HTTP
*out = new(ProxyServerConfig)
**out = **in
}
if in.HTTPS != nil {
in, out := &in.HTTPS, &out.HTTPS
*out = new(ProxyServerConfig)
**out = **in
}
if in.NoProxy != nil {
in, out := &in.NoProxy, &out.NoProxy
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyConfig.
func (in *ProxyConfig) DeepCopy() *ProxyConfig {
if in == nil {
return nil
}
out := new(ProxyConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProxyServerConfig) DeepCopyInto(out *ProxyServerConfig) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyServerConfig.
func (in *ProxyServerConfig) DeepCopy() *ProxyServerConfig {
if in == nil {
return nil
}
out := new(ProxyServerConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TLSCertificateSource) DeepCopyInto(out *TLSCertificateSource) {
*out = *in
if in.ConfigMapKeyRef != nil {
in, out := &in.ConfigMapKeyRef, &out.ConfigMapKeyRef
*out = new(v1.ConfigMapKeySelector)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TLSCertificateSource.
func (in *TLSCertificateSource) DeepCopy() *TLSCertificateSource {
if in == nil {
return nil
}
out := new(TLSCertificateSource)
in.DeepCopyInto(out)
return out
}
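
Though autogenerated, these helpers matter in practice: objects read from an informer cache are shared, so reconcilers must deep-copy before mutating. A small sketch of the guarantee:

package main

import (
	"fmt"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func main() {
	orig := &v1alpha1.ProxyConfig{NoProxy: []string{"internal.example.com"}}
	cp := orig.DeepCopy()
	// The copy owns its own slice; appending to it leaves orig untouched.
	cp.NoProxy = append(cp.NoProxy, "extra.example.com")
	fmt.Println(len(orig.NoProxy), len(cp.NoProxy)) // 1 2
}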

View File

@@ -60,6 +60,9 @@ type HorizontalRunnerAutoscalerSpec struct {
// The earlier a scheduled override is, the higher it is prioritized.
// +optional
ScheduledOverrides []ScheduledOverride `json:"scheduledOverrides,omitempty"`
// +optional
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type ScaleUpTrigger struct {
@@ -130,7 +133,7 @@ type ScaleTargetRef struct {
type MetricSpec struct {
// Type is the type of metric to be used for autoscaling.
// The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
// It can be TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy.
Type string `json:"type,omitempty"`
// RepositoryNames is the list of repository names to be used for calculating the metric.
@@ -170,7 +173,7 @@ type MetricSpec struct {
}
// ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule.
// A schedule can optionally be recurring, so that the correspoding override happens every day, week, month, or year.
// A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
type ScheduledOverride struct {
// StartTime is the time at which the first override starts.
StartTime metav1.Time `json:"startTime"`

View File

@@ -76,6 +76,16 @@ type RunnerConfig struct {
// +optional
ContainerMode string `json:"containerMode,omitempty"`
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type GitHubAPICredentialsFrom struct {
SecretRef SecretReference `json:"secretRef,omitempty"`
}
type SecretReference struct {
Name string `json:"name"`
}
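
GitHubAPICredentialsFrom lets an individual resource carry its own GitHub API credentials secret rather than relying on the controller-wide default. A hedged sketch of setting it on a HorizontalRunnerAutoscaler spec (the import path is an assumption; point it at wherever the summerwind v1alpha1 types live in your tree):

package main

import (
	"fmt"

	// Import path assumed for illustration.
	"github.com/actions/actions-runner-controller/apis/actions.summerwind.net/v1alpha1"
)

func main() {
	spec := v1alpha1.HorizontalRunnerAutoscalerSpec{
		// Route this autoscaler's GitHub API calls through the named
		// secret instead of the controller-wide credentials.
		GitHubAPICredentialsFrom: &v1alpha1.GitHubAPICredentialsFrom{
			SecretRef: v1alpha1.SecretReference{Name: "my-github-app-credentials"},
		},
	}
	fmt.Println(spec.GitHubAPICredentialsFrom.SecretRef.Name)
}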
// RunnerPodSpec defines the desired pod spec fields of the runner pod
@@ -160,6 +170,9 @@ type RunnerPodSpec struct {
// +optional
RuntimeClassName *string `json:"runtimeClassName,omitempty"`
// +optional
DnsPolicy corev1.DNSPolicy `json:"dnsPolicy,omitempty"`
// +optional
DnsConfig *corev1.PodDNSConfig `json:"dnsConfig,omitempty"`
@@ -183,11 +196,6 @@ func (rs *RunnerSpec) Validate(rootPath *field.Path) field.ErrorList {
errList = append(errList, field.Invalid(rootPath.Child("workVolumeClaimTemplate"), rs.WorkVolumeClaimTemplate, err.Error()))
}
err = rs.validateIsServiceAccountNameSet()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("serviceAccountName"), rs.ServiceAccountName, err.Error()))
}
return errList
}
@@ -226,17 +234,6 @@ func (rs *RunnerSpec) validateWorkVolumeClaimTemplate() error {
return rs.WorkVolumeClaimTemplate.validate()
}
func (rs *RunnerSpec) validateIsServiceAccountNameSet() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.ServiceAccountName == "" {
return errors.New("service account name is required if container mode is kubernetes")
}
return nil
}
// RunnerStatus defines the observed state of Runner
type RunnerStatus struct {
// Turns true only if the runner pod is ready.
@@ -251,10 +248,60 @@ type RunnerStatus struct {
// +optional
Message string `json:"message,omitempty"`
// +optional
WorkflowStatus *WorkflowStatus `json:"workflow"`
// +optional
// +nullable
LastRegistrationCheckTime *metav1.Time `json:"lastRegistrationCheckTime,omitempty"`
}
// WorkflowStatus contains various information that is propagated
// from GitHub Actions workflow run environment variables to
// ease monitoring of workflow runs/jobs/steps that are triggered on the runner.
type WorkflowStatus struct {
// +optional
// Name is the name of the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_WORKFLOW defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Name string `json:"name,omitempty"`
// +optional
// Repository is the owner and repository name of the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_REPOSITORY defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Repository string `json:"repository,omitempty"`
// +optional
// RepositoryOwner is the repository owner's name for the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_REPOSITORY_OWNER defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RepositoryOwner string `json:"repositoryOwner,omitempty"`
// +optional
// RunNumber is the unique number for the current workflow run
// that is triggered within the runner.
// It corresponds to GITHUB_RUN_NUMBER defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RunNumber string `json:"runNumber,omitempty"`
// +optional
// RunID is the unique number for the current workflow run
// that is triggered within the runner.
// It corresponds to GITHUB_RUN_ID defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RunID string `json:"runID,omitempty"`
// +optional
// Job is the name of the current job
// that is triggered within the runner.
// It corresponds to GITHUB_JOB defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Job string `json:"job,omitempty"`
// +optional
// Action is the name of the current action or the step ID of the current step
// that is triggered within the runner.
// It corresponds to GITHUB_ACTION defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Action string `json:"action,omitempty"`
}
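
Each field above mirrors one GITHUB_* environment variable, so the mapping can be read straight off the comments. A hedged sketch using a local stand-in struct (the controller's actual propagation from the runner pod to the status is more involved):

package main

import (
	"fmt"
	"os"
)

// workflowStatus is a local stand-in mirroring WorkflowStatus.
type workflowStatus struct {
	Name, Repository, RepositoryOwner, RunNumber, RunID, Job, Action string
}

func workflowStatusFromEnv() workflowStatus {
	return workflowStatus{
		Name:            os.Getenv("GITHUB_WORKFLOW"),
		Repository:      os.Getenv("GITHUB_REPOSITORY"),
		RepositoryOwner: os.Getenv("GITHUB_REPOSITORY_OWNER"),
		RunNumber:       os.Getenv("GITHUB_RUN_NUMBER"),
		RunID:           os.Getenv("GITHUB_RUN_ID"),
		Job:             os.Getenv("GITHUB_JOB"),
		Action:          os.Getenv("GITHUB_ACTION"),
	}
}

func main() {
	fmt.Printf("%+v\n", workflowStatusFromEnv())
}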
// RunnerStatusRegistration contains runner registration status
type RunnerStatusRegistration struct {
Enterprise string `json:"enterprise,omitempty"`
@@ -315,8 +362,12 @@ func (w *WorkVolumeClaimTemplate) V1VolumeMount(mountPath string) corev1.VolumeM
// +kubebuilder:printcolumn:JSONPath=".spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.group",name=Group,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflow.repository",name=WF Repo,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflow.runID",name=WF Run,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// Runner is the Schema for the runners API
@@ -338,11 +389,7 @@ func (r Runner) IsRegisterable() bool {
}
now := metav1.Now()
if r.Status.Registration.ExpiresAt.Before(&now) {
return false
}
return true
return !r.Status.Registration.ExpiresAt.Before(&now)
}
// +kubebuilder:object:root=true

View File

@@ -33,7 +33,7 @@ type RunnerDeploymentSpec struct {
// EffectiveTime is the time the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA.
// The value is inherited to RunnerRepicaSet(s) and used to prevent ephemeral runners from unnecessarily recreated.
// The value is inherited to RunnerReplicaSet(s) and used to prevent ephemeral runners from unnecessarily recreated.
//
// +optional
// +nullable

View File

@@ -90,6 +90,22 @@ func (in *CheckRunSpec) DeepCopy() *CheckRunSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubAPICredentialsFrom) DeepCopyInto(out *GitHubAPICredentialsFrom) {
*out = *in
out.SecretRef = in.SecretRef
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubAPICredentialsFrom.
func (in *GitHubAPICredentialsFrom) DeepCopy() *GitHubAPICredentialsFrom {
if in == nil {
return nil
}
out := new(GitHubAPICredentialsFrom)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubEventScaleUpTriggerSpec) DeepCopyInto(out *GitHubEventScaleUpTriggerSpec) {
*out = *in
@@ -231,6 +247,11 @@ func (in *HorizontalRunnerAutoscalerSpec) DeepCopyInto(out *HorizontalRunnerAuto
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerSpec.
@@ -425,6 +446,11 @@ func (in *RunnerConfig) DeepCopyInto(out *RunnerConfig) {
*out = new(string)
**out = **in
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerConfig.
@@ -1023,6 +1049,11 @@ func (in *RunnerSpec) DeepCopy() *RunnerSpec {
func (in *RunnerStatus) DeepCopyInto(out *RunnerStatus) {
*out = *in
in.Registration.DeepCopyInto(&out.Registration)
if in.WorkflowStatus != nil {
in, out := &in.WorkflowStatus, &out.WorkflowStatus
*out = new(WorkflowStatus)
**out = **in
}
if in.LastRegistrationCheckTime != nil {
in, out := &in.LastRegistrationCheckTime, &out.LastRegistrationCheckTime
*out = (*in).DeepCopy()
@@ -1136,6 +1167,21 @@ func (in *ScheduledOverride) DeepCopy() *ScheduledOverride {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretReference) DeepCopyInto(out *SecretReference) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SecretReference.
func (in *SecretReference) DeepCopy() *SecretReference {
if in == nil {
return nil
}
out := new(SecretReference)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkVolumeClaimTemplate) DeepCopyInto(out *WorkVolumeClaimTemplate) {
*out = *in
@@ -1171,3 +1217,18 @@ func (in *WorkflowJobSpec) DeepCopy() *WorkflowJobSpec {
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkflowStatus) DeepCopyInto(out *WorkflowStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkflowStatus.
func (in *WorkflowStatus) DeepCopy() *WorkflowStatus {
if in == nil {
return nil
}
out := new(WorkflowStatus)
in.DeepCopyInto(out)
return out
}

build/version.go Normal file
View File

@@ -0,0 +1,4 @@
package build
// This is overridden at build-time using go-build ldflags. "NA" is the fallback value.
var Version = "NA"
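
At build time the placeholder is replaced through the Go linker's -X flag, for example `go build -ldflags "-X github.com/actions/actions-runner-controller/build.Version=v0.27.0" .` (the import path is inferred from the repository's module path and the `build` package name above).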

View File

@@ -0,0 +1,9 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/gha-runner-scale-set-controller
- charts/gha-runner-scale-set
skip-clean-up: true

View File

@@ -3,3 +3,5 @@ lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/actions-runner-controller

View File

@@ -15,15 +15,15 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.20.0
version: 0.22.0
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.25.0
appVersion: 0.27.0
home: https://github.com/actions-runner-controller/actions-runner-controller
home: https://github.com/actions/actions-runner-controller
sources:
- https://github.com/actions-runner-controller/actions-runner-controller
- https://github.com/actions/actions-runner-controller
maintainers:
- name: actions-runner-controller

View File

@@ -4,108 +4,148 @@ All additional docs are kept in the `docs/` folder; this README is solely for documenting the values
## Values
**_The values are documented as of HEAD, to review the configuration options for your chart version ensure you view this file at the relevant [tag](https://github.com/actions-runner-controller/actions-runner-controller/tags)_**
**_The values are documented as of HEAD, to review the configuration options for your chart version ensure you view this file at the relevant [tag](https://github.com/actions/actions-runner-controller/tags)_**
> _Default values are the defaults set in the chart's `values.yaml`; some properties have default configurations in the code for when the property is omitted or invalid_
| Key | Description | Default |
|----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------|
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of controller pods | 1 |
| `webhookPort` | Set the containerPort for the webhook Pod | 9443 |
| `syncPeriod`                                              | Set the period in which the controller reconciles the desired runners count                                                 | 10m                                                                    |
| `enableLeaderElection` | Enable election configuration | true |
| `leaderElectionId` | Set the election ID for the controller group | |
| `githubEnterpriseServerURL` | Set the URL for a self-hosted GitHub Enterprise Server | |
| `githubURL` | Override GitHub URL to be used for GitHub API calls | |
| `githubUploadURL` | Override GitHub Upload URL to be used for GitHub API calls | |
| `runnerGithubURL` | Override GitHub URL to be used by runners during registration | |
| `logLevel` | Set the log level of the controller container | |
| `additionalVolumes` | Set additional volumes to add to the manager container | |
| `additionalVolumeMounts` | Set additional volume mounts to add to the manager container | |
| `authSecret.create` | Deploy the controller auth secret | false |
| `authSecret.name` | Set the name of the auth secret | controller-manager |
| `authSecret.annotations` | Set annotations for the auth Secret | |
| `authSecret.github_app_id` | The ID of your GitHub App. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_installation_id` | The ID of your GitHub App installation. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_private_key` | The multiline string of your GitHub App's private key. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_token` | Your chosen GitHub PAT token. **This can't be set at the same time as the `authSecret.github_app_*`** | |
| `authSecret.github_basicauth_username` | Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `authSecret.github_basicauth_password` | Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `dockerRegistryMirror` | The default Docker Registry Mirror used by runners. | |
| `hostNetwork` | The "hostNetwork" of the controller container | false |
| `image.repository` | The "repository/image" of the controller container | summerwind/actions-runner-controller |
| `image.tag` | The tag of the controller container | |
| `image.actionsRunnerRepositoryAndTag` | The "repository/image" of the actions runner container | summerwind/actions-runner:latest |
| `image.actionsRunnerImagePullSecrets` | Optional image pull secrets to be included in the runner pod's ImagePullSecrets | |
| `image.dindSidecarRepositoryAndTag` | The "repository/image" of the dind sidecar container | docker:dind |
| `image.pullPolicy` | The pull policy of the controller image | IfNotPresent |
| `metrics.serviceMonitor`                                  | Deploy serviceMonitor kind for use with prometheus-operator CRDs                                                            | false                                                                  |
| `metrics.serviceAnnotations` | Set annotations for the provisioned metrics service resource | |
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `metrics.proxy.image.repository` | The "repository/image" of the kube-proxy container | quay.io/brancz/kube-rbac-proxy |
| `metrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.10.0 |
| `metrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
| `imagePullSecrets` | Specifies the secret to be used when pulling the controller pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `serviceAccount.annotations` | Set annotations to the service account | |
| `serviceAccount.create` | Deploy the controller pod under a service account | true |
| `podAnnotations` | Set annotations for the controller pod | |
| `podLabels` | Set labels for the controller pod | |
| `serviceAccount.name` | Set the name of the service account | |
| `securityContext` | Set the security context for each container in the controller pod | |
| `podSecurityContext` | Set the security context to controller pod | |
| `service.annotations` | Set annotations for the provisioned webhook service resource | |
| `service.port` | Set controller service ports | |
| `service.type` | Set controller service type | |
| `topologySpreadConstraints` | Set the controller pod topologySpreadConstraints | |
| `nodeSelector` | Set the controller pod nodeSelector | |
| `resources` | Set the controller pod resources | |
| `affinity` | Set the controller pod affinity rules | |
| `podDisruptionBudget.enabled` | Enables a PDB to ensure HA of controller pods | false |
| `podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `tolerations` | Set the controller pod tolerations | |
| `env` | Set environment variables for the controller container | |
| `priorityClassName` | Set the controller pod priorityClassName | |
| `scope.watchNamespace` | Tells the controller and the github webhook server which namespace to watch if `scope.singleNamespace` is true | `Release.Namespace` (the default namespace of the helm chart). |
| `scope.singleNamespace` | Limit the controller to watch a single namespace | false |
| `certManagerEnabled` | Enable cert-manager. If disabled you must set admissionWebHooks.caBundle and create TLS secrets manually | true |
| `admissionWebHooks.caBundle` | Base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate | |
| `githubWebhookServer.logLevel` | Set the log level of the githubWebhookServer container | |
| `githubWebhookServer.replicaCount` | Set the number of webhook server pods | 1 |
| `githubWebhookServer.useRunnerGroupsVisibility`           | Enable supporting runner groups with custom visibility. This will incur extra API calls and may blow up your budget. Currently, you also need to set `githubWebhookServer.secret.enabled` to enable this feature. | false |
| `githubWebhookServer.syncPeriod` | Set the period in which the controller reconciles the resources | 10m |
| `githubWebhookServer.enabled` | Deploy the webhook server pod | false |
| `githubWebhookServer.secret.enabled` | Passes the webhook hook secret to the github-webhook-server | false |
| `githubWebhookServer.secret.create` | Deploy the webhook hook secret | false |
| `githubWebhookServer.secret.name` | Set the name of the webhook hook secret | github-webhook-server |
| `githubWebhookServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `githubWebhookServer.imagePullSecrets` | Specifies the secret to be used when pulling the githubWebhookServer pod containers | |
| `githubWebhookServer.nameOverride` | Override the resource name prefix | |
| `githubWebhookServer.fullnameOverride` | Override the full resource names | |
| `githubWebhookServer.serviceAccount.create` | Deploy the githubWebhookServer under a service account | true |
| `githubWebhookServer.serviceAccount.annotations` | Set annotations for the service account | |
| `githubWebhookServer.serviceAccount.name` | Set the service account name | |
| `githubWebhookServer.podAnnotations` | Set annotations for the githubWebhookServer pod | |
| `githubWebhookServer.podLabels` | Set labels for the githubWebhookServer pod | |
| `githubWebhookServer.podSecurityContext` | Set the security context to githubWebhookServer pod | |
| `githubWebhookServer.securityContext` | Set the security context for each container in the githubWebhookServer pod | |
| `githubWebhookServer.resources` | Set the githubWebhookServer pod resources | |
| `githubWebhookServer.topologySpreadConstraints` | Set the githubWebhookServer pod topologySpreadConstraints | |
| `githubWebhookServer.nodeSelector` | Set the githubWebhookServer pod nodeSelector | |
| `githubWebhookServer.tolerations` | Set the githubWebhookServer pod tolerations | |
| `githubWebhookServer.affinity` | Set the githubWebhookServer pod affinity rules | |
| `githubWebhookServer.priorityClassName` | Set the githubWebhookServer pod priorityClassName | |
| `githubWebhookServer.service.type` | Set githubWebhookServer service type | |
| `githubWebhookServer.service.ports` | Set githubWebhookServer service ports | `[{"port":80, "targetPort:"http", "protocol":"TCP", "name":"http"}]` |
| `githubWebhookServer.ingress.enabled` | Deploy an ingress kind for the githubWebhookServer | false |
| `githubWebhookServer.ingress.annotations` | Set annotations for the ingress kind | |
| `githubWebhookServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `githubWebhookServer.ingress.tls` | Set tls configuration for ingress | |
| `githubWebhookServer.ingress.ingressClassName` | Set ingress class name | |
| `githubWebhookServer.podDisruptionBudget.enabled` | Enables a PDB to ensure HA of githubwebhook pods | false |
| `githubWebhookServer.podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `githubWebhookServer.podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| Key | Description | Default |
|----------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of controller pods | 1 |
| `webhookPort` | Set the containerPort for the webhook Pod | 9443 |
| `syncPeriod` | Set the period in which the controller reconciles the desired runners count | 1m |
| `enableLeaderElection` | Enable election configuration | true |
| `leaderElectionId` | Set the election ID for the controller group | |
| `githubEnterpriseServerURL` | Set the URL for a self-hosted GitHub Enterprise Server | |
| `githubURL` | Override GitHub URL to be used for GitHub API calls | |
| `githubUploadURL` | Override GitHub Upload URL to be used for GitHub API calls | |
| `runnerGithubURL` | Override GitHub URL to be used by runners during registration | |
| `logLevel` | Set the log level of the controller container | |
| `logFormat` | Set the log format of the controller. Valid options are "text" and "json" | text |
| `additionalVolumes` | Set additional volumes to add to the manager container | |
| `additionalVolumeMounts` | Set additional volume mounts to add to the manager container | |
| `authSecret.create` | Deploy the controller auth secret | false |
| `authSecret.name` | Set the name of the auth secret | controller-manager |
| `authSecret.annotations` | Set annotations for the auth Secret | |
| `authSecret.github_app_id` | The ID of your GitHub App. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_installation_id` | The ID of your GitHub App installation. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_private_key` | The multiline string of your GitHub App's private key. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_token` | Your chosen GitHub PAT token. **This can't be set at the same time as the `authSecret.github_app_*`** | |
| `authSecret.github_basicauth_username` | Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `authSecret.github_basicauth_password` | Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `dockerRegistryMirror` | The default Docker Registry Mirror used by runners. | |
| `hostNetwork` | The "hostNetwork" of the controller container | false |
| `image.repository` | The "repository/image" of the controller container | summerwind/actions-runner-controller |
| `image.tag` | The tag of the controller container | |
| `image.actionsRunnerRepositoryAndTag` | The "repository/image" of the actions runner container | summerwind/actions-runner:latest |
| `image.actionsRunnerImagePullSecrets` | Optional image pull secrets to be included in the runner pod's ImagePullSecrets | |
| `image.dindSidecarRepositoryAndTag` | The "repository/image" of the dind sidecar container | docker:dind |
| `image.pullPolicy` | The pull policy of the controller image | IfNotPresent |
| `metrics.serviceMonitor`                                  | Deploy serviceMonitor kind for use with prometheus-operator CRDs                                                                            | false                                                                                             |
| `metrics.serviceAnnotations` | Set annotations for the provisioned metrics service resource | |
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `metrics.proxy.image.repository` | The "repository/image" of the kube-proxy container | quay.io/brancz/kube-rbac-proxy |
| `metrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.10.0 |
| `metrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
| `imagePullSecrets` | Specifies the secret to be used when pulling the controller pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `serviceAccount.annotations` | Set annotations to the service account | |
| `serviceAccount.create` | Deploy the controller pod under a service account | true |
| `podAnnotations` | Set annotations for the controller pod | |
| `podLabels` | Set labels for the controller pod | |
| `serviceAccount.name` | Set the name of the service account | |
| `securityContext` | Set the security context for each container in the controller pod | |
| `podSecurityContext` | Set the security context to controller pod | |
| `service.annotations` | Set annotations for the provisioned webhook service resource | |
| `service.port` | Set controller service ports | |
| `service.type` | Set controller service type | |
| `topologySpreadConstraints` | Set the controller pod topologySpreadConstraints | |
| `nodeSelector` | Set the controller pod nodeSelector | |
| `resources` | Set the controller pod resources | |
| `affinity` | Set the controller pod affinity rules | |
| `podDisruptionBudget.enabled` | Enables a PDB to ensure HA of controller pods | false |
| `podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `tolerations` | Set the controller pod tolerations | |
| `env` | Set environment variables for the controller container | |
| `priorityClassName` | Set the controller pod priorityClassName | |
| `scope.watchNamespace` | Tells the controller and the github webhook server which namespace to watch if `scope.singleNamespace` is true | `Release.Namespace` (the default namespace of the helm chart). |
| `scope.singleNamespace` | Limit the controller to watch a single namespace | false |
| `certManagerEnabled` | Enable cert-manager. If disabled you must set admissionWebHooks.caBundle and create TLS secrets manually | true |
| `runner.statusUpdateHook.enabled` | Use custom RBAC for runners (role, role binding and service account), this will enable reporting runner statuses | false |
| `admissionWebHooks.caBundle` | Base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate | |
| `githubWebhookServer.logLevel` | Set the log level of the githubWebhookServer container | |
| `githubWebhookServer.logFormat` | Set the log format of the githubWebhookServer controller. Valid options are "text" and "json" | text |
| `githubWebhookServer.replicaCount` | Set the number of webhook server pods | 1 |
| `githubWebhookServer.useRunnerGroupsVisibility` | Enable support for runner groups with custom visibility; `githubWebhookServer.secret.enabled` must also be set to enable this feature | false |
| `githubWebhookServer.enabled` | Deploy the webhook server pod | false |
| `githubWebhookServer.queueLimit` | Set the queue size limit in the githubWebhookServer | |
| `githubWebhookServer.secret.enabled` | Passes the webhook secret to the github-webhook-server | false |
| `githubWebhookServer.secret.create` | Deploy the webhook secret | false |
| `githubWebhookServer.secret.name` | Set the name of the webhook secret | github-webhook-server |
| `githubWebhookServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `githubWebhookServer.imagePullSecrets` | Specifies the secret to be used when pulling the githubWebhookServer pod containers | |
| `githubWebhookServer.nameOverride` | Override the resource name prefix | |
| `githubWebhookServer.fullnameOverride` | Override the full resource names | |
| `githubWebhookServer.serviceAccount.create` | Deploy the githubWebhookServer under a service account | true |
| `githubWebhookServer.serviceAccount.annotations` | Set annotations for the service account | |
| `githubWebhookServer.serviceAccount.name` | Set the service account name | |
| `githubWebhookServer.podAnnotations` | Set annotations for the githubWebhookServer pod | |
| `githubWebhookServer.podLabels` | Set labels for the githubWebhookServer pod | |
| `githubWebhookServer.podSecurityContext` | Set the security context for the githubWebhookServer pod | |
| `githubWebhookServer.securityContext` | Set the security context for each container in the githubWebhookServer pod | |
| `githubWebhookServer.resources` | Set the githubWebhookServer pod resources | |
| `githubWebhookServer.topologySpreadConstraints` | Set the githubWebhookServer pod topologySpreadConstraints | |
| `githubWebhookServer.nodeSelector` | Set the githubWebhookServer pod nodeSelector | |
| `githubWebhookServer.tolerations` | Set the githubWebhookServer pod tolerations | |
| `githubWebhookServer.affinity` | Set the githubWebhookServer pod affinity rules | |
| `githubWebhookServer.priorityClassName` | Set the githubWebhookServer pod priorityClassName | |
| `githubWebhookServer.service.type` | Set githubWebhookServer service type | |
| `githubWebhookServer.service.ports` | Set githubWebhookServer service ports | `[{"port":80, "targetPort":"http", "protocol":"TCP", "name":"http"}]` |
| `githubWebhookServer.ingress.enabled` | Deploy an ingress kind for the githubWebhookServer | false |
| `githubWebhookServer.ingress.annotations` | Set annotations for the ingress kind | |
| `githubWebhookServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `githubWebhookServer.ingress.tls` | Set tls configuration for ingress | |
| `githubWebhookServer.ingress.ingressClassName` | Set ingress class name | |
| `githubWebhookServer.podDisruptionBudget.enabled` | Enables a PDB to ensure HA of githubWebhookServer pods | false |
| `githubWebhookServer.podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `githubWebhookServer.podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `actionsMetricsServer.logLevel` | Set the log level of the actionsMetricsServer container | |
| `actionsMetricsServer.logFormat` | Set the log format of the actionsMetricsServer container. Valid options are "text" and "json" | text |
| `actionsMetricsServer.enabled` | Deploy the actions metrics server pod | false |
| `actionsMetricsServer.secret.enabled` | Passes the webhook secret to the actionsMetricsServer | false |
| `actionsMetricsServer.secret.create` | Deploy the webhook secret | false |
| `actionsMetricsServer.secret.name` | Set the name of the webhook secret | github-webhook-server |
| `actionsMetricsServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `actionsMetricsServer.imagePullSecrets` | Specifies the secret to be used when pulling the actionsMetricsServer pod containers | |
| `actionsMetricsServer.nameOverride` | Override the resource name prefix | |
| `actionsMetricsServer.fullnameOverride` | Override the full resource names | |
| `actionsMetricsServer.serviceAccount.create` | Deploy the actionsMetricsServer under a service account | true |
| `actionsMetricsServer.serviceAccount.annotations` | Set annotations for the service account | |
| `actionsMetricsServer.serviceAccount.name` | Set the service account name | |
| `actionsMetricsServer.podAnnotations` | Set annotations for the actionsMetricsServer pod | |
| `actionsMetricsServer.podLabels` | Set labels for the actionsMetricsServer pod | |
| `actionsMetricsServer.podSecurityContext` | Set the security context for the actionsMetricsServer pod | |
| `actionsMetricsServer.securityContext` | Set the security context for each container in the actionsMetricsServer pod | |
| `actionsMetricsServer.resources` | Set the actionsMetricsServer pod resources | |
| `actionsMetricsServer.topologySpreadConstraints` | Set the actionsMetricsServer pod topologySpreadConstraints | |
| `actionsMetricsServer.nodeSelector` | Set the actionsMetricsServer pod nodeSelector | |
| `actionsMetricsServer.tolerations` | Set the actionsMetricsServer pod tolerations | |
| `actionsMetricsServer.affinity` | Set the actionsMetricsServer pod affinity rules | |
| `actionsMetricsServer.priorityClassName` | Set the actionsMetricsServer pod priorityClassName | |
| `actionsMetricsServer.service.type` | Set actionsMetricsServer service type | |
| `actionsMetricsServer.service.ports` | Set actionsMetricsServer service ports | `[{"port":80, "targetPort":"http", "protocol":"TCP", "name":"http"}]` |
| `actionsMetricsServer.ingress.enabled` | Deploy an ingress kind for the actionsMetricsServer | false |
| `actionsMetricsServer.ingress.annotations` | Set annotations for the ingress kind | |
| `actionsMetricsServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `actionsMetricsServer.ingress.tls` | Set tls configuration for ingress | |
| `actionsMetricsServer.ingress.ingressClassName` | Set ingress class name | |
| `actionsMetrics.serviceMonitor` | Deploy a ServiceMonitor resource for use with prometheus-operator CRDs | false |
| `actionsMetrics.serviceAnnotations` | Set annotations for the provisioned actions metrics service resource | |
| `actionsMetrics.port` | Set port of actions metrics service | 8443 |
| `actionsMetrics.proxy.enabled` | Deploy the kube-rbac-proxy container in the controller pod | true |
| `actionsMetrics.proxy.image.repository` | The "repository/image" of the kube-rbac-proxy container | quay.io/brancz/kube-rbac-proxy |
| `actionsMetrics.proxy.image.tag` | The tag of the kube-rbac-proxy image to use when pulling the container | v0.10.0 |
| `actionsMetrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
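
For orientation, here is a sketch of how a few of these values compose in a `values.yaml`. The keys follow the table above, while the concrete values (namespace, secret token, hostname) are placeholders, and the ingress path entry shape is assumed rather than taken from the chart:

```yaml
# Illustrative values.yaml excerpt; all concrete values are placeholders.
metrics:
  serviceMonitor: true          # requires prometheus-operator CRDs
  serviceMonitorLabels:
    release: prometheus
  port: 8443

scope:
  singleNamespace: true
  watchNamespace: actions-runner-system

githubWebhookServer:
  enabled: true
  logFormat: json
  replicaCount: 2
  secret:
    enabled: true
    create: true
    name: github-webhook-server
    github_webhook_secret_token: "<your-webhook-secret>"
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
    - host: arc-webhook.example.com   # placeholder hostname
      paths:
      - path: /
        pathType: Prefix              # assumed path entry shape
```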


@@ -61,6 +61,16 @@ spec:
type: integer
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer
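
The `githubAPICredentialsFrom` block added above lets the autoscaler read GitHub API credentials from a Secret. A minimal sketch of its use in a HorizontalRunnerAutoscaler spec, assuming a pre-created Secret named `github-app-secret` (hypothetical):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra
spec:
  githubAPICredentialsFrom:
    secretRef:
      name: github-app-secret   # hypothetical Secret holding GitHub credentials
  scaleTargetRef:
    name: example-runnerdeploy  # hypothetical RunnerDeployment
  minReplicas: 1
  maxReplicas: 5
```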
@@ -92,7 +102,7 @@ spec:
description: ScaleUpThreshold is the percentage of busy runners greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling. The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
description: Type is the type of metric to be used for autoscaling. It can be TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy.
type: string
type: object
type: array
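
The corrected description reflects that `PercentageRunnersBusy` is also a supported metric type. A sketch of such a metric entry inside an HRA spec; the thresholds and factors are illustrative:

```yaml
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'    # scale up once more than 75% of runners are busy
    scaleDownThreshold: '0.25'  # scale down once fewer than 25% are busy
    scaleUpFactor: '2'          # double the desired replicas on scale up
    scaleDownFactor: '0.5'      # halve them on scale down
```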
@@ -170,7 +180,7 @@ spec:
scheduledOverrides:
description: ScheduledOverrides is the list of ScheduledOverride. It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. The earlier a scheduled override is, the higher it is prioritized.
items:
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the correspoding override happens every day, week, month, or year.
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
properties:
endTime:
description: EndTime is the time at which the first override ends.
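
To make `scheduledOverrides` concrete, this sketch scales a deployment to zero every weekend; the timestamps and recurrence are illustrative:

```yaml
  scheduledOverrides:
  - startTime: "2023-03-18T00:00:00+09:00"  # a Saturday; repeats weekly from here
    endTime: "2023-03-20T00:00:00+09:00"    # the following Monday
    recurrenceRule:
      frequency: Weekly
    minReplicas: 0
```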


@@ -49,7 +49,7 @@ spec:
description: RunnerDeploymentSpec defines the desired state of RunnerDeployment
properties:
effectiveTime:
description: EffectiveTime is the time the upstream controller requested to sync Replicas. It is usually populated by the webhook-based autoscaler via HRA. The value is inherited to RunnerRepicaSet(s) and used to prevent ephemeral runners from unnecessarily recreated.
description: EffectiveTime is the time the upstream controller requested to sync Replicas. It is usually populated by the webhook-based autoscaler via HRA. The value is inherited to RunnerReplicaSet(s) and used to prevent ephemeral runners from unnecessarily recreated.
format: date-time
nullable: true
type: string
@@ -946,7 +946,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -1081,6 +1081,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
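
The repeated `claims` additions track the upstream Kubernetes `ResourceRequirements.claims` field, which is alpha and gated behind the DynamicResourceAllocation feature gate. A sketch of how a container would reference a pod-level resource claim; the claim and template names are hypothetical:

```yaml
spec:
  resourceClaims:
  - name: gpu-claim                                  # hypothetical claim name
    source:
      resourceClaimTemplateName: gpu-claim-template  # hypothetical template
  containers:
  - name: runner
    resources:
      claims:
      - name: gpu-claim   # must match an entry in spec.resourceClaims
```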
@@ -1381,6 +1393,9 @@ spec:
type: string
type: array
type: object
dnsPolicy:
description: DNSPolicy defines how a pod's DNS will be configured.
type: string
dockerEnabled:
type: boolean
dockerEnv:
@@ -1497,6 +1512,18 @@ spec:
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1635,7 +1662,7 @@ spec:
type: boolean
ephemeralContainers:
items:
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. \n This is a beta feature available on clusters that haven't disabled the EphemeralContainers feature gate."
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted."
properties:
args:
description: 'Arguments to the entrypoint. The image''s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
@@ -2138,6 +2165,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2415,6 +2454,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:
@@ -2815,7 +2864,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -2950,6 +2999,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3243,6 +3304,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3315,7 +3388,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3725,7 +3798,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -3860,6 +3933,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4193,16 +4278,28 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
type: string
type: array
x-kubernetes-list-type: atomic
maxSkew:
description: 'MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence to topologies that satisfy it. It''s a required field. Default value is 1 and 0 is not allowed.'
format: int32
type: integer
minDomains:
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is an alpha field and requires enabling MinDomainsInPodTopologySpread feature gate."
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default)."
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes match the node selector. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
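
Taken together, the new topology-spread fields compose like this; the labels and policies below are illustrative, not taken from the chart:

```yaml
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: example-runner       # hypothetical pod label
    matchLabelKeys:
    - pod-template-hash           # spread within each ReplicaSet revision
    nodeAffinityPolicy: Honor     # count only nodes matching nodeAffinity/nodeSelector
    nodeTaintsPolicy: Ignore      # include tainted nodes when computing skew
```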
@@ -4529,7 +4626,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4545,7 +4642,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4556,6 +4653,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
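
The expanded `dataSource`/`dataSourceRef` descriptions cover cross-namespace data sources. A PVC sketch restoring from a VolumeSnapshot; the snapshot name is a placeholder, and the commented `namespace` line needs the alpha CrossNamespaceVolumeDataSource gate:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-volume
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: example-snapshot    # placeholder snapshot name
    # namespace: other-team   # requires the CrossNamespaceVolumeDataSource feature gate (alpha)
```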
@@ -4563,6 +4663,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5191,6 +5303,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:


@@ -943,7 +943,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -1078,6 +1078,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1378,6 +1390,9 @@ spec:
type: string
type: array
type: object
dnsPolicy:
description: DNSPolicy defines how a pod's DNS will be configured.
type: string
dockerEnabled:
type: boolean
dockerEnv:
@@ -1494,6 +1509,18 @@ spec:
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1632,7 +1659,7 @@ spec:
type: boolean
ephemeralContainers:
items:
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. \n This is a beta feature available on clusters that haven't disabled the EphemeralContainers feature gate."
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted."
properties:
args:
description: 'Arguments to the entrypoint. The image''s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
@@ -2135,6 +2162,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2412,6 +2451,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:
@@ -2812,7 +2861,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -2947,6 +2996,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3240,6 +3301,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3312,7 +3385,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3722,7 +3795,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -3857,6 +3930,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4190,16 +4275,28 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
type: string
type: array
x-kubernetes-list-type: atomic
maxSkew:
description: 'MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence to topologies that satisfy it. It''s a required field. Default value is 1 and 0 is not allowed.'
format: int32
type: integer
minDomains:
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is an alpha field and requires enabling MinDomainsInPodTopologySpread feature gate."
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default)."
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes match the node selector. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
@@ -4526,7 +4623,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4542,7 +4639,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4553,6 +4650,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
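
For orientation, a minimal sketch of a PVC that uses the new namespace field on dataSourceRef; the claim, snapshot, and namespace names are hypothetical, and the cross-namespace reference additionally requires the alpha CrossNamespaceVolumeDataSource feature gate plus a ReferenceGrant in the source namespace, as the description above notes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data                 # hypothetical
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: nightly-snapshot            # hypothetical snapshot
    namespace: backups                # hypothetical; alpha CrossNamespaceVolumeDataSource
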
@@ -4560,6 +4660,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5188,6 +5300,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:


@@ -24,12 +24,24 @@ spec:
- jsonPath: .spec.repository
name: Repository
type: string
- jsonPath: .spec.group
name: Group
type: string
- jsonPath: .spec.labels
name: Labels
type: string
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .status.message
name: Message
type: string
- jsonPath: .status.workflow.repository
name: WF Repo
type: string
- jsonPath: .status.workflow.runID
name: WF Run
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
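
With these extra printer columns, a plain kubectl get should now surface the group, labels, and workflow fields; the header below is derived from the column names above (columns defined earlier in the manifest are elided), and any values would come from the runner's status:

$ kubectl get runner
NAME   ...   REPOSITORY   GROUP   LABELS   STATUS   MESSAGE   WF REPO   WF RUN   AGE
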
@@ -884,7 +896,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -1019,6 +1031,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1319,6 +1343,9 @@ spec:
type: string
type: array
type: object
dnsPolicy:
description: DNSPolicy defines how a pod's DNS will be configured.
type: string
dockerEnabled:
type: boolean
dockerEnv:
@@ -1435,6 +1462,18 @@ spec:
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1573,7 +1612,7 @@ spec:
type: boolean
ephemeralContainers:
items:
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. \n This is a beta feature available on clusters that haven't disabled the EphemeralContainers feature gate."
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted."
properties:
args:
description: 'Arguments to the entrypoint. The image''s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
@@ -2076,6 +2115,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2353,6 +2404,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:
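
The githubAPICredentialsFrom block above only takes a Secret reference; a minimal sketch of wiring it up, with a hypothetical Secret name and its expected keys not shown here:

spec:
  githubAPICredentialsFrom:
    secretRef:
      name: github-app-secret    # hypothetical Secret holding GitHub API credentials
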
@@ -2753,7 +2814,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -2888,6 +2949,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3181,6 +3254,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3253,7 +3338,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3663,7 +3748,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -3798,6 +3883,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4131,16 +4228,28 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
type: string
type: array
x-kubernetes-list-type: atomic
maxSkew:
description: 'MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence to topologies that satisfy it. It''s a required field. Default value is 1 and 0 is not allowed.'
format: int32
type: integer
minDomains:
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is an alpha field and requires enabling MinDomainsInPodTopologySpread feature gate."
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default)."
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes match the node selector. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
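
Putting the new spread fields together, a hedged sketch of one constraint on a pod template; the label values are hypothetical, and the newer fields sit behind the feature gates described above:

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: example-runner            # hypothetical label
  matchLabelKeys:
  - pod-template-hash                # spread within one revision rather than across rollouts
  minDomains: 3                      # only allowed together with DoNotSchedule
  nodeAffinityPolicy: Honor          # the default when nil
  nodeTaintsPolicy: Ignore           # the default when nil
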
@@ -4467,7 +4576,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4483,7 +4592,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4494,6 +4603,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
@@ -4501,6 +4613,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5129,6 +5253,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -5194,6 +5330,31 @@ spec:
- expiresAt
- token
type: object
workflow:
description: WorkflowStatus contains various information that is propagated from GitHub Actions workflow run environment variables to ease monitoring of the workflow run/job/steps that are triggered on the runner.
properties:
action:
description: Action is the name of the current action, or the step ID of the current step, that is triggered within the runner. It corresponds to GITHUB_ACTION defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
job:
description: Job is the name of the current job that is triggered within the runner. It corresponds to GITHUB_JOB defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
name:
description: Name is the name of the workflow that is triggered within the runner. It corresponds to GITHUB_WORKFLOW defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
repository:
description: Repository is the owner and repository name of the workflow that is triggered within the runner. It corresponds to GITHUB_REPOSITORY defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
repositoryOwner:
description: RepositoryOwner is the repository owner's name for the workflow that is triggered within the runner. It corresponds to GITHUB_REPOSITORY_OWNER defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
runID:
description: RunID is the unique number for the current workflow run that is triggered within the runner. It corresponds to GITHUB_RUN_ID defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
runNumber:
description: RunNumber is the unique number for the current workflow run that is triggered within the runner. It corresponds to GITHUB_RUN_NUMBER defined in https://docs.github.com/en/actions/learn-github-actions/environment-variables
type: string
type: object
type: object
type: object
served: true
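
Once populated, the new status block might render roughly as follows on a Runner object; all values are illustrative:

status:
  phase: Running
  workflow:
    name: CI                         # GITHUB_WORKFLOW
    repository: myorg/myrepo         # GITHUB_REPOSITORY
    repositoryOwner: myorg           # GITHUB_REPOSITORY_OWNER
    job: build                       # GITHUB_JOB
    action: __run                    # GITHUB_ACTION
    runID: "4567890123"              # GITHUB_RUN_ID
    runNumber: "42"                  # GITHUB_RUN_NUMBER
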


@@ -67,6 +67,16 @@ spec:
type: string
ephemeral:
type: boolean
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
image:
@@ -76,9 +86,17 @@ spec:
type: string
type: array
minReadySeconds:
description: Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.
description: Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready)
format: int32
type: integer
ordinals:
description: ordinals controls the numbering of replica indices in a StatefulSet. The default ordinals behavior assigns a "0" index to the first replica and increments the index by one for each additional replica requested. Using the ordinals field requires the StatefulSetStartOrdinal feature gate to be enabled, which is alpha.
properties:
start:
description: 'start is the number representing the first replica''s index. It may be used to number replicas from an alternate index (eg: 1-indexed) over the default 0-indexed names, or to orchestrate progressive movement of replicas from one StatefulSet to another. If set, replica indices will be in the range: [.spec.ordinals.start, .spec.ordinals.start + .spec.replicas). If unset, defaults to 0. Replica indices will be in the range: [0, .spec.replicas).'
format: int32
type: integer
type: object
organization:
pattern: ^[^/]+$
type: string
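
A small sketch of the new ordinals knob on the generated StatefulSet spec; per the description above it requires the alpha StatefulSetStartOrdinal feature gate:

spec:
  replicas: 3
  ordinals:
    start: 1      # pods are named <name>-1 .. <name>-3 instead of starting at 0
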
@@ -142,7 +160,7 @@ spec:
description: 'serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller.'
type: string
template:
description: template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet.
description: template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet. Each pod will be named with the format <statefulsetname>-<podindex>. For example, a pod in a StatefulSet named "web" with index number "3" would be named "web-3".
properties:
metadata:
description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
@@ -1006,7 +1024,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -1141,6 +1159,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -1448,9 +1478,9 @@ spec:
description: 'EnableServiceLinks indicates whether information about services should be injected into pod''s environment variables, matching the syntax of Docker links. Optional: Defaults to true.'
type: boolean
ephemeralContainers:
description: List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.
description: List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.
items:
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. \n This is a beta feature available on clusters that haven't disabled the EphemeralContainers feature gate."
description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted."
properties:
args:
description: 'Arguments to the entrypoint. The image''s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
@@ -1953,6 +1983,18 @@ spec:
resources:
description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -2254,6 +2296,9 @@ spec:
hostPID:
description: 'Use the host''s pid namespace. Optional: Default to false.'
type: boolean
hostUsers:
description: 'Use the host''s user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.'
type: boolean
hostname:
description: Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value.
type: string
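
For reference, opting a pod out of the host user namespace via the hostUsers field above is a one-line change, gated behind the alpha UserNamespacesSupport feature:

spec:
  hostUsers: false    # run the pod in a fresh user namespace to blunt container-breakout impact
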
@@ -2638,7 +2683,7 @@ spec:
description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
type: string
ports:
description: List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
items:
description: ContainerPort represents a network port in a single container.
properties:
@@ -2773,6 +2818,18 @@ spec:
resources:
description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -3057,7 +3114,7 @@ spec:
type: object
x-kubernetes-map-type: atomic
os:
description: "Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. \n If the OS field is set to linux, the following fields must be unset: -securityContext.windowsOptions \n If the OS field is set to windows, following fields must be unset: - spec.hostPID - spec.hostIPC - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup This is a beta field and requires the IdentifyPodOS feature"
description: "Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. \n If the OS field is set to linux, the following fields must be unset: -securityContext.windowsOptions \n If the OS field is set to windows, following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup"
properties:
name:
description: 'Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null'
@@ -3096,6 +3153,31 @@ spec:
- conditionType
type: object
type: array
resourceClaims:
description: "ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name.
properties:
name:
description: Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL.
type: string
source:
description: Source describes where to find the ResourceClaim.
properties:
resourceClaimName:
description: ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod.
type: string
resourceClaimTemplateName:
description: "ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. \n The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long). \n An existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed. \n This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim."
type: string
type: object
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
restartPolicy:
description: 'Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy'
type: string
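
Tying the pod-level resourceClaims list above to the per-container claims field added earlier, a hedged sketch under the alpha DynamicResourceAllocation gate; the claim, template, and image names are hypothetical:

spec:
  resourceClaims:
  - name: gpu                                  # hypothetical pod-scoped claim name
    source:
      resourceClaimTemplateName: gpu-template  # hypothetical ResourceClaimTemplate
  containers:
  - name: runner
    image: example/runner:latest               # hypothetical image
    resources:
      claims:
      - name: gpu                              # must match spec.resourceClaims[].name
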
@@ -3105,6 +3187,21 @@ spec:
schedulerName:
description: If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler.
type: string
schedulingGates:
description: "SchedulingGates is an opaque list of values that if specified will block scheduling the pod. More info: https://git.k8s.io/enhancements/keps/sig-scheduling/3521-pod-scheduling-readiness. \n This is an alpha-level feature enabled by PodSchedulingReadiness feature gate."
items:
description: PodSchedulingGate is associated to a Pod to guard its scheduling.
properties:
name:
description: Name of the scheduling gate. Each scheduling gate must have a unique name field.
type: string
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
securityContext:
description: 'SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.'
properties:
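
Stepping back to the schedulingGates field above, a minimal sketch behind the alpha PodSchedulingReadiness gate; the gate name is hypothetical, and the pod stays Pending until every listed gate is removed:

spec:
  schedulingGates:
  - name: example.com/capacity-check    # hypothetical gate
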
@@ -3155,7 +3252,7 @@ spec:
- type
type: object
supplementalGroups:
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows.
items:
format: int64
type: integer
@@ -3270,16 +3367,28 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
type: string
type: array
x-kubernetes-list-type: atomic
maxSkew:
description: 'MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence to topologies that satisfy it. It''s a required field. Default value is 1 and 0 is not allowed.'
format: int32
type: integer
minDomains:
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is an alpha field and requires enabling MinDomainsInPodTopologySpread feature gate."
description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. \n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default)."
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes match the node selector. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
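Taken together, the fields above compose like this in a pod template. A minimal sketch, using only field names from the descriptions above; the label selector is hypothetical:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3                      # beta; needs MinDomainsInPodTopologySpread (on by default)
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # required whenever minDomains is set
    nodeAffinityPolicy: Honor
    nodeTaintsPolicy: Ignore
    labelSelector:
      matchLabels:
        app: example-runner            # hypothetical label
```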
@@ -3576,7 +3685,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3592,7 +3701,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3603,6 +3712,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
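A sketch of a PVC exercising dataSourceRef as described above; the claim and snapshot names are hypothetical, and the commented namespace field needs the alpha CrossNamespaceVolumeDataSource gate plus a ReferenceGrant:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-work-volume    # hypothetical
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: runner-cache-snap     # hypothetical snapshot
    # namespace: other-team     # alpha: CrossNamespaceVolumeDataSource + ReferenceGrant
```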
@@ -3610,6 +3722,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4292,7 +4416,7 @@ spec:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4308,7 +4432,7 @@ spec:
- name
type: object
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4319,6 +4443,9 @@ spec:
name:
description: Name is the name of resource being referenced
type: string
namespace:
description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
type: string
required:
- kind
- name
@@ -4326,6 +4453,18 @@ spec:
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:
@@ -4468,6 +4607,18 @@ spec:
resources:
description: ResourceRequirements describes the compute resource requirements.
properties:
claims:
description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable."
items:
description: ResourceClaim references one entry in PodSpec.ResourceClaims.
properties:
name:
description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
type: string
required:
- name
type: object
type: array
limits:
additionalProperties:
anyOf:


@@ -24,11 +24,13 @@ Due to the above you can't just do a `helm upgrade` to release the latest versio
# REMEMBER TO UPDATE THE CHART_VERSION TO THE RELEVANT CHART VERSION!!!!
CHART_VERSION=0.18.0
curl -L https://github.com/actions-runner-controller/actions-runner-controller/releases/download/actions-runner-controller-${CHART_VERSION}/actions-runner-controller-${CHART_VERSION}.tgz | tar zxv --strip 1 actions-runner-controller/crds
curl -L https://github.com/actions/actions-runner-controller/releases/download/actions-runner-controller-${CHART_VERSION}/actions-runner-controller-${CHART_VERSION}.tgz | tar zxv --strip 1 actions-runner-controller/crds
kubectl replace -f crds/
```
Note that if you're going to create prometheus-operator `ServiceMonitor` resources via the chart, you need to deploy the prometheus-operator CRDs as well.
2. Upgrade the Helm release
```shell


@@ -0,0 +1,60 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller-actions-metrics-server.name" -}}
{{- default .Chart.Name .Values.actionsMetricsServer.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.instance" -}}
{{- printf "%s-%s" .Release.Name "actions-metrics-server" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller-actions-metrics-server.fullname" -}}
{{- if .Values.actionsMetricsServer.fullnameOverride }}
{{- .Values.actionsMetricsServer.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.actionsMetricsServer.nameOverride }}
{{- $instance := include "actions-runner-controller-actions-metrics-server.instance" . }}
{{- if contains $name $instance }}
{{- $instance | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s-%s" .Release.Name $name "actions-metrics-server" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller-actions-metrics-server.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller-actions-metrics-server.name" . }}
app.kubernetes.io/instance: {{ include "actions-runner-controller-actions-metrics-server.instance" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller-actions-metrics-server.serviceAccountName" -}}
{{- if .Values.actionsMetricsServer.serviceAccount.create }}
{{- default (include "actions-runner-controller-actions-metrics-server.fullname" .) .Values.actionsMetricsServer.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.actionsMetricsServer.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.secretName" -}}
{{- default (include "actions-runner-controller-actions-metrics-server.fullname" .) .Values.actionsMetricsServer.secret.name }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.roleName" -}}
{{- include "actions-runner-controller-actions-metrics-server.fullname" . }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.serviceMonitorName" -}}
{{- include "actions-runner-controller-actions-metrics-server.fullname" . | trunc 47 }}-service-monitor
{{- end }}
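For orientation, a sketch of what these helpers resolve to, assuming a hypothetical release named `arc` with no overrides set:

```yaml
# instance:           arc-actions-metrics-server
# fullname:           arc-actions-runner-controller-actions-metrics-server
# serviceAccountName: same as fullname (serviceAccount.create=true, no name set)
# secretName:         actionsMetricsServer.secret.name, falling back to fullname
# serviceMonitorName: fullname truncated to 47 chars + "-service-monitor"
```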


@@ -114,4 +114,4 @@ Create the name of the service account to use
{{- define "actions-runner-controller.pdbName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 59 }}-pdb
{{- end }}
{{- end }}


@@ -0,0 +1,162 @@
{{- if .Values.actionsMetricsServer.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.actionsMetricsServer.replicaCount }}
selector:
matchLabels:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.actionsMetricsServer.podAnnotations }}
annotations:
kubectl.kubernetes.io/default-container: "actions-metrics-server"
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 8 }}
{{- with .Values.actionsMetricsServer.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.actionsMetricsServer.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller-actions-metrics-server.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.actionsMetricsServer.podSecurityContext | nindent 8 }}
{{- with .Values.actionsMetricsServer.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- args:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
{{- if .Values.actionsMetricsServer.logLevel }}
- "--log-level={{ .Values.actionsMetricsServer.logLevel }}"
{{- end }}
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.actionsMetricsServer.logFormat }}
- "--log-format={{ .Values.actionsMetricsServer.logFormat }}"
{{- end }}
command:
- "/actions-metrics-server"
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
secretKeyRef:
key: github_webhook_secret_token
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
{{- if .Values.githubEnterpriseServerURL }}
- name: GITHUB_ENTERPRISE_URL
value: {{ .Values.githubEnterpriseServerURL }}
{{- end }}
{{- if .Values.githubURL }}
- name: GITHUB_URL
value: {{ .Values.githubURL }}
{{- end }}
{{- if .Values.githubUploadURL }}
- name: GITHUB_UPLOAD_URL
value: {{ .Values.githubUploadURL }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.enabled }}
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
key: github_token
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_PRIVATE_KEY
valueFrom:
secretKeyRef:
key: github_app_private_key
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
{{- if .Values.authSecret.github_basicauth_username }}
- name: GITHUB_BASICAUTH_USERNAME
value: {{ .Values.authSecret.github_basicauth_username }}
{{- end }}
- name: GITHUB_BASICAUTH_PASSWORD
valueFrom:
secretKeyRef:
key: github_basicauth_password
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- range $key, $val := .Values.actionsMetricsServer.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: actions-metrics-server
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8000
name: http
protocol: TCP
{{- if not .Values.metrics.proxy.enabled }}
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
protocol: TCP
{{- end }}
resources:
{{- toYaml .Values.actionsMetricsServer.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.actionsMetricsServer.securityContext | nindent 12 }}
{{- if .Values.metrics.proxy.enabled }}
- args:
- "--secure-listen-address=0.0.0.0:{{ .Values.metrics.port }}"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: "{{ .Values.metrics.proxy.image.repository }}:{{ .Values.metrics.proxy.image.tag }}"
name: kube-rbac-proxy
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
{{- with .Values.actionsMetricsServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.actionsMetricsServer.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.actionsMetricsServer.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.actionsMetricsServer.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
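A minimal values sketch (keys taken from this chart's values.yaml; the log settings are illustrative) that enables this Deployment together with the kube-rbac-proxy sidecar branch above:

```yaml
actionsMetricsServer:
  enabled: true
  replicaCount: 1     # keep at 1; see the warning in values.yaml
  logLevel: debug     # illustrative
  logFormat: json
metrics:
  port: 8443
  proxy:
    enabled: true     # metrics are then served via kube-rbac-proxy on metrics.port
    image:
      repository: quay.io/brancz/kube-rbac-proxy
      tag: v0.13.1
```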


@@ -0,0 +1,47 @@
{{- if .Values.actionsMetricsServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-actions-metrics-server.fullname" . -}}
{{- $svcPort := (index .Values.actionsMetricsServer.service.ports 0).port -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.actionsMetricsServer.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.actionsMetricsServer.ingress.tls }}
tls:
{{- range .Values.actionsMetricsServer.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
{{- with .Values.actionsMetricsServer.ingress.ingressClassName }}
ingressClassName: {{ . }}
{{- end }}
rules:
{{- range .Values.actionsMetricsServer.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- if .extraPaths }}
{{- toYaml .extraPaths | nindent 10 }}
{{- end }}
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
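A values sketch for this Ingress template; the class, host, and TLS secret names are hypothetical:

```yaml
actionsMetricsServer:
  ingress:
    enabled: true
    ingressClassName: nginx             # hypothetical class
    annotations: {}
    hosts:
      - host: metrics.example.com       # hypothetical host
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: metrics-example-tls # hypothetical secret
        hosts:
          - metrics.example.com
```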


@@ -0,0 +1,28 @@
{{- if .Values.actionsMetricsServer.enabled }}
{{- if .Values.actionsMetricsServer.secret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.actionsMetricsServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.actionsMetricsServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_id }}
github_app_id: {{ .Values.actionsMetricsServer.secret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_installation_id }}
github_app_installation_id: {{ .Values.actionsMetricsServer.secret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_private_key }}
github_app_private_key: {{ .Values.actionsMetricsServer.secret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_token }}
github_token: {{ .Values.actionsMetricsServer.secret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}
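A values sketch for populating this Secret; all values are placeholders, and note that the App IDs must be quoted strings:

```yaml
actionsMetricsServer:
  secret:
    enabled: true
    create: true
    github_webhook_secret_token: "replace-me"   # placeholder
    github_app_id: "12345"                      # quoted string
    github_app_installation_id: "67890"         # quoted string
    #github_app_private_key: |
    #  -----BEGIN RSA PRIVATE KEY-----
    #  ...
    # or use a PAT instead of a GitHub App:
    #github_token: ""
```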


@@ -0,0 +1,26 @@
{{- if .Values.actionsMetricsServer.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- if .Values.actionsMetricsServer.service.annotations }}
annotations:
{{ toYaml .Values.actionsMetricsServer.service.annotations | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.actionsMetricsServer.service.type }}
ports:
{{ range $_, $port := .Values.actionsMetricsServer.service.ports -}}
- {{ $port | toYaml | nindent 6 }}
{{- end }}
{{- if .Values.metrics.serviceMonitor }}
- name: metrics-port
port: {{ .Values.metrics.port }}
targetPort: metrics-port
{{- end }}
selector:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 4 }}
{{- end }}
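The matching values sketch for this Service; the `http` target port maps to container port 8000 declared in the Deployment above:

```yaml
actionsMetricsServer:
  service:
    type: ClusterIP
    annotations: {}
    ports:
      - port: 80
        targetPort: http   # container port 8000 in the Deployment
        protocol: TCP
        name: http
```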


@@ -0,0 +1,15 @@
{{- if .Values.actionsMetricsServer.enabled -}}
{{- if .Values.actionsMetricsServer.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.actionsMetricsServer.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,25 @@
{{- if and .Values.actionsMetricsServer.enabled .Values.actionsMetrics.serviceMonitor }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.actionsMetricsServer.serviceMonitorLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller-actions-metrics-server.serviceMonitorName" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics
port: metrics-port
{{- if .Values.actionsMetrics.proxy.enabled }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
scheme: https
tlsConfig:
insecureSkipVerify: true
{{- end }}
selector:
matchLabels:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 6 }}
{{- end }}
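Enabling this ServiceMonitor goes through `actionsMetrics` (not `actionsMetricsServer`); a sketch, with a hypothetical Prometheus selector label:

```yaml
actionsMetrics:
  serviceMonitor: true
  serviceMonitorLabels:
    release: prometheus   # hypothetical; match your Prometheus instance's selector
  port: 8443
  proxy:
    enabled: true         # switches the endpoint to https + bearer token
```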


@@ -8,6 +8,7 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller.serviceMonitorName" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics


@@ -1,5 +1,5 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
labels:
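`policy/v1` PodDisruptionBudgets require Kubernetes 1.21 or newer. A values sketch enabling the controller's PDB:

```yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 1   # set only one of minAvailable / maxUnavailable
```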


@@ -58,15 +58,18 @@ spec:
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
{{- if .Values.githubAPICacheDuration }}
- "--github-api-cache-duration={{ .Values.githubAPICacheDuration }}"
{{- end }}
{{- if .Values.logLevel }}
- "--log-level={{ .Values.logLevel }}"
{{- end }}
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.runner.statusUpdateHook.enabled }}
- "--runner-status-update-hook"
{{- end }}
{{- if .Values.logFormat }}
- "--log-format={{ .Values.logFormat }}"
{{- end }}
command:
- "/manager"
env:
@@ -118,10 +121,14 @@ spec:
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- if kindIs "slice" .Values.env }}
{{- toYaml .Values.env | nindent 8 }}
{{- else }}
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: manager
imagePullPolicy: {{ .Values.image.pullPolicy }}
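A values sketch exercising the flags and the new list-form `env` branch above; the log settings and secret name are hypothetical:

```yaml
scope:
  singleNamespace: true
  watchNamespace: ""   # empty falls back to the release namespace
logLevel: debug        # illustrative
logFormat: json
runner:
  statusUpdateHook:
    enabled: true
env:
  - name: GITHUB_APP_INSTALLATION_ID   # list form, handled by the kindIs "slice" branch
    valueFrom:
      secretKeyRef:
        name: controller-manager       # hypothetical secret
        key: github_app_installation_id
        optional: true
```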


@@ -39,7 +39,6 @@ spec:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
- "--sync-period={{ .Values.githubWebhookServer.syncPeriod }}"
{{- if .Values.githubWebhookServer.logLevel }}
- "--log-level={{ .Values.githubWebhookServer.logLevel }}"
{{- end }}
@@ -49,8 +48,20 @@ spec:
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.githubWebhookServer.queueLimit }}
- "--queue-limit={{ .Values.githubWebhookServer.queueLimit }}"
{{- end }}
{{- if .Values.githubWebhookServer.logFormat }}
- "--log-format={{ .Values.githubWebhookServer.logFormat }}"
{{- end }}
command:
- "/github-webhook-server"
{{- if .Values.githubWebhookServer.lifecycle }}
{{- with .Values.githubWebhookServer.lifecycle }}
lifecycle:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
@@ -143,7 +154,7 @@ spec:
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
terminationGracePeriodSeconds: {{ .Values.githubWebhookServer.terminationGracePeriodSeconds }}
{{- with .Values.githubWebhookServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
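A values sketch for the new webhook-server knobs above; the grace period and preStop hook are illustrative:

```yaml
githubWebhookServer:
  enabled: true
  queueLimit: 100                      # rendered as the --queue-limit flag
  terminationGracePeriodSeconds: 30    # chart default is 10
  lifecycle:
    preStop:
      exec:
        command: ["/bin/sh", "-c", "sleep 15"]   # hypothetical drain delay
```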


@@ -1,13 +1,7 @@
{{- if .Values.githubWebhookServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-github-webhook-server.fullname" . -}}
{{- $svcPort := (index .Values.githubWebhookServer.service.ports 0).port -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else if .Capabilities.APIVersions.Has "extensions/v1beta1/Ingress" }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
@@ -42,19 +36,12 @@ spec:
{{- end }}
{{- range .paths }}
- path: {{ .path }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,5 +1,5 @@
{{- if .Values.githubWebhookServer.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
labels:


@@ -23,4 +23,10 @@ spec:
{{- end }}
selector:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
{{- if .Values.githubWebhookServer.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- range $ip := .Values.githubWebhookServer.service.loadBalancerSourceRanges }}
- {{ $ip -}}
{{- end }}
{{- end }}
{{- end }}
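A values sketch restricting the webhook Service to GitHub's hook sources; the CIDRs below are examples only, so fetch the current list from https://api.github.com/meta before relying on them:

```yaml
githubWebhookServer:
  service:
    type: LoadBalancer
    loadBalancerSourceRanges:
      - 192.30.252.0/22    # example range
      - 185.199.108.0/22   # example range
```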


@@ -8,6 +8,7 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller-github-webhook-server.serviceMonitorName" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics


@@ -258,3 +258,64 @@ rules:
- get
- list
- watch
{{- if .Values.runner.statusUpdateHook.enabled }}
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- get
{{- end }}
{{- if .Values.rbac.allowGrantingKubernetesContainerModePermissions }}
{{/* These permissions are required by ARC to create RBAC resources for the runner pod to use the kubernetes container mode. */}}
{{/* See https://github.com/actions/actions-runner-controller/pull/1268/files#r917331632 */}}
- apiGroups:
- ""
resources:
- pods/exec
verbs:
- create
- get
- apiGroups:
- ""
resources:
- pods/log
verbs:
- get
- list
- watch
- apiGroups:
- "batch"
resources:
- jobs
verbs:
- get
- list
- create
- delete
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
{{- end }}
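Both optional rule blocks above are toggled from values; a sketch:

```yaml
runner:
  statusUpdateHook:
    enabled: true
rbac:
  # needed so ARC can mint per-runner RBAC for "kubernetes" container mode
  allowGrantingKubernetesContainerModePermissions: true
```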


@@ -1,4 +1,8 @@
{{/*
We will use a self-managed CA if one is not provided by cert-manager
*/}}
{{- $ca := genCA "actions-runner-ca" 3650 }}
{{- $cert := genSignedCert (printf "%s.%s.svc" (include "actions-runner-controller.webhookServiceName" .) .Release.Namespace) nil (list (printf "%s.%s.svc" (include "actions-runner-controller.webhookServiceName" .) .Release.Namespace)) 3650 $ca }}
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
@@ -20,6 +24,8 @@ webhooks:
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -47,7 +53,9 @@ webhooks:
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -75,7 +83,9 @@ webhooks:
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -103,7 +113,9 @@ webhooks:
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -144,7 +156,9 @@ webhooks:
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -172,7 +186,9 @@ webhooks:
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -200,7 +216,9 @@ webhooks:
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
{{- else if not .Values.certManagerEnabled }}
caBundle: {{ $ca.Cert | b64enc | quote }}
{{- end }}
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
@@ -219,3 +237,18 @@ webhooks:
resources:
- runnerreplicasets
sideEffects: None
{{ if not (or (hasKey .Values.admissionWebHooks "caBundle") .Values.certManagerEnabled) }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller.servingCertName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: kubernetes.io/tls
data:
tls.crt: {{ $cert.Cert | b64enc | quote }}
tls.key: {{ $cert.Key | b64enc | quote }}
ca.crt: {{ $ca.Cert | b64enc | quote }}
{{- end }}
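A values sketch selecting the self-managed CA path above, i.e. neither cert-manager nor a user-supplied bundle:

```yaml
certManagerEnabled: false
admissionWebHooks: {}
# leaving admissionWebHooks.caBundle unset makes the chart generate its own CA,
# sign the serving cert, and store both in the kubernetes.io/tls Secret above
```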


@@ -15,12 +15,6 @@ enableLeaderElection: true
# Must be unique if more than one controller is installed in the same namespace.
#leaderElectionId: "actions-runner-controller"
# DEPRECATED: This has been removed as unnecessary in #1192
# The controller tries its best not to repeat the duplicate GitHub API call
# within this duration.
# Defaults to syncPeriod - 10s.
#githubAPICacheDuration: 30s
# The URL of your GitHub Enterprise server, if you're using one.
#githubEnterpriseServerURL: https://github.example.com
@@ -36,7 +30,7 @@ enableLeaderElection: true
#
# Do set authSecret.enabled=false and set env if you want full control over
# the GitHub authn related envvars of the container.
# See https://github.com/actions-runner-controller/actions-runner-controller/pull/937 for more details.
# See https://github.com/actions/actions-runner-controller/pull/937 for more details.
authSecret:
enabled: true
create: false
@@ -67,6 +61,18 @@ imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
runner:
statusUpdateHook:
enabled: false
rbac:
{}
# # This allows ARC to dynamically create a ServiceAccount and a Role for each Runner pod that uses "kubernetes" container mode,
# # by extending ARC's manager role to have the same permissions required by the pod that runs the runner agent in "kubernetes" container mode.
# # Without this, Kubernetes blocks ARC from creating the role, to prevent a privilege escalation.
# # See https://github.com/actions/actions-runner-controller/pull/1268/files#r917327010
# allowGrantingKubernetesContainerModePermissions: true
serviceAccount:
# Specifies whether a service account should be created
create: true
@@ -109,7 +115,7 @@ metrics:
enabled: true
image:
repository: quay.io/brancz/kube-rbac-proxy
tag: v0.13.0
tag: v0.13.1
resources:
{}
@@ -143,10 +149,20 @@ priorityClassName: ""
env:
{}
# specify additional environment variables for the controller pod.
# It's possible to specify either key-value pairs e.g.:
# http_proxy: "proxy.com:8080"
# https_proxy: "proxy.com:8080"
# no_proxy: ""
# or a list of complete environment variable definitions e.g.:
# - name: GITHUB_APP_INSTALLATION_ID
# valueFrom:
# secretKeyRef:
# key: some_key_in_the_secret
# name: some-secret-name
# optional: true
## specify additional volumes to mount in the manager container; this can be used
## to provide additional storage or to inject files from ConfigMaps
## into the running container
@@ -169,14 +185,18 @@ admissionWebHooks:
#caBundle: "Ci0tLS0tQk...<base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate>...tLS0K"
# There may be alternatives to setting `hostNetwork: true`, see
# https://github.com/actions-runner-controller/actions-runner-controller/issues/1005#issuecomment-993097155
# https://github.com/actions/actions-runner-controller/issues/1005#issuecomment-993097155
#hostNetwork: true
## specify log format for actions runner controller. Valid options are "text" and "json"
logFormat: text
githubWebhookServer:
enabled: false
replicaCount: 1
syncPeriod: 10m
useRunnerGroupsVisibility: false
## specify log format for github webhook server. Valid options are "text" and "json"
logFormat: text
secret:
enabled: false
create: false
@@ -211,6 +231,112 @@ githubWebhookServer:
tolerations: []
affinity: {}
priorityClassName: ""
service:
type: ClusterIP
annotations: {}
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
#nodePort: someFixedPortForUseWithTerraformCdkCfnEtc
loadBalancerSourceRanges: []
ingress:
enabled: false
ingressClassName: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
# - path: /*
# pathType: ImplementationSpecific
# Extra paths that are not automatically connected to the server. This is useful when working with annotation-based services.
extraPaths: []
# - path: /*
# backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
## for Kubernetes >=1.19 (when "networking.k8s.io/v1" is used)
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: ssl-redirect
# port:
# name: use-annotation
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# Only one of minAvailable or maxUnavailable can be set
podDisruptionBudget:
enabled: false
# minAvailable: 1
# maxUnavailable: 3
# queueLimit: 100
terminationGracePeriodSeconds: 10
lifecycle: {}
actionsMetrics:
serviceAnnotations: {}
# Set serviceMonitor=true to create a service monitor
# as a part of the helm release.
# Do note that you also need actionsMetricsServer.enabled=true
# to deploy the actions-metrics-server whose k8s service is referenced by the service monitor.
serviceMonitor: false
serviceMonitorLabels: {}
port: 8443
proxy:
enabled: true
image:
repository: quay.io/brancz/kube-rbac-proxy
tag: v0.13.1
actionsMetricsServer:
enabled: false
# DO NOT CHANGE THIS!
# See the thread below for more context.
# https://github.com/actions/actions-runner-controller/pull/1814#discussion_r974758924
replicaCount: 1
## specify log format for actions metrics server. Valid options are "text" and "json"
logFormat: text
secret:
enabled: false
create: false
name: "actions-metrics-server"
### GitHub Webhook Configuration
github_webhook_secret_token: ""
### GitHub Apps Configuration
## NOTE: IDs MUST be strings, use quotes
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
#github_token: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
priorityClassName: ""
service:
type: ClusterIP
annotations: {}
@@ -250,8 +376,3 @@ githubWebhookServer:
# hosts:
# - chart-example.local
# Only one of minAvailable or maxUnavailable can be set
podDisruptionBudget:
enabled: false
# minAvailable: 1
# maxUnavailable: 3


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,33 @@
apiVersion: v2
name: gha-runner-scale-set-controller
description: A Helm chart for installing the actions-runner-controller CRDs
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.3.0"
home: https://github.com/actions/actions-runner-controller
sources:
- "https://github.com/actions/actions-runner-controller"
maintainers:
- name: actions
url: https://github.com/actions
