Compare commits


135 Commits

Author SHA1 Message Date
mumoshu
e2d3489171 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-11-27 07:04:47 +00:00
mumoshu
90db051e3e Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-11-27 05:06:40 +00:00
mumoshu
41e135be59 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-08-30 06:02:42 +00:00
mumoshu
11938d728d Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-08-28 05:25:26 +00:00
Link-
bfd63fb6f9 Update index.yaml
Signed-off-by: Link- <Link-@users.noreply.github.com>
2023-05-12 13:19:52 +00:00
mumoshu
c70e760b19 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-04-17 13:01:55 +00:00
Link-
3c7e32eb9f Update index.yaml
Signed-off-by: Link- <Link-@users.noreply.github.com>
2023-04-06 12:05:57 +00:00
Yusuke Kuoka
ccc7b81b5c chart-repo: Update index.yaml to include 0.22.0 and 0.23.0 (#2452) 2023-03-30 05:22:21 -04:00
Jan van den Berg
701f8427a0 docs: typo + codefence fix (#2053)
* Little typo in PRIVATE_KEY_FILE_PATH.
* The code fence language `shell script` does not work and does not render well; `shell` (as used just above) does work.
2022-11-28 11:28:47 +00:00
toast-gear
20322eb2c9 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-10-25 19:13:34 +00:00
Yusuke Kuoka
72edcbba10 Show me as the verified publisher on ArtifactHub (#1815)
Ref #1502

See https://artifacthub.io/docs/topics/repositories/helm-charts/#helm-charts-repositories for more information on how ArtifactHub works for Helm charts and how we can set up this metadata file.

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-20 18:49:15 +09:00
mumoshu
5396f9322b Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-09-13 00:10:15 +00:00
mumoshu
65b0cdc588 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-07-15 01:24:15 +00:00
mumoshu
b7dbf997ec Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-07-09 23:26:54 +00:00
mumoshu
c04f1daeab Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-07-05 02:20:10 +00:00
mumoshu
84b7abe2ce Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-06-15 02:35:41 +00:00
mumoshu
85422d15a8 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-06-03 13:02:23 +00:00
toast-gear
d3e9c43c34 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-29 12:55:04 +00:00
mumoshu
6a9b0d74fd Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-04-08 01:59:57 +00:00
toast-gear
3e1fbfa830 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-03 09:16:27 +00:00
toast-gear
c842be4501 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-03-29 06:47:43 +00:00
toast-gear
132867482f Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-03-16 07:58:36 +00:00
toast-gear
6a2a90164f Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-02-21 09:25:02 +00:00
mumoshu
f7952743e5 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-02-18 01:58:55 +00:00
mumoshu
1492a0d0f9 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-02-09 00:30:45 +00:00
mumoshu
26d9758452 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-02-08 03:57:19 +00:00
Callum Tait
6ac125f060 Revert "dics: typo (#1063)" (#1080)
This reverts commit 48af148297.
2022-01-28 22:28:44 +00:00
Ashith Wilson
48af148297 dics: typo (#1063) 2022-01-28 22:27:20 +00:00
toast-gear
818c1bd3dd Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-12-08 21:59:25 +00:00
toast-gear
a4c569f552 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-11-15 19:59:31 +00:00
toast-gear
55d5550ad4 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-10-18 21:06:27 +00:00
toast-gear
ddc29b1d38 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-10-02 09:05:47 +00:00
mumoshu
b684553da2 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-09-24 00:41:09 +00:00
mumoshu
ec8a74f219 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-09-15 00:39:32 +00:00
mumoshu
73e6a91de3 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-08-31 00:47:20 +00:00
mumoshu
e1c62ee5e5 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-07-03 06:17:45 +00:00
mumoshu
a192a76ca9 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-30 11:43:05 +00:00
mumoshu
a7d378ca09 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-30 00:54:28 +00:00
mumoshu
285cfd69cd Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-29 06:51:20 +00:00
mumoshu
c1fb952a94 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-27 07:51:38 +00:00
mumoshu
b1916a0e1a Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-11 00:21:50 +00:00
mumoshu
44972a284c Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-09 23:31:21 +00:00
toast-gear
6dd93508e7 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-06-08 18:25:40 +00:00
mumoshu
930efd244d Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-04-18 04:59:38 +00:00
mumoshu
60f577ea04 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-04-06 01:10:56 +00:00
mumoshu
31a16d3c2e Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-19 02:16:24 +00:00
mumoshu
c53a03372d Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-18 23:57:04 +00:00
mumoshu
e9caad7dec Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-18 01:37:25 +00:00
mumoshu
7a21693912 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-17 22:37:12 +00:00
mumoshu
942fc9fe00 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-16 01:53:11 +00:00
mumoshu
a2096046d5 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-14 01:22:24 +00:00
mumoshu
a7cb21605c Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-09 06:04:20 +00:00
mumoshu
c495ce47ed Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-05 11:28:36 +00:00
mumoshu
f1a1941455 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-03 00:22:03 +00:00
mumoshu
a19eab8382 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-26 00:27:48 +00:00
mumoshu
4ee7e5541f Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-18 11:21:46 +00:00
mumoshu
013d5bd2b2 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-17 23:44:33 +00:00
mumoshu
c1d36ebaef Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-16 08:17:08 +00:00
mumoshu
71eb2ae333 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-16 00:46:35 +00:00
mumoshu
dd1ad63ca9 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-07 07:47:05 +00:00
mumoshu
de7e37509c Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-01-29 00:30:24 +00:00
mumoshu
51918fecbe Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-01-25 00:15:39 +00:00
mumoshu
4fb7d154d6 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-01-24 06:37:49 +00:00
Yusuke Kuoka
ace95d72ab Fix self-update failures due to /runner/externals mount (#253)
* Fix self-update failures due to /runner/externals mount

Fixes #252

* Tested Self-update Fixes (#269)

Adding fixes to #253 as confirmed and tested in https://github.com/summerwind/actions-runner-controller/issues/264#issuecomment-764549833 by @jolestar, @achedeuzot and @hfuss 🙇 🍻

Co-authored-by: Hayden Fuss <wifu1234@gmail.com>
2021-01-24 10:58:35 +09:00
Johannes Nicolai
42493d5e01 Adding --namespace parameter in example (#259)
* when setting a GitHub Enterprise Server URL without a namespace, an error occurs: `error: the server doesn't have a resource type "controller-manager"`
* setting the default namespace "actions-runner-system" makes the example work out of the box
2021-01-22 10:12:04 +09:00
Johannes Nicolai
94e8c6ffbf minReplicas <= desiredReplicas <= maxReplicas (#267)
* ensure that minReplicas <= desiredReplicas <= maxReplicas no matter what
* before this change, if the number of runners was much larger than the max, the applied scale-down factor could still result in a desired value > maxReplicas
* if runners were being permanently restarted due to resource constraints in the cluster, the number of runners could keep growing by the inverse of the scale-down factor each reconciliation round, climbing even though it should actually go down
* by checking that desiredReplicas is always <= maxReplicas, infinite scale-up loops are prevented
2021-01-22 10:11:21 +09:00
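The invariant above boils down to clamping the computed replica count. For illustration only, here is a minimal Go sketch of the idea; it is not the controller's actual code and the function name is hypothetical:

```go
package main

import "fmt"

// clampDesiredReplicas enforces minReplicas <= desired <= maxReplicas, so a
// scale-down factor applied to an unexpectedly large runner count can never
// yield a desired value above maxReplicas (and never below minReplicas).
func clampDesiredReplicas(desired, minReplicas, maxReplicas int) int {
	if desired < minReplicas {
		return minReplicas
	}
	if desired > maxReplicas {
		return maxReplicas
	}
	return desired
}

func main() {
	// E.g. scaling 12 runners down by a factor of 0.7 still gives 8,
	// which is above maxReplicas=3, so the clamp returns 3.
	fmt.Println(clampDesiredReplicas(8, 1, 3))
}
```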
callum-tait-pbx
563c79c1b9 feat/helm: add manager secret to Helm chart (#254)
* feat: adding manager secret to Helm

* fix: correcting secret data format

* feat: adding in common labels

* fix: updating default values to have config

The auth config needs to be commented out by default as we don't want to deploy both configs empty. This may break stuff, so we want the user to actively uncomment the auth method they want instead

* chore: updating default format of cert

* chore: wording
2021-01-22 10:03:25 +09:00
Johannes Nicolai
cbb41cbd18 Updating custom container example (#260)
* use latest instead of outdated version
* use sudo for package install (required)
* use sudo for package meta data removal (required)
2021-01-22 09:57:42 +09:00
Johannes Nicolai
64a1a58acf GitHub runner groups have to be created first (#261)
* in contrast to runner labels, GitHub runner groups are not automatically created
2021-01-22 09:52:35 +09:00
Reinier Timmer
524cf1b379 Update runner to v2.275.1 (#239) 2020-12-18 08:38:39 +09:00
ZacharyBenamram
0dadddfc7d Update README for "PercentageRunnersBusy" HRA metric type (#237)
* adding readme for new hpa scheme

* callum's comments

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-17 10:21:27 +09:00
ZacharyBenamram
48923fec56 Autoscaling: Percentage runners busy - remove magic number used for round up (#235)
* remove magic number for autoscaling

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-15 14:38:01 +09:00
ZacharyBenamram
466b30728d Add "PercentageRunnersBusy" horizontal runner autoscaler metric type (#223)
* hpa scheme based off busy runners

* running make manifests

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-13 08:48:19 +09:00
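As a rough, hedged sketch of the PercentageRunnersBusy scheme introduced here (the threshold and factor names mirror the new metric spec fields, but this is not the controller's actual implementation):

```go
package main

import (
	"fmt"
	"math"
)

// desiredFromBusy compares the busy ratio against the scale-up/scale-down
// thresholds and multiplies the current replica count by the matching factor.
func desiredFromBusy(current, busy int, scaleUpThreshold, scaleDownThreshold, scaleUpFactor, scaleDownFactor float64) int {
	if current == 0 {
		return current
	}
	ratio := float64(busy) / float64(current)
	switch {
	case ratio > scaleUpThreshold:
		return int(math.Ceil(float64(current) * scaleUpFactor))
	case ratio < scaleDownThreshold:
		return int(float64(current) * scaleDownFactor)
	default:
		return current
	}
}

func main() {
	// 3 of 4 runners busy (0.75) does not exceed a 0.75 scale-up threshold,
	// so the count stays at 4; 4 of 4 busy scales up by factor 1.4 to 6.
	fmt.Println(desiredFromBusy(4, 3, 0.75, 0.3, 1.4, 0.7))
	fmt.Println(desiredFromBusy(4, 4, 0.75, 0.3, 1.4, 0.7))
}
```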
callum-tait-pbx
c13704d7e2 feat: custom labels (#231)
Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-12-13 08:33:04 +09:00
callum-tait-pbx
fb49bbda75 feat: adding helm config for dind sidecar (#232)
Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-12-13 08:31:24 +09:00
Reinier Timmer
8d6f77e07c Remove beta GitHub client implementations (#228) 2020-12-10 09:08:51 +09:00
Yusuke Kuoka
dfffd3fb62 feat: EKS IAM Roles for Service Accounts for Runner Pods (#226)
One of the pod recreation conditions has been modified to use hash of runner spec, so that the controller does not keep restarting pods mutated by admission webhooks. This naturally allows us, for example, to use IRSA for EKS that requires its admission webhook to mutate the runner pod to have additional, IRSA-related volumes, volume mounts and env.

Resolves #200
2020-12-08 17:56:06 +09:00
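A minimal sketch of the spec-hash idea described above, assuming a simplified stand-in type (not the controller's real types or logic): the desired spec is hashed and stored, e.g. in a pod annotation, and the pod is recreated only when that hash changes, so webhook-added volumes, mounts, or env on the live pod do not trigger restarts.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// runnerSpecLite is a hypothetical, simplified stand-in for the runner spec.
type runnerSpecLite struct {
	Repository string   `json:"repository"`
	Image      string   `json:"image"`
	Env        []string `json:"env,omitempty"`
}

// specHash returns a stable hash of the desired spec; comparing it against a
// previously stored hash avoids diffing the live (webhook-mutated) pod.
func specHash(spec runnerSpecLite) string {
	b, err := json.Marshal(spec)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

func main() {
	desired := runnerSpecLite{Repository: "summerwind/actions-runner-controller", Image: "summerwind/actions-runner:latest"}
	lastApplied := specHash(desired) // imagine this was read back from a pod annotation
	fmt.Println("recreate pod:", specHash(desired) != lastApplied)
}
```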
Juho Saarinen
f710a54110 Don't compare runner connection token in the restart-needed check (#227)
Fixes #143
2020-12-08 08:48:35 +09:00
Yusuke Kuoka
85c29a95f5 runner: Add support for ruby/setup-ruby (#224)
It turned out previous versions of runner images were unable to run actions that require `AGENT_TOOLSDIRECTORY` or `libyaml` to exist in the runner environment. One of the notable examples of such actions is [`ruby/setup-ruby`](https://github.com/ruby/setup-ruby).

This change adds the support for those actions, by setting up AGENT_TOOLSDIRECTORY and installing libyaml-dev within runner images.
2020-12-06 11:53:38 +09:00
Erik Nobel
a2b335ad6a Github pkg: Bump github package to version 33 (#222) 2020-12-06 10:01:47 +09:00
Tom Bamford
56c57cbf71 ci: Replace deprecated crazy-max buildx action with alternative docker actions (#197)
Deprecated action `crazy-max/setup-buildx-action@v1` has been replaced with:
  `docker/setup-qemu-action@v1`
  `docker/setup-buildx-action@v1`
  `docker/login-action@v1`
  `docker/build-push-action@v2`

See: https://github.com/crazy-max/ghaction-docker-buildx
2020-12-06 10:00:10 +09:00
Ahmad Hamade
837563c976 Adding priorityClassName to helm chart (#215)
* Adding priorityClassName to helm chart and README file

* removed README and revert chart version
2020-11-30 09:04:25 +09:00
ZacharyBenamram
df99f394b4 Remove 10 minute buffer to token expiration (#214)
Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-11-30 09:03:27 +09:00
Shinnosuke Sawada
be25715e1e Use TLS for secure docker connection (#192) 2020-11-30 08:57:33 +09:00
Yusuke Kuoka
4ca825eef0 Publish runner images for v2.274.2
Ref #212
2020-11-27 08:49:58 +09:00
Yusuke Kuoka
e5101554b3 Fix release workflow to not use add-path
Fixes #208
2020-11-26 08:39:03 +09:00
Reinier Timmer
ee8fb5a388 parametrized working directory (#185)
* parametrized working directory

* manifests v3.0
2020-11-25 08:55:26 +09:00
Erik Nobel
4e93879b8f [BUG?]: Create mountpoint for /externals/ (#203)
* runner/controller: Add externals directory mount point

* Runner: Create hack for moving content of /runner/externals/ dir

* Externals dir Mount: mount examples for '__e/node12/bin/node' not found error
2020-11-25 08:53:47 +09:00
Shinnosuke Sawada
6ce6737f61 add dockerEnabled document (#193)
Follow-up for #191
2020-11-17 09:31:34 +09:00
Shinnosuke Sawada
4371de9733 add dockerEnabled option (#191)
Add dockerEnabled option for users who do not need docker and do not want to run a privileged container.
If `dockerEnabled == false`, the dind container is not run and there is no privileged container.

Do the same as closed #96
2020-11-16 09:41:12 +09:00
Yusuke Kuoka
1fd752fca2 Use tcp DOCKER_HOST instead of sharing docker.sock (#177)
The docker:dind container creates `/var/run/docker.sock` owned by the root user and group,
so the docker command in the runner container needs root privileges to use docker.sock, and docker actions fail for lack of permission.

Use a tcp connection between the runner and docker containers instead, so the runner container doesn't need root privileges to run docker and can run docker actions.

Fixes #174
2020-11-16 09:32:29 +09:00
Yusuke Kuoka
ccd86dce69 chart: Update CRDs to make Runner{Deployment,ReplicaSet} replicas optional (#189)
Ref #186
2020-11-14 22:09:06 +09:00
Yusuke Kuoka
3d531ffcdd refactor/sync-period (#188)
* refactor: adding explicit sync-period flag

* docs: adding more detail for sync period config

* docs: spelling 🙃

Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-11-14 22:07:22 +09:00
Yusuke Kuoka
1658f51fcb Make Runner{Deployment,ReplicaSet} replicas actually optional (#186)
If omitted, it properly defaults to 1.

Fixes #64
2020-11-14 22:06:33 +09:00
Yusuke Kuoka
9a22bb5086 Initial version of Helm chart (#187)
Acceptance tests are passing with the chart. In addition to standard chart values, syncPeriod is supported.
Please use it as a foundation for further collaboration.

Ref #184
Inspired by #91
Related #61
2020-11-14 22:06:14 +09:00
Yusuke Kuoka
71436f0466 Final fixes before merging 2020-11-14 22:00:41 +09:00
Yusuke Kuoka
b63879f59f Ensure the chart is passing acceptance tests 2020-11-14 21:58:16 +09:00
Callum Tait
c623ce0765 docs: spelling 🙃 2020-11-14 12:32:14 +00:00
Yusuke Kuoka
01117041b8 chart: controller-manager should not be autoscaled 2020-11-14 20:37:40 +09:00
Yusuke Kuoka
42a272051d chart: Use the correct image 2020-11-14 20:37:22 +09:00
Yusuke Kuoka
6a4c29d30e Set ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm to run acceptance tests with chart 2020-11-14 20:31:37 +09:00
Yusuke Kuoka
ce48dc58e6 Add crds to the chart 2020-11-14 20:31:21 +09:00
Callum Tait
04dde518f0 docs: adding more detail for sync period config 2020-11-14 11:22:09 +00:00
Callum Tait
1b98c8f811 refactor: adding explicit sync-period flag 2020-11-14 11:19:16 +00:00
Yusuke Kuoka
14b34efa77 Start collaborating to develop a Helm chart
Ref #184
2020-11-14 20:15:42 +09:00
Yusuke Kuoka
bbfe03f02b Add acceptance test (#168)
To ease verifying that the controller works before submitting/merging PRs and releasing a new version of the controller.
2020-11-14 20:07:14 +09:00
Yusuke Kuoka
8ccf64080c Merge pull request #182 from callum-tait-pbx/docs/improvements
docs/improvements
2020-11-14 12:02:24 +09:00
Callum Tait
492ea4b583 docs: adding a common errors section 2020-11-13 08:38:25 +00:00
Callum Tait
3446d4d761 docs: reordering to be more logical 2020-11-13 08:27:18 +00:00
Javier de Pedro López
1f82032fe3 Improvements in the documentation for permissions and replication (#175) 2020-11-13 09:36:22 +09:00
Elliot Maincourt
5b2272d80a Add read-only permissions to actions for being able to autoscale (#180) 2020-11-13 09:27:01 +09:00
Reinier Timmer
0870250f9b Enable matrix build for runner image (#179) 2020-11-13 09:24:12 +09:00
Shinnosuke Sawada
a4061d0625 gofmt ed 2020-11-12 09:20:06 +09:00
Juho Saarinen
7846f26199 Increase memory limit (#173)
We saw the controller OOMing quite often, so as a first step, increase the limit a bit.
2020-11-12 09:14:33 +09:00
Reinier Timmer
8217ebcaac Updated Runner to 2.274.1 (#176)
* Updated Runner to 2.274.1

* Update image tag in workflow
2020-11-12 09:14:06 +09:00
Shinnosuke Sawada
83857ba7e0 use tcp DOCKER_HOST instead of sharing docker.sock 2020-11-12 08:07:52 +09:00
Yusuke Kuoka
e613219a89 Fix token registration broken since v0.11.0 (#167)
Fixes #166
2020-11-11 09:38:05 +09:00
Reinier Timmer
bc35bdfa85 fixed label argument in entrypoint (#162)
* fixed label argument in entrypoint

* Removed quotes from RUNNER_GROUP_ARG
2020-11-11 08:44:51 +09:00
Yusuke Kuoka
ece8fd8fe4 Bump Go to 1.15 (#160)
Closes #104
2020-11-10 17:16:04 +09:00
Dan Webb
dcf8524b5c Adds RUNNER_GROUP argument to the runner registration (#157)
* Adds RUNNER_GROUP argument to the runner registration

Adds the ability to register a runner to a predefined runner_group

Resolves #137

* Update README with runner group example

- Updates the README with instructions of how to add the runner to a
  group
- Fix code fencing for shell and yaml blocks in the README
- Use consistent bullet points (dash not asterisk)
2020-11-10 17:15:54 +09:00
Yusuke Kuoka
4eb45d3c7f Fix build error 2020-11-10 17:09:16 +09:00
Juho Saarinen
1c30bdf35b Add GHE URL to transport (#152)
Fixes #149
2020-11-10 17:05:09 +09:00
Yusuke Kuoka
3f335ca628 Fix panic on startup when misconfigured (#154)
Fixes #153
2020-11-10 17:03:33 +09:00
Juho Saarinen
f2a2ab7ede Check token validity only when creating new pod (#159)
Fixes #143
2020-11-10 17:02:30 +09:00
Juho Saarinen
40c5050978 Added support for other than public GitHub URL (#146)
Refactoring a bit
2020-10-28 22:15:53 +09:00
Juho Saarinen
99a53a6e79 Releasing latest controller from master push (#147)
Fixes #135
2020-10-28 22:13:35 +09:00
Yusuke Kuoka
6d78fb07b3 Fix permission error with the default setup since v0.9.4 (#142)
Fixes #138
2020-10-25 11:25:48 +09:00
Yusuke Kuoka
faaca10fba Rename Runner.Spec.dockerWithinRunnerContainer to docker"d"WithinRunnerContainer (#134)
* Rename Runner.Spec.dockerWithinRunnerContainer to dockerdWithinRunnerContainer

Ref https://github.com/summerwind/actions-runner-controller/pull/126#issuecomment-712501790
2020-10-21 21:32:40 +09:00
Juho Saarinen
d16dfac0f8 Restart if pod ends up succeeded (#136)
Fixes #132
2020-10-21 21:32:26 +09:00
Juho Saarinen
af483d83da Possibility to define resources for DIND container (#125)
Ref #119
2020-10-21 10:26:19 +09:00
Juho Saarinen
92920926fe Configurable "runner and DinD in a single container" (#126) 2020-10-20 08:48:28 +09:00
Brendan Galloway
7d0bfb77e3 Inject Env Vars into Runner defined Container Spec (#127)
The runner token is now injected into the `runner` container defined within Runner.Spec.Containers[]
2020-10-20 08:43:53 +09:00
Brendan Galloway
c4074130e8 Patch additional protocol instances during manifest generation (#129)
Fixes #128
2020-10-19 09:57:53 +09:00
Yusuke Kuoka
be2e61f209 Add "alternatives" section to README (#124)
Grow together :)
2020-10-15 08:40:02 +09:00
Juho Saarinen
da818a898a Manifests valid for K8s 1.18 and 1.19 (#123)
Fixes #113
Fixes #116
2020-10-15 08:39:46 +09:00
73 changed files with 9586 additions and 17179 deletions


@@ -11,54 +11,54 @@ on:
   paths:
     - runner/patched/*
     - runner/Dockerfile
+    - runner/dindrunner.Dockerfile
     - runner/entrypoint.sh
     - .github/workflows/build-runner.yml
 name: Runner
 jobs:
   build:
     runs-on: ubuntu-latest
-    name: Build
+    name: Build ${{ matrix.name }}
+    strategy:
+      matrix:
+        include:
+          - name: actions-runner
+            dockerfile: Dockerfile
+          - name: actions-runner-dind
+            dockerfile: dindrunner.Dockerfile
     env:
-      RUNNER_VERSION: 2.273.5
+      RUNNER_VERSION: 2.275.1
       DOCKER_VERSION: 19.03.12
       DOCKERHUB_USERNAME: ${{ github.repository_owner }}
     steps:
       - name: Checkout
         uses: actions/checkout@v2
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v1
       - name: Set up Docker Buildx
-        id: buildx
-        uses: crazy-max/ghaction-docker-buildx@v1
+        uses: docker/setup-buildx-action@v1
         with:
-          buildx-version: latest
-      - name: Build Container Image
-        working-directory: runner
-        if: ${{ github.event_name == 'pull_request' }}
-        run: |
-          docker buildx build \
-            --build-arg RUNNER_VERSION=${RUNNER_VERSION} \
-            --build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-            --platform linux/amd64,linux/arm64 \
-            --tag ${DOCKERHUB_USERNAME}/actions-runner:v${RUNNER_VERSION} \
-            --tag ${DOCKERHUB_USERNAME}/actions-runner:latest \
-            -f Dockerfile .
-      - name: Login to GitHub Docker Registry
-        run: echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
+          version: latest
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
         if: ${{ github.event_name == 'push' }}
-        env:
-          DOCKERHUB_USERNAME: ${{ github.repository_owner }}
-          DOCKERHUB_PASSWORD: ${{ secrets.DOCKER_ACCESS_TOKEN }}
-      - name: Build and Push Container Image
-        working-directory: runner
-        if: ${{ github.event_name == 'push' }}
-        run: |
-          docker buildx build \
-            --build-arg RUNNER_VERSION=${RUNNER_VERSION} \
-            --build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-            --platform linux/amd64,linux/arm64 \
-            --tag ${DOCKERHUB_USERNAME}/actions-runner:v${RUNNER_VERSION} \
-            --tag ${DOCKERHUB_USERNAME}/actions-runner:latest \
-            -f Dockerfile . --push
+        with:
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
+      - name: Build [and Push]
+        uses: docker/build-push-action@v2
+        with:
+          context: ./runner
+          file: ./runner/${{ matrix.dockerfile }}
+          platforms: linux/amd64,linux/arm64
+          push: ${{ github.event_name != 'pull_request' }}
+          build-args: |
+            RUNNER_VERSION=${{ env.RUNNER_VERSION }}
+            DOCKER_VERSION=${{ env.DOCKER_VERSION }}
+          tags: |
+            ${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}
+            ${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:latest


@@ -6,6 +6,8 @@ jobs:
   build:
     runs-on: ubuntu-latest
     name: Release
+    env:
+      DOCKERHUB_USERNAME: ${{ github.repository_owner }}
     steps:
       - name: Checkout
         uses: actions/checkout@v2
@@ -22,30 +24,33 @@ jobs:
           sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
       - name: Set version
-        run: echo "::set-env name=VERSION::$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')"
+        run: echo "VERSION=$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')" >> $GITHUB_ENV
       - name: Upload artifacts
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: make github-release
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v1
       - name: Set up Docker Buildx
         id: buildx
-        uses: crazy-max/ghaction-docker-buildx@v1
+        uses: docker/setup-buildx-action@v1
         with:
-          buildx-version: latest
+          version: latest
-      - name: Login to GitHub Docker Registry
-        run: echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
-        env:
-          DOCKERHUB_USERNAME: ${{ github.repository_owner }}
-          DOCKERHUB_PASSWORD: ${{ secrets.DOCKER_ACCESS_TOKEN }}
-      - name: Build Container Image
-        env:
-          DOCKERHUB_USERNAME: ${{ github.repository_owner }}
-        run: |
-          docker buildx build \
-            --platform linux/amd64,linux/arm64 \
-            --tag ${DOCKERHUB_USERNAME}/actions-runner-controller:${{ env.VERSION }} \
-            -f Dockerfile . --push
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
+      - name: Build and Push
+        uses: docker/build-push-action@v2
+        with:
+          file: Dockerfile
+          platforms: linux/amd64,linux/arm64
+          push: true
+          tags: ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}

.github/workflows/wip.yml (new file)

@@ -0,0 +1,40 @@
on:
  push:
    branches:
      - master
    paths-ignore:
      - "runner/**"
jobs:
  build:
    runs-on: ubuntu-latest
    name: release-latest
    env:
      DOCKERHUB_USERNAME: ${{ github.repository_owner }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
        with:
          version: latest
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ github.repository_owner }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Build and Push
        uses: docker/build-push-action@v2
        with:
          file: Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest

.gitignore

@@ -23,3 +23,6 @@ bin
 *.swp
 *.swo
 *~
+.envrc
+*.pem


@@ -1,5 +1,5 @@
 # Build the manager binary
-FROM golang:1.13 as builder
+FROM golang:1.15 as builder
 ARG TARGETPLATFORM


@@ -1,5 +1,8 @@
 NAME ?= summerwind/actions-runner-controller
 VERSION ?= latest
+# From https://github.com/VictoriaMetrics/operator/pull/44
+YAML_DROP=$(YQ) delete --inplace
+YAML_DROP_PREFIX=spec.validation.openAPIV3Schema.properties.spec.properties
 # Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
 CRD_OPTIONS ?= "crd:trivialVersions=true"
@@ -56,9 +59,14 @@ deploy: manifests
 	kustomize build config/default | kubectl apply -f -
 # Generate manifests e.g. CRD, RBAC etc.
-manifests: controller-gen
+manifests: manifests-118 fix118 chart-crds
+manifests-118: controller-gen
 	$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
+chart-crds:
+	cp config/crd/bases/*.yaml charts/actions-runner-controller/crds/
 # Run go fmt against code
 fmt:
 	go fmt ./...
@@ -67,6 +75,22 @@ fmt:
 vet:
 	go vet ./...
+# workaround for CRD issue with k8s 1.18 & controller-gen
+# ref: https://github.com/kubernetes/kubernetes/issues/91395
+fix118: yq
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).containers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).initContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).sidecarContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).ephemeralContainers.items.properties
 # Generate code
 generate: controller-gen
 	$(CONTROLLER_GEN) object:headerFile=./hack/boilerplate.go.txt paths="./..."
@@ -97,6 +121,37 @@ release: manifests
 	mkdir -p release
 	kustomize build config/default > release/actions-runner-controller.yaml
+.PHONY: release/clean
+release/clean:
+	rm -rf release
+.PHONY: acceptance
+acceptance: release/clean docker-build docker-push release
+	ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
+	ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
+	ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
+	ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
+acceptance/kind:
+	kind create cluster --name acceptance
+	kubectl cluster-info --context kind-acceptance
+acceptance/setup:
+	kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml #kubectl create namespace actions-runner-system
+	kubectl -n cert-manager wait deploy/cert-manager-cainjector --for condition=available --timeout 60s
+	kubectl -n cert-manager wait deploy/cert-manager-webhook --for condition=available --timeout 60s
+	kubectl -n cert-manager wait deploy/cert-manager --for condition=available --timeout 60s
+	kubectl create namespace actions-runner-system || true
+	# Adhocly wait for some time until cert-manager's admission webhook gets ready
+	sleep 5
+acceptance/teardown:
+	kind delete cluster --name acceptance
+acceptance/tests:
+	acceptance/deploy.sh
+	acceptance/checks.sh
 # Upload release file to GitHub.
 github-release: release
 	ghr ${VERSION} release/
@@ -105,6 +160,7 @@ github-release: release
 # download controller-gen if necessary
 controller-gen:
 ifeq (, $(shell which controller-gen))
+ifeq (, $(wildcard $(GOBIN)/controller-gen))
 	@{ \
 	set -e ;\
 	CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
@@ -113,7 +169,25 @@ ifeq (, $(shell which controller-gen))
 	go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.3.0 ;\
 	rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
 	}
+endif
 CONTROLLER_GEN=$(GOBIN)/controller-gen
 else
 CONTROLLER_GEN=$(shell which controller-gen)
 endif
+# find or download yq
+# download yq if necessary
+# Use always go-version to get consistent line wraps etc.
+yq:
+ifeq (, $(wildcard $(GOBIN)/yq))
+	echo "Downloading yq"
+	@{ \
+	set -e ;\
+	YQ_TMP_DIR=$$(mktemp -d) ;\
+	cd $$YQ_TMP_DIR ;\
+	go mod init tmp ;\
+	go get github.com/mikefarah/yq/v3@3.4.0 ;\
+	rm -rf $$YQ_TMP_DIR ;\
+	}
+endif
+YQ=$(GOBIN)/yq

README.md

@@ -17,9 +17,19 @@ actions-runner-controller uses [cert-manager](https://cert-manager.io/docs/insta
 Install the custom resource and actions-runner-controller itself. This will create actions-runner-system namespace in your Kubernetes and deploy the required resources.
 ```
-$ kubectl apply -f https://github.com/summerwind/actions-runner-controller/releases/latest/download/actions-runner-controller.yaml
+kubectl apply -f https://github.com/summerwind/actions-runner-controller/releases/latest/download/actions-runner-controller.yaml
 ```
+### Github Enterprise support
+If you use either Github Enterprise Cloud or Server (and have recent enought version supporting Actions), you can use **actions-runner-controller** with those, too. Authentication works same way as with public Github (repo and organization level).
+```shell
+kubectl set env deploy controller-manager -c manager GITHUB_ENTERPRISE_URL=<GHEC/S URL> --namespace actions-runner-system
+```
+[Enterprise level](https://docs.github.com/en/enterprise-server@2.22/actions/hosting-your-own-runners/adding-self-hosted-runners#adding-a-self-hosted-runner-to-an-enterprise) runners are not working yet as there's no API definition for those.
 ## Setting up authentication with GitHub API
 There are two ways for actions-runner-controller to authenticate with the GitHub API:
@@ -27,17 +37,23 @@ There are two ways for actions-runner-controller to authenticate with the GitHub
 1. Using GitHub App.
 2. Using Personal Access Token.
+Regardless of which authentication method you use, the same permissions are required, those permissions are:
+- Repository: Administration (read/write)
+- Repository: Actions (read)
+- Organization: Self-hosted runners (read/write)
 **NOTE: It is extremely important to only follow one of the sections below and not both.**
 ### Using GitHub App
 You can create a GitHub App for either your account or any organization. If you want to create a GitHub App for your account, open the following link to the creation page, enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page.
-- [Create GitHub Apps on your account](https://github.com/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write)
+- [Create GitHub Apps on your account](https://github.com/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write&actions=read)
 If you want to create a GitHub App for your organization, replace the `:org` part of the following URL with your organization name before opening it. Then enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page to create a GitHub App.
-- [Create GitHub Apps on your organization](https://github.com/organizations/:org/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write&organization_self_hosted_runners=write)
+- [Create GitHub Apps on your organization](https://github.com/organizations/:org/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write&organization_self_hosted_runners=write&actions=read)
 You will see an *App ID* on the page of the GitHub App you created as follows, the value of this App ID will be used later.
@@ -58,7 +74,7 @@ When the installation is complete, you will be taken to a URL in one of the foll
 Finally, register the App ID (`APP_ID`), Installation ID (`INSTALLATION_ID`), and downloaded private key file (`PRIVATE_KEY_FILE_PATH`) to Kubernetes as Secret.
-```
+```shell
 $ kubectl create secret generic controller-manager \
   -n actions-runner-system \
   --from-literal=github_app_id=${APP_ID} \
@@ -80,8 +96,8 @@ Open the Create Token page from the following link, grant the `repo` and/or `adm
 Register the created token (`GITHUB_TOKEN`) as a Kubernetes secret.
-```
+```shell
-$ kubectl create secret generic controller-manager \
+kubectl create secret generic controller-manager \
   -n actions-runner-system \
   --from-literal=github_token=${GITHUB_TOKEN}
 ```
@@ -97,7 +113,7 @@ There are two ways to use this controller:
 To launch a single self-hosted runner, you need to create a manifest file includes *Runner* resource as follows. This example launches a self-hosted runner with name *example-runner* for the *summerwind/actions-runner-controller* repository.
-```
+```yaml
 # runner.yaml
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: Runner
@@ -110,14 +126,14 @@ spec:
 Apply the created manifest file to your Kubernetes.
-```
+```shell
 $ kubectl apply -f runner.yaml
 runner.actions.summerwind.dev/example-runner created
 ```
 You can see that the Runner resource has been created.
-```
+```shell
 $ kubectl get runners
 NAME             REPOSITORY                              STATUS
 example-runner   summerwind/actions-runner-controller   Running
@@ -125,7 +141,7 @@ example-runner summerwind/actions-runner-controller Running
 You can also see that the runner pod has been running.
-```
+```shell
 $ kubectl get pods
 NAME             READY   STATUS    RESTARTS   AGE
 example-runner   2/2     Running   0          1m
@@ -141,7 +157,7 @@ Now you can use your self-hosted runner. See the [official documentation](https:
 To add the runner to an organization, you only need to replace the `repository` field with `organization`, so the runner will register itself to the organization.
-```
+```yaml
 # runner.yaml
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: Runner
@@ -175,14 +191,14 @@ spec:
 Apply the manifest file to your cluster:
-```
+```shell
 $ kubectl apply -f runner.yaml
 runnerdeployment.actions.summerwind.dev/example-runnerdeploy created
 ```
 You can see that 2 runners have been created as specified by `replicas: 2`:
-```
+```shell
 $ kubectl get runners
 NAME                             REPOSITORY                             STATUS
 example-runnerdeploy2475h595fr   mumoshu/actions-runner-controller-ci   Running
@@ -195,7 +211,7 @@ example-runnerdeploy2475ht2qbr mumoshu/actions-runner-controller-ci Running
 In the below example, `actions-runner` checks for pending workflow runs for each sync period, and scale to e.g. 3 if there're 3 pending jobs at sync time.
-```
+```yaml
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: RunnerDeployment
 metadata:
@@ -220,12 +236,12 @@ spec:
     - summerwind/actions-runner-controller
 ```
-Please also note that the sync period is set to 10 minutes by default and it's configurable via `--sync-period` flag.
+The scale out performance is controlled via the manager containers startup `--sync-period` argument. The default value is 10 minutes to prevent unconfigured deployments rate limiting themselves from the GitHub API. The period can be customised in the `config/default/manager_auth_proxy_patch.yaml` patch for those that are building the solution via the kustomize setup.
 Additionally, the autoscaling feature has an anti-flapping option that prevents periodic loop of scaling up and down.
 By default, it doesn't scale down until the grace period of 10 minutes passes after a scale up. The grace period can be configured by setting `scaleDownDelaySecondsAfterScaleUp`:
-```
+```yaml
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: RunnerDeployment
 metadata:
@@ -251,6 +267,50 @@ spec:
     - summerwind/actions-runner-controller
 ```
+If you do not want to manage an explicit list of repositories to scale, an alternate autoscaling scheme that can be applied is the PercentageRunnersBusy scheme. The number of desired pods are evaulated by checking how many runners are currently busy and applying a scaleup or scale down factor if certain thresholds are met. By setting the metric type to PercentageRunnersBusy, the HorizontalRunnerAutoscaler will query github for the number of busy runners which live in the RunnerDeployment namespace. Scaleup and scaledown thresholds are the percentage of busy runners at which the number of desired runners are re-evaluated. Scaleup and scaledown factors are the multiplicative factor applied to the current number of runners used to calculate the number of desired runners. This scheme is also especially useful if you want multiple controllers in various clusters, each responsible for scaling their own runner pods per namespace.
+```yaml
+---
+apiVersion: actions.summerwind.dev/v1alpha1
+kind: HorizontalRunnerAutoscaler
+metadata:
+  name: example-runner-deployment-autoscaler
+spec:
+  scaleTargetRef:
+    name: example-runner-deployment
+  minReplicas: 1
+  maxReplicas: 3
+  scaleDownDelaySecondsAfterScaleOut: 60
+  metrics:
+  - type: PercentageRunnersBusy
+    scaleUpThreshold: '0.75'
+    scaleDownThreshold: '0.3'
+    scaleUpFactor: '1.4'
+    scaleDownFactor: '0.7'
+```
+## Runner with DinD
+When using default runner, runner pod starts up 2 containers: runner and DinD (Docker-in-Docker). This might create issues if there's `LimitRange` set to namespace.
+```yaml
+# dindrunnerdeployment.yaml
+apiVersion: actions.summerwind.dev/v1alpha1
+kind: RunnerDeployment
+metadata:
+  name: example-dindrunnerdeploy
+spec:
+  replicas: 2
+  template:
+    spec:
+      image: summerwind/actions-runner-dind
+      dockerdWithinRunnerContainer: true
+      repository: mumoshu/actions-runner-controller-ci
+      env: []
+```
+This also helps with resources, as you don't need to give resources separately to docker and runner.
 ## Additional tweaks
 You can pass details through the spec selector. Here's an eg. of what you may like to do:
@@ -283,6 +343,19 @@ spec:
         requests:
           cpu: "2.0"
           memory: "4Gi"
+      # If set to false, there are no privileged container and you cannot use docker.
+      dockerEnabled: false
+      # If set to true, runner pod container only 1 container that's expected to be able to run docker, too.
+      # image summerwind/actions-runner-dind or custom one should be used with true -value
+      dockerdWithinRunnerContainer: false
+      # Valid if dockerdWithinRunnerContainer is not true
+      dockerdContainerResources:
+        limits:
+          cpu: "4.0"
+          memory: "8Gi"
+        requests:
+          cpu: "2.0"
+          memory: "4Gi"
       sidecarContainers:
         - name: mysql
           image: mysql:5.7
@@ -291,6 +364,10 @@ spec:
               value: abcd1234
           securityContext:
             runAsUser: 0
+      # if workDir is not specified, the default working directory is /runner/_work
+      # this setting allows you to customize the working directory location
+      # for example, the below setting is the same as on the ubuntu-18.04 image
+      workDir: /home/runner/work
 ```
 ## Runner labels
@@ -330,27 +407,72 @@ jobs:
 Note that if you specify `self-hosted` in your workflow, then this will run your job on _any_ self-hosted runner, regardless of the labels that they have.
+## Runner Groups
+Runner groups can be used to limit which repositories are able to use the GitHub Runner at an Organisation level. Runner groups have to be [created in GitHub first](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups) before they can be referenced.
+To add the runner to the group `NewGroup`, specify the group in your `Runner` or `RunnerDeployment` spec.
+```yaml
+# runnerdeployment.yaml
+apiVersion: actions.summerwind.dev/v1alpha1
+kind: RunnerDeployment
+metadata:
+  name: custom-runner
+spec:
+  replicas: 1
+  template:
+    spec:
+      group: NewGroup
+```
+## Using EKS IAM role for service accounts
+`actions-runner-controller` v0.15.0 or later has support for EKS IAM role for service accounts.
+As similar as for regular pods and deployments, you firstly need an existing service account with the IAM role associated.
+Create one using e.g. `eksctl`. You can refer to [the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) for more details.
+Once you set up the service account, all you need is to add `serviceAccountName` and `fsGroup` to any pods that uses
+the IAM-role enabled service account.
+For `RunnerDeployment`, you can set those two fields under the runner spec at `RunnerDeployment.Spec.Template`:
+```yaml
+apiVersion: actions.summerwind.dev/v1alpha1
+kind: RunnerDeployment
+metadata:
+  name: example-runnerdeploy
+spec:
+  template:
+    spec:
+      repository: USER/REO
+      serviceAccountName: my-service-account
+      securityContext:
+        fsGroup: 1447
+```
 ## Software installed in the runner image
-The GitHub hosted runners include a large amount of pre-installed software packages. For Ubuntu 18.04, this list can be found at https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md
+The GitHub hosted runners include a large amount of pre-installed software packages. For Ubuntu 18.04, this list can be found at <https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md>
 The container image is based on Ubuntu 18.04, but it does not contain all of the software installed on the GitHub runners. It contains the following subset of packages from the GitHub runners:
-* Basic CLI packages
-* git (2.26)
-* docker
-* build-essentials
+- Basic CLI packages
+- git (2.26)
+- docker
+- build-essentials
 The virtual environments from GitHub contain a lot more software packages (different versions of Java, Node.js, Golang, .NET, etc) which are not provided in the runner image. Most of these have dedicated setup actions which allow the tools to be installed on-demand in a workflow, for example: `actions/setup-java` or `actions/setup-node`
 If there is a need to include packages in the runner image for which there is no setup action, then this can be achieved by building a custom container image for the runner. The easiest way is to start with the `summerwind/actions-runner` image and installing the extra dependencies directly in the docker image:
-```yaml
+```shell
-FROM summerwind/actions-runner:v2.169.1
+FROM summerwind/actions-runner:latest
 RUN sudo apt update -y \
-  && apt install YOUR_PACKAGE
-  && rm -rf /var/lib/apt/lists/*
+  && sudo apt install YOUR_PACKAGE
+  && sudo rm -rf /var/lib/apt/lists/*
 ```
 You can then configure the runner to use a custom docker image by configuring the `image` field of a `Runner` or `RunnerDeployment`:
@@ -364,3 +486,66 @@ spec:
     repository: summerwind/actions-runner-controller
     image: YOUR_CUSTOM_DOCKER_IMAGE
 ```
+## Common Errors
+### invalid header field value
+```json
+2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error {"controller": "runner", "request": "actions-runner-system/runner-deployment-dk7q8-dk5c9", "error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"}
+```
+**Solutions**<br />
+Your base64'ed PAT token has a new line at the end, it needs to be created without a `\n` added
+* `echo -n $TOKEN | base64`
+* Create the secret as described in the docs using the shell and documeneted flags
+# Developing
+If you'd like to modify the controller to fork or contribute, I'd suggest using the following snippet for running
+the acceptance test:
+```shell
+# This sets `VERSION` envvar to some appropriate value
+. hack/make-env.sh
+NAME=$DOCKER_USER/actions-runner-controller \
+  GITHUB_TOKEN=*** \
+  APP_ID=*** \
+  PRIVATE_KEY_FILE_PATH=path/to/pem/file \
+  INSTALLATION_ID=*** \
+  make docker-build docker-push acceptance
+```
+Please follow the instructions explained in [Using Personal Access Token](#using-personal-access-token) to obtain
+`GITHUB_TOKEN`, and those in [Using GitHub App](#using-github-app) to obtain `APP_ID`, `INSTALLATION_ID`, and
+`PRIVATE_KEY_FILE_PATH`.
+The test creates a one-off `kind` cluster, deploys `cert-manager` and `actions-runner-controller`,
+creates a `RunnerDeployment` custom resource for a public Git repository to confirm that the
+controller is able to bring up a runner pod with the actions runner registration token installed.
+If you prefer to test in a non-kind cluster, you can instead run:
+```shell
+KUBECONFIG=path/to/kubeconfig \
+  NAME=$DOCKER_USER/actions-runner-controller \
+  GITHUB_TOKEN=*** \
+  APP_ID=*** \
+  PRIVATE_KEY_FILE_PATH=path/to/pem/file \
+  INSTALLATION_ID=*** \
+  ACCEPTANCE_TEST_SECRET_TYPE=token \
+  make docker-build docker-push \
+  acceptance/setup acceptance/tests
+```
+# Alternatives
+The following is a list of alternative solutions that may better fit you depending on your use-case:
+- <https://github.com/evryfs/github-actions-runner-operator/>
+Although the situation can change over time, as of writing this sentence, the benefits of using `actions-runner-controller` over the alternatives are:
+- `actions-runner-controller` has the ability to autoscale runners based on number of pending/progressing jobs (#99)
+- `actions-runner-controller` is able to gracefully stop runners (#103)
+- `actions-runner-controller` has ARM support

acceptance/checks.sh (new executable file)

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -e

runner_name=
while [ -z "${runner_name}" ]; do
  echo Finding the runner... 1>&2
  sleep 1
  runner_name=$(kubectl get runner --output=jsonpath="{.items[*].metadata.name}")
done
echo Found runner ${runner_name}.

pod_name=
while [ -z "${pod_name}" ]; do
  echo Finding the runner pod... 1>&2
  sleep 1
  pod_name=$(kubectl get pod --output=jsonpath="{.items[*].metadata.name}" | grep ${runner_name})
done
echo Found pod ${pod_name}.

echo Waiting for pod ${runner_name} to become ready... 1>&2
kubectl wait pod/${runner_name} --for condition=ready --timeout 180s

echo All tests passed. 1>&2

acceptance/deploy.sh (new executable file)

@@ -0,0 +1,42 @@
#!/usr/bin/env bash
set -e
tpe=${ACCEPTANCE_TEST_SECRET_TYPE}
if [ "${tpe}" == "token" ]; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_token=${GITHUB_TOKEN:?GITHUB_TOKEN must not be empty}
elif [ "${tpe}" == "app" ]; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_app_id=${APP_ID:?must not be empty} \
--from-literal=github_app_installation_id=${INSTALLATION_ID:?must not be empty} \
--from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH:?must not be empty}
else
echo "ACCEPTANCE_TEST_SECRET_TYPE must be set to either \"token\" or \"app\"" 1>&2
exit 1
fi
tool=${ACCEPTANCE_TEST_DEPLOYMENT_TOOL}
if [ "${tool}" == "helm" ]; then
helm upgrade --install actions-runner-controller \
charts/actions-runner-controller \
-n actions-runner-system \
--create-namespace \
--set syncPeriod=5m
kubectl -n actions-runner-system wait deploy/actions-runner-controller --for condition=available
else
kubectl apply \
-n actions-runner-system \
-f release/actions-runner-controller.yaml
kubectl -n actions-runner-system wait deploy/controller-manager --for condition=available --timeout 60s
fi
# Ad-hoc wait for some time until actions-runner-controller's admission webhook gets ready
sleep 20
kubectl apply \
-f acceptance/testdata/runnerdeploy.yaml

acceptance/testdata/runnerdeploy.yaml vendored Normal file

@@ -0,0 +1,9 @@
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeploy
spec:
# replicas: 1
template:
spec:
repository: mumoshu/actions-runner-controller-ci


@@ -56,6 +56,26 @@ type MetricSpec struct {
// For example, a repository name is the REPO part of `github.com/USER/REPO`.
// +optional
RepositoryNames []string `json:"repositoryNames,omitempty"`
// ScaleUpThreshold is the percentage of busy runners greater than which will
// trigger the hpa to scale runners up.
// +optional
ScaleUpThreshold string `json:"scaleUpThreshold,omitempty"`
// ScaleDownThreshold is the percentage of busy runners less than which will
// trigger the hpa to scale the runners down.
// +optional
ScaleDownThreshold string `json:"scaleDownThreshold,omitempty"`
// ScaleUpFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be added.
// +optional
ScaleUpFactor string `json:"scaleUpFactor,omitempty"`
// ScaleDownFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be removed.
// +optional
ScaleDownFactor string `json:"scaleDownFactor,omitempty"`
}
type HorizontalRunnerAutoscalerStatus struct {


@@ -36,9 +36,14 @@ type RunnerSpec struct {
// +optional
Labels []string `json:"labels,omitempty"`
// +optional
Group string `json:"group,omitempty"`
// +optional
Containers []corev1.Container `json:"containers,omitempty"`
// +optional
DockerdContainerResources corev1.ResourceRequirements `json:"dockerdContainerResources,omitempty"`
// +optional
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
// +optional
VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
@@ -54,6 +59,8 @@ type RunnerSpec struct {
// +optional
Volumes []corev1.Volume `json:"volumes,omitempty"`
// +optional
WorkDir string `json:"workDir,omitempty"`
// +optional
InitContainers []corev1.Container `json:"initContainers,omitempty"`
@@ -77,6 +84,10 @@ type RunnerSpec struct {
EphemeralContainers []corev1.EphemeralContainer `json:"ephemeralContainers,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
// +optional
DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
// +optional
DockerEnabled *bool `json:"dockerEnabled,omitempty"`
}
// ValidateRepository validates repository field.


@@ -22,11 +22,13 @@ import (
const (
AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns = "TotalNumberOfQueuedAndInProgressWorkflowRuns"
AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy"
)
// RunnerReplicaSetSpec defines the desired state of RunnerDeployment
type RunnerDeploymentSpec struct {
// +optional
// +nullable
Replicas *int `json:"replicas,omitempty"`
Template RunnerTemplate `json:"template"`


@@ -22,7 +22,9 @@ import (
// RunnerReplicaSetSpec defines the desired state of RunnerReplicaSet
type RunnerReplicaSetSpec struct {
Replicas *int `json:"replicas"` // +optional
// +nullable
Replicas *int `json:"replicas,omitempty"`
Template RunnerTemplate `json:"template"`
}


@@ -435,6 +435,7 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
in.DockerdContainerResources.DeepCopyInto(&out.DockerdContainerResources)
in.Resources.DeepCopyInto(&out.Resources)
if in.VolumeMounts != nil {
in, out := &in.VolumeMounts, &out.VolumeMounts
@@ -524,6 +525,16 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
*out = new(int64)
**out = **in
}
if in.DockerdWithinRunnerContainer != nil {
in, out := &in.DockerdWithinRunnerContainer, &out.DockerdWithinRunnerContainer
*out = new(bool)
**out = **in
}
if in.DockerEnabled != nil {
in, out := &in.DockerEnabled, &out.DockerEnabled
*out = new(bool)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSpec.

artifacthub-repo.yml Normal file

@@ -0,0 +1,4 @@
repositoryID: 6e120248-b034-45e5-b16c-6015ecfa7c6c
owners:
- name: mumoshu
email: ykuoka@gmail.com


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,23 @@
apiVersion: v2
name: actions-runner-controller
description: A Kubernetes controller that operates self-hosted runners for GitHub Actions on your Kubernetes cluster.
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.11.2


@@ -0,0 +1,136 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: horizontalrunnerautoscalers.actions.summerwind.dev
spec:
additionalPrinterColumns:
- JSONPath: .spec.minReplicas
name: Min
type: number
- JSONPath: .spec.maxReplicas
name: Max
type: number
- JSONPath: .status.desiredReplicas
name: Desired
type: number
group: actions.summerwind.dev
names:
kind: HorizontalRunnerAutoscaler
listKind: HorizontalRunnerAutoscalerList
plural: horizontalrunnerautoscalers
singular: horizontalrunnerautoscaler
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of
HorizontalRunnerAutoscaler
properties:
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment
is allowed to scale
type: integer
metrics:
description: Metrics is the collection of various metric targets to
calculate desired number of runners
items:
properties:
repositoryNames:
description: RepositoryNames is the list of repository names to
be used for calculating the metric. For example, a repository
name is the REPO part of `github.com/USER/REPO`.
items:
type: string
type: array
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners
less than which will trigger the hpa to scale the runners down.
type: string
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners
greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling.
The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
type: string
type: object
type: array
minReplicas:
description: MinReplicas is the minimum number of replicas the deployment
is allowed to scale
type: integer
scaleDownDelaySecondsAfterScaleOut:
description: ScaleDownDelaySecondsAfterScaleUp is the approximate delay
for a scale down followed by a scale up Used to prevent flapping (down->up->down->...
loop)
type: integer
scaleTargetRef:
description: ScaleTargetRef is the reference to the scaled resource like
RunnerDeployment
properties:
name:
type: string
type: object
type: object
status:
properties:
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated
and latest pods to be set for the primary RunnerSet This doesn't include
outdated pods while upgrading the deployment and replacing the runnerset.
type: integer
lastSuccessfulScaleOutTime:
format: date-time
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed
for the target. It corresponds to e.g. RunnerDeployment's generation,
which is updated on mutation by the API Server.
format: int64
type: integer
type: object
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

File diff suppressed because it is too large


@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "actions-runner-controller.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "actions-runner-controller.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "actions-runner-controller.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "actions-runner-controller.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}


@@ -0,0 +1,101 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "actions-runner-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "actions-runner-controller.labels" -}}
helm.sh/chart: {{ include "actions-runner-controller.chart" . }}
{{ include "actions-runner-controller.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "actions-runner-controller.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller.leaderElectionRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-leader-election
{{- end }}
{{- define "actions-runner-controller.authProxyRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-proxy
{{- end }}
{{- define "actions-runner-controller.managerRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-manager
{{- end }}
{{- define "actions-runner-controller.runnerEditorRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-runner-editor
{{- end }}
{{- define "actions-runner-controller.runnerViewerRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-runner-viewer
{{- end }}
{{- define "actions-runner-controller.webhookServiceName" -}}
{{- include "actions-runner-controller.fullname" . }}-webhook
{{- end }}
{{- define "actions-runner-controller.authProxyServiceName" -}}
{{- include "actions-runner-controller.fullname" . }}-controller-manager-metrics-service
{{- end }}
{{- define "actions-runner-controller.selfsignedIssuerName" -}}
{{- include "actions-runner-controller.fullname" . }}-selfsigned-issuer
{{- end }}
{{- define "actions-runner-controller.servingCertName" -}}
{{- include "actions-runner-controller.fullname" . }}-serving-cert
{{- end }}


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
rules:
- apiGroups: ["authentication.k8s.io"]
resources:
- tokenreviews
verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
resources:
- subjectaccessreviews
verbs: ["create"]


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
name: {{ include "actions-runner-controller.authProxyServiceName" . }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- name: https
port: 8443
targetPort: https
selector:
{{- include "actions-runner-controller.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,24 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
# WARNING: Targets CertManager 0.11 check https://docs.cert-manager.io/en/latest/tasks/upgrading/index.html for breaking changes
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ include "actions-runner-controller.selfsignedIssuerName" . }}
namespace: {{ .Namespace }}
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ include "actions-runner-controller.servingCertName" . }}
namespace: {{ .Namespace }}
spec:
dnsNames:
- {{ include "actions-runner-controller.webhookServiceName" . }}.{{ .Release.Namespace }}.svc
- {{ include "actions-runner-controller.webhookServiceName" . }}.{{ .Release.Namespace }}.svc.cluster.local
issuerRef:
kind: Issuer
name: {{ include "actions-runner-controller.selfsignedIssuerName" . }}
secretName: webhook-server-cert # this secret will not be prefixed, since it's not managed by kustomize


@@ -0,0 +1,106 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
{{- with .Values.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- args:
- "--metrics-addr=127.0.0.1:8080"
- "--enable-leader-election"
- "--sync-period={{ .Values.syncPeriod }}"
- "--docker-image={{ .Values.image.dindSidecarRepositoryAndTag }}"
command:
- "/manager"
env:
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
key: github_token
name: controller-manager
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: controller-manager
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: controller-manager
optional: true
- name: GITHUB_APP_PRIVATE_KEY
value: /etc/actions-runner-controller/github_app_private_key
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: manager
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: "/etc/actions-runner-controller"
name: controller-manager
readOnly: true
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
- args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
terminationGracePeriodSeconds: 10
volumes:
- name: controller-manager
secret:
secretName: controller-manager
- name: cert
secret:
defaultMode: 420
secretName: webhook-server-cert
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -0,0 +1,33 @@
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- update
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,165 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.managerRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller.managerRoleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller.managerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,14 @@
{{- if or .Values.authSecret.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: controller-manager
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- range $k, $v := .Values.authSecret }}
{{ $k }}: {{ $v | toString | b64enc }}
{{- end }}
{{- end }}


@@ -0,0 +1,26 @@
# permissions to do edit runners.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.runnerEditorRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get
- patch
- update


@@ -0,0 +1,20 @@
# permissions to do viewer runners.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.runnerViewerRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- get
- list
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get


@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,128 @@
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.fullname" . }}-mutating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "actions-runner-controller.servingCertName" . }}
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runner
failurePolicy: Fail
name: mutate.runner.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runners
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runnerdeployment
failurePolicy: Fail
name: mutate.runnerdeployment.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerdeployments
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset
failurePolicy: Fail
name: mutate.runnerreplicaset.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerreplicasets
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.fullname" . }}-validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "actions-runner-controller.servingCertName" . }}
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runner
failurePolicy: Fail
name: validate.runner.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runners
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runnerdeployment
failurePolicy: Fail
name: validate.runnerdeployment.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerdeployments
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runnerreplicaset
failurePolicy: Fail
name: validate.runnerreplicaset.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerreplicasets


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: 443
targetPort: 9443
protocol: TCP
name: https
selector:
{{- include "actions-runner-controller.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,100 @@
# Default values for actions-runner-controller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
labels: {}
replicaCount: 1
syncPeriod: 10m
# Only 1 authentication method can be deployed at a time
# Uncomment the configuration you are applying and fill in the details
authSecret:
enabled: false
### GitHub Apps Configuration
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
#github_token: ""
image:
repository: summerwind/actions-runner-controller
# Overrides the manager image tag whose default is the chart appVersion if the tag key is commented out
tag: "latest"
dindSidecarRepositoryAndTag: "docker:dind"
pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 443
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
# Leverage a PriorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# PriorityClass: system-cluster-critical
priorityClassName: ""


@@ -64,6 +64,24 @@ spec:
items:
type: string
type: array
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners
less than which will trigger the hpa to scale the runners down.
type: string
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners
greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling.
The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -23,3 +23,4 @@ spec:
args:
- "--metrics-addr=127.0.0.1:8080"
- "--enable-leader-election"
- "--sync-period=10m"


@@ -57,7 +57,7 @@ spec:
resources:
limits:
cpu: 100m
memory: 30Mi memory: 100Mi
requests:
cpu: 100m
memory: 20Mi


@@ -4,9 +4,19 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"math"
"strconv"
"strings" "strings"
"github.com/summerwind/actions-runner-controller/api/v1alpha1" "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
defaultScaleUpThreshold = 0.8
defaultScaleDownThreshold = 0.3
defaultScaleUpFactor = 1.3
defaultScaleDownFactor = 0.7
)
func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
@@ -16,8 +26,20 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing maxReplicas", hra.Namespace, hra.Name) return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing maxReplicas", hra.Namespace, hra.Name)
} }
var repos [][]string metrics := hra.Spec.Metrics
if len(metrics) == 0 || metrics[0].Type == v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns {
return r.calculateReplicasByQueuedAndInProgressWorkflowRuns(rd, hra)
} else if metrics[0].Type == v1alpha1.AutoscalingMetricTypePercentageRunnersBusy {
return r.calculateReplicasByPercentageRunnersBusy(rd, hra)
} else {
return nil, fmt.Errorf("validting autoscaling metrics: unsupported metric type %q", metrics[0].Type)
}
}
func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByQueuedAndInProgressWorkflowRuns(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
var repos [][]string
metrics := hra.Spec.Metrics
repoID := rd.Spec.Template.Spec.Repository
if repoID == "" {
orgName := rd.Spec.Template.Spec.Organization
@@ -25,13 +47,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
return nil, fmt.Errorf("asserting runner deployment spec to detect bug: spec.template.organization should not be empty on this code path") return nil, fmt.Errorf("asserting runner deployment spec to detect bug: spec.template.organization should not be empty on this code path")
} }
metrics := hra.Spec.Metrics if len(metrics[0].RepositoryNames) == 0 {
if len(metrics) == 0 {
return nil, fmt.Errorf("validating autoscaling metrics: one or more metrics is required")
} else if tpe := metrics[0].Type; tpe != v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns {
return nil, fmt.Errorf("validting autoscaling metrics: unsupported metric type %q: only supported value is %s", tpe, v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns)
} else if len(metrics[0].RepositoryNames) == 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment") return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment")
} }
@@ -135,3 +151,99 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
return &replicas, nil
}
func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunnersBusy(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
ctx := context.Background()
orgName := rd.Spec.Template.Spec.Organization
minReplicas := *hra.Spec.MinReplicas
maxReplicas := *hra.Spec.MaxReplicas
metrics := hra.Spec.Metrics[0]
scaleUpThreshold := defaultScaleUpThreshold
scaleDownThreshold := defaultScaleDownThreshold
scaleUpFactor := defaultScaleUpFactor
scaleDownFactor := defaultScaleDownFactor
if metrics.ScaleUpThreshold != "" {
sut, err := strconv.ParseFloat(metrics.ScaleUpThreshold, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpThreshold cannot be parsed into a float64")
}
scaleUpThreshold = sut
}
if metrics.ScaleDownThreshold != "" {
sdt, err := strconv.ParseFloat(metrics.ScaleDownThreshold, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownThreshold cannot be parsed into a float64")
}
scaleDownThreshold = sdt
}
if metrics.ScaleUpFactor != "" {
suf, err := strconv.ParseFloat(metrics.ScaleUpFactor, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpFactor cannot be parsed into a float64")
}
scaleUpFactor = suf
}
if metrics.ScaleDownFactor != "" {
sdf, err := strconv.ParseFloat(metrics.ScaleDownFactor, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownFactor cannot be parsed into a float64")
}
scaleDownFactor = sdf
}
// return the list of runners in namespace. Horizontal Runner Autoscaler should only be responsible for scaling resources in its own ns.
var runnerList v1alpha1.RunnerList
if err := r.List(ctx, &runnerList, client.InNamespace(rd.Namespace)); err != nil {
return nil, err
}
runnerMap := make(map[string]struct{})
for _, items := range runnerList.Items {
runnerMap[items.Name] = struct{}{}
}
// ListRunners will return all runners managed by GitHub - not restricted to ns
runners, err := r.GitHubClient.ListRunners(ctx, orgName, "")
if err != nil {
return nil, err
}
numRunners := len(runnerList.Items)
numRunnersBusy := 0
for _, runner := range runners {
if _, ok := runnerMap[*runner.Name]; ok && runner.GetBusy() {
numRunnersBusy++
}
}
var desiredReplicas int
fractionBusy := float64(numRunnersBusy) / float64(numRunners)
if fractionBusy >= scaleUpThreshold {
desiredReplicas = int(math.Ceil(float64(numRunners) * scaleUpFactor))
} else if fractionBusy < scaleDownThreshold {
desiredReplicas = int(float64(numRunners) * scaleDownFactor)
} else {
desiredReplicas = *rd.Spec.Replicas
}
if desiredReplicas < minReplicas {
desiredReplicas = minReplicas
} else if desiredReplicas > maxReplicas {
desiredReplicas = maxReplicas
}
r.Log.V(1).Info(
"Calculated desired replicas",
"computed_replicas_desired", desiredReplicas,
"spec_replicas_min", minReplicas,
"spec_replicas_max", maxReplicas,
"current_replicas", rd.Spec.Replicas,
"num_runners", numRunners,
"num_runners_busy", numRunnersBusy,
)
rd.Status.Replicas = &desiredReplicas
replicas := desiredReplicas
return &replicas, nil
}


@@ -16,7 +16,10 @@ import (
)
func newGithubClient(server *httptest.Server) *github.Client {
client, err := github.NewClientWithAccessToken("token") c := github.Config{
Token: "token",
}
client, err := c.NewClient()
if err != nil {
panic(err)
}


@@ -137,6 +137,7 @@ var _ = Context("Inside of a new namespace", func() {
Spec: actionsv1alpha1.RunnerSpec{
Repository: "test/valid",
Image: "bar",
Group: "baz",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},


@@ -19,7 +19,7 @@ package controllers
import ( import (
"context" "context"
"fmt" "fmt"
"reflect" "github.com/summerwind/actions-runner-controller/hash"
"strings" "strings"
"github.com/go-logr/logr" "github.com/go-logr/logr"
@@ -39,6 +39,8 @@ import (
const ( const (
containerName = "runner" containerName = "runner"
finalizerName = "runner.actions.summerwind.dev" finalizerName = "runner.actions.summerwind.dev"
LabelKeyPodTemplateHash = "pod-template-hash"
) )
// RunnerReconciler reconciles a Runner object // RunnerReconciler reconciles a Runner object
@@ -120,39 +122,18 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, nil return ctrl.Result{}, nil
} }
if !runner.IsRegisterable() {
rt, err := r.GitHubClient.GetRegistrationToken(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
r.Recorder.Event(&runner, corev1.EventTypeWarning, "FailedUpdateRegistrationToken", "Updating registration token failed")
log.Error(err, "Failed to get new registration token")
return ctrl.Result{}, err
}
updated := runner.DeepCopy()
updated.Status.Registration = v1alpha1.RunnerStatusRegistration{
Organization: runner.Spec.Organization,
Repository: runner.Spec.Repository,
Labels: runner.Spec.Labels,
Token: rt.GetToken(),
ExpiresAt: metav1.NewTime(rt.GetExpiresAt().Time),
}
if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update runner status")
return ctrl.Result{}, err
}
r.Recorder.Event(&runner, corev1.EventTypeNormal, "RegistrationTokenUpdated", "Successfully update registration token")
log.Info("Updated registration token", "repository", runner.Spec.Repository)
return ctrl.Result{}, nil
}
var pod corev1.Pod var pod corev1.Pod
if err := r.Get(ctx, req.NamespacedName, &pod); err != nil { if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
if !errors.IsNotFound(err) { if !errors.IsNotFound(err) {
return ctrl.Result{}, err return ctrl.Result{}, err
} }
if updated, err := r.updateRegistrationToken(ctx, runner); err != nil {
return ctrl.Result{}, err
} else if updated {
return ctrl.Result{Requeue: true}, nil
}
newPod, err := r.newPod(runner) newPod, err := r.newPod(runner)
if err != nil { if err != nil {
log.Error(err, "Could not create pod") log.Error(err, "Could not create pod")
@@ -167,7 +148,11 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
r.Recorder.Event(&runner, corev1.EventTypeNormal, "PodCreated", fmt.Sprintf("Created pod '%s'", newPod.Name)) r.Recorder.Event(&runner, corev1.EventTypeNormal, "PodCreated", fmt.Sprintf("Created pod '%s'", newPod.Name))
log.Info("Created runner pod", "repository", runner.Spec.Repository) log.Info("Created runner pod", "repository", runner.Spec.Repository)
} else { } else {
if runner.Status.Phase != string(pod.Status.Phase) { // If pod has ended up succeeded we need to restart it
// Happens e.g. when dind is in runner and run completes
restart := pod.Status.Phase == corev1.PodSucceeded
if !restart && runner.Status.Phase != string(pod.Status.Phase) {
updated := runner.DeepCopy() updated := runner.DeepCopy()
updated.Status.Phase = string(pod.Status.Phase) updated.Status.Phase = string(pod.Status.Phase)
updated.Status.Reason = pod.Status.Reason updated.Status.Reason = pod.Status.Reason
@@ -185,8 +170,6 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, err return ctrl.Result{}, err
} }
restart := false
if pod.Status.Phase == corev1.PodRunning { if pod.Status.Phase == corev1.PodRunning {
for _, status := range pod.Status.ContainerStatuses { for _, status := range pod.Status.ContainerStatuses {
if status.Name != containerName { if status.Name != containerName {
@@ -199,6 +182,12 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
} }
} }
if updated, err := r.updateRegistrationToken(ctx, runner); err != nil {
return ctrl.Result{}, err
} else if updated {
return ctrl.Result{Requeue: true}, nil
}
newPod, err := r.newPod(runner) newPod, err := r.newPod(runner)
if err != nil { if err != nil {
log.Error(err, "Could not create pod") log.Error(err, "Could not create pod")
@@ -211,14 +200,21 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, nil return ctrl.Result{}, nil
} }
if !runnerBusy && (!reflect.DeepEqual(pod.Spec.Containers[0].Env, newPod.Spec.Containers[0].Env) || pod.Spec.Containers[0].Image != newPod.Spec.Containers[0].Image) { // See the `newPod` function called above for more information
// about when this hash changes.
curHash := pod.Labels[LabelKeyPodTemplateHash]
newHash := newPod.Labels[LabelKeyPodTemplateHash]
if !runnerBusy && curHash != newHash {
restart = true restart = true
} }
// Don't do anything if there's no need to restart the runner
if !restart { if !restart {
return ctrl.Result{}, err return ctrl.Result{}, err
} }
// Delete current pod if recreation is needed
if err := r.Delete(ctx, &pod); err != nil { if err := r.Delete(ctx, &pod); err != nil {
log.Error(err, "Failed to delete pod resource") log.Error(err, "Failed to delete pod resource")
return ctrl.Result{}, err return ctrl.Result{}, err
@@ -274,10 +270,45 @@ func (r *RunnerReconciler) unregisterRunner(ctx context.Context, org, repo, name
return true, nil return true, nil
} }
func (r *RunnerReconciler) updateRegistrationToken(ctx context.Context, runner v1alpha1.Runner) (bool, error) {
if runner.IsRegisterable() {
return false, nil
}
log := r.Log.WithValues("runner", runner.Name)
rt, err := r.GitHubClient.GetRegistrationToken(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
r.Recorder.Event(&runner, corev1.EventTypeWarning, "FailedUpdateRegistrationToken", "Updating registration token failed")
log.Error(err, "Failed to get new registration token")
return false, err
}
updated := runner.DeepCopy()
updated.Status.Registration = v1alpha1.RunnerStatusRegistration{
Organization: runner.Spec.Organization,
Repository: runner.Spec.Repository,
Labels: runner.Spec.Labels,
Token: rt.GetToken(),
ExpiresAt: metav1.NewTime(rt.GetExpiresAt().Time),
}
if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update runner status")
return false, err
}
r.Recorder.Event(&runner, corev1.EventTypeNormal, "RegistrationTokenUpdated", "Successfully update registration token")
log.Info("Updated registration token", "repository", runner.Spec.Repository)
return true, nil
}
func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) { func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
var ( var (
privileged bool = true privileged bool = true
group int64 = 0 dockerdInRunner bool = runner.Spec.DockerdWithinRunnerContainer != nil && *runner.Spec.DockerdWithinRunnerContainer
dockerEnabled bool = runner.Spec.DockerEnabled == nil || *runner.Spec.DockerEnabled
) )
runnerImage := runner.Spec.Image runnerImage := runner.Spec.Image
@@ -285,6 +316,11 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
runnerImage = r.RunnerImage runnerImage = r.RunnerImage
} }
workDir := runner.Spec.WorkDir
if workDir == "" {
workDir = "/runner/_work"
}
runnerImagePullPolicy := runner.Spec.ImagePullPolicy runnerImagePullPolicy := runner.Spec.ImagePullPolicy
if runnerImagePullPolicy == "" { if runnerImagePullPolicy == "" {
runnerImagePullPolicy = corev1.PullAlways runnerImagePullPolicy = corev1.PullAlways
@@ -307,18 +343,67 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
Name: "RUNNER_LABELS", Name: "RUNNER_LABELS",
Value: strings.Join(runner.Spec.Labels, ","), Value: strings.Join(runner.Spec.Labels, ","),
}, },
{
Name: "RUNNER_GROUP",
Value: runner.Spec.Group,
},
{ {
Name: "RUNNER_TOKEN", Name: "RUNNER_TOKEN",
Value: runner.Status.Registration.Token, Value: runner.Status.Registration.Token,
}, },
{
Name: "DOCKERD_IN_RUNNER",
Value: fmt.Sprintf("%v", dockerdInRunner),
},
{
Name: "GITHUB_URL",
Value: r.GitHubClient.GithubBaseURL,
},
{
Name: "RUNNER_WORKDIR",
Value: workDir,
},
} }
env = append(env, runner.Spec.Env...) env = append(env, runner.Spec.Env...)
labels := map[string]string{}
for k, v := range runner.Labels {
labels[k] = v
}
// This implies that...
//
// (1) We recreate the runner pod whenever the runner has changes in:
// - metadata.labels (excluding "runner-template-hash" added by the parent RunnerReplicaSet
// - metadata.annotations
// - metadata.spec (including image, env, organization, repository, group, and so on)
// - GithubBaseURL setting of the controller (can be configured via GITHUB_ENTERPRISE_URL)
//
// (2) We don't recreate the runner pod when there are changes in:
// - runner.status.registration.token
// - This token expires and changes hourly, but you don't need to recreate the pod due to that.
// It's the opposite.
// An unexpired token is required only when the runner agent is registering itself on launch.
//
// In other words, the registered runner doesn't get invalidated on registration token expiration.
// A registered runner's session and a registration token seem to have two different and independent
// lifecycles.
//
// See https://github.com/summerwind/actions-runner-controller/issues/143 for more context.
labels[LabelKeyPodTemplateHash] = hash.FNVHashStringObjects(
filterLabels(runner.Labels, LabelKeyRunnerTemplateHash),
runner.Annotations,
runner.Spec,
r.GitHubClient.GithubBaseURL,
)
pod := corev1.Pod{ pod := corev1.Pod{
ObjectMeta: metav1.ObjectMeta{ ObjectMeta: metav1.ObjectMeta{
Name: runner.Name, Name: runner.Name,
Namespace: runner.Namespace, Namespace: runner.Namespace,
Labels: runner.Labels, Labels: labels,
Annotations: runner.Annotations, Annotations: runner.Annotations,
}, },
Spec: corev1.PodSpec{ Spec: corev1.PodSpec{
@@ -330,58 +415,106 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
ImagePullPolicy: runnerImagePullPolicy, ImagePullPolicy: runnerImagePullPolicy,
Env: env, Env: env,
EnvFrom: runner.Spec.EnvFrom, EnvFrom: runner.Spec.EnvFrom,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
},
SecurityContext: &corev1.SecurityContext{ SecurityContext: &corev1.SecurityContext{
RunAsGroup: &group, // Runner need to run privileged if it contains DinD
Privileged: runner.Spec.DockerdWithinRunnerContainer,
}, },
Resources: runner.Spec.Resources, Resources: runner.Spec.Resources,
}, },
{
Name: "docker",
Image: r.DockerImage,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
},
},
Volumes: []corev1.Volume{
{
Name: "work",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: "docker",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
}, },
}, },
} }
if !dockerdInRunner && dockerEnabled {
runnerVolumeName := "runner"
runnerVolumeMountPath := "/runner"
pod.Spec.Volumes = []corev1.Volume{
{
Name: "work",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: runnerVolumeName,
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: "certs-client",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
}
pod.Spec.Containers[0].VolumeMounts = []corev1.VolumeMount{
{
Name: "work",
MountPath: workDir,
},
{
Name: runnerVolumeName,
MountPath: runnerVolumeMountPath,
},
{
Name: "certs-client",
MountPath: "/certs/client",
ReadOnly: true,
},
}
pod.Spec.Containers[0].Env = append(pod.Spec.Containers[0].Env, []corev1.EnvVar{
{
Name: "DOCKER_HOST",
Value: "tcp://localhost:2376",
},
{
Name: "DOCKER_TLS_VERIFY",
Value: "1",
},
{
Name: "DOCKER_CERT_PATH",
Value: "/certs/client",
},
}...)
pod.Spec.Containers = append(pod.Spec.Containers, corev1.Container{
Name: "docker",
Image: r.DockerImage,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: workDir,
},
{
Name: runnerVolumeName,
MountPath: runnerVolumeMountPath,
},
{
Name: "certs-client",
MountPath: "/certs/client",
},
},
Env: []corev1.EnvVar{
{
Name: "DOCKER_TLS_CERTDIR",
Value: "/certs",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
})
}
if len(runner.Spec.Containers) != 0 { if len(runner.Spec.Containers) != 0 {
pod.Spec.Containers = runner.Spec.Containers pod.Spec.Containers = runner.Spec.Containers
for i := 0; i < len(pod.Spec.Containers); i++ {
if pod.Spec.Containers[i].Name == containerName {
pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, env...)
}
}
} }
if len(runner.Spec.VolumeMounts) != 0 { if len(runner.Spec.VolumeMounts) != 0 {


@@ -52,7 +52,7 @@ type RunnerReplicaSetReconciler struct {
func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
ctx := context.Background()
log := r.Log.WithValues("runner", req.NamespacedName) log := r.Log.WithValues("runnerreplicaset", req.NamespacedName)
var rs v1alpha1.RunnerReplicaSet
if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {


@@ -6,7 +6,7 @@ import (
"net/http/httptest" "net/http/httptest"
"time" "time"
"github.com/google/go-github/v32/github" "github.com/google/go-github/v33/github"
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes/scheme" "k8s.io/client-go/kubernetes/scheme"

controllers/utils.go Normal file

@@ -0,0 +1,13 @@
package controllers
func filterLabels(labels map[string]string, filter string) map[string]string {
filtered := map[string]string{}
for k, v := range labels {
if k != filter {
filtered[k] = v
}
}
return filtered
}

controllers/utils_test.go Normal file

@@ -0,0 +1,34 @@
package controllers
import (
"reflect"
"testing"
)
func Test_filterLabels(t *testing.T) {
type args struct {
labels map[string]string
filter string
}
tests := []struct {
name string
args args
want map[string]string
}{
{
name: "ok",
args: args{
labels: map[string]string{LabelKeyRunnerTemplateHash: "abc", LabelKeyPodTemplateHash: "def"},
filter: LabelKeyRunnerTemplateHash,
},
want: map[string]string{LabelKeyPodTemplateHash: "def"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := filterLabels(tt.args.labels, tt.args.filter); !reflect.DeepEqual(got, tt.want) {
t.Errorf("filterLabels() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -6,7 +6,7 @@ import (
"net/http/httptest" "net/http/httptest"
"strconv" "strconv"
"github.com/google/go-github/v32/github" "github.com/google/go-github/v33/github"
"github.com/gorilla/mux" "github.com/gorilla/mux"
) )

View File

@@ -4,48 +4,76 @@ import (
"context" "context"
"fmt" "fmt"
"net/http" "net/http"
"net/url"
"strings" "strings"
"sync" "sync"
"time" "time"
"github.com/bradleyfalzon/ghinstallation" "github.com/bradleyfalzon/ghinstallation"
"github.com/google/go-github/v32/github" "github.com/google/go-github/v33/github"
"golang.org/x/oauth2" "golang.org/x/oauth2"
) )
// Config contains configuration for Github client
type Config struct {
EnterpriseURL string `split_words:"true"`
AppID int64 `split_words:"true"`
AppInstallationID int64 `split_words:"true"`
AppPrivateKey string `split_words:"true"`
Token string
}
// Client wraps GitHub client with some additional
type Client struct {
*github.Client
regTokens map[string]*github.RegistrationToken
mu sync.Mutex
// GithubBaseURL to Github without API suffix.
GithubBaseURL string
}
- // NewClient returns a client authenticated as a GitHub App.
- func NewClient(appID, installationID int64, privateKeyPath string) (*Client, error) {
- tr, err := ghinstallation.NewKeyFromFile(http.DefaultTransport, appID, installationID, privateKeyPath)
- if err != nil {
- return nil, fmt.Errorf("authentication failed: %v", err)
+ // NewClient creates a Github Client
+ func (c *Config) NewClient() (*Client, error) {
+ var (
+ httpClient *http.Client
+ client *github.Client
)
githubBaseURL := "https://github.com/"
if len(c.Token) > 0 {
httpClient = oauth2.NewClient(context.Background(), oauth2.StaticTokenSource(
&oauth2.Token{AccessToken: c.Token},
))
} else {
tr, err := ghinstallation.NewKeyFromFile(http.DefaultTransport, c.AppID, c.AppInstallationID, c.AppPrivateKey)
if err != nil {
return nil, fmt.Errorf("authentication failed: %v", err)
}
if len(c.EnterpriseURL) > 0 {
githubAPIURL, err := getEnterpriseApiUrl(c.EnterpriseURL)
if err != nil {
return nil, fmt.Errorf("enterprise url incorrect: %v", err)
}
tr.BaseURL = githubAPIURL
}
httpClient = &http.Client{Transport: tr}
}
- gh := github.NewClient(&http.Client{Transport: tr})
+ if len(c.EnterpriseURL) > 0 {
var err error
client, err = github.NewEnterpriseClient(c.EnterpriseURL, c.EnterpriseURL, httpClient)
if err != nil {
return nil, fmt.Errorf("enterprise client creation failed: %v", err)
}
githubBaseURL = fmt.Sprintf("%s://%s%s", client.BaseURL.Scheme, client.BaseURL.Host, strings.TrimSuffix(client.BaseURL.Path, "api/v3/"))
} else {
client = github.NewClient(httpClient)
}
return &Client{
- Client: gh,
+ Client: client,
regTokens: map[string]*github.RegistrationToken{},
mu: sync.Mutex{},
- }, nil
+ GithubBaseURL: githubBaseURL,
}
// NewClientWithAccessToken returns a client authenticated with personal access token.
func NewClientWithAccessToken(token string) (*Client, error) {
tc := oauth2.NewClient(context.Background(), oauth2.StaticTokenSource(
&oauth2.Token{AccessToken: token},
))
return &Client{
Client: github.NewClient(tc),
regTokens: map[string]*github.RegistrationToken{},
mu: sync.Mutex{},
}, nil
}
@@ -57,7 +85,7 @@ func (c *Client) GetRegistrationToken(ctx context.Context, org, repo, name strin
key := getRegistrationKey(org, repo)
rt, ok := c.regTokens[key]
- if ok && rt.GetExpiresAt().After(time.Now().Add(-10*time.Minute)) {
+ if ok && rt.GetExpiresAt().After(time.Now()) {
return rt, nil
}
@@ -152,25 +180,25 @@ func (c *Client) cleanup() {
func (c *Client) createRegistrationToken(ctx context.Context, owner, repo string) (*github.RegistrationToken, *github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.CreateRegistrationToken(ctx, owner, repo)
} else {
return CreateOrganizationRegistrationToken(ctx, c, owner)
}
return c.Client.Actions.CreateOrganizationRegistrationToken(ctx, owner)
}
func (c *Client) removeRunner(ctx context.Context, owner, repo string, runnerID int64) (*github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.RemoveRunner(ctx, owner, repo, runnerID)
} else {
return RemoveOrganizationRunner(ctx, c, owner, runnerID)
}
return c.Client.Actions.RemoveOrganizationRunner(ctx, owner, runnerID)
}
func (c *Client) listRunners(ctx context.Context, owner, repo string, opts *github.ListOptions) (*github.Runners, *github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.ListRunners(ctx, owner, repo, opts)
} else {
return ListOrganizationRunners(ctx, c, owner, opts)
}
return c.Client.Actions.ListOrganizationRunners(ctx, owner, opts)
}
// Validates owner and repo arguments. Both are optional, but at least one should be specified // Validates owner and repo arguments. Both are optional, but at least one should be specified
@@ -187,9 +215,8 @@ func getOwnerAndRepo(org, repo string) (string, string, error) {
func getRegistrationKey(org, repo string) string {
if len(org) > 0 {
return org
} else {
return repo
}
return repo
}
func splitOwnerAndRepo(repo string) (string, string, error) {
@@ -199,3 +226,21 @@ func splitOwnerAndRepo(repo string) (string, string, error) {
}
return chunk[0], chunk[1], nil
}
func getEnterpriseApiUrl(baseURL string) (string, error) {
baseEndpoint, err := url.Parse(baseURL)
if err != nil {
return "", err
}
if !strings.HasSuffix(baseEndpoint.Path, "/") {
baseEndpoint.Path += "/"
}
if !strings.HasSuffix(baseEndpoint.Path, "/api/v3/") &&
!strings.HasPrefix(baseEndpoint.Host, "api.") &&
!strings.Contains(baseEndpoint.Host, ".api.") {
baseEndpoint.Path += "api/v3/"
}
// Trim trailing slash, otherwise there's double slash added to token endpoint
return fmt.Sprintf("%s://%s%s", baseEndpoint.Scheme, baseEndpoint.Host, strings.TrimSuffix(baseEndpoint.Path, "/")), nil
}
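For orientation, a minimal sketch of how the new Config-based constructor could be used; the token value and the standalone main package are illustrative only and not part of this change. With an empty EnterpriseURL the client targets github.com and GithubBaseURL stays https://github.com/, while a GHES URL such as https://ghes.example.com would be normalized by getEnterpriseApiUrl to https://ghes.example.com/api/v3.

package main

import (
	"fmt"

	"github.com/summerwind/actions-runner-controller/github"
)

func main() {
	// Personal access token auth; GitHub App auth would set AppID,
	// AppInstallationID and AppPrivateKey instead.
	c := github.Config{Token: "example-token"}

	client, err := c.NewClient()
	if err != nil {
		panic(err)
	}

	// Prints "https://github.com/" in the non-enterprise case.
	fmt.Println(client.GithubBaseURL)
}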

View File

@@ -1,95 +0,0 @@
package github
// this contains BETA API clients, that are currently not (yet) in go-github
// once these functions have been added there, they can be removed from here
// code was reused from https://github.com/google/go-github
import (
"context"
"fmt"
"net/url"
"reflect"
"github.com/google/go-github/v32/github"
"github.com/google/go-querystring/query"
)
// CreateOrganizationRegistrationToken creates a token that can be used to add a self-hosted runner on an organization.
//
// GitHub API docs: https://developer.github.com/v3/actions/self-hosted-runners/#create-a-registration-token-for-an-organization
func CreateOrganizationRegistrationToken(ctx context.Context, client *Client, owner string) (*github.RegistrationToken, *github.Response, error) {
u := fmt.Sprintf("orgs/%v/actions/runners/registration-token", owner)
req, err := client.NewRequest("POST", u, nil)
if err != nil {
return nil, nil, err
}
registrationToken := new(github.RegistrationToken)
resp, err := client.Do(ctx, req, registrationToken)
if err != nil {
return nil, resp, err
}
return registrationToken, resp, nil
}
// ListOrganizationRunners lists all the self-hosted runners for an organization.
//
// GitHub API docs: https://developer.github.com/v3/actions/self-hosted-runners/#list-self-hosted-runners-for-an-organization
func ListOrganizationRunners(ctx context.Context, client *Client, owner string, opts *github.ListOptions) (*github.Runners, *github.Response, error) {
u := fmt.Sprintf("orgs/%v/actions/runners", owner)
u, err := addOptions(u, opts)
if err != nil {
return nil, nil, err
}
req, err := client.NewRequest("GET", u, nil)
if err != nil {
return nil, nil, err
}
runners := &github.Runners{}
resp, err := client.Do(ctx, req, &runners)
if err != nil {
return nil, resp, err
}
return runners, resp, nil
}
// RemoveOrganizationRunner forces the removal of a self-hosted runner in a repository using the runner id.
//
// GitHub API docs: https://developer.github.com/v3/actions/self_hosted_runners/#remove-a-self-hosted-runner
func RemoveOrganizationRunner(ctx context.Context, client *Client, owner string, runnerID int64) (*github.Response, error) {
u := fmt.Sprintf("orgs/%v/actions/runners/%v", owner, runnerID)
req, err := client.NewRequest("DELETE", u, nil)
if err != nil {
return nil, err
}
return client.Do(ctx, req, nil)
}
// addOptions adds the parameters in opt as URL query parameters to s. opt
// must be a struct whose fields may contain "url" tags.
func addOptions(s string, opts interface{}) (string, error) {
v := reflect.ValueOf(opts)
if v.Kind() == reflect.Ptr && v.IsNil() {
return s, nil
}
u, err := url.Parse(s)
if err != nil {
return s, err
}
qs, err := query.Values(opts)
if err != nil {
return s, err
}
u.RawQuery = qs.Encode()
return u.String(), nil
}

View File

@@ -7,14 +7,17 @@ import (
"testing" "testing"
"time" "time"
"github.com/google/go-github/v32/github" "github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/github/fake" "github.com/summerwind/actions-runner-controller/github/fake"
) )
var server *httptest.Server var server *httptest.Server
func newTestClient() *Client { func newTestClient() *Client {
client, err := NewClientWithAccessToken("token") c := Config{
Token: "token",
}
client, err := c.NewClient()
if err != nil {
panic(err)
}

5
go.mod
View File

@@ -1,14 +1,17 @@
module github.com/summerwind/actions-runner-controller
- go 1.13
+ go 1.15
require (
github.com/bradleyfalzon/ghinstallation v1.1.1
github.com/davecgh/go-spew v1.1.1
github.com/go-logr/logr v0.1.0
github.com/google/go-github v17.0.0+incompatible // indirect
github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04
github.com/google/go-github/v33 v33.0.0
github.com/google/go-querystring v1.0.0
github.com/gorilla/mux v1.8.0
github.com/kelseyhightower/envconfig v1.4.0
github.com/onsi/ginkgo v1.8.0
github.com/onsi/gomega v1.5.0
github.com/stretchr/testify v1.4.0 // indirect

6
go.sum
View File

@@ -116,10 +116,14 @@ github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-github v17.0.0+incompatible h1:N0LgJ1j65A7kfXrZnUDaYCs/Sf4rEjNlfyDHW9dolSY=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-github/v29 v29.0.2 h1:opYN6Wc7DOz7Ku3Oh4l7prmkOMwEcQxpFtxdU8N8Pts=
github.com/google/go-github/v29 v29.0.2/go.mod h1:CHKiKKPHJ0REzfwc14QMklvtHwCveD0PxlMjLlzAM5E=
github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04 h1:wEYk2h/GwOhImcVjiTIceP88WxVbXw2F+ARYUQMEsfg=
github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04/go.mod h1:rIEpZD9CTDQwDK9GDrtMTycQNA4JU3qBsCizh3q2WCI=
github.com/google/go-github/v33 v33.0.0 h1:qAf9yP0qc54ufQxzwv+u9H0tiVOnPJxo0lI/JXqw3ZM=
github.com/google/go-github/v33 v33.0.0/go.mod h1:GMdDnVZY/2TsWgp/lkYnpSAh6TrzhANBBwm6k6TTEXg=
github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
@@ -158,6 +162,8 @@ github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCV
github.com/json-iterator/go v1.1.7 h1:KfgG9LzI+pYjr4xvmz/5H4FXjokeP+rlHLhv3iH62Fo=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=

19
hack/make-env.sh Executable file
View File

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
COMMIT=$(git rev-parse HEAD)
TAG=$(git describe --exact-match --abbrev=0 --tags "${COMMIT}" 2> /dev/null || true)
BRANCH=$(git branch | grep \* | cut -d ' ' -f2 | sed -e 's/[^a-zA-Z0-9+=._:/-]*//g' || true)
VERSION=""
if [ -z "$TAG" ]; then
[[ -n "$BRANCH" ]] && VERSION="${BRANCH}-"
VERSION="${VERSION}${COMMIT:0:8}"
else
VERSION=$TAG
fi
if [ -n "$(git diff --shortstat 2> /dev/null | tail -n1)" ]; then
VERSION="${VERSION}-dirty"
fi
export VERSION

17
hash/fnv.go Normal file
View File

@@ -0,0 +1,17 @@
package hash
import (
"fmt"
"hash/fnv"
"k8s.io/apimachinery/pkg/util/rand"
)
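// FNVHashStringObjects runs DeepHashObject over the given objects with an FNV-32a hasher
// and returns the final sum encoded as a short, label-safe string.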
func FNVHashStringObjects(objs ...interface{}) string {
hash := fnv.New32a()
for _, obj := range objs {
DeepHashObject(hash, obj)
}
return rand.SafeEncodeString(fmt.Sprint(hash.Sum32()))
}

25
hash/hash.go Normal file
View File

@@ -0,0 +1,25 @@
// Copyright 2015 The Kubernetes Authors.
// hash.go is copied from kubernetes's pkg/util/hash.go
// See https://github.com/kubernetes/kubernetes/blob/e1c617a88ec286f5f6cb2589d6ac562d095e1068/pkg/util/hash/hash.go#L25-L37
package hash
import (
"hash"
"github.com/davecgh/go-spew/spew"
)
// DeepHashObject writes specified object to hash using the spew library
// which follows pointers and prints actual values of the nested objects
// ensuring the hash does not change when a pointer changes.
func DeepHashObject(hasher hash.Hash, objectToWrite interface{}) {
hasher.Reset()
printer := spew.ConfigState{
Indent: " ",
SortKeys: true,
DisableMethods: true,
SpewKeys: true,
}
printer.Fprintf(hasher, "%#v", objectToWrite)
}
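As a rough illustration of how these helpers fit together (the PodTemplateSpec value below is made up for the example; the real call sites are presumably in the controllers):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"

	"github.com/summerwind/actions-runner-controller/hash"
)

func main() {
	template := corev1.PodTemplateSpec{}
	template.Labels = map[string]string{"app": "runner"}

	// spew with SortKeys=true makes the output deterministic, so the same
	// object always produces the same label-safe hash string.
	fmt.Println(hash.FNVHashStringObjects(template))
}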

1089
index.yaml Normal file

File diff suppressed because it is too large

73
main.go
View File

@@ -20,9 +20,9 @@ import (
"flag" "flag"
"fmt" "fmt"
"os" "os"
"strconv"
"time" "time"
"github.com/kelseyhightower/envconfig"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1" actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/controllers" "github.com/summerwind/actions-runner-controller/controllers"
"github.com/summerwind/actions-runner-controller/github" "github.com/summerwind/actions-runner-controller/github"
@@ -62,74 +62,37 @@ func main() {
runnerImage string
dockerImage string
ghToken string
ghAppID int64
ghAppInstallationID int64
ghAppPrivateKey string
)
var c github.Config
err = envconfig.Process("github", &c)
if err != nil {
fmt.Fprintln(os.Stderr, "Error: Environment variable read failed.")
}
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.StringVar(&runnerImage, "runner-image", defaultRunnerImage, "The image name of self-hosted runner container.")
flag.StringVar(&dockerImage, "docker-image", defaultDockerImage, "The image name of docker sidecar container.")
- flag.StringVar(&ghToken, "github-token", "", "The personal access token of GitHub.")
- flag.Int64Var(&ghAppID, "github-app-id", 0, "The application ID of GitHub App.")
- flag.Int64Var(&ghAppInstallationID, "github-app-installation-id", 0, "The installation ID of GitHub App.")
- flag.StringVar(&ghAppPrivateKey, "github-app-private-key", "", "The path of a private key file to authenticate as a GitHub App")
+ flag.StringVar(&c.Token, "github-token", c.Token, "The personal access token of GitHub.")
+ flag.Int64Var(&c.AppID, "github-app-id", c.AppID, "The application ID of GitHub App.")
+ flag.Int64Var(&c.AppInstallationID, "github-app-installation-id", c.AppInstallationID, "The installation ID of GitHub App.")
+ flag.StringVar(&c.AppPrivateKey, "github-app-private-key", c.AppPrivateKey, "The path of a private key file to authenticate as a GitHub App")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set to a lower value like 10 minute, because this corresponds to the minimum time to react on demand change")
flag.Parse()
- if ghToken == "" {
- ghToken = os.Getenv("GITHUB_TOKEN")
- }
+ logger := zap.New(func(o *zap.Options) {
+ o.Development = true
+ })
if ghAppID == 0 {
appID, err := strconv.ParseInt(os.Getenv("GITHUB_APP_ID"), 10, 64)
if err == nil {
ghAppID = appID
}
}
if ghAppInstallationID == 0 {
appInstallationID, err := strconv.ParseInt(os.Getenv("GITHUB_APP_INSTALLATION_ID"), 10, 64)
if err == nil {
ghAppInstallationID = appInstallationID
}
}
if ghAppPrivateKey == "" {
ghAppPrivateKey = os.Getenv("GITHUB_APP_PRIVATE_KEY")
}
- if ghAppID != 0 {
- if ghAppInstallationID == 0 {
- fmt.Fprintln(os.Stderr, "Error: The installation ID must be specified.")
+ ghClient, err = c.NewClient()
+ if err != nil {
+ fmt.Fprintln(os.Stderr, "Error: Client creation failed.", err)
os.Exit(1)
}
if ghAppPrivateKey == "" {
fmt.Fprintln(os.Stderr, "Error: The path of a private key file must be specified.")
os.Exit(1)
}
ghClient, err = github.NewClient(ghAppID, ghAppInstallationID, ghAppPrivateKey)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: Failed to create GitHub client: %v\n", err)
os.Exit(1)
}
} else if ghToken != "" {
ghClient, err = github.NewClientWithAccessToken(ghToken)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: Failed to create GitHub client: %v\n", err)
os.Exit(1)
}
} else {
fmt.Fprintln(os.Stderr, "Error: GitHub App credentials or personal access token must be specified.")
os.Exit(1)
}
- ctrl.SetLogger(zap.New(func(o *zap.Options) {
+ ctrl.SetLogger(logger)
o.Development = true
}))
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
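For reference, a minimal sketch of how the envconfig wiring above behaves; the GITHUB_ variable names follow from the "github" prefix passed to envconfig.Process and the split_words tags on Config, and the sample token value is purely illustrative:

package main

import (
	"fmt"
	"os"

	"github.com/kelseyhightower/envconfig"

	"github.com/summerwind/actions-runner-controller/github"
)

func main() {
	// With the "github" prefix and split_words tags, Token is read from
	// GITHUB_TOKEN, AppInstallationID from GITHUB_APP_INSTALLATION_ID, and so on.
	os.Setenv("GITHUB_TOKEN", "example-token")

	var c github.Config
	if err := envconfig.Process("github", &c); err != nil {
		fmt.Fprintln(os.Stderr, "Error: Environment variable read failed.")
		os.Exit(1)
	}
	fmt.Println(c.Token != "")
}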

View File

@@ -1,9 +1,11 @@
FROM ubuntu:18.04
ARG TARGETPLATFORM
- ARG RUNNER_VERSION=2.272.0
+ ARG RUNNER_VERSION=2.274.2
ARG DOCKER_VERSION=19.03.12
RUN test -n "$TARGETPLATFORM" || (echo "TARGETPLATFORM must be set" && false)
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update -y \
&& apt install -y software-properties-common \
@@ -42,7 +44,8 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& chmod +x /usr/local/bin/dumb-init
# Docker download supports arm64 as aarch64 & amd64 as x86_64
- RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
+ RUN set -vx; \
export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& if [ "$ARCH" = "arm64" ]; then export ARCH=aarch64 ; fi \
&& if [ "$ARCH" = "amd64" ]; then export ARCH=x86_64 ; fi \
&& curl -L -o docker.tgz https://download.docker.com/linux/static/stable/${ARCH}/docker-${DOCKER_VERSION}.tgz \
@@ -50,23 +53,38 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& install -o root -g root -m 755 docker/docker /usr/local/bin/docker \
&& rm -rf docker docker.tgz \
&& adduser --disabled-password --gecos "" --uid 1000 runner \
&& groupadd docker \
&& usermod -aG sudo runner \
&& usermod -aG docker runner \
&& echo "%sudo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers
- # Runner download supports amd64 as x64
+ ENV RUNNER_ASSETS_DIR=/runnertmp
# Runner download supports amd64 as x64. Externalstmp is needed for making mount points work inside DinD.
#
# libyaml-dev is required for ruby/setup-ruby action.
# It is installed after installdependencies.sh and before removing /var/lib/apt/lists
# to avoid rerunning apt-update on its own.
RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& if [ "$ARCH" = "amd64" ]; then export ARCH=x64 ; fi \
- && mkdir -p /runner \
- && cd /runner \
+ && mkdir -p "$RUNNER_ASSETS_DIR" \
+ && cd "$RUNNER_ASSETS_DIR" \
&& curl -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${ARCH}-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./runner.tar.gz \
&& rm runner.tar.gz \
&& ./bin/installdependencies.sh \
&& mv ./externals ./externalstmp \
&& apt-get install -y libyaml-dev \
&& rm -rf /var/lib/apt/lists/*
- COPY entrypoint.sh /runner
- COPY patched /runner/patched
+ RUN echo AGENT_TOOLSDIRECTORY=/opt/hostedtoolcache > .env \
+ && mkdir /opt/hostedtoolcache \
&& chgrp runner /opt/hostedtoolcache \
&& chmod g+rwx /opt/hostedtoolcache
- USER runner:runner
+ COPY entrypoint.sh /
COPY patched $RUNNER_ASSETS_DIR/patched
USER runner
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"] ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["/runner/entrypoint.sh"] CMD ["/entrypoint.sh"]

View File

@@ -1,7 +1,8 @@
NAME ?= summerwind/actions-runner
DIND_RUNNER_NAME ?= ${NAME}-dind
TAG ?= latest
- RUNNER_VERSION ?= 2.273.5
+ RUNNER_VERSION ?= 2.274.2
DOCKER_VERSION ?= 19.03.12
# default list of platforms for which multiarch image is built
@@ -22,11 +23,13 @@ else
endif
docker-build:
- docker build --build-arg RUNNER_VERSION=${RUNNER_VERSION} --build-arg DOCKER_VERSION=${DOCKER_VERSION} -t ${NAME}:${TAG} -t ${NAME}:v${RUNNER_VERSION} .
+ docker build --build-arg TARGETPLATFORM=amd64 --build-arg RUNNER_VERSION=${RUNNER_VERSION} --build-arg DOCKER_VERSION=${DOCKER_VERSION} -t ${NAME}:${TAG} .
docker build --build-arg TARGETPLATFORM=amd64 --build-arg RUNNER_VERSION=${RUNNER_VERSION} --build-arg DOCKER_VERSION=${DOCKER_VERSION} -t ${DIND_RUNNER_NAME}:${TAG} -f dindrunner.Dockerfile .
docker-push:
docker push ${NAME}:${TAG}
- docker push ${NAME}:v${RUNNER_VERSION}
+ docker push ${DIND_RUNNER_NAME}:${TAG}
docker-buildx:
export DOCKER_CLI_EXPERIMENTAL=enabled
@@ -39,3 +42,9 @@ docker-buildx:
-t "${NAME}:latest" \ -t "${NAME}:latest" \
-f Dockerfile \ -f Dockerfile \
. ${PUSH_ARG} . ${PUSH_ARG}
docker buildx build --platform ${PLATFORMS} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-t "${DIND_RUNNER_NAME}:latest" \
-f dindrunner.Dockerfile \
. ${PUSH_ARG}

View File

@@ -0,0 +1,113 @@
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# Dev + DinD dependencies
RUN apt update \
&& apt install -y software-properties-common \
&& add-apt-repository -y ppa:git-core/ppa \
&& apt install -y \
build-essential \
curl \
ca-certificates \
dnsutils \
ftp \
git \
iproute2 \
iptables \
iputils-ping \
jq \
libunwind8 \
locales \
netcat \
openssh-client \
parallel \
rsync \
shellcheck \
sudo \
supervisor \
telnet \
time \
tzdata \
unzip \
upx \
wget \
zip \
zstd \
&& rm -rf /var/lib/apt/list/*
# Runner user
RUN adduser --disabled-password --gecos "" --uid 1000 runner \
&& groupadd docker \
&& usermod -aG sudo runner \
&& usermod -aG docker runner \
&& echo "%sudo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers
ARG TARGETPLATFORM
ARG RUNNER_VERSION=2.274.1
ARG DOCKER_CHANNEL=stable
ARG DOCKER_VERSION=19.03.13
ARG DEBUG=false
RUN test -n "$TARGETPLATFORM" || (echo "TARGETPLATFORM must be set" && false)
# Docker installation
RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& if [ "$ARCH" = "arm64" ]; then export ARCH=aarch64 ; fi \
&& if [ "$ARCH" = "amd64" ]; then export ARCH=x86_64 ; fi \
&& if ! curl -L -o docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/${ARCH}/docker-${DOCKER_VERSION}.tgz"; then \
echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${ARCH}'"; \
exit 1; \
fi; \
echo "Downloaded Docker from https://download.docker.com/linux/static/${DOCKER_CHANNEL}/${ARCH}/docker-${DOCKER_VERSION}.tgz"; \
tar --extract \
--file docker.tgz \
--strip-components 1 \
--directory /usr/local/bin/ \
; \
rm docker.tgz; \
dockerd --version; \
docker --version
ENV RUNNER_ASSETS_DIR=/runnertmp
# Runner download supports amd64 as x64
#
# libyaml-dev is required for ruby/setup-ruby action.
# It is installed after installdependencies.sh and before removing /var/lib/apt/lists
# to avoid rerunning apt-update on its own.
RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& if [ "$ARCH" = "amd64" ]; then export ARCH=x64 ; fi \
&& mkdir -p "$RUNNER_ASSETS_DIR" \
&& cd "$RUNNER_ASSETS_DIR" \
&& curl -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${ARCH}-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./runner.tar.gz \
&& rm runner.tar.gz \
&& ./bin/installdependencies.sh \
&& apt-get install -y libyaml-dev \
&& rm -rf /var/lib/apt/lists/*
RUN echo AGENT_TOOLSDIRECTORY=/opt/hostedtoolcache > /runner.env \
&& mkdir /opt/hostedtoolcache \
&& chgrp runner /opt/hostedtoolcache \
&& chmod g+rwx /opt/hostedtoolcache
COPY modprobe startup.sh /usr/local/bin/
COPY supervisor/ /etc/supervisor/conf.d/
COPY logger.sh /opt/bash-utils/logger.sh
COPY entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/startup.sh /usr/local/bin/entrypoint.sh /usr/local/bin/modprobe
RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& curl -L -o /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_${ARCH} \
&& chmod +x /usr/local/bin/dumb-init
VOLUME /var/lib/docker
COPY patched $RUNNER_ASSETS_DIR/patched
# No group definition, as that makes it harder to run docker.
USER runner
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["startup.sh"]

View File

@@ -1,11 +1,22 @@
#!/bin/bash
if [ -z "${GITHUB_URL}" ]; then
echo "Working with public GitHub" 1>&2
GITHUB_URL="https://github.com/"
else
length=${#GITHUB_URL}
last_char=${GITHUB_URL:length-1:1}
[[ $last_char != "/" ]] && GITHUB_URL="$GITHUB_URL/"; :
echo "Github endpoint URL ${GITHUB_URL}"
fi
if [ -z "${RUNNER_NAME}" ]; then if [ -z "${RUNNER_NAME}" ]; then
echo "RUNNER_NAME must be set" 1>&2 echo "RUNNER_NAME must be set" 1>&2
exit 1 exit 1
fi fi
if [ -n "${RUNNER_ORG}" -a -n "${RUNNER_REPO}" ]; then if [ -n "${RUNNER_ORG}" ] && [ -n "${RUNNER_REPO}" ]; then
ATTACH="${RUNNER_ORG}/${RUNNER_REPO}" ATTACH="${RUNNER_ORG}/${RUNNER_REPO}"
elif [ -n "${RUNNER_ORG}" ]; then elif [ -n "${RUNNER_ORG}" ]; then
ATTACH="${RUNNER_ORG}" ATTACH="${RUNNER_ORG}"
@@ -16,6 +27,10 @@ else
exit 1 exit 1
fi fi
if [ -n "${RUNNER_WORKDIR}" ]; then
WORKDIR_ARG="--work ${RUNNER_WORKDIR}"
fi
if [ -n "${RUNNER_LABELS}" ]; then if [ -n "${RUNNER_LABELS}" ]; then
LABEL_ARG="--labels ${RUNNER_LABELS}" LABEL_ARG="--labels ${RUNNER_LABELS}"
fi fi
@@ -25,8 +40,24 @@ if [ -z "${RUNNER_TOKEN}" ]; then
exit 1 exit 1
fi fi
if [ -z "${RUNNER_REPO}" ] && [ -n "${RUNNER_ORG}" ] && [ -n "${RUNNER_GROUP}" ];then
RUNNER_GROUP_ARG="--runnergroup ${RUNNER_GROUP}"
fi
# Hack due to https://github.com/summerwind/actions-runner-controller/issues/252#issuecomment-758338483
if [ ! -d /runner ]; then
echo "/runner should be an emptyDir mount. Please fix the pod spec." 1>&2
exit 1
fi
sudo chown -R runner:docker /runner
mv /runnertmp/* /runner/
cd /runner
- ./config.sh --unattended --replace --name "${RUNNER_NAME}" --url "https://github.com/${ATTACH}" --token "${RUNNER_TOKEN}" ${LABEL_ARG}
+ ./config.sh --unattended --replace --name "${RUNNER_NAME}" --url "${GITHUB_URL}${ATTACH}" --token "${RUNNER_TOKEN}" ${RUNNER_GROUP_ARG} ${LABEL_ARG} ${WORKDIR_ARG}
mkdir ./externals
# Hack due to the DinD volumes
mv ./externalstmp/* ./externals/
for f in runsvc.sh RunnerService.js; do
diff {bin,patched}/${f} || :

24
runner/logger.sh Normal file
View File

@@ -0,0 +1,24 @@
#!/bin/sh
# Logger from this post http://www.cubicrace.com/2016/03/log-tracing-mechnism-for-shell-scripts.html
function INFO(){
local function_name="${FUNCNAME[1]}"
local msg="$1"
timeAndDate=`date`
echo "[$timeAndDate] [INFO] [${0}] $msg"
}
function DEBUG(){
local function_name="${FUNCNAME[1]}"
local msg="$1"
timeAndDate=`date`
echo "[$timeAndDate] [DEBUG] [${0}] $msg"
}
function ERROR(){
local function_name="${FUNCNAME[1]}"
local msg="$1"
timeAndDate=`date`
echo "[$timeAndDate] [ERROR] $msg"
}

20
runner/modprobe Normal file
View File

@@ -0,0 +1,20 @@
#!/bin/sh
set -eu
# "modprobe" without modprobe
# https://twitter.com/lucabruno/status/902934379835662336
# this isn't 100% fool-proof, but it'll have a much higher success rate than simply using the "real" modprobe
# Docker often uses "modprobe -va foo bar baz"
# so we ignore modules that start with "-"
for module; do
if [ "${module#-}" = "$module" ]; then
ip link show "$module" || true
lsmod | grep "$module" || true
fi
done
# remove /usr/local/... from PATH so we can exec the real modprobe as a last resort
export PATH='/usr/sbin:/usr/bin:/sbin:/bin'
exec modprobe "$@"

37
runner/startup.sh Normal file
View File

@@ -0,0 +1,37 @@
#!/bin/bash
source /opt/bash-utils/logger.sh
function wait_for_process () {
local max_time_wait=30
local process_name="$1"
local waited_sec=0
while ! pgrep "$process_name" >/dev/null && ((waited_sec < max_time_wait)); do
INFO "Process $process_name is not running yet. Retrying in 1 seconds"
INFO "Waited $waited_sec seconds of $max_time_wait seconds"
sleep 1
((waited_sec=waited_sec+1))
if ((waited_sec >= max_time_wait)); then
return 1
fi
done
return 0
}
INFO "Starting supervisor"
sudo /usr/bin/supervisord -n >> /dev/null 2>&1 &
INFO "Waiting for processes to be running"
processes=(dockerd)
for process in "${processes[@]}"; do
wait_for_process "$process"
if [ $? -ne 0 ]; then
ERROR "$process is not running after max time"
exit 1
else
INFO "$process is running"
fi
done
# Wait processes to be running
entrypoint.sh

View File

@@ -0,0 +1,6 @@
[program:dockerd]
command=/usr/local/bin/dockerd
autostart=true
autorestart=true
stderr_logfile=/var/log/dockerd.err.log
stdout_logfile=/var/log/dockerd.out.log