Compare commits

...

63 Commits

Author SHA1 Message Date
Yusuke Kuoka
6762c5c096 Fix excessive runnerreplicaset update issue since 0.25.0 (#1651)
Fixes #1643
2022-07-15 06:41:57 +09:00
everpcpc
fea1457f12 fix: annotate pod instead of label on UnregistrationFailureMessage (#1615)
The error message should go into an annotation instead of a pod label, since we only check annotations during autoscaling:

https://github.com/actions-runner-controller/actions-runner-controller/blob/master/controllers/autoscaling.go#L338-L342

Then there is no need to truncate or normalize the error message.
2022-07-09 11:45:05 +09:00
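A minimal Go sketch of the idea in the entry above, using plain maps rather than ARC's actual types and a hypothetical annotation key: label values are limited to 63 characters and a restricted character set, while annotations can hold the raw error message, so nothing needs to be truncated or normalized.

```go
package main

import "fmt"

// Hypothetical annotation key; ARC's real key name may differ.
const unregistrationFailureAnnotation = "actions-runner-controller/unregistration-failure-message"

// recordUnregistrationFailure stores the raw error message as a pod
// annotation. Unlike labels, annotations are not restricted to 63
// characters or a limited character set, so no truncation is needed.
func recordUnregistrationFailure(annotations map[string]string, err error) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[unregistrationFailureAnnotation] = err.Error()
	return annotations
}

// unregistrationFailure is what the autoscaler would read back, since it
// only inspects annotations when computing desired replicas.
func unregistrationFailure(annotations map[string]string) (string, bool) {
	msg, ok := annotations[unregistrationFailureAnnotation]
	return msg, ok
}

func main() {
	ann := recordUnregistrationFailure(nil, fmt.Errorf("runner is still busy: job in progress"))
	if msg, ok := unregistrationFailure(ann); ok {
		fmt.Println("recorded:", msg)
	}
}
```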
Yusuke Kuoka
473295e3fc Enhance the E2E test to be runnable against remote clusters on e.g. AWS EKS (#1610)
This contains what appear to be enough changes to the current E2E test code to make it runnable against remote Kubernetes clusters. I was actually able to make the test pass against my AWS EKS-based test clusters with these changes. You still need to trigger it manually from a local checkout of the ARC repo today, but this might be the foundation for automated E2E tests against major cloud providers.
2022-07-07 20:48:07 +09:00
Yusuke Kuoka
9f6f962fc7 Add troubleshooting for cert-manager CA error (#1598)
I encountered this once while E2E testing ARC with K8s 1.22 and cert-manager 1.1.1. Either the K8s version is too high or the cert-manager version is too low, so you generally need to fix one or the other. In a standard scenario, it is more feasible and meaningful to upgrade cert-manager to a version recent enough to support the new Kubernetes version.
2022-07-07 11:27:49 +09:00
Yusuke Kuoka
2a475f25c7 Use Argo Tunnel for exposing the autoscaler's webhook server (#1595)
I've been manually setting up Argo Tunnel to expose the webhook server while running E2E tests so that I can cover the webhook-based autoscaling. This automates the setup process so that we can automatically bring cloudflared up and down before/after the test run, making it part of our upcoming automated E2E test.
2022-07-07 11:27:27 +09:00
Viktor Lindgren
dd9f25ea78 Update README.md (#1606) 2022-07-06 08:57:54 +09:00
Yusuke Kuoka
b8e4eee904 Make it easier to E2E test on various K8s versions (#1599) 2022-07-06 08:57:21 +09:00
Yusuke Kuoka
edbdef8d20 Bump chart version to 0.20.0 for ARC 0.25.0 (#1600)
We'll be merging this immediately after ARC 0.25.0 gets released.
2022-07-05 11:19:24 +09:00
Nguyễn Đức Chiến
a190fa97bb Fix helm charts (#1603) 2022-07-05 10:35:57 +09:00
Yusuke Kuoka
bfc5ea4727 Fix a regression in webhook-based autoscaler (#1596)
The regression resulted in the webhook-based autoscaler being unable to find visible runner groups, and therefore unable to scale the target RunnerDeployment/RunnerSet up or down at all, whenever the webhook-based autoscaler was given GitHub API credentials to enable runner group support. This fixes that.

The regression was introduced via #1578 which is not released yet. Users of existing ARC releases are therefore not affected.
2022-07-04 20:17:09 +09:00
renovate[bot]
5a9e8545aa fix(deps): update golang.org/x/oauth2 digest to 2104d58 (#1593)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-02 14:06:21 +09:00
Yusuke Kuoka
4446ba57e1 Cover ARC upgrade in E2E test (#1592)
* Cover ARC upgrade in E2E test

so that we can be extra sure that you can upgrade an existing installation of ARC to the next version and also (hopefully) that it is backward-compatible, or at least that it does not break immediately after upgrading.

* Consolidate E2E tests for RS and RD

* Fix E2E for RD to pass

* Add a comment in E2E on how to release the disk space consumed after dozens of test runs
2022-07-01 21:32:05 +09:00
Martin Moon (문성주)
d62c8a4697 fix typo. (#1594) 2022-07-01 10:24:41 +09:00
Yusuke Kuoka
946d5b1fa7 Add release note for v0.25.0 (#1591) 2022-06-30 22:11:22 +09:00
renovate[bot]
da6b07660e fix(deps): update golang.org/x/oauth2 digest to 02e64fa (#1480)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 11:32:26 +09:00
Callum Tait
e3deb0d752 chore: move runner docker check (#1548) 2022-06-30 11:31:50 +09:00
Callum Tait
82641e5036 chore: move HOME to more logical place (#1460)
* chore: move HOME to more logical place

* chore: don't break the PATH

* chore: don't break the PATH

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-30 11:21:05 +09:00
Vladyslav Miletskyi
2fe6adf5b7 Runner Entrypoint: fix daemon.json (#1409)
* Runner Entrypoint: fix daemon.json

Do not overwrite daemon.json if it already exists.
Use case: custom images that use the public image as their base.

* Update runner/startup.sh

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-06-30 11:03:12 +09:00
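The change itself lives in the runner's shell startup script, but the guard it adds is simple: only write the default daemon.json when the file is not already present, so custom images that bake in their own daemon.json keep it. A minimal Go sketch of that check, with a hypothetical default content:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// writeDefaultDaemonJSON writes defaultContent to path only if the file
// does not already exist, so images derived from the public runner image
// keep their own daemon.json untouched.
func writeDefaultDaemonJSON(path string, defaultContent []byte) error {
	if _, err := os.Stat(path); err == nil {
		// File exists: leave the user-provided configuration alone.
		return nil
	} else if !errors.Is(err, fs.ErrNotExist) {
		return fmt.Errorf("checking %s: %w", path, err)
	}
	return os.WriteFile(path, defaultContent, 0o644)
}

func main() {
	// Hypothetical default content; the real image derives its settings elsewhere.
	def := []byte(`{"mtu": 1400}`)
	if err := writeDefaultDaemonJSON("/etc/docker/daemon.json", def); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
	}
}
```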
renovate[bot]
736126b793 chore(deps): update helm values quay.io/brancz/kube-rbac-proxy to v0.13.0 (#1589)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 09:51:38 +09:00
renovate[bot]
6abf5bbac8 fix(deps): update module github.com/stretchr/testify to v1.8.0 (#1584)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-30 09:50:55 +09:00
Yusuke Kuoka
dc4f116bda Reflect manual test scenario for containerMode=kubernetes to E2E (#1588)
With this my semi-automatic E2E manual testing becomes even easier :)
2022-06-30 09:09:58 +09:00
Callum Tait
cda10fd243 docs: remove once feature flag env var (#1590) 2022-06-30 09:09:37 +09:00
Yusuke Kuoka
b5d1a63bdf Enhance the acceptance runnerset yaml template for manual E2E (#1587)
The primary goal of this change is to let the tester know about the config difference between an explicitly configured ephemeral work volume and the work volume configured automatically via workVolumeClaimTemplate+containerMode=kubernetes.
2022-06-29 22:15:50 +09:00
Yusuke Kuoka
6f3e23973d Bump E2E runner version to 2.294.0 (#1586)
so that runners do not auto-update themselves on startup during E2E, which makes E2E take longer to complete.
2022-06-29 22:05:50 +09:00
Yusuke Kuoka
a517c1ff66 Fix old runner pods stuck in Terminating since #1579 (#1585)
Ref #1579
2022-06-29 22:02:42 +09:00
Yusuke Kuoka
9b28e633c1 Drop support for --once (#1580)
Ref #1196
2022-06-29 21:49:52 +09:00
Yusuke Kuoka
8161136cbd Fix PercentageRunnersBusy scaling delay (#1579)
* Use a dedicated pod label to say it is a runner pod

Follow-up for #1546

* Fix PercentageRunnersBusy scaling delay

Ref #1374
2022-06-29 20:49:21 +09:00
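A sketch of the scheme in the first bullet above, with a hypothetical label key: instead of inferring which pods are runner pods, pods carry a dedicated label and the PercentageRunnersBusy calculation simply counts pods that have it.

```go
package main

import "fmt"

// Hypothetical label key; ARC's actual key may differ.
const runnerPodLabel = "actions-runner-controller/runner-pod"

type pod struct {
	Name   string
	Labels map[string]string
	Busy   bool
}

// percentageRunnersBusy counts only pods that carry the dedicated runner
// label, then reports the fraction of those that are busy.
func percentageRunnersBusy(pods []pod) float64 {
	var runners, busy int
	for _, p := range pods {
		if p.Labels[runnerPodLabel] != "true" {
			continue // not a runner pod
		}
		runners++
		if p.Busy {
			busy++
		}
	}
	if runners == 0 {
		return 0
	}
	return float64(busy) / float64(runners)
}

func main() {
	pods := []pod{
		{Name: "runner-a", Labels: map[string]string{runnerPodLabel: "true"}, Busy: true},
		{Name: "runner-b", Labels: map[string]string{runnerPodLabel: "true"}},
		{Name: "unrelated", Labels: map[string]string{}},
	}
	fmt.Printf("busy ratio: %.2f\n", percentageRunnersBusy(pods))
}
```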
Nikola Jokic
a9ac5a1cbf extracted validations to a single point (#1582) 2022-06-29 20:32:00 +09:00
Callum Tait
d4f35cff4f ci: add paths to push trigger (#1583) 2022-06-29 20:30:07 +09:00
Yusuke Kuoka
f661249f07 Use the go-github impl of ListRunnerGroups with visible_to_repository (#1578)
Ref #1402
2022-06-29 09:53:03 +01:00
Mike
73e430ce54 Add a solution to InternalError webhook context timed out (#1558)
* added troubleshooting solution

* added error example

* added entry to the pages index

* sorted

Co-authored-by: Mike Joseph <mike@Mikes-MacBook-Pro-5618.local>
Co-authored-by: Mike Joseph <mike@efrontier.com>
2022-06-29 09:40:23 +09:00
renovate[bot]
858ef8979d chore(deps): update helm/kind-action action to v1.3.0 (#1532)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 09:05:26 +09:00
renovate[bot]
1ce0a183a6 chore(deps): update azure/setup-helm action to v3 (#1571)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 09:04:33 +09:00
renovate[bot]
63935d2053 fix(deps): update module github.com/stretchr/testify to v1.7.5 (#1510)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 08:39:09 +09:00
John Delivuk
fc63d6d26e Fix: Match Ingress API Version correctly. (#1541)
* Updating conditional to match the api version and kind

mend

* Updating conditional to match the api version and kind

mend
2022-06-29 08:30:11 +09:00
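The fix above amounts to matching on both the API version and the kind rather than only one of them. A small Go sketch of that conditional, using plain strings instead of ARC's actual types, so treat the names as illustrative:

```go
package main

import "fmt"

type typeMeta struct {
	APIVersion string
	Kind       string
}

// isNetworkingV1Ingress matches on both the API version and the kind;
// matching only one of them can mis-classify objects such as legacy
// extensions/v1beta1 Ingresses or unrelated networking.k8s.io/v1 kinds.
func isNetworkingV1Ingress(tm typeMeta) bool {
	return tm.APIVersion == "networking.k8s.io/v1" && tm.Kind == "Ingress"
}

func main() {
	fmt.Println(isNetworkingV1Ingress(typeMeta{"networking.k8s.io/v1", "Ingress"}))       // true
	fmt.Println(isNetworkingV1Ingress(typeMeta{"extensions/v1beta1", "Ingress"}))         // false
	fmt.Println(isNetworkingV1Ingress(typeMeta{"networking.k8s.io/v1", "NetworkPolicy"})) // false
}
```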
renovate[bot]
5ea08411e6 chore(deps): update dependency golang to v1.18.3 (#1509)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-06-29 08:29:14 +09:00
Giuseppe Crinò
067ed2e5ec docs: fix logic explanation for scale down delay (#1562)
Signed-off: Giuseppe Crinò <giuscri@gmail.com>
2022-06-29 08:26:28 +09:00
renovate[bot]
d86bd2bcd7 fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.2 (#1449)
* fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.2

* Regenerate manifests with the updated k8s and controller-runtime deps

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-29 06:42:17 +09:00
Yusuke Kuoka
ddd417f756 Bump go-github to v45 (#1573)
* Bump go-github to v45

Ref #1402

* fixup! Bump go-github to v45
2022-06-29 06:34:58 +09:00
Thomas Boop
0386c0734c containerMode option to allow running jobs in k8s instead of docker (#1546)
* added containerMode=kubernetes env variables to the runner

* removed unused logging

* restored configs and charts

* restored makefile cert version and acceptance/run

* added workVolumeClaimTemplate in pod definition, including logic

* added claim template name based on the runner

* Apply suggestions from code review

update errors

* added concurrent cleanup before runner pod is deleted

* update manifests

* added retry after 30s if pod cleanup contains err

* added admission webhook check, made workVolumeClaimTemplate mandatory for k8s

* style changes and added comments

* added isZero timestamp check for deleting runner-linked pods

* changed order of local variable to avoid copy if p is deleted

* removed docker from container mode k8s

* restored charts, config, makefile

* restored forked files back and not the ARC ones

* created PersistentVolume on containerMode k8s

* create pv only if storage class name is local-storage

* removed actions if storage class name is local-storage

* added service account validation if container mode kubernetes

* changed the coding style to match rest of the ARC

* added validation to the runnerdeployment webhook

* specified fields more precisely, added webhook validation to the replicaset as well

* remake manifests

* wrapped deletion of runner-linked pods in kube mode

* fixed empty line

* fixed import

* makefile changes for hooks

* added cleanup secrets

* create manifests

* docs

* update access modes

* update dockerfile

* nit changes

* fixed dockerfile

* rewrite allowing reuse for runners and runnersets

* deepcopy forgot to stage

* changed privileged

* make manifests

* partly moved to finalizer, still need to apply finalizer first

* finalizer added if env variable used in container mode exists

* bump runner version

* error message moved from Error to Info on cleanup pods/secrets

* removed useless dereferencing, added transformation tests of workVolumeClaimTemplate

* Apply suggestions from code review

* Update controllers/utils_test.go

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update controllers/utils_test.go

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* add hook version to cli, update to 0.1.2

* Apply suggestions from code review

* Update controllers/utils_test.go

* Update runner/Makefile

* Fix missing secret permission and the error handling

* Fix a runnerpod reconciler finalizer to not trigger unnecessary retry

Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
Co-authored-by: Nikola Jokic <97525037+nikola-jokic@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-28 14:12:40 +09:00
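Two of the bullets above (the admission webhook check and the service account validation) boil down to one rule: when containerMode is kubernetes, a workVolumeClaimTemplate and a service account must be present. A hedged Go sketch of that validation, with field names that only approximate ARC's real API types:

```go
package main

import (
	"errors"
	"fmt"
)

// runnerSpec approximates the relevant fields of ARC's Runner spec;
// the real type has many more fields and the names may differ.
type runnerSpec struct {
	ContainerMode           string
	ServiceAccountName      string
	WorkVolumeClaimTemplate *struct{ StorageClassName string }
}

// validateContainerMode enforces the webhook rule described above:
// kubernetes container mode requires both a work volume claim template
// (the runner's working directory lives in it) and a service account
// with permission to create the workflow pods/jobs.
func validateContainerMode(s runnerSpec) error {
	if s.ContainerMode != "kubernetes" {
		return nil
	}
	if s.WorkVolumeClaimTemplate == nil {
		return errors.New("workVolumeClaimTemplate is required when containerMode is kubernetes")
	}
	if s.ServiceAccountName == "" {
		return errors.New("serviceAccountName is required when containerMode is kubernetes")
	}
	return nil
}

func main() {
	err := validateContainerMode(runnerSpec{ContainerMode: "kubernetes"})
	fmt.Println(err)
}
```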
Yusuke Kuoka
af96de6184 Fix completed runner pod recreation not to be blocked after max out (#1568)
Ref https://github.com/actions-runner-controller/actions-runner-controller/pull/1477#issuecomment-1164154496
2022-06-28 13:50:07 +09:00
Arnaud
abb8615796 Webhook server configuration with kustomize (#1312)
* webhook server configuration with kustomize

* Update README.md

* Update README.md

* Update README.md

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-28 09:08:25 +09:00
Sam Weston
bc7a3cab1b Add priorityClassName to CRDs (#1513)
* Add pod priorityClassName to controller and crds

* Add missing bits in bases directory

* Regenerate crds
2022-06-28 08:45:19 +09:00
Yusuke Kuoka
e2c8163b8c Make webhook-based scale race-free (#1477)
* Make webhook-based scale operation asynchronous

This prevents a race condition in the webhook-based autoscaler where it receives another webhook event while still processing one, and both end up scaling up the same HorizontalRunnerAutoscaler.

Ref #1321

* Fix typos

* Update rather than Patch HRA to avoid race among webhook-based autoscaler servers

* Batch capacity reservation updates for efficient use of apiserver

* Fix potential never-ending HRA update conflicts in batch update

* Extract batchScaler out of webhook-based autoscaler for testability

* Fix log levels and batch scaler hang on start

* Correlate webhook event with scale trigger amount in logs

* Fix log message
2022-06-27 18:31:48 +09:00
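A rough sketch of the batching idea described above, independent of ARC's actual types: webhook handlers enqueue scale amounts and return immediately, while a single worker drains the queue on a short interval and applies one consolidated update with a bounded retry, so concurrent webhook events no longer race over the same HorizontalRunnerAutoscaler. This is an assumption-level illustration, not the real batchScaler code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// batchScaler coalesces scale requests so that many webhook events result
// in a single update to the autoscaler target, instead of one racy update
// per event.
type batchScaler struct {
	mu      sync.Mutex
	pending int
	// apply performs the consolidated update; it is retried a few times
	// to absorb conflicts with other writers.
	apply func(delta int) error
}

// Add is what a webhook handler would call; it returns immediately.
func (b *batchScaler) Add(delta int) {
	b.mu.Lock()
	b.pending += delta
	b.mu.Unlock()
}

// Run drains the pending amount on every tick and applies it once.
func (b *batchScaler) Run(interval time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-stop:
			return
		case <-t.C:
			b.mu.Lock()
			delta := b.pending
			b.pending = 0
			b.mu.Unlock()
			if delta == 0 {
				continue
			}
			for attempt := 0; attempt < 3; attempt++ {
				if err := b.apply(delta); err == nil {
					break
				}
			}
		}
	}
}

func main() {
	bs := &batchScaler{apply: func(delta int) error {
		fmt.Println("applying consolidated delta:", delta)
		return nil
	}}
	stop := make(chan struct{})
	go bs.Run(200*time.Millisecond, stop)
	for i := 0; i < 5; i++ {
		bs.Add(1) // e.g. five queued workflow_job events
	}
	time.Sleep(500 * time.Millisecond)
	close(stop)
}
```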
Callum Tait
84d16c1c12 revert: "Overhauled startup.sh Script (#1454)" (#1561)
This reverts commit 071898c96b.
2022-06-23 12:39:32 +01:00
Richard Fussenegger
071898c96b Overhauled startup.sh Script (#1454)
This overhaul turns it into a shellcheck-valid script with explicit error handling for all possible situations I could think of. This change takes https://github.com/actions-runner-controller/actions-runner-controller/pull/1409 into account and the two can be merged in any order. There are a few important changes here to the logic:

- The wait logic for checking if docker comes up was fundamentally flawed because it checks for the PID. Docker will always come up and thus become visible in the process list, only to immediately die when it encounters an issue, after which supervisor starts it again. This means that our check so far is flaky: depending on the `sleep 1` it might encounter a PID, or it might not, and the existence of the PID does not mean anything. The `docker ps` check we have in the `entrypoint.sh` script does not suffer from this, as it checks for a feature of docker and not a PID. I thus removed the PID check entirely, and instead hand things over to our `entrypoint.sh` script by setting the environment variables correctly.
- This change affects the `docker0` interface MTU configuration, because the interface might or might not exist yet after we start docker. Hence, I changed this to a time-boxed loop that tries for one minute to set the interface's MTU. If the command fails we log an error and continue with the run.
- I changed the entire MTU handling to validate the value before configuring it, logging an error and continuing without it if it is set incorrectly. This ensures that we are not going to send our users on a bug hunt.
- The way we started supervisord did not make much sense to me. It sends itself into the background automatically, so there is no need for us to do so with Bash.

The decision to not fail on errors but continue is a deliberate choice, because I believe that running a build is more important than having a perfectly configured system. However, this strategy might also hide issues for all users who are not properly checking their logs. It also makes testing harder. Hence, we could change all error conditions from graceful to panicking. We should then align the exit codes across `startup.sh` and `entrypoint.sh` to ensure that every possible error condition has its own unique error code for easy debugging.
2022-06-23 09:37:01 +09:00
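The overhaul above was later reverted (see #1561 earlier in this list), but the readiness pattern it describes is straightforward: probe a feature of the daemon such as `docker ps` inside a time-boxed loop rather than checking for a PID, and fall through with a logged error if the deadline passes. A hedged Go sketch of that pattern (the original is a shell script; the command and timeout here are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDocker probes `docker ps` until it succeeds or the deadline
// passes. Checking a feature of the daemon avoids the flakiness of
// checking for a PID, since dockerd can appear in the process list and
// then immediately die and be restarted by supervisord.
func waitForDocker(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("docker", "ps").Run(); err == nil {
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if !waitForDocker(time.Minute) {
		// Deliberately continue instead of failing: running the build is
		// treated as more important than a perfectly configured daemon.
		fmt.Println("WARNING: docker did not become ready in time; continuing anyway")
	}
}
```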
renovate[bot]
f24e2fa44e chore(deps): update dependency actions/runner to v2.294.0 2022-06-22 21:45:32 +00:00
Callum Tait
3c7d3d6b57 ci: hardcode dockerhub username (#1555) 2022-06-22 16:15:50 +01:00
Callum Tait
23f091d7fa ci: don't login on a pr (#1554)
* ci: don't login on a pr

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-22 16:03:36 +01:00
Callum Tait
667764e027 chore: suggest gist first (#1539) 2022-06-18 17:38:37 +09:00
Callum Tait
de693c4191 ci: runners trigger on push (#1549)
* ci: runners trigger on push

* ci: comments

* ci: comments
2022-06-18 17:34:40 +09:00
Callum Tait
510fc9c834 ci: add GitHub packages to arc release (#1525)
* ci: add GitHub packages to arc release

* ci: use restrictive permissions
2022-06-15 11:37:19 +09:00
Callum Tait
7fd5e24961 chore: bump chart to app 0.24.1 (#1531) 2022-06-15 11:34:55 +09:00
Yusuke Kuoka
9974b1a2b7 e2e: Enable buildx in more images (#1530) 2022-06-14 09:29:30 +01:00
Yusuke Kuoka
bd91b73fd9 chore: update bug_report.yml (#1529) 2022-06-14 09:29:06 +01:00
Callum Tait
a7ae910ee4 docs: add CRD disclaimer to bug report (#1516) 2022-06-14 09:42:31 +09:00
Callum Tait
2733c36d0e ci: publish controller canary to github packages (#1524)
* ci: publish controller canary to github packages

* ci: include image name
2022-06-14 09:10:13 +09:00
Yusuke Kuoka
0ef9a22cd4 Fix confusing PV controller log (#1526)
Ref #1511
2022-06-14 08:35:04 +09:00
Renovate Bot
933b0c7888 chore(deps): update dependency actions/runner to v2.293.0 2022-06-13 17:09:29 +00:00
renovate[bot]
1b7ec33135 chore(deps): update actions/setup-python action to v4 (#1514)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-06-13 14:07:52 +01:00
Callum Tait
a62882d243 ci: fix permissions (#1512)
* ci: fix permissions

* chore: change to trigger build

* ci: add write permission to packages

* ci: remove conditionals for docker logins

* Update controllers/utils_test.go

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-09 10:25:56 +09:00
Callum Tait
0cd13fe51d ci: align pipeline files and setups (#1484)
* ci: align pipeline files and setups

* ci: more changes

* ci: various changes

* ci: fix setup-helm action ref

* ci: better pipeline name

* ci: more format aligning

* ci: more format aligning

* ci: better job name

* ci: supports multiple languages

* ci: better pipeline and job names

* ci: do a verb-noun thing for consistency

* ci: use 'arc' when talking holistically

* ci: add caching scope

* ci: put canary in a scope

* ci: fix syntax error

* ci: better pipeline and job names

* ci: better job name

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-06-08 10:04:14 +09:00
Vinícius Garcia
01c8dc237e Fix example manifests for webhooks-based scaling (#1354)
* Fix example manifests for webhook based scaling

I tried running these on my k8s cluster and got some easy-to-fix errors, so I am committing them here.

* Fix example manifests for webhook autoscaling with workflow_jobs

* Fix the explanation of how to set up webhooks on your cluster

* Replace unclear comment with actual code examples

There was a comment instructing users to add minReplicas and
maxReplicas to all the HRA yamls, so I just removed it and added
these attributes to the yamls themselves for clarity.

* Make clear that using the ingress example is just a suggestion

* Apply some text improvements suggested by @mumoshu

* Update examples so the webhook server is exposed on a NodePort

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Remove an unnecessary field from one of the examples

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Apply suggestion from @mumoshu

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* Remove namespace fields from webhook autoscaler examples

This change was suggested by @mumoshu

* Apply final suggestion from @mumoshu

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-06-07 08:33:09 +09:00
88 changed files with 4534 additions and 2467 deletions

View File

@@ -17,6 +17,12 @@ body:
label: Helm Chart Version
description: Run `helm list` and see what's shown under CHART VERSION. Any release tags prefixed with `actions-runner-controller-` are for chart releases
placeholder: ex. 0.11.0
- type: input
id: cert-manager-version
attributes:
label: CertManager Version
description: Run `kubectl get po -o yaml $CERT_MANAGER_POD` and see the image tag, or run `helm list` and see what's shown under APP VERSION for your cert-manager Helm release.
placeholder: ex. 1.8
- type: dropdown
id: deployment-method
attributes:
@@ -29,6 +35,17 @@ body:
- Other
validations:
required: true
- type: textarea
id: cert-manager
attributes:
label: cert-manager installation
description: Confirm that you've installed cert-manager correctly by answering a few questions
placeholder: |
- Did you follow https://github.com/actions-runner-controller/actions-runner-controller#installation? If not, describe the installation process so that we can reproduce your environment.
- Are you sure you've installed cert-manager from an official source?
(Note that we won't provide user support for cert-manager itself. Make sure cert-manager is fully working before testing ARC or reporting a bug
validations:
required: true
- type: checkboxes
id: checks
attributes:
@@ -41,7 +58,7 @@ body:
required: true
- label: My actions-runner-controller version (v0.x.y) does support the feature
required: true
- label: I've already upgraded ARC to the latest and it didn't fix the issue
- label: I've already upgraded ARC (including the CRDs, see charts/actions-runner-controller/docs/UPGRADING.md for details) to the latest and it didn't fix the issue
required: true
- type: textarea
id: resource-definitions
@@ -113,9 +130,11 @@ body:
id: controller-logs
attributes:
label: Controller Logs
description: "Include logs from `actions-runner-controller`'s controller-manager pod"
description: "NEVER EVER OMIT THIS! Include logs from `actions-runner-controller`'s controller-manager pod"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab controller logs:
# Set NS according to your setup
@@ -125,8 +144,6 @@ body:
kubectl -n $NS get po
kubectl -n $NS logs $POD_NAME > arc.log
Upload it to e.g. https://gist.github.com/ and paste the link to it here.
validations:
required: true
- type: textarea
@@ -136,6 +153,8 @@ body:
description: "Include logs from runner pod(s)"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab the runner pod logs:
# Set NS according to your setup. It should match your RunnerDeployment's metadata.namespace.
@@ -146,8 +165,6 @@ body:
kubectl -n $NS logs $POD_NAME -c runner > runnerpod_runner.log
kubectl -n $NS logs $POD_NAME -c docker > runnerpod_docker.log
Upload it to e.g. https://gist.github.com/ and paste the link to it here.
validations:
required: true
- type: textarea

View File

@@ -37,15 +37,15 @@ runs:
version: latest
- name: Login to DockerHub
if: ${{ github.ref == 'master' && github.event.pull_request.merged == true }}
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
uses: docker/login-action@v2
with:
username: ${{ inputs.username }}
password: ${{ inputs.password }}
- name: Login to GitHub Container Registry
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' }}
uses: docker/login-action@v2
if: ${{ github.ref == 'master' && github.event.pull_request.merged == true }}
with:
registry: ghcr.io
username: ${{ inputs.ghcr_username }}

View File

@@ -13,7 +13,7 @@
{
// use https://github.com/actions/runner/releases
"fileMatch": [
".github/workflows/runners.yml"
".github/workflows/runners.yaml"
],
"matchStrings": ["RUNNER_VERSION: +(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",

View File

@@ -1,24 +1,26 @@
name: Publish Controller Image
name: Publish ARC
on:
release:
types: [published]
types:
- published
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: write
packages: write
jobs:
build:
runs-on: ubuntu-latest
release-controller:
name: Release
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
steps:
- name: Set outputs
id: vars
run: echo ::set-output name=sha_short::${GITHUB_SHA::7}
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
uses: actions/checkout@v3
- uses: actions/setup-go@193b404f8a1d1dccaf6ed9bf03cdb68d2d02020f
- uses: actions/setup-go@v3
with:
go-version: '1.18.2'
@@ -39,25 +41,20 @@ jobs:
- name: Upload artifacts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: make github-release
run: |
make github-release
- name: Set up QEMU
uses: docker/setup-qemu-action@0522dcd2bf084920c411162fde334a308be75015
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@91cb32d715c128e5f0ede915cd7e196ab7799b83
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@d398f07826957cd0a18ea1b059cf1207835e60bc
with:
username: ${{ secrets.DOCKER_USER }}
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push
uses: docker/build-push-action@c5e6528d5ddefc82f682165021e05edf58044bce
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
@@ -66,4 +63,8 @@ jobs:
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
ghcr.io/actions-runner-controller/actions-runner-controller:latest
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}
ghcr.io/actions-runner-controller/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
cache-from: type=gha
cache-to: type=gha,mode=max

.github/workflows/publish-canary.yaml (new file)
View File

@@ -0,0 +1,58 @@
name: Publish Canary Image
on:
push:
branches:
- master
paths-ignore:
- '**.md'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: read
packages: write
jobs:
canary-build:
name: Build and Publish Canary Image
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Docker Environment
id: vars
uses: ./.github/actions/setup-docker-environment
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
ghcr_username: ${{ github.actor }}
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
# Considered unstable builds
# See Issue #285, PR #286, and PR #323 for more information
- name: Build and Push
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:canary
ghcr.io/actions-runner-controller/actions-runner-controller:canary
cache-from: type=gha,scope=arc-canary
cache-to: type=gha,mode=max,scope=arc-canary

View File

@@ -1,4 +1,4 @@
name: Publish helm chart
name: Publish Helm Chart
on:
push:
@@ -6,7 +6,7 @@ on:
- master
paths:
- 'charts/**'
- '.github/workflows/on-push-master-publish-chart.yml'
- '.github/workflows/publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
workflow_dispatch:
@@ -20,18 +20,18 @@ permissions:
jobs:
lint-chart:
runs-on: ubuntu-latest
name: Lint Chart
runs-on: ubuntu-latest
outputs:
publish-chart: ${{ steps.publish-chart-step.outputs.publish }}
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@217bf70cbd2e930ba2e81ba7e1de2f7faecc42ba
uses: azure/setup-helm@v3.0
with:
version: ${{ env.HELM_VERSION }}
@@ -52,12 +52,12 @@ jobs:
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@fff15a21cc8b16191cb1249f621fa3a55b9005b8
- uses: actions/setup-python@v4
with:
python-version: 3.7
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@62a185010be4cb08459f7acb19f37927235d5cf3
uses: helm/chart-testing-action@v2.2.1
- name: Run chart-testing (list-changed)
id: list-changed
@@ -68,22 +68,23 @@ jobs:
fi
- name: Run chart-testing (lint)
run: ct lint --config charts/.ci/ct-config.yaml
run: |
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
uses: helm/kind-action@94729529f85113b88f4f819c17ce61382e6d8478
if: steps.list-changed.outputs.changed == 'true'
uses: helm/kind-action@v1.3.0
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
if: steps.list-changed.outputs.changed == 'true'
run: |
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install)
run: ct install --config charts/.ci/ct-config.yaml
if: steps.list-changed.outputs.changed == 'true'
run: ct install --config charts/.ci/ct-config.yaml
# WARNING: This relies on the latest release being inat the top of the JSON from GitHub and a clean chart.yaml
- name: Check if Chart Publish is Needed
@@ -100,16 +101,17 @@ jobs:
fi
publish-chart:
permissions:
contents: write # for helm/chart-releaser-action to push chart release and create a release
if: needs.lint-chart.outputs.publish-chart == 'true'
needs: lint-chart
runs-on: ubuntu-latest
name: Publish Chart
runs-on: ubuntu-latest
permissions:
contents: write # for helm/chart-releaser-action to push chart release and create a release
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -119,7 +121,7 @@ jobs:
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@a3454e46a6f5ac4811069a381e646961dda2e1bf
uses: helm/chart-releaser-action@v1.4.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

View File

@@ -1,26 +1,32 @@
name: "Code Scanning"
name: Run CodeQL
on:
push:
branches: [master]
branches:
- master
pull_request:
branches: [master]
branches:
- master
schedule:
- cron: '30 1 * * 0'
jobs:
CodeQL-Build:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v3.0.2
uses: actions/checkout@v3
- name: Initialize CodeQL
uses: github/codeql-action/init@v2.1.11
uses: github/codeql-action/init@v2
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v2.1.11
uses: github/codeql-action/autobuild@v2
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2.1.11
uses: github/codeql-action/analyze@v2

View File

@@ -1,7 +1,6 @@
name: 'Close stale issues and PRs'
name: Run Stale Bot
on:
schedule:
# 01:30 every day
- cron: '30 1 * * *'
permissions:
@@ -9,12 +8,13 @@ permissions:
jobs:
stale:
permissions:
issues: write # for actions/stale to close stale issues
pull-requests: write # for actions/stale to close stale PRs
name: Run Stale
runs-on: ubuntu-latest
permissions:
issues: write # for actions/stale to close stale issues
pull-requests: write # for actions/stale to close stale PRs
steps:
- uses: actions/stale@65d24b70926a596b0f0098d7e1eb572175d73bc1
- uses: actions/stale@v5
with:
stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
# turn off stale for both issues and PRs

View File

@@ -6,27 +6,37 @@ on:
- opened
- synchronize
- reopened
- closed
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- .github/workflows/runners.yml
- '.github/workflows/runners.yaml'
- '!**.md'
# We must do a trigger on a push: instead of a types: closed so GitHub Secrets
# are available to the workflow run
push:
branches:
- 'master'
paths:
- 'runner/**'
- '!runner/Makefile'
- '.github/workflows/runners.yaml'
- '!**.md'
env:
RUNNER_VERSION: 2.292.0
RUNNER_VERSION: 2.294.0
DOCKER_VERSION: 20.10.12
RUNNER_CONTAINER_HOOKS_VERSION: 0.1.2
DOCKERHUB_USERNAME: summerwind
jobs:
build:
build-runners:
name: Build ${{ matrix.name }}-${{ matrix.os-name }}-${{ matrix.os-version }}
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
name: Build ${{ matrix.name }}-${{ matrix.os-name }}-${{ matrix.os-version }}
strategy:
fail-fast: false
matrix:
@@ -40,7 +50,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
uses: actions/checkout@v3
- name: Setup Docker Environment
id: vars
@@ -52,15 +62,16 @@ jobs:
ghcr_password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push Versioned Tags
uses: docker/build-push-action@c5e6528d5ddefc82f682165021e05edf58044bce
uses: docker/build-push-action@v3
with:
context: ./runner
file: ./runner/${{ matrix.name }}.dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.ref == 'master' && github.event.pull_request.merged == true }}
push: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
build-args: |
RUNNER_VERSION=${{ env.RUNNER_VERSION }}
DOCKER_VERSION=${{ env.DOCKER_VERSION }}
RUNNER_CONTAINER_HOOKS_VERSION=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}
tags: |
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}-${{ steps.vars.outputs.sha_short }}
@@ -68,5 +79,5 @@ jobs:
ghcr.io/${{ github.repository }}/${{ matrix.name }}:latest
ghcr.io/${{ github.repository }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}
ghcr.io/${{ github.repository }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ matrix.os-name }}-${{ matrix.os-version }}-${{ steps.vars.outputs.sha_short }}
cache-from: type=gha
cache-to: type=gha,mode=max
cache-from: type=gha,scope=build-${{ matrix.name }}
cache-to: type=gha,mode=max,scope=build-${{ matrix.name }}

View File

@@ -1,48 +1,59 @@
name: CI
name: Validate ARC
on:
pull_request:
branches:
- master
paths-ignore:
- .github/workflows/runners.yml
- .github/workflows/on-push-lint-charts.yml
- .github/workflows/on-push-master-publish-chart.yml
- .github/workflows/release.yml
- .github/workflows/test-entrypoint.yml
- .github/workflows/wip.yml
- 'runner/**'
- '**.md'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/publish-canary.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
permissions:
contents: read
jobs:
test:
test-controller:
name: Test ARC
runs-on: ubuntu-latest
name: Test
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- uses: actions/setup-go@193b404f8a1d1dccaf6ed9bf03cdb68d2d02020f
uses: actions/checkout@v3
- name: Set-up Go
uses: actions/setup-go@v3
with:
go-version: '1.18.2'
check-latest: false
- run: go version
- uses: actions/cache@95f200e41cfa87b8e07f30196c0df17a67e67786
- uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
- name: Run tests
run: make test
run: |
make test
- name: Verify manifests are up-to-date
run: |
make manifests

View File

@@ -1,10 +1,10 @@
name: Lint and Test Charts
name: Validate Helm Chart
on:
push:
paths:
- 'charts/**'
- '.github/workflows/on-push-lint-charts.yml'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
workflow_dispatch:
@@ -16,17 +16,17 @@ permissions:
contents: read
jobs:
lint-test:
runs-on: ubuntu-latest
validate-chart:
name: Lint Chart
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@217bf70cbd2e930ba2e81ba7e1de2f7faecc42ba
uses: azure/setup-helm@v3.0
with:
version: ${{ env.HELM_VERSION }}
@@ -47,12 +47,12 @@ jobs:
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@fff15a21cc8b16191cb1249f621fa3a55b9005b8
- uses: actions/setup-python@v4
with:
python-version: 3.7
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@62a185010be4cb08459f7acb19f37927235d5cf3
uses: helm/chart-testing-action@v2.2.1
- name: Run chart-testing (list-changed)
id: list-changed
@@ -63,18 +63,20 @@ jobs:
fi
- name: Run chart-testing (lint)
run: ct lint --config charts/.ci/ct-config.yaml
run: |
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
uses: helm/kind-action@94729529f85113b88f4f819c17ce61382e6d8478
uses: helm/kind-action@v1.3.0
if: steps.list-changed.outputs.changed == 'true'
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
if: steps.list-changed.outputs.changed == 'true'
run: |
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
if: steps.list-changed.outputs.changed == 'true'
- name: Run chart-testing (install)
run: ct install --config charts/.ci/ct-config.yaml
run: |
ct install --config charts/.ci/ct-config.yaml

View File

@@ -1,4 +1,4 @@
name: Unit tests for entrypoint
name: Validate Runners
on:
pull_request:
@@ -13,12 +13,13 @@ permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
test-runner-entrypoint:
name: Test entrypoint
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Run unit tests for entrypoint.sh
uses: actions/checkout@v3
- name: Run tests
run: |
make acceptance/runner/entrypoint

View File

@@ -1,54 +0,0 @@
name: Publish Canary Image
on:
push:
branches:
- master
paths-ignore:
- .github/workflows/runners.yml
- .github/workflows/on-push-lint-charts.yml
- .github/workflows/on-push-master-publish-chart.yml
- .github/workflows/release.yml
- .github/workflows/test-entrypoint.yml
- "runner/**"
- "**.md"
- ".gitignore"
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
name: Build and Publish Canary Image
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKER_USER }}
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Set up QEMU
uses: docker/setup-qemu-action@0522dcd2bf084920c411162fde334a308be75015
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@91cb32d715c128e5f0ede915cd7e196ab7799b83
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@d398f07826957cd0a18ea1b059cf1207835e60bc
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
# Considered unstable builds
# See Issue #285, PR #286, and PR #323 for more information
- name: Build and Push
uses: docker/build-push-action@c5e6528d5ddefc82f682165021e05edf58044bce
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:canary

View File

@@ -1,5 +1,5 @@
# Build the manager binary
FROM --platform=$BUILDPLATFORM golang:1.18.2 as builder
FROM --platform=$BUILDPLATFORM golang:1.18.3 as builder
WORKDIR /workspace

View File

@@ -5,7 +5,7 @@ else
endif
DOCKER_USER ?= $(shell echo ${NAME} | cut -d / -f1)
VERSION ?= latest
RUNNER_VERSION ?= 2.292.0
RUNNER_VERSION ?= 2.294.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}

README.md
View File

@@ -15,14 +15,14 @@ ToC:
- [Setting Up Authentication with GitHub API](#setting-up-authentication-with-github-api)
- [Deploying Using GitHub App Authentication](#deploying-using-github-app-authentication)
- [Deploying Using PAT Authentication](#deploying-using-pat-authentication)
- [Deploying Multiple Controllers](#deploying-multiple-controllers)
- [Deploying Multiple Controllers](#deploying-multiple-controllers)
- [Usage](#usage)
- [Repository Runners](#repository-runners)
- [Organization Runners](#organization-runners)
- [Enterprise Runners](#enterprise-runners)
- [RunnerDeployments](#runnerdeployments)
- [RunnerSets](#runnersets)
- [Persistent Runners](#persistent-runners)
- [Persistent Runners](#persistent-runners)
- [Autoscaling](#autoscaling)
- [Anti-Flapping Configuration](#anti-flapping-configuration)
- [Pull Driven Scaling](#pull-driven-scaling)
@@ -30,6 +30,7 @@ ToC:
- [Autoscaling to/from 0](#autoscaling-tofrom-0)
- [Scheduled Overrides](#scheduled-overrides)
- [Runner with DinD](#runner-with-dind)
- [Runner with k8s jobs](#runner-with-k8s-jobs)
- [Additional Tweaks](#additional-tweaks)
- [Custom Volume mounts](#custom-volume-mounts)
- [Runner Labels](#runner-labels)
@@ -223,7 +224,7 @@ Log-in to a GitHub account that has `admin` privileges for the repository, and [
_Note: When you deploy enterprise runners they will get access to organizations, however, access to the repositories themselves is **NOT** allowed by default. Each GitHub organization must allow enterprise runner groups to be used in repositories as an initial one-time configuration step, this only needs to be done once after which it is permanent for that runner group._
_Note: GitHub does not document exactly what permissions you get with each PAT scope beyond a vague description. The best documentation they provide on the topic can be found [here](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps) if you wish to review. The docs target OAuth apps and so are incomplete and may not be 100% accurate._
_Note: GitHub does not document exactly what permissions you get with each PAT scope beyond a vague description. The best documentation they provide on the topic can be found [here](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps) if you wish to review. The docs target OAuth apps and so are incomplete and may not be 100% accurate._
---
@@ -418,7 +419,7 @@ spec:
app: example
```
As it is based on `StatefulSet`, `selector` and `template.medatada.labels` it needs to be defined and have the exact same set of labels. `serviceName` must be set to some non-empty string as it is also required by `StatefulSet`.
As it is based on `StatefulSet`, `selector` and `template.metadata.labels` it needs to be defined and have the exact same set of labels. `serviceName` must be set to some non-empty string as it is also required by `StatefulSet`.
Runner-related fields like `ephemeral`, `repository`, `organization`, `enterprise`, and so on should be written directly under `spec`.
@@ -445,7 +446,7 @@ spec:
securityContext:
# All level/role/type/user values will vary based on your SELinux policies.
# See https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/container_security_guide/docker_selinux_security_policy for information about SELinux with containers
seLinuxOptions:
seLinuxOptions:
level: "s0"
role: "system_r"
type: "super_t"
@@ -515,7 +516,7 @@ A `RunnerDeployment` or `RunnerSet` can scale the number of runners between `min
#### Anti-Flapping Configuration
For both pull driven or webhook driven scaling an anti-flapping implementation is included, by default a runner won't be scaled down within 10 minutes of it having been scaled up.
For both pull driven or webhook driven scaling an anti-flapping implementation is included, by default a runner won't be scaled down within 10 minutes of it having been scaled up.
This anti-flap configuration also has the final say on if a runner can be scaled down or not regardless of the chosen scaling method.
@@ -562,7 +563,7 @@ spec:
> To configure webhook driven scaling see the [Webhook Driven Scaling](#webhook-driven-scaling) section
The pull based metrics are configured in the `metrics` attribute of a HRA (see snippet below). The period between polls is defined by the controller's `--sync-period` flag. If this flag isn't provided then the controller defaults to a sync period of `1m`, this can be configured in seconds or minutes.
The pull based metrics are configured in the `metrics` attribute of a HRA (see snippet below). The period between polls is defined by the controller's `--sync-period` flag. If this flag isn't provided then the controller defaults to a sync period of `1m`, this can be configured in seconds or minutes.
Be aware that the shorter the sync period the quicker you will consume your rate limit budget, depending on your environment this may or may not be a risk. Consider monitoring ARCs rate limit budget when configuring this feature to find the optimal performance sync period.
@@ -580,7 +581,7 @@ spec:
minReplicas: 1
maxReplicas: 5
# Your chosen scaling metrics here
metrics: []
metrics: []
```
**Metric Options:**
@@ -718,7 +719,7 @@ With the above example, the webhook server scales `example-runners` by `1` repli
Of note is the `HRA.spec.scaleUpTriggers[].duration` attribute. This attribute is used to calculate if the replica number added via the trigger is expired or not. On each reconciliation loop, the controller sums up all the non-expiring replica numbers from previous scale-up triggers. It then compares the summed desired replica number against the current replica number. If the summed desired replica number > the current number then it means the replica count needs to scale up.
As mentioned previously, the `scaleDownDelaySecondsAfterScaleOut` property has the final say still. If the latest scale-up time + the anti-flapping duration is later than the current time, it doesnt immediately scale up and instead retries the calculation again later to see if it needs to scale yet.
As mentioned previously, the `scaleDownDelaySecondsAfterScaleOut` property has the final say still. If the latest scale-up time + the anti-flapping duration is later than the current time, it doesnt immediately scale down and instead retries the calculation again later to see if it needs to scale yet.
---
@@ -726,31 +727,164 @@ The primary benefit of autoscaling on Webhooks compared to the pull driven scali
> You can learn the implementation details in [#282](https://github.com/actions-runner-controller/actions-runner-controller/pull/282)
##### Install with Helm
To enable this feature, you first need to install the GitHub webhook server. To install via our Helm chart,
_[see the values documentation for all configuration options](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/charts/actions-runner-controller/README.md)_
```console
$ helm upgrade --install --namespace actions-runner-system --create-namespace \
--wait actions-runner-controller actions-runner-controller/actions-runner-controller \
--set "githubWebhookServer.enabled=true,githubWebhookServer.ports[0].nodePort=33080"
--set "githubWebhookServer.enabled=true,service.type=NodePort,githubWebhookServer.ports[0].nodePort=33080"
```
The above command will result in exposing the node port 33080 for Webhook events. Usually, you need to create an
external load balancer targeted to the node port, and register the hostname or the IP address of the external load balancer
to the GitHub Webhook.
The above command will result in exposing the node port 33080 for Webhook events.
Usually, you need to create an external load balancer targeted to the node port,
and register the hostname or the IP address of the external load balancer to the GitHub Webhook.
Once you were able to confirm that the Webhook server is ready and running from GitHub - this is usually verified by the
GitHub sending PING events to the Webhook server - create or update your `HorizontalRunnerAutoscaler` resources
by learning the following configuration examples.
**With a custom Kubernetes ingress controller:**
> **CAUTION:** The Kubernetes ingress controllers described below is just a suggestion from the community and
> the ARC team will not provide any user support for ingress controllers as it's not a part of this project.
>
> The following guide on creating an ingress has been contributed by the awesome ARC community and is provided here as-is.
> You may, however, still be able to ask for help on the community on GitHub Discussions if you have any problems.
Kubernetes provides `Ingress` resources to let you configure your ingress controller to expose a Kubernetes service.
If you plan to expose ARC via Ingress, you might not be required to make it a `NodePort` service
(although nothing would prevent an ingress controller to expose NodePort services too):
```console
$ helm upgrade --install --namespace actions-runner-system --create-namespace \
--wait actions-runner-controller actions-runner-controller/actions-runner-controller \
--set "githubWebhookServer.enabled=true"
```
The command above will create a new deployment and a service for receiving Github Webhooks on the `actions-runner-system` namespace.
Now we need to expose this service so that GitHub can send these webhooks over the network with TSL protection.
You can do it in any way you prefer, here we'll suggest doing it with a k8s Ingress.
For the sake of this example we'll expose this service on the following URL:
- https://your.domain.com/actions-runner-controller-github-webhook-server
Where `your.domain.com` should be replaced by your own domain.
> Note: This step assumes you already have a configured `cert-manager` and domain name for your cluster.
Let's start by creating an Ingress file called `arc-webhook-server.yaml` with the following contents:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: actions-runner-controller-github-webhook-server
namespace: actions-runner-system
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
tls:
- hosts:
- your.domain.com
secretName: your-tls-secret-name
rules:
- http:
paths:
- path: /actions-runner-controller-github-webhook-server
pathType: Prefix
backend:
service:
name: actions-runner-controller-github-webhook-server
port:
number: 80
```
Make sure to set the `spec.tls.secretName` to the name of your TLS secret and
`spec.tls.hosts[0]` to your own domain.
Then create this resource on your cluster with the following command:
```bash
kubectl apply -n actions-runner-system -f arc-webhook-server.yaml
```
**Configuring GitHub for sending webhooks for our newly created webhook server:**
After this step your webhook server should be ready to start receiving webhooks from GitHub.
To configure GitHub to start sending you webhooks, go to the settings page of your repository
or organization then click on `Webhooks`, then on `Add webhook`.
There set the "Payload URL" field with the webhook URL you just created,
if you followed the example ingress above the URL would be something like this:
- https://your.domain.com/actions-runner-controller-github-webhook-server
> Remember to replace `your.domain.com` with your own domain.
Then click on "let me select individual events" and choose `Workflow Jobs`.
You may also want to choose the following event(s) if you use it as a scale trigger in your HRA spec:
- Check runs
- Pushes
- Pull Requests
Later you can remove any of these you are not using to reduce the amount of data sent to your server.
Then click on `Add Webhook`.
GitHub will then send a `ping` event to your webhook server to check if it is working, if it is you'll see a green V mark
alongside your webhook on the Settings -> Webhooks page.
Once you were able to confirm that the Webhook server is ready and running from GitHub create or update your
`HorizontalRunnerAutoscaler` resources by learning the following configuration examples.
##### Install with Kustomize
To install this feature using Kustomize, add `github-webhook-server` resources to your `kustomization.yaml` file as in the example below:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
# You should already have this
- github.com/actions-runner-controller/actions-runner-controller/config//default?ref=v0.22.2
# Add the below!
- github.com/actions-runner-controller/actions-runner-controller/config//github-webhook-server?ref=v0.22.2
Finally, you will have to configure an ingress so that you may configure the webhook in github. An example of such ingress can be find below:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: actions-runners-webhook-server
spec:
rules:
- http:
paths:
- path: /
backend:
service:
name: github-webhook-server
port:
number: 80
pathType: Exact
```
##### Examples
- [Example 1: Scale on each `workflow_job` event](#example-1-scale-on-each-workflow_job-event)
- [Example 2: Scale up on each `check_run` event](#example-2-scale-up-on-each-check_run-event)
- [Example 3: Scale on each `pull_request` event against a given set of branches](#example-3-scale-on-each-pull_request-event-against-a-given-set-of-branches)
- [Example 4: Scale on each `push` event](#example-4-scale-on-each-push-event)
**Note:** All these examples should have **minReplicas** & **maxReplicas** as mandatory parameters even for webhook driven scaling.
##### Example 1: Scale on each `workflow_job` event
###### Example 1: Scale on each `workflow_job` event
> This feature requires controller version => [v0.20.0](https://github.com/actions-runner-controller/actions-runner-controller/releases/tag/v0.20.0)
@@ -761,16 +895,23 @@ The most flexible webhook GitHub offers is the `workflow_job` webhook, it includ
This webhook should cover most people's needs, please experiment with this webhook first before considering the others.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runners
name: example-runners
spec:
template:
spec:
repository: example/myrepo
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runners
spec:
scaleDownDelaySecondsAfterScaleOut: 300
minReplicas: 1
maxReplicas: 10
scaleTargetRef:
name: example-runners
# Uncomment the below in case the target is not RunnerDeployment but RunnerSet
@@ -787,7 +928,7 @@ You can configure your GitHub webhook settings to only include `Workflows Job` e
Each kind has a `status` of `queued`, `in_progress` and `completed`. With the above configuration, `actions-runner-controller` adds one runner for a `workflow_job` event whose `status` is `queued`. Similarly, it removes one runner for a `workflow_job` event whose `status` is `completed`. The caveat to this to remember is that this scale-down is within the bounds of your `scaleDownDelaySecondsAfterScaleOut` configuration, if this time hasn't passed the scale down will be deferred.
##### Example 2: Scale up on each `check_run` event
###### Example 2: Scale up on each `check_run` event
> Note: This should work almost like https://github.com/philips-labs/terraform-aws-github-runner
@@ -804,6 +945,8 @@ spec:
---
kind: HorizontalRunnerAutoscaler
spec:
minReplicas: 1
maxReplicas: 10
scaleTargetRef:
name: example-runners
# Uncomment the below in case the target is not RunnerDeployment but RunnerSet
@@ -830,6 +973,8 @@ spec:
---
kind: HorizontalRunnerAutoscaler
spec:
minReplicas: 1
maxReplicas: 10
scaleTargetRef:
name: example-runners
# Uncomment the below in case the target is not RunnerDeployment but RunnerSet
@@ -845,7 +990,7 @@ spec:
duration: "5m"
```
##### Example 3: Scale on each `pull_request` event against a given set of branches
###### Example 3: Scale on each `pull_request` event against a given set of branches
To scale up replicas of the runners for `example/myrepo` by 1 for 5 minutes on each `pull_request` against the `main` or `develop` branch you write manifests like the below:
@@ -860,6 +1005,8 @@ spec:
---
kind: HorizontalRunnerAutoscaler
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    name: example-runners
    # Uncomment the below in case the target is not RunnerDeployment but RunnerSet
@@ -888,6 +1035,8 @@ spec:
---
kind: HorizontalRunnerAutoscaler
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    name: example-runners
    # Uncomment the below in case the target is not RunnerDeployment but RunnerSet
@@ -1016,6 +1165,36 @@ spec:
This also helps with resources, as you don't need to allocate resources separately for Docker and the runner.
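As an illustration only, a RunnerDeployment that runs dockerd inside the runner container and sizes everything with a single resource block might look like the sketch below (the name, repository, and resource values are placeholders):
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-dind-runners
spec:
  template:
    spec:
      repository: example/myrepo
      dockerdWithinRunnerContainer: true
      resources:
        limits:
          cpu: "2"
          memory: 4Gi
```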
### Runner with K8s Jobs
When using the default runner, jobs that use a container will run in Docker. This necessitates privileged mode, either on the runner pod or the sidecar container.
By setting the container mode, you can instead invoke these jobs using a [kubernetes implementation](https://github.com/actions/runner-container-hooks/tree/main/packages/k8s) without executing in privileged mode.
The runner will dynamically spin up pods and k8s jobs in the runner's namespace to run the workflow, so a `workVolumeClaimTemplate` is required for the runner's working directory, along with a service account that has the [appropriate permissions](https://github.com/actions/runner-container-hooks/tree/main/packages/k8s#pre-requisites).
There are some [limitations](https://github.com/actions/runner-container-hooks/tree/main/packages/k8s#limitations) to this approach; most notably, [job containers](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) are required on all workflows.
```yaml
# runner.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example-runner
spec:
  repository: example/myrepo
  containerMode: kubernetes
  serviceAccountName: my-service-account
  workVolumeClaimTemplate:
    storageClassName: "my-dynamic-storage-class"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  env: []
```
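Because job containers are mandatory in this mode, every workflow job that targets these runners needs a `container:` entry. A minimal workflow sketch (the image and runner labels are placeholders):
```yaml
# .github/workflows/test.yaml
on: push
jobs:
  test:
    runs-on: self-hosted
    container:
      image: node:16
    steps:
      - uses: actions/checkout@v3
      - run: npm test
```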
### Additional Tweaks
You can pass details through the spec selector. Here's an example of what you may like to do:
@@ -1033,6 +1212,7 @@ spec:
annotations:
cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
priorityClassName: "high"
nodeSelector:
node-role.kubernetes.io/test: ""
@@ -1105,7 +1285,7 @@ spec:
# Valid only when dockerdWithinRunnerContainer=false
dockerEnv:
- name: HTTP_PROXY
value: http://example.com
value: http://example.com
# Docker sidecar container image tweaks examples below, only applicable if dockerdWithinRunnerContainer = false
dockerdContainerResources:
limits:
@@ -1284,7 +1464,7 @@ spec:
volumeMounts:
- name: var-lib-docker
mountPath: /var/lib/docker
volumeClaimtemplates:
volumeClaimTemplates:
- metadata:
name: var-lib-docker
spec:
@@ -1470,12 +1650,6 @@ spec:
# Disables automatic runner updates
- name: DISABLE_RUNNER_UPDATE
value: "true"
# Configure runner with legacy --once instead of --ephemeral flag
# WARNING | THIS ENV VAR IS DEPRECATED AND WILL BE REMOVED
# IN A FUTURE VERSION OF ARC.
# THIS ENV VAR WILL BE REMOVED, SEE ISSUE #1196 FOR DETAILS
- name: RUNNER_FEATURE_FLAG_ONCE
value: "true"
```
### Using IRSA (IAM Roles for Service Accounts) in EKS

View File

@@ -2,8 +2,9 @@
* [Tools](#tools)
* [Installation](#installation)
* [InternalError when calling webhook: context deadline exceeded](#internalerror-when-calling-webhook-context-deadline-exceeded)
* [Invalid header field value](#invalid-header-field-value)
* [Deployment fails on GKE due to webhooks](#deployment-fails-on-gke-due-to-webhooks)
* [Helm chart install failure: certificate signed by unknown authority](#helm-chart-install-failure-certificate-signed-by-unknown-authority)
* [Operations](#operations)
* [Stuck runner kind or backing pod](#stuck-runner-kind-or-backing-pod)
* [Delay in jobs being allocated to runners](#delay-in-jobs-being-allocated-to-runners)
@@ -22,39 +23,37 @@ A list of tools which are helpful for troubleshooting
Troubleshooting runbooks that relate to ARC installation problems
### Invalid header field value
### InternalError when calling webhook: context deadline exceeded
**Problem**
This issue can come up for various reasons, such as leftovers from a previous installation, or the control plane being unable to reach the clusterIP of the K8s Service that backs ARC's admission webhook server.
```
Internal error occurred: failed calling webhook "mutate.runnerdeployment.actions.summerwind.dev":
Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment?timeout=10s": context deadline exceeded
```
**Solution**
First, try the common fix of checking for webhook leftovers from previous installations:
1. ```bash
kubectl get validatingwebhookconfiguration -A
kubectl get mutatingwebhookconfiguration -A
```
2. If you see any webhooks related to actions-runner-controller, delete them:
```bash
kubectl delete mutatingwebhookconfiguration actions-runner-controller-mutating-webhook-configuration
kubectl delete validatingwebhookconfiguration actions-runner-controller-validating-webhook-configuration
```
If that didn't work, your K8s control plane is probably unable to reach the clusterIP of the K8s Service associated with the admission webhook server (a quick connectivity check is sketched after the list below). Common causes:
1. You're running the apiserver as a binary and didn't make service cluster IPs reachable from the host network.
2. You're running the apiserver in a pod, but your pod network (i.e. the CNI plugin installation and config) is broken, so pods on the K8s control-plane nodes (like kube-apiserver) can't reach ARC's admission webhook server pod(s), which usually run on data-plane nodes.
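To narrow down which of the two it is, confirm that the webhook Service has endpoints and responds from inside the cluster. A rough check, reusing the default names from the error message above:
```bash
# Does the webhook Service exist and have ready endpoints?
kubectl -n actions-runner-system get svc,endpoints actions-runner-controller-webhook

# Can an in-cluster pod reach it? A TLS or 4xx response is fine here; a timeout reproduces the problem.
kubectl run webhook-probe --rm -it --restart=Never --image=curlimages/curl -- \
  curl -vk --max-time 5 https://actions-runner-controller-webhook.actions-runner-system.svc:443/
```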
### Deployment fails on GKE due to webhooks
**Problem**
Due to GKE's firewall settings, you may run into the following errors when trying to deploy runners on a private GKE cluster:
```
Internal error occurred: failed calling webhook "mutate.runner.actions.summerwind.dev":
Post https://webhook-service.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runner?timeout=10s:
context deadline exceeded
```
**Solution**<br />
To fix this, you may either:
@@ -88,6 +87,57 @@ To fix this, you may either:
gcloud compute firewall-rules create k8s-cert-manager --source-ranges $SOURCE --target-tags $WORKER_NODES_TAG --allow TCP:9443 --network $NETWORK
```
### Invalid header field value
**Problem**
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error
{
"controller": "runner",
"request": "actions-runner-system/runner-deployment-dk7q8-dk5c9",
"error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"
}
```
**Solution**
Your base64'd PAT has a newline at the end; it needs to be created without a trailing `\n`. A quick way to verify an existing secret is sketched after this list. Either:
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs using the shell and documented flags
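If you want to double-check an existing secret for a trailing newline, here is a rough sketch that assumes the `controller-manager` secret name and `github_token` key from the installation docs:
```bash
# A trailing \n in the decoded output means the token was base64'ed together with a newline
kubectl -n actions-runner-system get secret controller-manager \
  -o jsonpath='{.data.github_token}' | base64 -d | od -c | tail -n 2
```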
### Helm chart install failure: certificate signed by unknown authority
**Problem**
```
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
```
Apparently, it fails while `helm` is creating one of the resources defined in the ARC chart, and the cause is that cert-manager's webhook is not working correctly due to a missing or invalid CA certificate.
If you tail the logs from `cert-manager-cainjector`, you may see it failing with an error like:
```
$ kubectl -n cert-manager logs cert-manager-cainjector-7cdbb9c945-g6bt4
I0703 03:31:55.159339 1 start.go:91] "starting" version="v1.1.1" revision="3ac7418070e22c87fae4b22603a6b952f797ae96"
I0703 03:31:55.615061 1 leaderelection.go:243] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election...
I0703 03:32:10.738039 1 leaderelection.go:253] successfully acquired lease kube-system/cert-manager-cainjector-leader-election
I0703 03:32:10.739941 1 recorder.go:52] cert-manager/controller-runtime/manager/events "msg"="Normal" "message"="cert-manager-cainjector-7cdbb9c945-g6bt4_88e4bc70-eded-4343-a6fb-0ddd6434eb55 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"942a021e-364c-461a-978c-f54a95723cdc","apiVersion":"v1","resourceVersion":"1576"} "reason"="LeaderElection"
E0703 03:32:11.192128 1 start.go:119] cert-manager/ca-injector "msg"="manager goroutine exited" "error"=null
I0703 03:32:12.339197 1 request.go:645] Throttling request took 1.047437675s, request: GET:https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
E0703 03:32:13.143790 1 start.go:151] cert-manager/ca-injector "msg"="Error registering certificate based controllers. Retrying after 5 seconds." "error"="no matches for kind \"MutatingWebhookConfiguration\" in version \"admissionregistration.k8s.io/v1beta1\""
Error: error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
```
**Solution**
Your cluster is running Kubernetes 1.22 or greater, which no longer supports the legacy `admissionregistration.k8s.io/v1beta1` API, and your `cert-manager` is not up-to-date, so it's still trying to use the legacy Kubernetes API.
In many cases, downgrading Kubernetes is not an option. So, just upgrade `cert-manager` to a more recent version that supports the specific Kubernetes version you're using.
See https://cert-manager.io/docs/installation/supported-releases/ for the list of available cert-manager versions.
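As a sketch of what that check and upgrade can look like with Helm (this assumes the jetstack chart repository and that letting the chart manage the CRDs is acceptable in your cluster):
```bash
# Check which cert-manager version is currently running
kubectl -n cert-manager get deploy cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Upgrade to a release that supports your Kubernetes version
helm repo add jetstack https://charts.jetstack.io && helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --set installCRDs=true
```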
## Operations
Troubleshooting runbooks that relate to ARC operational problems

97
acceptance/argotunnel.sh Executable file
View File

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
# See https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel/
kubectl create ns tunnel || :
kubectl -n tunnel delete secret tunnel-credentials || :
kubectl -n tunnel create secret generic tunnel-credentials \
--from-file=credentials.json=$HOME/.cloudflared/${TUNNEL_ID}.json || :
cat <<MANIFEST | kubectl -n tunnel ${OP} -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  selector:
    matchLabels:
      app: cloudflared
  replicas: 2 # You could also consider elastic scaling for this deployment
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
      - name: cloudflared
        image: cloudflare/cloudflared:latest
        args:
        - tunnel
        # Points cloudflared to the config file, which configures what
        # cloudflared will actually do. This file is created by a ConfigMap
        # below.
        - --config
        - /etc/cloudflared/config/config.yaml
        - run
        livenessProbe:
          httpGet:
            # Cloudflared has a /ready endpoint which returns 200 if and only if
            # it has an active connection to the edge.
            path: /ready
            port: 2000
          failureThreshold: 1
          initialDelaySeconds: 10
          periodSeconds: 10
        volumeMounts:
        - name: config
          mountPath: /etc/cloudflared/config
          readOnly: true
        # Each tunnel has an associated "credentials file" which authorizes machines
        # to run the tunnel. cloudflared will read this file from its local filesystem,
        # and it'll be stored in a k8s secret.
        - name: creds
          mountPath: /etc/cloudflared/creds
          readOnly: true
      volumes:
      - name: creds
        secret:
          secretName: tunnel-credentials
      # Create a config.yaml file from the ConfigMap below.
      - name: config
        configMap:
          name: cloudflared
          items:
          - key: config.yaml
            path: config.yaml
---
# This ConfigMap is just a way to define the cloudflared config.yaml file in k8s.
# It's useful to define it in k8s, rather than as a stand-alone .yaml file, because
# this lets you use various k8s templating solutions (e.g. Helm charts) to
# parameterize your config, instead of just using string literals.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudflared
data:
  config.yaml: |
    # Name of the tunnel you want to run
    tunnel: ${TUNNEL_NAME}
    credentials-file: /etc/cloudflared/creds/credentials.json
    # Serves the metrics server under /metrics and the readiness server under /ready
    metrics: 0.0.0.0:2000
    # Autoupdates applied in a k8s pod will be lost when the pod is removed or restarted, so
    # autoupdate doesn't make sense in Kubernetes. However, outside of Kubernetes, we strongly
    # recommend using autoupdate.
    no-autoupdate: true
    ingress:
    # The first rule proxies traffic to ARC's github-webhook-server Service in the actions-runner-system namespace
    - hostname: ${TUNNEL_HOSTNAME}
      service: http://actions-runner-controller-github-webhook-server.actions-runner-system:80
    # This rule matches any traffic which didn't match a previous rule, and responds with HTTP 404.
    - service: http_status:404
MANIFEST
kubectl -n tunnel delete po -l app=cloudflared || :
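For reference, one way this script might be invoked after the tunnel has been created with `cloudflared tunnel create` (all values below are placeholders; `OP` is the kubectl verb to run, typically `apply` or `delete`):
```bash
TUNNEL_ID=<your-tunnel-id> \
TUNNEL_NAME=arc-webhook \
TUNNEL_HOSTNAME=arc-webhook.example.com \
OP=apply \
  ./acceptance/argotunnel.sh
```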

View File

@@ -51,6 +51,9 @@ if [ "${tool}" == "helm" ]; then
--set image.tag=${VERSION} \
--set podAnnotations.test-id=${TEST_ID} \
--set githubWebhookServer.podAnnotations.test-id=${TEST_ID} \
--set imagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
--set image.actionsRunnerImagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
--set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
-f ${VALUES_FILE}
set +v
# To prevent `CustomResourceDefinition.apiextensions.k8s.io "runners.actions.summerwind.dev" is invalid: metadata.annotations: Too long: must have at most 262144 bytes`
@@ -76,56 +79,3 @@ kubectl -n actions-runner-system wait deploy/actions-runner-controller --for con
# Adhocly wait for some time until actions-runner-controller's admission webhook gets ready
sleep 20
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
if [ -n "${TEST_REPO}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerset envsubst | kubectl apply -f -
else
echo 'Deploying runnerdeployment and hra. Set USE_RUNNERSET if you want to deploy runnerset instead.'
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerdeploy envsubst | kubectl apply -f -
fi
else
echo 'Skipped deploying runnerdeployment and hra. Set TEST_REPO to "yourorg/yourrepo" to deploy.'
fi
if [ -n "${TEST_ORG}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerdeploy envsubst | kubectl apply -f -
fi
if [ -n "${TEST_ORG_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerdeploy envsubst | kubectl apply -f -
fi
else
echo 'Skipped deploying enterprise runnerdeployment. Set TEST_ORG_GROUP to deploy.'
fi
else
echo 'Skipped deploying organizational runnerdeployment. Set TEST_ORG to deploy.'
fi
if [ -n "${TEST_ENTERPRISE}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerdeploy envsubst | kubectl apply -f -
fi
if [ -n "${TEST_ENTERPRISE_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerset envsubst | kubectl apply -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerdeploy envsubst | kubectl apply -f -
fi
else
echo 'Skipped deploying enterprise runnerdeployment. Set TEST_ENTERPRISE_GROUP to deploy.'
fi
else
echo 'Skipped deploying enterprise runnerdeployment. Set TEST_ENTERPRISE to deploy.'
fi

58
acceptance/deploy_runners.sh Executable file
View File

@@ -0,0 +1,58 @@
#!/usr/bin/env bash
set -e
OP=${OP:-apply}
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
if [ -n "${TEST_REPO}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerset envsubst | kubectl ${OP} -f -
else
echo "Running ${OP} runnerdeployment and hra. Set USE_RUNNERSET if you want to deploy runnerset instead."
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} for runnerdeployment and hra. Set TEST_REPO to "yourorg/yourrepo" to deploy."
fi
if [ -n "${TEST_ORG}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ORG_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ORG_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on organizational runnerdeployment. Set TEST_ORG to ${OP}."
fi
if [ -n "${TEST_ENTERPRISE}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ENTERPRISE_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE to ${OP}."
fi
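Because every kubectl call is parameterized by `OP`, the same script covers both setup and teardown. A usage sketch (the repository name is a placeholder):
```bash
# Deploy a repository-scoped RunnerDeployment and HRA for testing
TEST_REPO=yourorg/yourrepo REPO_RUNNER_MIN_REPLICAS=1 USE_RUNNERSET=false OP=apply ./acceptance/deploy_runners.sh

# Tear the same resources down again
TEST_REPO=yourorg/yourrepo REPO_RUNNER_MIN_REPLICAS=1 USE_RUNNERSET=false OP=delete ./acceptance/deploy_runners.sh
```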

View File

@@ -122,8 +122,10 @@ spec:
value: "/home/runner/.cache/go-mod"
# PV-backed runner work dir
volumeMounts:
- name: work
mountPath: /runner/_work
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# The volume and mount with the same names will be created by workVolumeClaimTemplate and the kubernetes container mode support.
# - name: work
# mountPath: /runner/_work
# Cache docker image layers, in case dockerdWithinRunnerContainer=true
- name: var-lib-docker
mountPath: /var/lib/docker
@@ -163,17 +165,18 @@ spec:
# For buildx cache
- name: cache
mountPath: "/home/runner/.cache"
volumes:
- name: work
ephemeral:
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
storageClassName: "${NAME}-runner-work-dir"
resources:
requests:
storage: 10Gi
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# volumes:
# - name: work
# ephemeral:
# volumeClaimTemplate:
# spec:
# accessModes:
# - ReadWriteOnce
# storageClassName: "${NAME}-runner-work-dir"
# resources:
# requests:
# storage: 10Gi
volumeClaimTemplates:
- metadata:
name: vol1
@@ -251,3 +254,10 @@ spec:
minReplicas: ${RUNNER_MIN_REPLICAS}
maxReplicas: 10
scaleDownDelaySecondsAfterScaleOut: ${RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT}
# Comment out the whole metrics if you'd like to solely test webhook-based scaling
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75'
scaleDownThreshold: '0.25'
scaleUpFactor: '2'
scaleDownFactor: '0.5'

View File

@@ -1,6 +1,13 @@
# Set actions-runner-controller settings for testing
logLevel: "-4"
imagePullSecrets:
- name:
image:
actionsRunnerImagePullSecrets:
- name:
githubWebhookServer:
imagePullSecrets:
- name:
logLevel: "-4"
enabled: true
labels: {}

View File

@@ -18,8 +18,10 @@ package v1alpha1
import (
"errors"
"fmt"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/validation/field"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -71,6 +73,9 @@ type RunnerConfig struct {
VolumeSizeLimit *resource.Quantity `json:"volumeSizeLimit,omitempty"`
// +optional
VolumeStorageMedium *string `json:"volumeStorageMedium,omitempty"`
// +optional
ContainerMode string `json:"containerMode,omitempty"`
}
// RunnerPodSpec defines the desired pod spec fields of the runner pod
@@ -135,6 +140,9 @@ type RunnerPodSpec struct {
// +optional
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
@@ -154,10 +162,37 @@ type RunnerPodSpec struct {
// +optional
DnsConfig *corev1.PodDNSConfig `json:"dnsConfig,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
}
func (rs *RunnerSpec) Validate(rootPath *field.Path) field.ErrorList {
var (
errList field.ErrorList
err error
)
err = rs.validateRepository()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("repository"), rs.Repository, err.Error()))
}
err = rs.validateWorkVolumeClaimTemplate()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("workVolumeClaimTemplate"), rs.WorkVolumeClaimTemplate, err.Error()))
}
err = rs.validateIsServiceAccountNameSet()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("serviceAccountName"), rs.ServiceAccountName, err.Error()))
}
return errList
}
// ValidateRepository validates repository field.
func (rs *RunnerSpec) ValidateRepository() error {
func (rs *RunnerSpec) validateRepository() error {
// Enterprise, Organization and repository are both exclusive.
foundCount := 0
if len(rs.Organization) > 0 {
@@ -179,6 +214,29 @@ func (rs *RunnerSpec) ValidateRepository() error {
return nil
}
func (rs *RunnerSpec) validateWorkVolumeClaimTemplate() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.WorkVolumeClaimTemplate == nil {
return errors.New("Spec.ContainerMode: kubernetes must have workVolumeClaimTemplate field specified")
}
return rs.WorkVolumeClaimTemplate.validate()
}
func (rs *RunnerSpec) validateIsServiceAccountNameSet() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.ServiceAccountName == "" {
return errors.New("service account name is required if container mode is kubernetes")
}
return nil
}
// RunnerStatus defines the observed state of Runner
type RunnerStatus struct {
// Turns true only if the runner pod is ready.
@@ -207,6 +265,51 @@ type RunnerStatusRegistration struct {
ExpiresAt metav1.Time `json:"expiresAt"`
}
type WorkVolumeClaimTemplate struct {
StorageClassName string `json:"storageClassName"`
AccessModes []corev1.PersistentVolumeAccessMode `json:"accessModes"`
Resources corev1.ResourceRequirements `json:"resources"`
}
func (w *WorkVolumeClaimTemplate) validate() error {
if w.AccessModes == nil || len(w.AccessModes) == 0 {
return errors.New("Access mode should have at least one mode specified")
}
for _, accessMode := range w.AccessModes {
switch accessMode {
case corev1.ReadWriteOnce, corev1.ReadWriteMany:
default:
return fmt.Errorf("Access mode %v is not supported", accessMode)
}
}
return nil
}
func (w *WorkVolumeClaimTemplate) V1Volume() corev1.Volume {
return corev1.Volume{
Name: "work",
VolumeSource: corev1.VolumeSource{
Ephemeral: &corev1.EphemeralVolumeSource{
VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: w.AccessModes,
StorageClassName: &w.StorageClassName,
Resources: w.Resources,
},
},
},
},
}
}
func (w *WorkVolumeClaimTemplate) V1VolumeMount(mountPath string) corev1.VolumeMount {
return corev1.VolumeMount{
MountPath: mountPath,
Name: "work",
}
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.enterprise",name=Enterprise,type=string

View File

@@ -66,15 +66,7 @@ func (r *Runner) ValidateDelete() error {
// Validate validates resource spec.
func (r *Runner) Validate() error {
var (
errList field.ErrorList
err error
)
err = r.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "repository"), r.Spec.Repository, err.Error()))
}
errList := r.Spec.Validate(field.NewPath("spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -66,15 +66,7 @@ func (r *RunnerDeployment) ValidateDelete() error {
// Validate validates resource spec.
func (r *RunnerDeployment) Validate() error {
var (
errList field.ErrorList
err error
)
err = r.Spec.Template.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "template", "spec", "repository"), r.Spec.Template.Spec.Repository, err.Error()))
}
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -66,15 +66,7 @@ func (r *RunnerReplicaSet) ValidateDelete() error {
// Validate validates resource spec.
func (r *RunnerReplicaSet) Validate() error {
var (
errList field.ErrorList
err error
)
err = r.Spec.Template.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "template", "spec", "repository"), r.Spec.Template.Spec.Repository, err.Error()))
}
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -33,6 +33,12 @@ type RunnerSetSpec struct {
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime,omitempty"`
// +optional
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
appsv1.StatefulSetSpec `json:",inline"`
}

View File

@@ -741,6 +741,11 @@ func (in *RunnerPodSpec) DeepCopyInto(out *RunnerPodSpec) {
*out = new(v1.PodDNSConfig)
(*in).DeepCopyInto(*out)
}
if in.WorkVolumeClaimTemplate != nil {
in, out := &in.WorkVolumeClaimTemplate, &out.WorkVolumeClaimTemplate
*out = new(WorkVolumeClaimTemplate)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerPodSpec.
@@ -939,6 +944,11 @@ func (in *RunnerSetSpec) DeepCopyInto(out *RunnerSetSpec) {
in, out := &in.EffectiveTime, &out.EffectiveTime
*out = (*in).DeepCopy()
}
if in.WorkVolumeClaimTemplate != nil {
in, out := &in.WorkVolumeClaimTemplate, &out.WorkVolumeClaimTemplate
*out = new(WorkVolumeClaimTemplate)
(*in).DeepCopyInto(*out)
}
in.StatefulSetSpec.DeepCopyInto(&out.StatefulSetSpec)
}
@@ -1126,6 +1136,27 @@ func (in *ScheduledOverride) DeepCopy() *ScheduledOverride {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkVolumeClaimTemplate) DeepCopyInto(out *WorkVolumeClaimTemplate) {
*out = *in
if in.AccessModes != nil {
in, out := &in.AccessModes, &out.AccessModes
*out = make([]v1.PersistentVolumeAccessMode, len(*in))
copy(*out, *in)
}
in.Resources.DeepCopyInto(&out.Resources)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkVolumeClaimTemplate.
func (in *WorkVolumeClaimTemplate) DeepCopy() *WorkVolumeClaimTemplate {
if in == nil {
return nil
}
out := new(WorkVolumeClaimTemplate)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkflowJobSpec) DeepCopyInto(out *WorkflowJobSpec) {
*out = *in

View File

@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.19.0
version: 0.20.0
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.24.0
appVersion: 0.25.0
home: https://github.com/actions-runner-controller/actions-runner-controller

View File

@@ -1,11 +1,11 @@
{{- if .Values.githubWebhookServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-github-webhook-server.fullname" . -}}
{{- $svcPort := (index .Values.githubWebhookServer.service.ports 0).port -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else if .Capabilities.APIVersions.Has "extensions/v1beta1" }}
{{- else if .Capabilities.APIVersions.Has "extensions/v1beta1/Ingress" }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
@@ -42,11 +42,11 @@ spec:
{{- end }}
{{- range .paths }}
- path: {{ .path }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1" }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1" }}
{{- if $.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
service:
name: {{ $fullName }}
port:

View File

@@ -12,5 +12,17 @@ data:
{{- if .Values.githubWebhookServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.githubWebhookServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_id }}
github_app_id: {{ .Values.githubWebhookServer.secret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_installation_id }}
github_app_installation_id: {{ .Values.githubWebhookServer.secret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_app_private_key }}
github_app_private_key: {{ .Values.githubWebhookServer.secret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.githubWebhookServer.secret.github_token }}
github_token: {{ .Values.githubWebhookServer.secret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -250,3 +250,11 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch

View File

@@ -109,7 +109,7 @@ metrics:
enabled: true
image:
repository: quay.io/brancz/kube-rbac-proxy
tag: v0.12.0
tag: v0.13.0
resources:
{}
@@ -183,6 +183,13 @@ githubWebhookServer:
name: "github-webhook-server"
### GitHub Webhook Configuration
github_webhook_secret_token: ""
### GitHub Apps Configuration
## NOTE: IDs MUST be strings, use quotes
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
#github_token: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

View File

@@ -72,6 +72,7 @@ func main() {
enableLeaderElection bool
syncPeriod time.Duration
logLevel string
queueLimit int
ghClient *github.Client
)
@@ -92,6 +93,7 @@ func main() {
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set to a lower value like 10 minute, because this corresponds to the minimum time to react on demand change")
flag.StringVar(&logLevel, "log-level", logging.LogLevelDebug, `The verbosity of the logging. Valid values are "debug", "info", "warn", "error". Defaults to "debug".`)
flag.IntVar(&queueLimit, "queue-limit", controllers.DefaultQueueLimit, `The maximum length of the scale operation queue. A scale operation is enqueued for every matching webhook event, and the server returns a 500 HTTP status when the queue is already full at enqueue time.`)
flag.StringVar(&webhookSecretToken, "github-webhook-secret-token", "", "The personal access token of GitHub.")
flag.StringVar(&c.Token, "github-token", c.Token, "The personal access token of GitHub.")
flag.Int64Var(&c.AppID, "github-app-id", c.AppID, "The application ID of GitHub App.")
@@ -164,6 +166,7 @@ func main() {
SecretKeyBytes: []byte(webhookSecretToken),
Namespace: watchNamespace,
GitHubClient: ghClient,
QueueLimit: queueLimit,
}
if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil {

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -22,8 +22,6 @@ bases:
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus
# [GH_WEBHOOK_SERVER] To enable the GitHub webhook server, uncomment all sections with 'GH_WEBHOOK_SERVER'.
#- ../github-webhook-server
patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
@@ -46,10 +44,6 @@ patchesStrategicMerge:
# 'CERTMANAGER' needs to be enabled to use ca injection
- webhookcainjection_patch.yaml
# [GH_WEBHOOK_SERVER] To enable the GitHub webhook server, uncomment all sections with 'GH_WEBHOOK_SERVER'.
# Protect the GitHub webhook server metrics endpoint by putting it behind auth.
# - gh-webhook-server-auth-proxy-patch.yaml
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.

View File

@@ -2,11 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: controller
newName: summerwind/actions-runner-controller
newTag: latest
- name: controller
newName: summerwind/actions-runner-controller
newTag: latest
resources:
- deployment.yaml
- rbac.yaml
- service.yaml
- deployment.yaml
- rbac.yaml
- service.yaml
patchesStrategicMerge:
- gh-webhook-server-auth-proxy-patch.yaml

View File

@@ -249,3 +249,12 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- delete
- get
- list
- watch

View File

@@ -9,7 +9,9 @@ import (
"strings"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
@@ -314,22 +316,52 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunner
numRunners int
numRunnersRegistered int
numRunnersBusy int
numTerminatingBusy int
)
numRunners = len(runnerMap)
busyTerminatingRunnerPods := map[string]struct{}{}
kindLabel := LabelKeyRunnerDeploymentName
if hra.Spec.ScaleTargetRef.Kind == "RunnerSet" {
kindLabel = LabelKeyRunnerSetName
}
var runnerPodList corev1.PodList
if err := r.Client.List(ctx, &runnerPodList, client.InNamespace(hra.Namespace), client.MatchingLabels(map[string]string{
kindLabel: hra.Spec.ScaleTargetRef.Name,
})); err != nil {
return nil, err
}
for _, p := range runnerPodList.Items {
if p.Annotations[AnnotationKeyUnregistrationFailureMessage] != "" {
busyTerminatingRunnerPods[p.Name] = struct{}{}
}
}
for _, runner := range runners {
if _, ok := runnerMap[*runner.Name]; ok {
numRunnersRegistered++
if runner.GetBusy() {
numRunnersBusy++
} else if _, ok := busyTerminatingRunnerPods[*runner.Name]; ok {
numTerminatingBusy++
}
delete(busyTerminatingRunnerPods, *runner.Name)
}
}
// Remaining busyTerminatingRunnerPods are runners that were not on the ListRunners API response yet
for range busyTerminatingRunnerPods {
numTerminatingBusy++
}
var desiredReplicas int
fractionBusy := float64(numRunnersBusy) / float64(desiredReplicasBefore)
fractionBusy := float64(numRunnersBusy+numTerminatingBusy) / float64(desiredReplicasBefore)
if fractionBusy >= scaleUpThreshold {
if scaleUpAdjustment > 0 {
desiredReplicas = desiredReplicasBefore + scaleUpAdjustment
@@ -358,6 +390,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunner
"num_runners", numRunners,
"num_runners_registered", numRunnersRegistered,
"num_runners_busy", numRunnersBusy,
"num_terminating_busy", numTerminatingBusy,
"namespace", hra.Namespace,
"kind", st.kind,
"name", st.st,

View File

@@ -4,17 +4,22 @@ import "time"
const (
LabelKeyRunnerSetName = "runnerset-name"
LabelKeyRunner = "actions-runner"
)
const (
// This name requires at least one slash to work.
// See https://github.com/google/knative-gcp/issues/378
runnerPodFinalizerName = "actions.summerwind.dev/runner-pod"
runnerPodFinalizerName = "actions.summerwind.dev/runner-pod"
runnerLinkedResourcesFinalizerName = "actions.summerwind.dev/linked-resources"
annotationKeyPrefix = "actions-runner/"
AnnotationKeyLastRegistrationCheckTime = "actions-runner-controller/last-registration-check-time"
// AnnotationKeyUnregistrationFailureMessage is the annotation that is added onto the pod once it failed to be unregistered from GitHub due to e.g. 422 error
AnnotationKeyUnregistrationFailureMessage = annotationKeyPrefix + "unregistration-failure-message"
// AnnotationKeyUnregistrationCompleteTimestamp is the annotation that is added onto the pod once the previously started unregistration process has been completed.
AnnotationKeyUnregistrationCompleteTimestamp = annotationKeyPrefix + "unregistration-complete-timestamp"
@@ -61,4 +66,7 @@ const (
EnvVarRunnerName = "RUNNER_NAME"
EnvVarRunnerToken = "RUNNER_TOKEN"
// defaultHookPath is path to the hook script used when the "containerMode: kubernetes" is specified
defaultRunnerHookPath = "/runner/k8s/index.js"
)

View File

@@ -0,0 +1,207 @@
package controllers
import (
"context"
"fmt"
"sync"
"time"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/go-logr/logr"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type batchScaler struct {
Ctx context.Context
Client client.Client
Log logr.Logger
interval time.Duration
queue chan *ScaleTarget
workerStart sync.Once
}
func newBatchScaler(ctx context.Context, client client.Client, log logr.Logger) *batchScaler {
return &batchScaler{
Ctx: ctx,
Client: client,
Log: log,
interval: 3 * time.Second,
}
}
type batchScaleOperation struct {
namespacedName types.NamespacedName
scaleOps []scaleOperation
}
type scaleOperation struct {
trigger v1alpha1.ScaleUpTrigger
log logr.Logger
}
// Add the scale target to the unbounded queue, blocking until the target is successfully added to the queue.
// All the targets in the queue are dequeued every 3 seconds, grouped by the HRA, and applied.
// On the happy path, batchScaler updates each HRA only once, even if the HRA had two or more associated webhook events within the 3-second interval,
// which results in fewer K8s API calls and fewer HRA update conflicts in case your ARC installation receives a lot of webhook events
func (s *batchScaler) Add(st *ScaleTarget) {
if st == nil {
return
}
s.workerStart.Do(func() {
var expBackoff = []time.Duration{time.Second, 2 * time.Second, 4 * time.Second, 8 * time.Second, 16 * time.Second}
s.queue = make(chan *ScaleTarget)
log := s.Log
go func() {
log.Info("Starting batch worker")
defer log.Info("Stopped batch worker")
for {
select {
case <-s.Ctx.Done():
return
default:
}
log.V(2).Info("Batch worker is dequeueing operations")
batches := map[types.NamespacedName]batchScaleOperation{}
after := time.After(s.interval)
var ops uint
batch:
for {
select {
case <-after:
after = nil
break batch
case st := <-s.queue:
nsName := types.NamespacedName{
Namespace: st.HorizontalRunnerAutoscaler.Namespace,
Name: st.HorizontalRunnerAutoscaler.Name,
}
b, ok := batches[nsName]
if !ok {
b = batchScaleOperation{
namespacedName: nsName,
}
}
b.scaleOps = append(b.scaleOps, scaleOperation{
log: *st.log,
trigger: st.ScaleUpTrigger,
})
batches[nsName] = b
ops++
}
}
log.V(2).Info("Batch worker dequeued operations", "ops", ops, "batches", len(batches))
retry:
for i := 0; ; i++ {
failed := map[types.NamespacedName]batchScaleOperation{}
for nsName, b := range batches {
b := b
if err := s.batchScale(context.Background(), b); err != nil {
log.V(2).Info("Failed to scale due to error", "error", err)
failed[nsName] = b
} else {
log.V(2).Info("Successfully ran batch scale", "hra", b.namespacedName)
}
}
if len(failed) == 0 {
break retry
}
batches = failed
delay := 16 * time.Second
if i < len(expBackoff) {
delay = expBackoff[i]
}
time.Sleep(delay)
}
}
}()
})
s.queue <- st
}
func (s *batchScaler) batchScale(ctx context.Context, batch batchScaleOperation) error {
var hra v1alpha1.HorizontalRunnerAutoscaler
if err := s.Client.Get(ctx, batch.namespacedName, &hra); err != nil {
return err
}
copy := hra.DeepCopy()
copy.Spec.CapacityReservations = getValidCapacityReservations(copy)
var added, completed int
for _, scale := range batch.scaleOps {
amount := 1
if scale.trigger.Amount != 0 {
amount = scale.trigger.Amount
}
scale.log.V(2).Info("Adding capacity reservation", "amount", amount)
if amount > 0 {
now := time.Now()
copy.Spec.CapacityReservations = append(copy.Spec.CapacityReservations, v1alpha1.CapacityReservation{
EffectiveTime: metav1.Time{Time: now},
ExpirationTime: metav1.Time{Time: now.Add(scale.trigger.Duration.Duration)},
Replicas: amount,
})
added += amount
} else if amount < 0 {
var reservations []v1alpha1.CapacityReservation
var found bool
for _, r := range copy.Spec.CapacityReservations {
if !found && r.Replicas+amount == 0 {
found = true
} else {
reservations = append(reservations, r)
}
}
copy.Spec.CapacityReservations = reservations
completed += amount
}
}
before := len(hra.Spec.CapacityReservations)
expired := before - len(copy.Spec.CapacityReservations)
after := len(copy.Spec.CapacityReservations)
s.Log.V(1).Info(
fmt.Sprintf("Updating hra %s for capacityReservations update", hra.Name),
"before", before,
"expired", expired,
"added", added,
"completed", completed,
"after", after,
)
if err := s.Client.Update(ctx, copy); err != nil {
return fmt.Errorf("updating horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
}

View File

@@ -23,14 +23,14 @@ import (
"io/ioutil"
"net/http"
"strings"
"sync"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/go-logr/logr"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v45/github"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
@@ -46,6 +46,8 @@ const (
keyPrefixEnterprise = "enterprises/"
keyRunnerGroup = "/group/"
DefaultQueueLimit = 100
)
// HorizontalRunnerAutoscalerGitHubWebhook autoscales a HorizontalRunnerAutoscaler and the RunnerDeployment on each
@@ -68,6 +70,15 @@ type HorizontalRunnerAutoscalerGitHubWebhook struct {
// Set to empty for letting it watch for all namespaces.
Namespace string
Name string
// QueueLimit is the maximum length of the bounded queue of scale targets and their associated operations
// A scale target is enqueued on each retrieval of each eligible webhook event, so that it is processed asynchronously.
QueueLimit int
worker *worker
workerInit sync.Once
workerStart sync.Once
batchCh chan *ScaleTarget
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Reconcile(_ context.Context, request reconcile.Request) (reconcile.Result, error) {
@@ -312,9 +323,19 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
return
}
if err := autoscaler.tryScale(context.TODO(), target); err != nil {
log.Error(err, "could not scale up")
autoscaler.workerInit.Do(func() {
batchScaler := newBatchScaler(context.Background(), autoscaler.Client, autoscaler.Log)
queueLimit := autoscaler.QueueLimit
if queueLimit == 0 {
queueLimit = DefaultQueueLimit
}
autoscaler.worker = newWorker(context.Background(), queueLimit, batchScaler.Add)
})
target.log = &log
if ok := autoscaler.worker.Add(target); !ok {
log.Error(err, "Could not scale up due to queue full")
return
}
@@ -383,6 +404,8 @@ func matchTriggerConditionAgainstEvent(types []string, eventAction *string) bool
type ScaleTarget struct {
v1alpha1.HorizontalRunnerAutoscaler
v1alpha1.ScaleUpTrigger
log *logr.Logger
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) searchScaleTargets(hras []v1alpha1.HorizontalRunnerAutoscaler, f func(v1alpha1.ScaleUpTrigger) bool) []ScaleTarget {
@@ -501,6 +524,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleUpTargetWithF
if autoscaler.GitHubClient != nil {
simu := &simulator.Simulator{
Client: autoscaler.GitHubClient,
Log: log,
}
// Get available organization runner groups and enterprise runner groups for a repository
// These are the sum of runner groups with repository access = All repositories and runner groups
@@ -770,63 +794,6 @@ HRA:
return nil, nil
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) tryScale(ctx context.Context, target *ScaleTarget) error {
if target == nil {
return nil
}
copy := target.HorizontalRunnerAutoscaler.DeepCopy()
amount := 1
if target.ScaleUpTrigger.Amount != 0 {
amount = target.ScaleUpTrigger.Amount
}
capacityReservations := getValidCapacityReservations(copy)
if amount > 0 {
now := time.Now()
copy.Spec.CapacityReservations = append(capacityReservations, v1alpha1.CapacityReservation{
EffectiveTime: metav1.Time{Time: now},
ExpirationTime: metav1.Time{Time: now.Add(target.ScaleUpTrigger.Duration.Duration)},
Replicas: amount,
})
} else if amount < 0 {
var reservations []v1alpha1.CapacityReservation
var found bool
for _, r := range capacityReservations {
if !found && r.Replicas+amount == 0 {
found = true
} else {
reservations = append(reservations, r)
}
}
copy.Spec.CapacityReservations = reservations
}
before := len(target.HorizontalRunnerAutoscaler.Spec.CapacityReservations)
expired := before - len(capacityReservations)
after := len(copy.Spec.CapacityReservations)
autoscaler.Log.V(1).Info(
fmt.Sprintf("Patching hra %s for capacityReservations update", target.HorizontalRunnerAutoscaler.Name),
"before", before,
"expired", expired,
"amount", amount,
"after", after,
)
if err := autoscaler.Client.Patch(ctx, copy, client.MergeFrom(&target.HorizontalRunnerAutoscaler)); err != nil {
return fmt.Errorf("patching horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
}
func getValidCapacityReservations(autoscaler *v1alpha1.HorizontalRunnerAutoscaler) []v1alpha1.CapacityReservation {
var capacityReservations []v1alpha1.CapacityReservation

View File

@@ -3,7 +3,7 @@ package controllers
import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/pkg/actionsglob"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchCheckRunEvent(event *github.CheckRunEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {

View File

@@ -2,7 +2,7 @@ package controllers
import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchPullRequestEvent(event *github.PullRequestEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {

View File

@@ -2,7 +2,7 @@ package controllers
import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchPushEvent(event *github.PushEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {

View File

@@ -15,7 +15,7 @@ import (
actionsv1alpha1 "github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/go-logr/logr"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"

View File

@@ -0,0 +1,55 @@
package controllers
import (
"context"
)
// worker is a worker with a non-blocking bounded queue of scale targets; it dequeues scale targets and executes the scale operations one by one.
type worker struct {
scaleTargetQueue chan *ScaleTarget
work func(*ScaleTarget)
done chan struct{}
}
func newWorker(ctx context.Context, queueLimit int, work func(*ScaleTarget)) *worker {
w := &worker{
scaleTargetQueue: make(chan *ScaleTarget, queueLimit),
work: work,
done: make(chan struct{}),
}
go func() {
defer close(w.done)
for {
select {
case <-ctx.Done():
return
case t := <-w.scaleTargetQueue:
work(t)
}
}
}()
return w
}
// Add the scale target to the bounded queue, returning the result as a bool value. It returns true on successful enqueue, and returns false otherwise.
// When returned false, the queue is already full so the enqueue operation must be retried later.
// If the enqueue was triggered by an external source and there's no intermediate queue that we can use,
// you must instruct the source to resend the original request later.
// In case you're building a webhook server around this worker, this means that you must return a http error to the webhook server,
// so that (hopefully) the sender can resend the webhook event later, or at least the human operator can notice or be notified about the
// webhook delivery failure so that a manual retry can be done later.
func (w *worker) Add(st *ScaleTarget) bool {
select {
case w.scaleTargetQueue <- st:
return true
default:
return false
}
}
func (w *worker) Done() chan struct{} {
return w.done
}

View File

@@ -0,0 +1,36 @@
package controllers
import (
"context"
"testing"
"github.com/stretchr/testify/require"
)
func TestWorker_Add(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
w := newWorker(ctx, 2, func(st *ScaleTarget) {})
require.True(t, w.Add(&ScaleTarget{}))
require.True(t, w.Add(&ScaleTarget{}))
require.False(t, w.Add(&ScaleTarget{}))
}
func TestWorker_Work(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var count int
w := newWorker(ctx, 1, func(st *ScaleTarget) {
count++
cancel()
})
require.True(t, w.Add(&ScaleTarget{}))
require.False(t, w.Add(&ScaleTarget{}))
<-w.Done()
require.Equal(t, count, 1)
}

View File

@@ -8,7 +8,7 @@ import (
"time"
github2 "github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
"github.com/actions-runner-controller/actions-runner-controller/github/fake"
@@ -1367,7 +1367,7 @@ func (env *testEnvironment) ExpectRegisteredNumberCountEventuallyEquals(want int
return len(rs)
},
time.Second*5, time.Millisecond*500).Should(Equal(want), optionalDescriptions...)
time.Second*10, time.Millisecond*500).Should(Equal(want), optionalDescriptions...)
}
func (env *testEnvironment) SendOrgPullRequestEvent(org, repo, branch, action string) {

View File

@@ -56,7 +56,7 @@ func TestNewRunnerPod(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"runnerset-name": "runner",
"actions-runner": "",
},
},
Spec: corev1.PodSpec{
@@ -198,7 +198,7 @@ func TestNewRunnerPod(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"runnerset-name": "runner",
"actions-runner": "",
},
},
Spec: corev1.PodSpec{
@@ -276,7 +276,7 @@ func TestNewRunnerPod(t *testing.T) {
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"runnerset-name": "runner",
"actions-runner": "",
},
},
Spec: corev1.PodSpec{
@@ -515,7 +515,7 @@ func TestNewRunnerPod(t *testing.T) {
for i := range testcases {
tc := testcases[i]
t.Run(tc.description, func(t *testing.T) {
got, err := newRunnerPod("runner", tc.template, tc.config, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL)
got, err := newRunnerPod(tc.template, tc.config, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL)
require.NoError(t, err)
require.Equal(t, tc.want, got)
})
@@ -546,7 +546,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"pod-template-hash": "8857b86c7",
"runnerset-name": "runner",
"actions-runner": "",
},
OwnerReferences: []metav1.OwnerReference{
{
@@ -703,7 +703,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"pod-template-hash": "8857b86c7",
"runnerset-name": "runner",
"actions-runner": "",
},
OwnerReferences: []metav1.OwnerReference{
{
@@ -800,7 +800,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Labels: map[string]string{
"actions-runner-controller/inject-registration-token": "true",
"pod-template-hash": "8857b86c7",
"runnerset-name": "runner",
"actions-runner": "",
},
OwnerReferences: []metav1.OwnerReference{
{

View File

@@ -18,7 +18,9 @@ package controllers
import (
"context"
"errors"
"fmt"
"strconv"
"strings"
"time"
@@ -76,6 +78,7 @@ type RunnerReconciler struct {
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch;delete
// +kubebuilder:rbac:groups=core,resources=pods/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
@@ -112,6 +115,7 @@ func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
// Pod was not found
return r.processRunnerDeletion(runner, ctx, log, nil)
}
return r.processRunnerDeletion(runner, ctx, log, &pod)
}
@@ -412,7 +416,17 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
template.Spec.SecurityContext = runner.Spec.SecurityContext
template.Spec.EnableServiceLinks = runner.Spec.EnableServiceLinks
pod, err := newRunnerPod(runner.Name, template, runner.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, r.GitHubClient.GithubBaseURL)
if runner.Spec.ContainerMode == "kubernetes" {
workDir := runner.Spec.WorkDir
if workDir == "" {
workDir = "/runner/_work"
}
if err := applyWorkVolumeClaimTemplateToPod(&template, runner.Spec.WorkVolumeClaimTemplate, workDir); err != nil {
return corev1.Pod{}, err
}
}
pod, err := newRunnerPodWithContainerMode(runner.Spec.ContainerMode, template, runner.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, r.GitHubClient.GithubBaseURL)
if err != nil {
return pod, err
}
@@ -424,6 +438,9 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
// if operator provides a work volume mount, use that
isPresent, _ := workVolumeMountPresent(runnerSpec.VolumeMounts)
if isPresent {
if runnerSpec.ContainerMode == "kubernetes" {
return pod, errors.New("volume mount \"work\" should be specified by workVolumeClaimTemplate in container mode kubernetes")
}
// remove work volume since it will be provided from runnerSpec.Volumes
// if we don't remove it here we would get a duplicate key error, i.e. two volumes named work
_, index := workVolumeMountPresent(pod.Spec.Containers[0].VolumeMounts)
@@ -437,6 +454,9 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
// if operator provides a work volume, use that
isPresent, _ := workVolumePresent(runnerSpec.Volumes)
if isPresent {
if runnerSpec.ContainerMode == "kubernetes" {
return pod, errors.New("volume \"work\" should be specified by workVolumeClaimTemplate in container mode kubernetes")
}
_, index := workVolumePresent(pod.Spec.Volumes)
// remove work volume since it will be provided from runnerSpec.Volumes
@@ -446,6 +466,7 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
pod.Spec.Volumes = append(pod.Spec.Volumes, runnerSpec.Volumes...)
}
if len(runnerSpec.InitContainers) != 0 {
pod.Spec.InitContainers = append(pod.Spec.InitContainers, runnerSpec.InitContainers...)
}
@@ -476,6 +497,10 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
pod.Spec.Tolerations = runnerSpec.Tolerations
}
if runnerSpec.PriorityClassName != "" {
pod.Spec.PriorityClassName = runnerSpec.PriorityClassName
}
if len(runnerSpec.TopologySpreadConstraints) != 0 {
pod.Spec.TopologySpreadConstraints = runnerSpec.TopologySpreadConstraints
}
@@ -526,7 +551,45 @@ func mutatePod(pod *corev1.Pod, token string) *corev1.Pod {
return updated
}
func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string) (corev1.Pod, error) {
func runnerHookEnvs(pod *corev1.Pod) ([]corev1.EnvVar, error) {
isRequireSameNode, err := isRequireSameNode(pod)
if err != nil {
return nil, err
}
return []corev1.EnvVar{
{
Name: "ACTIONS_RUNNER_CONTAINER_HOOKS",
Value: defaultRunnerHookPath,
},
{
Name: "ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER",
Value: "true",
},
{
Name: "ACTIONS_RUNNER_POD_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.name",
},
},
},
{
Name: "ACTIONS_RUNNER_JOB_NAMESPACE",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.namespace",
},
},
},
corev1.EnvVar{
Name: "ACTIONS_RUNNER_REQUIRE_SAME_NODE",
Value: strconv.FormatBool(isRequireSameNode),
},
}, nil
}
func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string) (corev1.Pod, error) {
var (
privileged bool = true
dockerdInRunner bool = runnerSpec.DockerdWithinRunnerContainer != nil && *runnerSpec.DockerdWithinRunnerContainer
@@ -535,10 +598,16 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
dockerdInRunnerPrivileged bool = dockerdInRunner
)
if containerMode == "kubernetes" {
dockerdInRunner = false
dockerEnabled = false
dockerdInRunnerPrivileged = false
}
template = *template.DeepCopy()
// This label selector is used by default when rd.Spec.Selector is empty.
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyRunnerSetName, runnerName)
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyRunner, "")
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyPodMutation, LabelValuePodMutation)
workDir := runnerSpec.WorkDir
@@ -621,6 +690,17 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
}
}
if containerMode == "kubernetes" {
if dockerdContainer != nil {
template.Spec.Containers = append(template.Spec.Containers[:dockerdContainerIndex], template.Spec.Containers[dockerdContainerIndex+1:]...)
}
if dockerdContainerIndex < runnerContainerIndex {
runnerContainerIndex--
}
dockerdContainer = nil
dockerdContainerIndex = -1
}
if runnerContainer == nil {
runnerContainerIndex = -1
runnerContainer = &corev1.Container{
@@ -651,6 +731,13 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
}
runnerContainer.Env = append(runnerContainer.Env, env...)
if containerMode == "kubernetes" {
hookEnvs, err := runnerHookEnvs(&template)
if err != nil {
return corev1.Pod{}, err
}
runnerContainer.Env = append(runnerContainer.Env, hookEnvs...)
}
if runnerContainer.SecurityContext == nil {
runnerContainer.SecurityContext = &corev1.SecurityContext{}
@@ -875,6 +962,10 @@ func newRunnerPod(runnerName string, template corev1.Pod, runnerSpec v1alpha1.Ru
return *pod, nil
}
func newRunnerPod(template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string) (corev1.Pod, error) {
return newRunnerPodWithContainerMode("", template, runnerSpec, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL)
}
func (r *RunnerReconciler) SetupWithManager(mgr ctrl.Manager) error {
name := "runner-controller"
if r.Name != "" {
@@ -937,3 +1028,71 @@ func workVolumeMountPresent(items []corev1.VolumeMount) (bool, int) {
}
return false, 0
}
func applyWorkVolumeClaimTemplateToPod(pod *corev1.Pod, workVolumeClaimTemplate *v1alpha1.WorkVolumeClaimTemplate, workDir string) error {
if workVolumeClaimTemplate == nil {
return errors.New("work volume claim template must be specified in container mode kubernetes")
}
for i := range pod.Spec.Volumes {
if pod.Spec.Volumes[i].Name == "work" {
return fmt.Errorf("Work volume should not be specified in container mode kubernetes. workVolumeClaimTemplate field should be used instead.")
}
}
pod.Spec.Volumes = append(pod.Spec.Volumes, workVolumeClaimTemplate.V1Volume())
var runnerContainer *corev1.Container
for i := range pod.Spec.Containers {
if pod.Spec.Containers[i].Name == "runner" {
runnerContainer = &pod.Spec.Containers[i]
break
}
}
if runnerContainer == nil {
return fmt.Errorf("runner container is not present when applying work volume claim template")
}
if isPresent, _ := workVolumeMountPresent(runnerContainer.VolumeMounts); isPresent {
return fmt.Errorf("volume mount \"work\" should not be present on the runner container in container mode kubernetes")
}
runnerContainer.VolumeMounts = append(runnerContainer.VolumeMounts, workVolumeClaimTemplate.V1VolumeMount(workDir))
return nil
}
// isRequireSameNode specifies for the runner in kubernetes mode whether it should
// schedule jobs to the same node where the runner is
//
// This function should only be called in containerMode: kubernetes
func isRequireSameNode(pod *corev1.Pod) (bool, error) {
isPresent, index := workVolumePresent(pod.Spec.Volumes)
if !isPresent {
return true, errors.New("internal error: work volume mount must exist in containerMode: kubernetes")
}
if pod.Spec.Volumes[index].Ephemeral == nil || pod.Spec.Volumes[index].Ephemeral.VolumeClaimTemplate == nil {
return true, errors.New("containerMode: kubernetes should have pod.Spec.Volumes[].Ephemeral.VolumeClaimTemplate set")
}
for _, accessMode := range pod.Spec.Volumes[index].Ephemeral.VolumeClaimTemplate.Spec.AccessModes {
switch accessMode {
case corev1.ReadWriteOnce:
return true, nil
case corev1.ReadWriteMany:
default:
return true, errors.New("actions-runner-controller supports ReadWriteOnce and ReadWriteMany modes only")
}
}
return false, nil
}
func overwriteRunnerEnv(runner *v1alpha1.Runner, key string, value string) {
for i := range runner.Spec.Env {
if runner.Spec.Env[i].Name == key {
runner.Spec.Env[i].Value = value
return
}
}
runner.Spec.Env = append(runner.Spec.Env, corev1.EnvVar{Name: key, Value: value})
}

View File

@@ -9,7 +9,7 @@ import (
"github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/go-logr/logr"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v45/github"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
ctrl "sigs.k8s.io/controller-runtime"
@@ -151,7 +151,10 @@ func ensureRunnerUnregistration(ctx context.Context, retryDelay time.Duration, l
log.V(1).Info("Failed to unregister runner before deleting the pod.", "error", err)
var runnerBusy bool
var (
runnerBusy bool
runnerUnregistrationFailureMessage string
)
errRes := &gogithub.ErrorResponse{}
if errors.As(err, &errRes) {
@@ -173,6 +176,7 @@ func ensureRunnerUnregistration(ctx context.Context, retryDelay time.Duration, l
}
runnerBusy = errRes.Response.StatusCode == 422
runnerUnregistrationFailureMessage = errRes.Message
if runnerBusy && code != nil {
log.V(2).Info("Runner container has already stopped but the unregistration attempt failed. "+
@@ -187,6 +191,11 @@ func ensureRunnerUnregistration(ctx context.Context, retryDelay time.Duration, l
}
if runnerBusy {
_, err := annotatePodOnce(ctx, c, log, pod, AnnotationKeyUnregistrationFailureMessage, runnerUnregistrationFailureMessage)
if err != nil {
return &ctrl.Result{}, err
}
// We want to prevent spamming the deletion attempts but returning ctrl.Result with RequeueAfter doesn't
// work as the reconciliation can happen earlier due to a pod status update.
// For ephemeral runners, we can expect it to stop and unregister itself on completion.

View File

@@ -20,6 +20,7 @@ import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/go-logr/logr"
@@ -50,6 +51,7 @@ type RunnerPodReconciler struct {
}
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch
// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
@@ -60,8 +62,11 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, client.IgnoreNotFound(err)
}
_, isRunnerPod := runnerPod.Labels[LabelKeyRunnerSetName]
if !isRunnerPod {
_, isRunnerPod := runnerPod.Labels[LabelKeyRunner]
_, isRunnerSetPod := runnerPod.Labels[LabelKeyRunnerSetName]
_, isRunnerDeploymentPod := runnerPod.Labels[LabelKeyRunnerDeploymentName]
if !isRunnerPod && !isRunnerSetPod && !isRunnerDeploymentPod {
return ctrl.Result{}, nil
}
@@ -77,6 +82,7 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
}
var enterprise, org, repo string
var isContainerMode bool
for _, e := range envvars {
switch e.Name {
@@ -86,13 +92,20 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
org = e.Value
case EnvVarRepo:
repo = e.Value
case "ACTIONS_RUNNER_CONTAINER_HOOKS":
isContainerMode = true
}
}
if runnerPod.ObjectMeta.DeletionTimestamp.IsZero() {
finalizers, added := addFinalizer(runnerPod.ObjectMeta.Finalizers, runnerPodFinalizerName)
if added {
var cleanupFinalizersAdded bool
if isContainerMode {
finalizers, cleanupFinalizersAdded = addFinalizer(finalizers, runnerLinkedResourcesFinalizerName)
}
if added || cleanupFinalizersAdded {
newRunner := runnerPod.DeepCopy()
newRunner.ObjectMeta.Finalizers = finalizers
@@ -108,6 +121,27 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
} else {
log.V(2).Info("Seen deletion-timestamp is already set")
if finalizers, removed := removeFinalizer(runnerPod.ObjectMeta.Finalizers, runnerLinkedResourcesFinalizerName); removed {
if err := r.cleanupRunnerLinkedPods(ctx, &runnerPod, log); err != nil {
log.Info("Runner-linked pods clean up that has failed due to an error. If this persists, please manually remove the runner-linked pods to unblock ARC", "err", err.Error())
return ctrl.Result{Requeue: true, RequeueAfter: 30 * time.Second}, nil
}
if err := r.cleanupRunnerLinkedSecrets(ctx, &runnerPod, log); err != nil {
log.Info("Runner-linked secrets clean up that has failed due to an error. If this persists, please manually remove the runner-linked secrets to unblock ARC", "err", err.Error())
return ctrl.Result{Requeue: true, RequeueAfter: 30 * time.Second}, nil
}
patchedPod := runnerPod.DeepCopy()
patchedPod.ObjectMeta.Finalizers = finalizers
if err := r.Patch(ctx, patchedPod, client.MergeFrom(&runnerPod)); err != nil {
log.Error(err, "Failed to update runner for finalizer linked resources removal")
return ctrl.Result{}, err
}
// Otherwise the subsequent patch request can revive the removed finalizer and trigger an unnecessary reconciliation
runnerPod = *patchedPod
}
finalizers, removed := removeFinalizer(runnerPod.ObjectMeta.Finalizers, runnerPodFinalizerName)
if removed {
@@ -222,3 +256,93 @@ func (r *RunnerPodReconciler) SetupWithManager(mgr ctrl.Manager) error {
Named(name).
Complete(r)
}
func (r *RunnerPodReconciler) cleanupRunnerLinkedPods(ctx context.Context, pod *corev1.Pod, log logr.Logger) error {
var runnerLinkedPodList corev1.PodList
if err := r.List(ctx, &runnerLinkedPodList, client.InNamespace(pod.Namespace), client.MatchingLabels(
map[string]string{
"runner-pod": pod.ObjectMeta.Name,
},
)); err != nil {
return fmt.Errorf("failed to list runner-linked pods: %w", err)
}
var (
wg sync.WaitGroup
errs []error
)
for _, p := range runnerLinkedPodList.Items {
if !p.ObjectMeta.DeletionTimestamp.IsZero() {
continue
}
p := p
wg.Add(1)
go func() {
defer wg.Done()
if err := r.Delete(ctx, &p); err != nil {
if kerrors.IsNotFound(err) || kerrors.IsGone(err) {
return
}
errs = append(errs, fmt.Errorf("delete pod %q error: %v", p.ObjectMeta.Name, err))
}
}()
}
wg.Wait()
if len(errs) > 0 {
for _, err := range errs {
log.Error(err, "failed to remove runner-linked pod")
}
return errors.New("failed to remove some runner linked pods")
}
return nil
}
func (r *RunnerPodReconciler) cleanupRunnerLinkedSecrets(ctx context.Context, pod *corev1.Pod, log logr.Logger) error {
log.V(2).Info("Listing runner-linked secrets to be deleted", "ns", pod.Namespace)
var runnerLinkedSecretList corev1.SecretList
if err := r.List(ctx, &runnerLinkedSecretList, client.InNamespace(pod.Namespace), client.MatchingLabels(
map[string]string{
"runner-pod": pod.ObjectMeta.Name,
},
)); err != nil {
return fmt.Errorf("failed to list runner-linked secrets: %w", err)
}
var (
wg sync.WaitGroup
errs []error
)
for _, s := range runnerLinkedSecretList.Items {
if !s.ObjectMeta.DeletionTimestamp.IsZero() {
continue
}
s := s
wg.Add(1)
go func() {
defer wg.Done()
if err := r.Delete(ctx, &s); err != nil {
if kerrors.IsNotFound(err) || kerrors.IsGone(err) {
return
}
errs = append(errs, fmt.Errorf("delete secret %q error: %v", s.ObjectMeta.Name, err))
}
}()
}
wg.Wait()
if len(errs) > 0 {
for _, err := range errs {
log.Error(err, "failed to remove runner-linked secret")
}
return errors.New("failed to remove some runner linked secrets")
}
return nil
}

View File

@@ -165,6 +165,8 @@ func (r *RunnerDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Req
return ctrl.Result{}, err
}
log.V(1).Info("Updated runnerreplicaset due to selector change")
// At this point, we are already sure that there's no need to create a new replicaset
// as the runner template hash is not changed.
//
@@ -179,7 +181,17 @@ func (r *RunnerDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Req
newDesiredReplicas := getIntOrDefault(desiredRS.Spec.Replicas, defaultReplicas)
// Please add more conditions under which we can in-place update the newest runnerreplicaset without disruption
if currentDesiredReplicas != newDesiredReplicas {
//
// If we missed taking the EffectiveTime diff into account, you might end up experiencing scale-ups being unnecessarily delayed.
// See https://github.com/actions-runner-controller/actions-runner-controller/pull/1477#issuecomment-1164154496
var et1, et2 time.Time
if newestSet.Spec.EffectiveTime != nil {
et1 = newestSet.Spec.EffectiveTime.Time
}
if rd.Spec.EffectiveTime != nil {
et2 = rd.Spec.EffectiveTime.Time
}
if currentDesiredReplicas != newDesiredReplicas || et1 != et2 {
newestSet.Spec.Replicas = &newDesiredReplicas
newestSet.Spec.EffectiveTime = rd.Spec.EffectiveTime
@@ -189,6 +201,13 @@ func (r *RunnerDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Req
return ctrl.Result{}, err
}
log.V(1).Info("Updated runnerreplicaset due to spec change",
"currentDesiredReplicas", currentDesiredReplicas,
"newDesiredReplicas", newDesiredReplicas,
"currentEffectiveTime", newestSet.Spec.EffectiveTime,
"newEffectiveTime", rd.Spec.EffectiveTime,
)
return ctrl.Result{}, err
}

View File

@@ -195,7 +195,33 @@ func (r *RunnerSetReconciler) newStatefulSet(runnerSet *v1alpha1.RunnerSet) (*ap
Spec: runnerSetWithOverrides.StatefulSetSpec.Template.Spec,
}
pod, err := newRunnerPod(runnerSet.Name, template, runnerSet.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, r.GitHubBaseURL)
if runnerSet.Spec.RunnerConfig.ContainerMode == "kubernetes" {
found := false
for i := range template.Spec.Containers {
if template.Spec.Containers[i].Name == containerName {
found = true
}
}
if !found {
template.Spec.Containers = append(template.Spec.Containers, corev1.Container{
Name: "runner",
})
}
workDir := runnerSet.Spec.RunnerConfig.WorkDir
if workDir == "" {
workDir = "/runner/_work"
}
if err := applyWorkVolumeClaimTemplateToPod(&template, runnerSet.Spec.WorkVolumeClaimTemplate, workDir); err != nil {
return nil, err
}
template.Spec.ServiceAccountName = runnerSet.Spec.ServiceAccountName
}
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyRunnerSetName, runnerSet.Name)
pod, err := newRunnerPodWithContainerMode(runnerSet.Spec.RunnerConfig.ContainerMode, template, runnerSet.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, r.GitHubBaseURL)
if err != nil {
return nil, err
}

View File

@@ -148,7 +148,7 @@ func syncPV(ctx context.Context, c client.Client, log logr.Logger, ns string, pv
if pv.Labels[labelKeyCleanup] == "" {
// We assume that the pvc is shortly terminated, hence retry forever until it gets removed.
retry := 10 * time.Second
log.V(1).Info("Retrying sync until pvc gets removed", "requeueAfter", retry)
log.V(2).Info("Retrying sync to see if this PV needs to be managed by ARC", "requeueAfter", retry)
return &ctrl.Result{RequeueAfter: retry}, nil
}

View File

@@ -3,6 +3,9 @@ package controllers
import (
"reflect"
"testing"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
corev1 "k8s.io/api/core/v1"
)
func Test_filterLabels(t *testing.T) {
@@ -32,3 +35,94 @@ func Test_filterLabels(t *testing.T) {
})
}
}
func Test_workVolumeClaimTemplateVolumeV1VolumeTransformation(t *testing.T) {
storageClassName := "local-storage"
workVolumeClaimTemplate := v1alpha1.WorkVolumeClaimTemplate{
StorageClassName: storageClassName,
AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce, corev1.ReadWriteMany},
Resources: corev1.ResourceRequirements{},
}
want := corev1.Volume{
Name: "work",
VolumeSource: corev1.VolumeSource{
Ephemeral: &corev1.EphemeralVolumeSource{
VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce, corev1.ReadWriteMany},
StorageClassName: &storageClassName,
Resources: corev1.ResourceRequirements{},
},
},
},
},
}
got := workVolumeClaimTemplate.V1Volume()
if got.Name != want.Name {
t.Errorf("want name %q, got %q\n", want.Name, got.Name)
}
if got.VolumeSource.Ephemeral == nil {
t.Fatal("work volume claim template should transform itself into Ephemeral volume source\n")
}
if got.VolumeSource.Ephemeral.VolumeClaimTemplate == nil {
t.Fatal("work volume claim template should have ephemeral volume claim template set\n")
}
gotClassName := *got.VolumeSource.Ephemeral.VolumeClaimTemplate.Spec.StorageClassName
wantClassName := *want.VolumeSource.Ephemeral.VolumeClaimTemplate.Spec.StorageClassName
if gotClassName != wantClassName {
t.Errorf("expected storage class name %q, got %q\n", wantClassName, gotClassName)
}
gotAccessModes := got.VolumeSource.Ephemeral.VolumeClaimTemplate.Spec.AccessModes
wantAccessModes := want.VolumeSource.Ephemeral.VolumeClaimTemplate.Spec.AccessModes
if len(gotAccessModes) != len(wantAccessModes) {
t.Fatalf("access modes lengths missmatch: got %v, expected %v\n", gotAccessModes, wantAccessModes)
}
diff := make(map[corev1.PersistentVolumeAccessMode]int, len(wantAccessModes))
for _, am := range wantAccessModes {
diff[am]++
}
for _, am := range gotAccessModes {
_, ok := diff[am]
if !ok {
t.Errorf("got access mode %v that is not in the wanted access modes\n", am)
}
diff[am]--
if diff[am] == 0 {
delete(diff, am)
}
}
if len(diff) != 0 {
t.Fatalf("got access modes did not take every access mode into account\nactual: %v expected: %v\n", gotAccessModes, wantAccessModes)
}
}
func Test_workVolumeClaimTemplateV1VolumeMount(t *testing.T) {
workVolumeClaimTemplate := v1alpha1.WorkVolumeClaimTemplate{
StorageClassName: "local-storage",
AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce, corev1.ReadWriteMany},
Resources: corev1.ResourceRequirements{},
}
mountPath := "/test/_work"
want := corev1.VolumeMount{
MountPath: mountPath,
Name: "work",
}
got := workVolumeClaimTemplate.V1VolumeMount(mountPath)
if want != got {
t.Fatalf("expected volume mount %+v, actual %+v\n", want, got)
}
}

43
docs/releasenotes/0.25.md Normal file
View File

@@ -0,0 +1,43 @@
# actions-runner-controller v0.25.0
All planned changes in this release can be found in the milestone https://github.com/actions-runner-controller/actions-runner-controller/milestone/8.
Also see https://github.com/actions-runner-controller/actions-runner-controller/compare/v0.24.1...v0.25.0 for full changelog.
This log documents breaking changes and major enhancements.
## Upgrading
If you're using our Helm chart to deploy ARC, use chart 0.20.0 or greater. As usual, don't forget to upgrade the CRDs yourself, since Helm doesn't upgrade CRDs.
## BREAKING CHANGE : Support for `--once` has been dropped
If you're still on ARC v0.23.0 or earlier, please also read [the relevant part of the v0.24.0 release notes](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/docs/releasenotes/0.24.md#breaking-change--support-for---once-is-being-dropped) for more information.
Relevant PR(s): #1580, #1590
## ENHANCEMENT : Support for the new Kubernetes container mode of Actions runner
The GitHub Actions team has recently added to `actions/runner` the ability to use [runner container hooks](https://github.com/actions/runner-container-hooks) to run job steps in Kubernetes pods instead of Docker containers created by the `docker` command. This allows us to avoid privileged containers while still being able to run container-backed job steps.
To use the new container mode, set `.spec.template.spec.containerMode` in your `RunnerDeployment` to `"kubernetes"` and define `.spec.template.spec.workVolumeClaimTemplate`. The volume claim template is used to provision persistent volumes that are mounted into both the runner pod and the job pods so that they can share the job workspace.
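For reference, a minimal `RunnerDeployment` along these lines might look like the following sketch. The repository name, storage class, and storage size are placeholders rather than values from this release; see the links below for the authoritative field reference.

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-kubernetes-mode
spec:
  template:
    spec:
      repository: your-org/your-repo        # placeholder
      containerMode: kubernetes
      workVolumeClaimTemplate:
        storageClassName: "standard"        # placeholder; ReadWriteOnce and ReadWriteMany are the supported access modes
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```

Per the linked documentation, the runner pod also needs a service account with permissions to manage the job pods and secrets it creates, so in practice you would pair this with an appropriate `serviceAccountName`.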
Before using this feature, we highly recommend reading [the detailed explanation in the original pull request](https://github.com/actions-runner-controller/actions-runner-controller/pull/1546) and [the new section in ARC's documentation](https://github.com/actions-runner-controller/actions-runner-controller#runner-with-k8s-jobs).
Big kudos to @thboop and the GitHub Actions team for implementing and contributing this feature!
Relevant PR(s): #1546
## FIX : Webhook-based scaling is even more reliable
We fixed a race condition in the webhook-based autoscaler that could result in a runner not being added when one was needed.
The race condition occurred when a webhook event was received while another event was still being processed, and both ended up scaling up the same horizontal runner autoscaler at the same time.
To mitigate that, ARC now uses Kubernetes' Update API instead of Patch to update `HRA.spec.capacityReservations`, the underlying data structure that lets the webhook-based scaler add replicas to a RunnerDeployment or RunnerSet on demand.
Because we were also worried about stressing the Kubernetes apiserver when the webhook-based autoscaler receives many concurrent webhook events, we enhanced it to batch the Update API calls over 3 seconds, which means it calls the Update API at most once every 3 seconds per webhook-based autoscaler instance.
Lastly, we fixed a bug in the autoscaler that caused it to stop adding replicas for newly received webhook events once the desired replica count reached `maxReplicas`.
Relevant PR(s): #1477, #1568

View File

@@ -8,7 +8,7 @@ import (
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
"github.com/gorilla/mux"
)

View File

@@ -14,7 +14,7 @@ import (
"github.com/actions-runner-controller/actions-runner-controller/logging"
"github.com/bradleyfalzon/ghinstallation/v2"
"github.com/go-logr/logr"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
"github.com/gregjones/httpcache"
"golang.org/x/oauth2"
)
@@ -248,7 +248,8 @@ func (c *Client) ListRunners(ctx context.Context, enterprise, org, repo string)
func (c *Client) ListOrganizationRunnerGroups(ctx context.Context, org string) ([]*github.RunnerGroup, error) {
var runnerGroups []*github.RunnerGroup
opts := github.ListOptions{PerPage: 100}
opts := github.ListOrgRunnerGroupOptions{}
opts.PerPage = 100
for {
list, res, err := c.Client.Actions.ListOrganizationRunnerGroups(ctx, org, &opts)
if err != nil {
@@ -271,9 +272,21 @@ func (c *Client) ListOrganizationRunnerGroups(ctx context.Context, org string) (
func (c *Client) ListOrganizationRunnerGroupsForRepository(ctx context.Context, org, repo string) ([]*github.RunnerGroup, error) {
var runnerGroups []*github.RunnerGroup
opts := github.ListOptions{PerPage: 100}
var opts github.ListOrgRunnerGroupOptions
opts.PerPage = 100
repoName := repo
parts := strings.Split(repo, "/")
if len(parts) == 2 {
repoName = parts[1]
}
// This must be the repo name without the owner part, so in case the repo is "myorg/myrepo" the repo name
// passed to visible_to_repository must be "myrepo".
opts.VisibleToRepository = repoName
for {
list, res, err := c.listOrganizationRunnerGroupsVisibleToRepo(ctx, org, repo, &opts)
list, res, err := c.Actions.ListOrganizationRunnerGroups(ctx, org, &opts)
if err != nil {
return runnerGroups, fmt.Errorf("failed to list organization runner groups: %w", err)
}
@@ -309,42 +322,6 @@ func (c *Client) ListRunnerGroupRepositoryAccesses(ctx context.Context, org stri
return repos, nil
}
// listOrganizationRunnerGroupsVisibleToRepo lists all self-hosted runner groups configured in an organization which can be used by the repository.
//
// GitHub API docs: https://docs.github.com/en/rest/reference/actions#list-self-hosted-runner-groups-for-an-organization
func (c *Client) listOrganizationRunnerGroupsVisibleToRepo(ctx context.Context, org, repo string, opts *github.ListOptions) (*github.RunnerGroups, *github.Response, error) {
repoName := repo
parts := strings.Split(repo, "/")
if len(parts) == 2 {
repoName = parts[1]
}
u := fmt.Sprintf("orgs/%v/actions/runner-groups?visible_to_repository=%v", org, repoName)
if opts != nil {
if opts.PerPage > 0 {
u = fmt.Sprintf("%v&per_page=%v", u, opts.PerPage)
}
if opts.Page > 0 {
u = fmt.Sprintf("%v&page=%v", u, opts.Page)
}
}
req, err := c.Client.NewRequest("GET", u, nil)
if err != nil {
return nil, nil, err
}
groups := &github.RunnerGroups{}
resp, err := c.Client.Do(ctx, req, &groups)
if err != nil {
return nil, resp, err
}
return groups, resp, nil
}
// cleanup removes expired registration tokens.
func (c *Client) cleanup() {
c.mu.Lock()

View File

@@ -8,7 +8,7 @@ import (
"time"
"github.com/actions-runner-controller/actions-runner-controller/github/fake"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
)
var server *httptest.Server

48
go.mod
View File

@@ -7,46 +7,56 @@ require (
github.com/davecgh/go-spew v1.1.1
github.com/go-logr/logr v1.2.3
github.com/google/go-cmp v0.5.8
github.com/google/go-github/v39 v39.2.0
github.com/google/go-github/v45 v45.2.0
github.com/gorilla/mux v1.8.0
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79
github.com/kelseyhightower/envconfig v1.4.0
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.19.0
github.com/prometheus/client_golang v1.12.2
github.com/stretchr/testify v1.7.1
github.com/stretchr/testify v1.8.0
github.com/teambition/rrule-go v1.8.0
go.uber.org/zap v1.21.0
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5
golang.org/x/oauth2 v0.0.0-20220630143837-2104d58473e0
gomodules.xyz/jsonpatch/v2 v2.2.0
k8s.io/api v0.23.5
k8s.io/apimachinery v0.23.5
k8s.io/client-go v0.23.5
sigs.k8s.io/controller-runtime v0.11.2
k8s.io/api v0.24.2
k8s.io/apimachinery v0.24.2
k8s.io/client-go v0.24.2
sigs.k8s.io/controller-runtime v0.12.2
sigs.k8s.io/yaml v1.3.0
)
require (
cloud.google.com/go v0.81.0 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/emicklei/go-restful v2.9.5+incompatible // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/go-logr/zapr v1.2.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.0.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-github/v41 v41.0.0 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
@@ -56,24 +66,24 @@ require (
github.com/spf13/pflag v1.0.5 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 // indirect
golang.org/x/net v0.0.0-20220225172249-27dd8689420f // indirect
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 // indirect
golang.org/x/crypto v0.0.0-20220214200702-86341886e292 // indirect
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e // indirect
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.27.1 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/apiextensions-apiserver v0.23.5 // indirect
k8s.io/component-base v0.23.5 // indirect
k8s.io/klog/v2 v2.30.0 // indirect
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
k8s.io/utils v0.0.0-20211116205334-6203023598ed // indirect
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.24.2 // indirect
k8s.io/component-base v0.24.2 // indirect
k8s.io/klog/v2 v2.60.1 // indirect
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 // indirect
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
)

81
go.sum
View File

@@ -52,7 +52,9 @@ github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/actions-runner-controller/httpcache v0.2.0 h1:hCNvYuVPJ2xxYBymqBvH0hSiQpqz4PHF/LbU3XghGNI=
github.com/actions-runner-controller/httpcache v0.2.0/go.mod h1:JLu9/2M/btPz1Zu/vTZ71XzukQHn2YeISPmJoM5exBI=
@@ -66,6 +68,7 @@ github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e/go.m
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
@@ -78,6 +81,7 @@ github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kB
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/bradleyfalzon/ghinstallation/v2 v2.0.4 h1:tXKVfhE7FcSkhkv0UwkLvPDeZ4kz6OXd0PKPlFqf81M=
github.com/bradleyfalzon/ghinstallation/v2 v2.0.4/go.mod h1:B40qPqJxWE0jDZgOR1JmaMy+4AY1eBP+IByOvqyAKp0=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
@@ -106,6 +110,7 @@ github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -117,6 +122,7 @@ github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible h1:spTtZBk5DYEvbxMVutUuTyh1Ao2r4iyvLdACqsl/Ljk=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@@ -157,10 +163,13 @@ github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbV
github.com/go-logr/zapr v1.2.0 h1:n4JnPI1T3Qq1SFEi/F8rwLrZERp2bso19PJZDB9dayk=
github.com/go-logr/zapr v1.2.0/go.mod h1:Qa4Bsj2Vb+FAVeAKsLD8RLQ+YRJB8YDmOAKxaBQf7Ro=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5 h1:1WJP/wi4OjB4iV8KVbH73rQaoialJrqv8gitZLxGLtM=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14 h1:gm3vOOXfiuw5i9p5N9xJvfjvuofpyvLA9Wr6QfK5Fng=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
@@ -210,7 +219,10 @@ github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Z
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/cel-go v0.9.0/go.mod h1:U7ayypeSkw23szu4GaQTPJGx66c20mx8JklMSxrmI1w=
github.com/google/cel-go v0.10.1/go.mod h1:U7ayypeSkw23szu4GaQTPJGx66c20mx8JklMSxrmI1w=
github.com/google/cel-spec v0.6.0/go.mod h1:Nwjgxy5CbjlPrtCWjeDjUyKMl8w41YBYGjsyDdqk0xA=
github.com/google/gnostic v0.5.7-v3refs h1:FhTMOKj2VhjpouxvWJAV1TL304uMlb9zcDqkl6cEI54=
github.com/google/gnostic v0.5.7-v3refs/go.mod h1:73MKFl6jIHelAJNaBGFzt3SPtZULs9dYrGFt8OiIsHQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -225,10 +237,10 @@ github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-github/v39 v39.2.0 h1:rNNM311XtPOz5rDdsJXAp2o8F67X9FnROXTvto3aSnQ=
github.com/google/go-github/v39 v39.2.0/go.mod h1:C1s8C5aCC9L+JXIYpJM5GYytdX52vC1bLvHEF1IhBrE=
github.com/google/go-github/v41 v41.0.0 h1:HseJrM2JFf2vfiZJ8anY2hqBjdfY1Vlj/K27ueww4gg=
github.com/google/go-github/v41 v41.0.0/go.mod h1:XgmCA5H323A9rtgExdTcnDkcqp6S30AVACCBDOonIxg=
github.com/google/go-github/v45 v45.2.0 h1:5oRLszbrkvxDDqBCNj2hjDZMKmvexaZ1xw/FCD+K3FI=
github.com/google/go-github/v45 v45.2.0/go.mod h1:FObaZJEDSTa/WGCzZ2Z3eoCDXWJKMenWWTrd8jrta28=
github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -295,6 +307,7 @@ github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANyt
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
@@ -327,6 +340,7 @@ github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czP
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
@@ -345,6 +359,7 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -353,6 +368,7 @@ github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3Rllmb
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
@@ -394,6 +410,7 @@ github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDf
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.12.2 h1:51L9cDoUHVrXx4zWYlcLQIZ+d+VXHgqnYKkIuq4g/34=
github.com/prometheus/client_golang v1.12.2/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
@@ -421,6 +438,7 @@ github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6So
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
@@ -441,6 +459,7 @@ github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkU
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
github.com/spf13/cobra v1.2.1/go.mod h1:ExllRjgxM/piMAM+3tAZvg8fsklGAf3tPfi+i8t68Nk=
github.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
@@ -451,6 +470,7 @@ github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -459,6 +479,10 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.5 h1:s5PTfem8p8EbKQOctVV53k6jCJt3UX4IEJzwh+C324Q=
github.com/stretchr/testify v1.7.5/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/teambition/rrule-go v1.8.0 h1:a/IX5s56hGkFF+nRlJUooZU/45OTeeldBGL29nDKIHw=
github.com/teambition/rrule-go v1.8.0/go.mod h1:Ieq5AbrKGciP1V//Wq8ktsTXwSwJHDD5mD/wLBGl3p4=
@@ -471,12 +495,16 @@ github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/api/v3 v3.5.1/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/pkg/v3 v3.5.1/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lLS/oTh0=
go.etcd.io/etcd/client/v3 v3.5.1/go.mod h1:OnjH4M8OnAotwaB2l9bVgZzRFKru7/ZMoS46OtKyd3Q=
go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE=
go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc=
go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVdCRJoS4=
@@ -524,6 +552,9 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 h1:HWj/xjIHfjYU5nVXpTM0s39J9CbLn7Cc5a7IC5rwsMQ=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220214200702-86341886e292 h1:f+lwQ+GtmgoY+A2YaQxlSOnDjXcQ7ZRLWOHbC6HtRqE=
golang.org/x/crypto v0.0.0-20220214200702-86341886e292/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -559,6 +590,7 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -605,10 +637,14 @@ golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96b
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210825183410-e898025ed96a/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f h1:oA4XRj0qtSt8Yo1Zms0CUlsT3KG69V2UGQWPBxujDmc=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e h1:TsQ7F31D3bUCLeqPT0u+yjp1guoArKaNKmCr22PYgTQ=
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -623,8 +659,13 @@ golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5 h1:OSnWWcOd/CtWQC2cYSBgbTSJv3ciqd8r54ySIW2y3RE=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220628200809-02e64fa58f26 h1:uBgVQYJLi/m8M0wzp+aGwBWt90gMRoOVf+aWTW10QHI=
golang.org/x/oauth2 v0.0.0-20220628200809-02e64fa58f26/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
golang.org/x/oauth2 v0.0.0-20220630143837-2104d58473e0 h1:VnGaRqoLmqZH/3TMLJwYCEWkR4j1nuIU1U9TvbqsDUw=
golang.org/x/oauth2 v0.0.0-20220630143837-2104d58473e0/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -701,9 +742,14 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158 h1:rm+CHSpPEEW2IsXUib1ThaHIjuBVZjxNgSKmBLFfD4c=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a h1:dGzPydgVsqGcTRVwiLJ1jVbufYwmzD3LfVPLKsKg+0k=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
@@ -724,6 +770,8 @@ golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 h1:vVKdlvoWBphwdxWKrFZEuM0kGgGLxUOYcY4U/2Vjg44=
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -783,6 +831,7 @@ golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff/go.mod h1:YD9qOF0M9xpSpdWTBbzEl5e/RnCefISl8E5Noe10jFM=
golang.org/x/tools v0.1.10-0.20220218145154-897bd77cd717/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -864,6 +913,7 @@ google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@@ -900,6 +950,8 @@ google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp0
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -931,6 +983,8 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -942,34 +996,57 @@ honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.23.5 h1:zno3LUiMubxD/V1Zw3ijyKO3wxrhbUF1Ck+VjBvfaoA=
k8s.io/api v0.23.5/go.mod h1:Na4XuKng8PXJ2JsploYYrivXrINeTaycCGcYgF91Xm8=
k8s.io/api v0.24.2 h1:g518dPU/L7VRLxWfcadQn2OnsiGWVOadTLpdnqgY2OI=
k8s.io/api v0.24.2/go.mod h1:AHqbSkTm6YrQ0ObxjO3Pmp/ubFF/KuM7jU+3khoBsOg=
k8s.io/apiextensions-apiserver v0.23.5 h1:5SKzdXyvIJKu+zbfPc3kCbWpbxi+O+zdmAJBm26UJqI=
k8s.io/apiextensions-apiserver v0.23.5/go.mod h1:ntcPWNXS8ZPKN+zTXuzYMeg731CP0heCTl6gYBxLcuQ=
k8s.io/apiextensions-apiserver v0.24.2 h1:/4NEQHKlEz1MlaK/wHT5KMKC9UKYz6NZz6JE6ov4G6k=
k8s.io/apiextensions-apiserver v0.24.2/go.mod h1:e5t2GMFVngUEHUd0wuCJzw8YDwZoqZfJiGOW6mm2hLQ=
k8s.io/apimachinery v0.23.5 h1:Va7dwhp8wgkUPWsEXk6XglXWU4IKYLKNlv8VkX7SDM0=
k8s.io/apimachinery v0.23.5/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM=
k8s.io/apimachinery v0.24.2 h1:5QlH9SL2C8KMcrNJPor+LbXVTaZRReml7svPEh4OKDM=
k8s.io/apimachinery v0.24.2/go.mod h1:82Bi4sCzVBdpYjyI4jY6aHX+YCUchUIrZrXKedjd2UM=
k8s.io/apiserver v0.23.5/go.mod h1:7wvMtGJ42VRxzgVI7jkbKvMbuCbVbgsWFT7RyXiRNTw=
k8s.io/apiserver v0.24.2/go.mod h1:pSuKzr3zV+L+MWqsEo0kHHYwCo77AT5qXbFXP2jbvFI=
k8s.io/client-go v0.23.5 h1:zUXHmEuqx0RY4+CsnkOn5l0GU+skkRXKGJrhmE2SLd8=
k8s.io/client-go v0.23.5/go.mod h1:flkeinTO1CirYgzMPRWxUCnV0G4Fbu2vLhYCObnt/r4=
k8s.io/client-go v0.24.2 h1:CoXFSf8if+bLEbinDqN9ePIDGzcLtqhfd6jpfnwGOFA=
k8s.io/client-go v0.24.2/go.mod h1:zg4Xaoo+umDsfCWr4fCnmLEtQXyCNXCvJuSsglNcV30=
k8s.io/code-generator v0.23.5/go.mod h1:S0Q1JVA+kSzTI1oUvbKAxZY/DYbA/ZUb4Uknog12ETk=
k8s.io/code-generator v0.24.2/go.mod h1:dpVhs00hTuTdTY6jvVxvTFCk6gSMrtfRydbhZwHI15w=
k8s.io/component-base v0.23.5 h1:8qgP5R6jG1BBSXmRYW+dsmitIrpk8F/fPEvgDenMCCE=
k8s.io/component-base v0.23.5/go.mod h1:c5Nq44KZyt1aLl0IpHX82fhsn84Sb0jjzwjpcA42bY0=
k8s.io/component-base v0.24.2 h1:kwpQdoSfbcH+8MPN4tALtajLDfSfYxBDYlXobNWI6OU=
k8s.io/component-base v0.24.2/go.mod h1:ucHwW76dajvQ9B7+zecZAP3BVqvrHoOxm8olHEg0nmM=
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.30.0 h1:bUO6drIvCIsvZ/XFgfxoGFQU/a4Qkh0iAlvUR7vlHJw=
k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.60.1 h1:VW25q3bZx9uE3vvdL6M8ezOX79vA2Aq1nEWLqNQclHc=
k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 h1:E3J9oCLlaobFUqsjG9DfKbP2BmgwBL2p7pn0A3dG9W4=
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk=
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 h1:Gii5eqf+GmIEwGNKQYQClCayuJCe2/4fZUvF7VG99sU=
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42/go.mod h1:Z/45zLw8lUo4wdiUkI+v/ImEGAvu3WatcZl3lPMR4Rk=
k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20211116205334-6203023598ed h1:ck1fRPWPJWsMd8ZRFsWc6mh/zHp5fZ/shhbrgPUxDAE=
k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 h1:HNSDgDCrr/6Ly3WEGKZftiE7IY19Vz2GdbOCyI4qqhc=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.30/go.mod h1:fEO7lRTdivWO2qYVCVG7dEADOMo/MLDCVr8So2g88Uw=
sigs.k8s.io/controller-runtime v0.11.2 h1:H5GTxQl0Mc9UjRJhORusqfJCIjBO8UtUxGggCwL1rLA=
sigs.k8s.io/controller-runtime v0.11.2/go.mod h1:P6QCzrEjLaZGqHsfd+os7JQ+WFZhvB8MRFsn4dWF7O4=
sigs.k8s.io/controller-runtime v0.12.2 h1:nqV02cvhbAj7tbt21bpPpTByrXGn2INHRsi39lXy9sE=
sigs.k8s.io/controller-runtime v0.12.2/go.mod h1:qKsk4WE6zW2Hfj0G4v10EnNB2jMG1C+NTb8h+DwCoU0=
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 h1:fD1pz4yfdADVNfFmcP2aBEtudwUQ1AlLnRBALr33v3s=
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNzagwnNoseA6OxSUutVw05NhYDRs=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 h1:kDi4JBNAsJWfz1aEXhO8Jg87JJaPNLh5tIzYHgStQ9Y=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2/go.mod h1:B+TnT182UBxE84DiCz4CVE26eOSDAeYCpfDnC2kdKMY=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=

View File

@@ -11,7 +11,7 @@ import (
"time"
"github.com/actions-runner-controller/actions-runner-controller/github"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v45/github"
)
type server struct {

View File

@@ -12,7 +12,7 @@ import (
"time"
"github.com/actions-runner-controller/actions-runner-controller/github"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v45/github"
)
type Forwarder struct {

View File

@@ -3,7 +3,7 @@ package hookdeliveryforwarder
import (
"context"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v45/github"
)
type hooksAPI struct {

View File

@@ -3,7 +3,7 @@ package hookdeliveryforwarder
import (
"context"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v45/github"
)
type hookDeliveriesAPI struct {

View File

@@ -8,7 +8,7 @@ import (
"sync"
"github.com/actions-runner-controller/actions-runner-controller/github"
gogithub "github.com/google/go-github/v39/github"
gogithub "github.com/google/go-github/v45/github"
)
type MultiForwarder struct {

View File

@@ -4,7 +4,8 @@ DIND_RUNNER_NAME ?= ${DOCKER_USER}/actions-runner-dind
TAG ?= latest
TARGETPLATFORM ?= $(shell arch)
RUNNER_VERSION ?= 2.292.0
RUNNER_VERSION ?= 2.294.0
RUNNER_CONTAINER_HOOKS_VERSION ?= 0.1.2
DOCKER_VERSION ?= 20.10.12
# default list of platforms for which multiarch image is built
@@ -28,6 +29,7 @@ docker-build-ubuntu:
docker build \
--build-arg TARGETPLATFORM=${TARGETPLATFORM} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg RUNNER_CONTAINER_HOOKS_VERSION=${RUNNER_CONTAINER_HOOKS_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-f actions-runner.dockerfile \
-t ${NAME}:${TAG} .
@@ -50,12 +52,14 @@ docker-buildx-ubuntu:
fi
docker buildx build --platform ${PLATFORMS} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg RUNNER_CONTAINER_HOOKS_VERSION=${RUNNER_CONTAINER_HOOKS_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-f actions-runner.dockerfile \
-t "${NAME}:${TAG}" \
. ${PUSH_ARG}
docker buildx build --platform ${PLATFORMS} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg RUNNER_CONTAINER_HOOKS_VERSION=${RUNNER_CONTAINER_HOOKS_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
-f actions-runner-dind.dockerfile \
-t "${DIND_RUNNER_NAME}:${TAG}" \

View File

@@ -1,7 +1,7 @@
FROM ubuntu:20.04
ARG TARGETPLATFORM
ARG RUNNER_VERSION=2.292.0
ARG RUNNER_VERSION=2.294.0
ARG DOCKER_CHANNEL=stable
ARG DOCKER_VERSION=20.10.12
ARG DUMB_INIT_VERSION=1.2.5
@@ -74,8 +74,6 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
dockerd --version; \
docker --version
ENV HOME=/home/runner
# Runner download supports amd64 as x64
#
# libyaml-dev is required for ruby/setup-ruby action.
@@ -113,6 +111,7 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
VOLUME /var/lib/docker
ENV HOME=/home/runner
# Add the Python "User Script Directory" to the PATH
ENV PATH="${PATH}:${HOME}/.local/bin"
ENV ImageOS=ubuntu20

View File

@@ -1,7 +1,8 @@
FROM ubuntu:20.04
ARG TARGETPLATFORM
ARG RUNNER_VERSION=2.292.0
ARG RUNNER_VERSION=2.294.0
ARG RUNNER_CONTAINER_HOOKS_VERSION=0.1.2
ARG DOCKER_CHANNEL=stable
ARG DOCKER_VERSION=20.10.12
ARG DUMB_INIT_VERSION=1.2.5
@@ -66,8 +67,6 @@ RUN set -vx; \
&& usermod -aG docker runner \
&& echo "%sudo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers
ENV HOME=/home/runner
# Uncomment the below COPY to use your own custom build of actions-runner.
#
# To build a custom runner:
@@ -105,6 +104,11 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& apt-get install -y libyaml-dev \
&& rm -rf /var/lib/apt/lists/*
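# Download the actions/runner-container-hooks release (assumed here to be the hooks used when jobs run in Kubernetes container mode) into the runner assets directory.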
RUN cd "$RUNNER_ASSETS_DIR" \
&& curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER_CONTAINER_HOOKS_VERSION}/actions-runner-hooks-k8s-${RUNNER_CONTAINER_HOOKS_VERSION}.zip \
&& unzip ./runner-container-hooks.zip -d ./k8s \
&& rm runner-container-hooks.zip
ENV RUNNER_TOOL_CACHE=/opt/hostedtoolcache
RUN mkdir /opt/hostedtoolcache \
&& chgrp docker /opt/hostedtoolcache \
@@ -114,6 +118,7 @@ RUN mkdir /opt/hostedtoolcache \
# override them with scripts of the same name placed in `/usr/local/bin`.
COPY entrypoint.sh logger.bash /usr/bin/
ENV HOME=/home/runner
# Add the Python "User Script Directory" to the PATH
ENV PATH="${PATH}:${HOME}/.local/bin"
ENV ImageOS=ubuntu20

View File

@@ -9,14 +9,6 @@ if [ ! -z "${STARTUP_DELAY_IN_SECONDS}" ]; then
sleep ${STARTUP_DELAY_IN_SECONDS}
fi
if [[ "${DISABLE_WAIT_FOR_DOCKER}" != "true" ]] && [[ "${DOCKER_ENABLED}" == "true" ]]; then
log.debug 'Docker enabled runner detected and Docker daemon wait is enabled'
log.debug 'Waiting until Docker is available or the timeout is reached'
timeout 120s bash -c 'until docker ps ;do sleep 1; done'
else
log.notice 'Docker wait check skipped. Either Docker is disabled or the wait is disabled, continuing with entrypoint'
fi
if [ -z "${GITHUB_URL}" ]; then
log.debug 'Working with public GitHub'
GITHUB_URL="https://github.com/"
@@ -140,13 +132,12 @@ if [ -z "${UNITTEST:-}" ] && [ -e ./externalstmp ]; then
mv ./externalstmp/* ./externals/
fi
args=()
if [ "${RUNNER_FEATURE_FLAG_ONCE:-}" == "true" -a "${RUNNER_EPHEMERAL}" == "true" ]; then
args+=(--once)
log.warning 'Passing --once is deprecated and will be removed as an option' \
'from the image and actions-runner-controller at the release of 0.25.0.' \
'Upgrade to GHES => 3.3 to continue using actions-runner-controller. If' \
'you are using github.com ignore this warning.'
if [[ "${DISABLE_WAIT_FOR_DOCKER}" != "true" ]] && [[ "${DOCKER_ENABLED}" == "true" ]]; then
log.debug 'Docker enabled runner detected and Docker daemon wait is enabled'
log.debug 'Waiting until Docker is available or the timeout is reached'
timeout 120s bash -c 'until docker ps ;do sleep 1; done'
else
log.notice 'Docker wait check skipped. Either Docker is disabled or the wait is disabled, continuing with entrypoint'
fi
# Unset entrypoint environment variables so they don't leak into the runner environment
@@ -164,4 +155,4 @@ unset RUNNER_NAME RUNNER_REPO RUNNER_TOKEN STARTUP_DELAY_IN_SECONDS DISABLE_WAIT
if [ -z "${UNITTEST:-}" ]; then
mapfile -t env </etc/environment
fi
exec env -- "${env[@]}" ./run.sh "${args[@]}"
exec env -- "${env[@]}" ./run.sh

View File

@@ -20,7 +20,9 @@ function wait_for_process () {
sudo /bin/bash <<SCRIPT
mkdir -p /etc/docker
echo "{}" > /etc/docker/daemon.json
if [ ! -f /etc/docker/daemon.json ]; then
echo "{}" > /etc/docker/daemon.json
fi
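# Merge the requested MTU into the existing daemon.json with jq, preserving any other settings already present.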
if [ -n "${MTU}" ]; then
jq ".\"mtu\" = ${MTU}" /etc/docker/daemon.json > /tmp/.daemon.json && mv /tmp/.daemon.json /etc/docker/daemon.json

View File

@@ -5,10 +5,12 @@ import (
"fmt"
"github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/go-logr/logr"
)
type Simulator struct {
Client *github.Client
Log logr.Logger
}
func (c *Simulator) GetRunnerGroupsVisibleToRepository(ctx context.Context, org, repo string, managed *VisibleRunnerGroups) (*VisibleRunnerGroups, error) {
@@ -24,6 +26,10 @@ func (c *Simulator) GetRunnerGroupsVisibleToRepository(ctx context.Context, org,
return visible, err
}
if c.Log.V(3).Enabled() {
c.Log.V(3).Info("ListOrganizationRunnerGroupsForRepository succeeded", "runnerGroups", runnerGroups)
}
for _, runnerGroup := range runnerGroups {
ref := NewRunnerGroupFromGitHub(runnerGroup)

View File

@@ -5,7 +5,7 @@ import (
"sort"
"strings"
"github.com/google/go-github/v39/github"
"github.com/google/go-github/v45/github"
)
type RunnerGroupScope int

View File

@@ -13,52 +13,15 @@ import (
"sigs.k8s.io/yaml"
)
type DeployKind int
const (
RunnerSets DeployKind = iota
RunnerDeployments
)
var (
controllerImageRepo = "actionsrunnercontrollere2e/actions-runner-controller"
controllerImageTag = "e2e"
controllerImage = testing.Img(controllerImageRepo, controllerImageTag)
runnerImageRepo = "actionsrunnercontrollere2e/actions-runner"
runnerDindImageRepo = "actionsrunnercontrollere2e/actions-runner-dind"
runnerImageTag = "e2e"
runnerImage = testing.Img(runnerImageRepo, runnerImageTag)
runnerDindImage = testing.Img(runnerDindImageRepo, runnerImageTag)
prebuildImages = []testing.ContainerImage{
controllerImage,
runnerImage,
runnerDindImage,
}
builds = []testing.DockerBuild{
{
Dockerfile: "../../Dockerfile",
Args: []testing.BuildArg{},
Image: controllerImage,
EnableBuildX: true,
},
{
Dockerfile: "../../runner/actions-runner.dockerfile",
Args: []testing.BuildArg{
{
Name: "RUNNER_VERSION",
Value: "2.291.1",
},
},
Image: runnerImage,
},
{
Dockerfile: "../../runner/actions-runner-dind.dockerfile",
Args: []testing.BuildArg{
{
Name: "RUNNER_VERSION",
Value: "2.291.1",
},
},
Image: runnerDindImage,
},
}
certManagerVersion = "v1.1.1"
certManagerVersion = "v1.8.2"
images = []testing.ContainerImage{
testing.Img("docker", "dind"),
@@ -68,13 +31,6 @@ var (
testing.Img("quay.io/jetstack/cert-manager-webhook", certManagerVersion),
}
commonScriptEnv = []string{
"SYNC_PERIOD=" + "30m",
"NAME=" + controllerImageRepo,
"VERSION=" + controllerImageTag,
"RUNNER_TAG=" + runnerImageTag,
}
testResultCMNamePrefix = "test-result-"
)
@@ -99,18 +55,47 @@ var (
// whenever the whole test failed, so that you can immediately start fixing issues and rerun individual tests.
// See the below link for how terratest handles this:
// https://terratest.gruntwork.io/docs/testing-best-practices/iterating-locally-using-test-stages/
//
// This function leaves PVs undeleted. To delete PVs, run:
// kubectl get pv -ojson | jq -rMc '.items[] | select(.status.phase == "Available") | {name:.metadata.name, status:.status.phase} | .name' | xargs kubectl delete pv
//
// If your disk fills up after dozens of test runs, try:
// docker system prune
// and
// kind delete cluster --name teste2e
//
// The former tends to release 200MB-3GB, and the latter can release on the order of 100GB, because a kind node contains the loaded container images and
// (if you use it) the local provisioner's disk image (which is implemented as a directory within the kind node).
func TestE2E(t *testing.T) {
if testing.Short() {
t.Skip("Skipped as -short is set")
}
env := initTestEnv(t)
env.useRunnerSet = true
k8sMinorVer := os.Getenv("ARC_E2E_KUBE_VERSION")
skipRunnerCleanUp := os.Getenv("ARC_E2E_SKIP_RUNNER_CLEANUP") != ""
retainCluster := os.Getenv("ARC_E2E_RETAIN_CLUSTER") != ""
skipTestIDCleanUp := os.Getenv("ARC_E2E_SKIP_TEST_ID_CLEANUP") != ""
skipArgoTunnelCleanUp := os.Getenv("ARC_E2E_SKIP_ARGO_TUNNEL_CLEAN_UP") != ""
vars := buildVars(os.Getenv("ARC_E2E_IMAGE_REPO"))
env := initTestEnv(t, k8sMinorVer, vars)
if vt := os.Getenv("ARC_E2E_VERIFY_TIMEOUT"); vt != "" {
var err error
env.VerifyTimeout, err = time.ParseDuration(vt)
if err != nil {
t.Fatalf("Failed to parse duration %q: %v", vt, err)
}
}
t.Run("build and load images", func(t *testing.T) {
env.buildAndLoadImages(t)
})
if t.Failed() {
return
}
t.Run("install cert-manager", func(t *testing.T) {
env.installCertManager(t)
})
@@ -119,72 +104,127 @@ func TestE2E(t *testing.T) {
return
}
t.Run("install actions-runner-controller and runners", func(t *testing.T) {
env.installActionsRunnerController(t)
t.Run("RunnerSets", func(t *testing.T) {
var (
testID string
)
t.Run("get or generate test ID", func(t *testing.T) {
testID = env.GetOrGenerateTestID(t)
})
if !skipTestIDCleanUp {
t.Cleanup(func() {
env.DeleteTestID(t)
})
}
t.Run("install actions-runner-controller v0.24.1", func(t *testing.T) {
env.installActionsRunnerController(t, "summerwind/actions-runner-controller", "v0.24.1", testID)
})
t.Run("install argo-tunnel", func(t *testing.T) {
env.installArgoTunnel(t)
})
if !skipArgoTunnelCleanUp {
t.Cleanup(func() {
env.uninstallArgoTunnel(t)
})
}
t.Run("deploy runners", func(t *testing.T) {
env.deploy(t, RunnerSets, testID)
})
if !skipRunnerCleanUp {
t.Cleanup(func() {
env.undeploy(t, RunnerSets, testID)
})
}
t.Run("install edge actions-runner-controller", func(t *testing.T) {
env.installActionsRunnerController(t, vars.controllerImageRepo, vars.controllerImageTag, testID)
})
if t.Failed() {
return
}
t.Run("Install workflow", func(t *testing.T) {
env.installActionsWorkflow(t, RunnerSets, testID)
})
if t.Failed() {
return
}
t.Run("Verify workflow run result", func(t *testing.T) {
env.verifyActionsWorkflowRun(t, testID)
})
})
if t.Failed() {
return
}
t.Run("RunnerDeployments", func(t *testing.T) {
var (
testID string
)
t.Run("Install workflow", func(t *testing.T) {
env.installActionsWorkflow(t)
t.Run("get or generate test ID", func(t *testing.T) {
testID = env.GetOrGenerateTestID(t)
})
if !skipTestIDCleanUp {
t.Cleanup(func() {
env.DeleteTestID(t)
})
}
t.Run("install actions-runner-controller v0.24.1", func(t *testing.T) {
env.installActionsRunnerController(t, "summerwind/actions-runner-controller", "v0.24.1", testID)
})
t.Run("install argo-tunnel", func(t *testing.T) {
env.installArgoTunnel(t)
})
if !skipArgoTunnelCleanUp {
t.Cleanup(func() {
env.uninstallArgoTunnel(t)
})
}
t.Run("deploy runners", func(t *testing.T) {
env.deploy(t, RunnerDeployments, testID)
})
if !skipRunnerCleanUp {
t.Cleanup(func() {
env.undeploy(t, RunnerDeployments, testID)
})
}
t.Run("install edge actions-runner-controller", func(t *testing.T) {
env.installActionsRunnerController(t, vars.controllerImageRepo, vars.controllerImageTag, testID)
})
if t.Failed() {
return
}
t.Run("Install workflow", func(t *testing.T) {
env.installActionsWorkflow(t, RunnerDeployments, testID)
})
if t.Failed() {
return
}
t.Run("Verify workflow run result", func(t *testing.T) {
env.verifyActionsWorkflowRun(t, testID)
})
})
if t.Failed() {
return
}
t.Run("Verify workflow run result", func(t *testing.T) {
env.verifyActionsWorkflowRun(t)
})
if os.Getenv("ARC_E2E_NO_CLEANUP") != "" {
t.FailNow()
}
}
func TestE2ERunnerDeploy(t *testing.T) {
if testing.Short() {
t.Skip("Skipped as -short is set")
}
env := initTestEnv(t)
env.useApp = true
t.Run("build and load images", func(t *testing.T) {
env.buildAndLoadImages(t)
})
t.Run("install cert-manager", func(t *testing.T) {
env.installCertManager(t)
})
if t.Failed() {
return
}
t.Run("install actions-runner-controller and runners", func(t *testing.T) {
env.installActionsRunnerController(t)
})
if t.Failed() {
return
}
t.Run("Install workflow", func(t *testing.T) {
env.installActionsWorkflow(t)
})
if t.Failed() {
return
}
t.Run("Verify workflow run result", func(t *testing.T) {
env.verifyActionsWorkflowRun(t)
})
if os.Getenv("ARC_E2E_NO_CLEANUP") != "" {
if retainCluster {
t.FailNow()
}
}
@@ -192,41 +232,122 @@ func TestE2ERunnerDeploy(t *testing.T) {
type env struct {
*testing.Env
useRunnerSet bool
Kind *testing.Kind
// Uses GITHUB_APP_ID, GITHUB_APP_INSTALLATION_ID, and GITHUB_APP_PRIVATE_KEY
// to let ARC authenticate as a GitHub App
useApp bool
testID string
testName string
repoToCommit string
appID, appInstallationID, appPrivateKeyFile string
runnerLabel, githubToken, testRepo, testOrg, testOrgRepo string
githubTokenWebhook string
testEnterprise string
testEphemeral string
scaleDownDelaySecondsAfterScaleOut int64
minReplicas int64
dockerdWithinRunnerContainer bool
testJobs []job
testName string
repoToCommit string
appID, appInstallationID, appPrivateKeyFile string
githubToken, testRepo, testOrg, testOrgRepo string
githubTokenWebhook string
testEnterprise string
testEphemeral string
scaleDownDelaySecondsAfterScaleOut int64
minReplicas int64
dockerdWithinRunnerContainer bool
remoteKubeconfig string
imagePullSecretName string
vars vars
VerifyTimeout time.Duration
}
func initTestEnv(t *testing.T) *env {
type vars struct {
controllerImageRepo, controllerImageTag string
runnerImageRepo string
runnerDindImageRepo string
prebuildImages []testing.ContainerImage
builds []testing.DockerBuild
commonScriptEnv []string
}
func buildVars(repo string) vars {
if repo == "" {
repo = "actionsrunnercontrollere2e"
}
var (
controllerImageRepo = repo + "/actions-runner-controller"
controllerImageTag = "e2e"
controllerImage = testing.Img(controllerImageRepo, controllerImageTag)
runnerImageRepo = repo + "/actions-runner"
runnerDindImageRepo = repo + "/actions-runner-dind"
runnerImageTag = "e2e"
runnerImage = testing.Img(runnerImageRepo, runnerImageTag)
runnerDindImage = testing.Img(runnerDindImageRepo, runnerImageTag)
)
var vs vars
vs.controllerImageRepo, vs.controllerImageTag = controllerImageRepo, controllerImageTag
vs.runnerDindImageRepo = runnerDindImageRepo
vs.runnerImageRepo = runnerImageRepo
// vs.controllerImage, vs.controllerImageTag
vs.prebuildImages = []testing.ContainerImage{
controllerImage,
runnerImage,
runnerDindImage,
}
vs.builds = []testing.DockerBuild{
{
Dockerfile: "../../Dockerfile",
Args: []testing.BuildArg{},
Image: controllerImage,
EnableBuildX: true,
},
{
Dockerfile: "../../runner/actions-runner.dockerfile",
Args: []testing.BuildArg{
{
Name: "RUNNER_VERSION",
Value: "2.294.0",
},
},
Image: runnerImage,
EnableBuildX: true,
},
{
Dockerfile: "../../runner/actions-runner-dind.dockerfile",
Args: []testing.BuildArg{
{
Name: "RUNNER_VERSION",
Value: "2.294.0",
},
},
Image: runnerDindImage,
EnableBuildX: true,
},
}
vs.commonScriptEnv = []string{
"SYNC_PERIOD=" + "30s",
"RUNNER_TAG=" + runnerImageTag,
}
return vs
}
func initTestEnv(t *testing.T, k8sMinorVer string, vars vars) *env {
t.Helper()
testingEnv := testing.Start(t, testing.Preload(images...))
testingEnv := testing.Start(t, k8sMinorVer)
e := &env{Env: testingEnv}
id := e.ID()
testName := t.Name() + " " + id
testName := t.Name()
t.Logf("Initializing test with name %s", testName)
e.testID = id
e.testName = testName
e.runnerLabel = "test-" + id
e.githubToken = testing.Getenv(t, "GITHUB_TOKEN")
e.appID = testing.Getenv(t, "GITHUB_APP_ID")
e.appInstallationID = testing.Getenv(t, "GITHUB_APP_INSTALLATION_ID")
@@ -238,7 +359,23 @@ func initTestEnv(t *testing.T) *env {
e.testOrgRepo = testing.Getenv(t, "TEST_ORG_REPO", "")
e.testEnterprise = testing.Getenv(t, "TEST_ENTERPRISE", "")
e.testEphemeral = testing.Getenv(t, "TEST_EPHEMERAL", "")
e.testJobs = createTestJobs(id, testResultCMNamePrefix, 6)
e.remoteKubeconfig = testing.Getenv(t, "ARC_E2E_REMOTE_KUBECONFIG", "")
e.imagePullSecretName = testing.Getenv(t, "ARC_E2E_IMAGE_PULL_SECRET_NAME", "")
e.vars = vars
if e.remoteKubeconfig == "" {
e.Kind = testing.StartKind(t, k8sMinorVer, testing.Preload(images...))
e.Env.Kubeconfig = e.Kind.Kubeconfig()
} else {
e.Env.Kubeconfig = e.remoteKubeconfig
// Kind automatically installs https://github.com/rancher/local-path-provisioner for PVs.
// But assuming the remote cluster isn't a kind Kubernetes cluster,
// we need to install a provisioner manually.
// Here, we install the local-path-provisioner on the remote cluster too,
// so that we won't suffer from E2E failures due to the provisioner difference.
e.KubectlApply(t, "https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml", testing.KubectlConfig{})
}
e.scaleDownDelaySecondsAfterScaleOut, _ = strconv.ParseInt(testing.Getenv(t, "TEST_RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT", "10"), 10, 32)
e.minReplicas, _ = strconv.ParseInt(testing.Getenv(t, "TEST_RUNNER_MIN_REPLICAS", "1"), 10, 32)
@@ -258,8 +395,29 @@ func (e *env) f() {
func (e *env) buildAndLoadImages(t *testing.T) {
t.Helper()
e.DockerBuild(t, builds)
e.KindLoadImages(t, prebuildImages)
e.DockerBuild(t, e.vars.builds)
if e.remoteKubeconfig == "" {
e.KindLoadImages(t, e.vars.prebuildImages)
} else {
// If it fails with `no basic auth credentials` here, you might have missed logging into the container registry beforehand.
// For ECR, run something like:
// aws ecr get-login-password | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com
// Also note that the authenticated session can expire in a day or so (probably depending on your AWS config),
// so you may want to script the docker login step before running the E2E test.
e.DockerPush(t, e.vars.prebuildImages)
}
}
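// KindLoadImages side-loads the prebuilt container images into the kind cluster nodes, bounded by a 5-minute timeout.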
func (e *env) KindLoadImages(t *testing.T, prebuildImages []testing.ContainerImage) {
t.Helper()
ctx, cancel := context.WithTimeout(context.Background(), 300*time.Second)
defer cancel()
if err := e.Kind.LoadImages(ctx, prebuildImages); err != nil {
t.Fatal(err)
}
}
func (e *env) installCertManager(t *testing.T) {
@@ -279,35 +437,22 @@ func (e *env) installCertManager(t *testing.T) {
e.KubectlWaitUntilDeployAvailable(t, "cert-manager", waitCfg.WithTimeout(60*time.Second))
}
func (e *env) installActionsRunnerController(t *testing.T) {
func (e *env) installActionsRunnerController(t *testing.T, repo, tag, testID string) {
t.Helper()
e.createControllerNamespaceAndServiceAccount(t)
scriptEnv := []string{
"KUBECONFIG=" + e.Kubeconfig(),
"KUBECONFIG=" + e.Kubeconfig,
"ACCEPTANCE_TEST_DEPLOYMENT_TOOL=" + "helm",
}
if e.useRunnerSet {
scriptEnv = append(scriptEnv, "USE_RUNNERSET=1")
} else {
scriptEnv = append(scriptEnv, "USE_RUNNERSET=false")
}
varEnv := []string{
"TEST_ENTERPRISE=" + e.testEnterprise,
"TEST_REPO=" + e.testRepo,
"TEST_ORG=" + e.testOrg,
"TEST_ORG_REPO=" + e.testOrgRepo,
"WEBHOOK_GITHUB_TOKEN=" + e.githubTokenWebhook,
"RUNNER_LABEL=" + e.runnerLabel,
"TEST_ID=" + e.testID,
"TEST_EPHEMERAL=" + e.testEphemeral,
fmt.Sprintf("RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT=%d", e.scaleDownDelaySecondsAfterScaleOut),
fmt.Sprintf("REPO_RUNNER_MIN_REPLICAS=%d", e.minReplicas),
fmt.Sprintf("ORG_RUNNER_MIN_REPLICAS=%d", e.minReplicas),
fmt.Sprintf("ENTERPRISE_RUNNER_MIN_REPLICAS=%d", e.minReplicas),
"TEST_ID=" + testID,
"NAME=" + repo,
"VERSION=" + tag,
"IMAGE_PULL_SECRET=" + e.imagePullSecretName,
}
if e.useApp {
@@ -324,22 +469,96 @@ func (e *env) installActionsRunnerController(t *testing.T) {
)
}
scriptEnv = append(scriptEnv, varEnv...)
scriptEnv = append(scriptEnv, e.vars.commonScriptEnv...)
e.RunScript(t, "../../acceptance/deploy.sh", testing.ScriptConfig{Dir: "../..", Env: scriptEnv})
}
func (e *env) deploy(t *testing.T, kind DeployKind, testID string) {
t.Helper()
e.do(t, "apply", kind, testID)
}
func (e *env) undeploy(t *testing.T, kind DeployKind, testID string) {
t.Helper()
e.do(t, "delete", kind, testID)
}
func (e *env) do(t *testing.T, op string, kind DeployKind, testID string) {
t.Helper()
e.createControllerNamespaceAndServiceAccount(t)
scriptEnv := []string{
"KUBECONFIG=" + e.Kubeconfig,
"OP=" + op,
}
switch kind {
case RunnerSets:
scriptEnv = append(scriptEnv, "USE_RUNNERSET=1")
case RunnerDeployments:
scriptEnv = append(scriptEnv, "USE_RUNNERSET=false")
default:
t.Fatalf("Invalid deploy kind %v", kind)
}
varEnv := []string{
"TEST_ENTERPRISE=" + e.testEnterprise,
"TEST_REPO=" + e.testRepo,
"TEST_ORG=" + e.testOrg,
"TEST_ORG_REPO=" + e.testOrgRepo,
"RUNNER_LABEL=" + e.runnerLabel(testID),
"TEST_EPHEMERAL=" + e.testEphemeral,
fmt.Sprintf("RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT=%d", e.scaleDownDelaySecondsAfterScaleOut),
fmt.Sprintf("REPO_RUNNER_MIN_REPLICAS=%d", e.minReplicas),
fmt.Sprintf("ORG_RUNNER_MIN_REPLICAS=%d", e.minReplicas),
fmt.Sprintf("ENTERPRISE_RUNNER_MIN_REPLICAS=%d", e.minReplicas),
}
if e.dockerdWithinRunnerContainer {
varEnv = append(varEnv,
"RUNNER_DOCKERD_WITHIN_RUNNER_CONTAINER=true",
"RUNNER_NAME="+runnerDindImageRepo,
"RUNNER_NAME="+e.vars.runnerDindImageRepo,
)
} else {
varEnv = append(varEnv,
"RUNNER_DOCKERD_WITHIN_RUNNER_CONTAINER=false",
"RUNNER_NAME="+runnerImageRepo,
"RUNNER_NAME="+e.vars.runnerImageRepo,
)
}
scriptEnv = append(scriptEnv, varEnv...)
scriptEnv = append(scriptEnv, commonScriptEnv...)
scriptEnv = append(scriptEnv, e.vars.commonScriptEnv...)
e.RunScript(t, "../../acceptance/deploy.sh", testing.ScriptConfig{Dir: "../..", Env: scriptEnv})
e.RunScript(t, "../../acceptance/deploy_runners.sh", testing.ScriptConfig{Dir: "../..", Env: scriptEnv})
}
func (e *env) installArgoTunnel(t *testing.T) {
e.doArgoTunnel(t, "apply")
}
func (e *env) uninstallArgoTunnel(t *testing.T) {
e.doArgoTunnel(t, "delete")
}
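// doArgoTunnel applies or deletes the Argo Tunnel resources via acceptance/argotunnel.sh, driven by the TUNNEL_ID, TUNNEL_NAME, and TUNNEL_HOSTNAME environment variables.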
func (e *env) doArgoTunnel(t *testing.T, op string) {
t.Helper()
scriptEnv := []string{
"KUBECONFIG=" + e.Kubeconfig,
"OP=" + op,
"TUNNEL_ID=" + os.Getenv("TUNNEL_ID"),
"TUNNE_NAME=" + os.Getenv("TUNNEL_NAME"),
"TUNNEL_HOSTNAME=" + os.Getenv("TUNNEL_HOSTNAME"),
}
e.RunScript(t, "../../acceptance/argotunnel.sh", testing.ScriptConfig{Dir: "../..", Env: scriptEnv})
}
func (e *env) runnerLabel(testID string) string {
return "test-" + testID
}
func (e *env) createControllerNamespaceAndServiceAccount(t *testing.T) {
@@ -349,16 +568,28 @@ func (e *env) createControllerNamespaceAndServiceAccount(t *testing.T) {
e.KubectlEnsureClusterRoleBindingServiceAccount(t, "default-admin", "cluster-admin", "default:default", testing.KubectlConfig{})
}
func (e *env) installActionsWorkflow(t *testing.T) {
func (e *env) installActionsWorkflow(t *testing.T, kind DeployKind, testID string) {
t.Helper()
installActionsWorkflow(t, e.testName, e.runnerLabel, testResultCMNamePrefix, e.repoToCommit, e.testJobs)
installActionsWorkflow(t, e.testName+" "+testID, e.runnerLabel(testID), testResultCMNamePrefix, e.repoToCommit, kind, e.testJobs(testID))
}
func (e *env) verifyActionsWorkflowRun(t *testing.T) {
func (e *env) testJobs(testID string) []job {
return createTestJobs(testID, testResultCMNamePrefix, 6)
}
func (e *env) verifyActionsWorkflowRun(t *testing.T, testID string) {
t.Helper()
verifyActionsWorkflowRun(t, e.Env, e.testJobs)
verifyActionsWorkflowRun(t, e.Env, e.testJobs(testID), e.verifyTimeout())
}
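// verifyTimeout returns the configured workflow verification timeout, falling back to 8 minutes when ARC_E2E_VERIFY_TIMEOUT is not set.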
func (e *env) verifyTimeout() time.Duration {
if e.VerifyTimeout > 0 {
return e.VerifyTimeout
}
return 8 * 60 * time.Second
}
type job struct {
@@ -381,7 +612,7 @@ func createTestJobs(id, testResultCMNamePrefix string, numJobs int) []job {
const Branch = "main"
func installActionsWorkflow(t *testing.T, testName, runnerLabel, testResultCMNamePrefix, testRepo string, testJobs []job) {
func installActionsWorkflow(t *testing.T, testName, runnerLabel, testResultCMNamePrefix, testRepo string, kind DeployKind, testJobs []job) {
t.Helper()
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
@@ -398,45 +629,70 @@ func installActionsWorkflow(t *testing.T, testName, runnerLabel, testResultCMNam
Jobs: map[string]testing.Job{},
}
kubernetesContainerMode := os.Getenv("TEST_CONTAINER_MODE") == "kubernetes"
var container string
if kubernetesContainerMode {
container = "golang:1.18"
}
for _, j := range testJobs {
wf.Jobs[j.name] = testing.Job{
RunsOn: runnerLabel,
Steps: []testing.Step{
{
Uses: testing.ActionsCheckoutV2,
},
{
steps := []testing.Step{
{
Uses: testing.ActionsCheckout,
},
}
if !kubernetesContainerMode {
if kind == RunnerDeployments {
steps = append(steps,
testing.Step{
Run: "sudo mkdir -p \"${RUNNER_TOOL_CACHE}\" \"${HOME}/.cache\" \"/var/lib/docker\"",
},
)
}
steps = append(steps,
testing.Step{
// This might be the easiest way to handle permissions without use of securityContext
// https://stackoverflow.com/questions/50156124/kubernetes-nfs-persistent-volumes-permission-denied#comment107483717_53186320
Run: "sudo chmod 777 -R \"${RUNNER_TOOL_CACHE}\" \"${HOME}/.cache\" \"/var/lib/docker\"",
},
{
testing.Step{
// This might be the easiest way to handle permissions without use of securityContext
// https://stackoverflow.com/questions/50156124/kubernetes-nfs-persistent-volumes-permission-denied#comment107483717_53186320
Run: "ls -lah \"${RUNNER_TOOL_CACHE}\" \"${HOME}/.cache\" \"/var/lib/docker\"",
},
{
testing.Step{
Uses: "actions/setup-go@v3",
With: &testing.With{
GoVersion: "1.18.2",
},
},
{
Run: "go version",
},
{
Run: "go build .",
},
{
)
}
steps = append(steps,
testing.Step{
Run: "go version",
},
testing.Step{
Run: "go build .",
},
)
if !kubernetesContainerMode {
steps = append(steps,
testing.Step{
// https://github.com/docker/buildx/issues/413#issuecomment-710660155
// To prevent setup-buildx-action from failing with:
// error: could not create a builder instance with TLS data loaded from environment. Please use `docker context create <context-name>` to create a context for current environment and then create a builder instance with `docker buildx create <context-name>`
Run: "docker context create mycontext",
},
{
testing.Step{
Run: "docker context use mycontext",
},
{
testing.Step{
Name: "Set up Docker Buildx",
Uses: "docker/setup-buildx-action@v1",
With: &testing.With{
@@ -447,30 +703,36 @@ func installActionsWorkflow(t *testing.T, testName, runnerLabel, testResultCMNam
Install: false,
},
},
{
testing.Step{
Run: "docker buildx build --platform=linux/amd64 " +
"--cache-from=type=local,src=/home/runner/.cache/buildx " +
"--cache-to=type=local,dest=/home/runner/.cache/buildx-new,mode=max " +
".",
},
{
testing.Step{
// https://github.com/docker/build-push-action/blob/master/docs/advanced/cache.md#local-cache
// See https://github.com/moby/buildkit/issues/1896 for why this is needed
Run: "rm -rf /home/runner/.cache/buildx && mv /home/runner/.cache/buildx-new /home/runner/.cache/buildx",
},
{
testing.Step{
Run: "ls -lah /home/runner/.cache/*",
},
{
testing.Step{
Uses: "azure/setup-kubectl@v1",
With: &testing.With{
Version: "v1.20.2",
},
},
{
testing.Step{
Run: fmt.Sprintf("./test.sh %s %s", t.Name(), j.testArg),
},
},
)
}
wf.Jobs[j.name] = testing.Job{
RunsOn: runnerLabel,
Container: container,
Steps: steps,
}
}
@@ -504,7 +766,7 @@ kubectl create cm %s$id --from-literal=status=ok
}
}
func verifyActionsWorkflowRun(t *testing.T, env *testing.Env, testJobs []job) {
func verifyActionsWorkflowRun(t *testing.T, env *testing.Env, testJobs []job, timeout time.Duration) {
t.Helper()
var expected []string
@@ -522,7 +784,7 @@ func verifyActionsWorkflowRun(t *testing.T, env *testing.Env, testJobs []job) {
testResultCMName := testJobs[i].configMapName
kubectlEnv := []string{
"KUBECONFIG=" + env.Kubeconfig(),
"KUBECONFIG=" + env.Kubeconfig,
}
cmCfg := testing.KubectlConfig{
@@ -554,5 +816,5 @@ func verifyActionsWorkflowRun(t *testing.T, env *testing.Env, testJobs []job) {
}
return results, err
}, 3*60*time.Second, 10*time.Second).Should(gomega.Equal(expected))
}, timeout, 30*time.Second).Should(gomega.Equal(expected))
}

View File

@@ -71,3 +71,18 @@ func (k *Docker) dockerBuildCombinedOutput(ctx context.Context, build DockerBuil
return k.CombinedOutput(cmd)
}
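// Push uploads each of the given images to its registry so that a remote (non-kind) cluster can pull them during E2E runs.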
func (k *Docker) Push(ctx context.Context, images []ContainerImage) error {
for _, img := range images {
_, err := k.CombinedOutput(dockerPushCmd(ctx, img.Repo, img.Tag))
if err != nil {
return err
}
}
return nil
}
func dockerPushCmd(ctx context.Context, repo, tag string) *exec.Cmd {
return exec.CommandContext(ctx, "docker", "push", repo+":"+tag)
}

View File

@@ -86,6 +86,16 @@ func (k *Kubectl) CreateCMLiterals(ctx context.Context, name string, literals ma
return nil
}
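// DeleteCM deletes the named ConfigMap; the E2E harness uses it to clean up the persisted test ID.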
func (k *Kubectl) DeleteCM(ctx context.Context, name string, cfg KubectlConfig) error {
args := []string{"cm", name}
if _, err := k.CombinedOutput(k.kubectlCmd(ctx, "delete", args, cfg)); err != nil {
return err
}
return nil
}
func (k *Kubectl) Apply(ctx context.Context, path string, cfg KubectlConfig) error {
if _, err := k.CombinedOutput(k.kubectlCmd(ctx, "apply", []string{"-f", path}, cfg)); err != nil {
return err

View File

@@ -17,6 +17,12 @@ type T = testing.T
var Short = testing.Short
var images = map[string]string{
"1.22": "kindest/node:v1.22.9@sha256:8135260b959dfe320206eb36b3aeda9cffcb262f4b44cda6b33f7bb73f453105",
"1.23": "kindest/node:v1.23.6@sha256:b1fa224cc6c7ff32455e0b1fd9cbfd3d3bc87ecaa8fcb06961ed1afb3db0f9ae",
"1.24": "kindest/node:v1.24.0@sha256:0866296e693efe1fed79d5e6c7af8df71fc73ae45e3679af05342239cdc5bc8e",
}
func Img(repo, tag string) ContainerImage {
return ContainerImage{
Repo: repo,
@@ -28,22 +34,17 @@ func Img(repo, tag string) ContainerImage {
// All of its methods are idempotent so that you can safely call it from within each subtest
// and you can rerun the individual subtest until it works as you expect.
type Env struct {
kind *Kind
docker *Docker
Kubectl *Kubectl
bash *Bash
id string
Kubeconfig string
docker *Docker
Kubectl *Kubectl
bash *Bash
}
func Start(t *testing.T, opts ...Option) *Env {
func Start(t *testing.T, k8sMinorVer string) *Env {
t.Helper()
k := StartKind(t, opts...)
var env Env
env.kind = k
d := &Docker{}
env.docker = d
@@ -56,12 +57,16 @@ func Start(t *testing.T, opts ...Option) *Env {
env.bash = bash
//
return &env
}
func (e *Env) GetOrGenerateTestID(t *testing.T) string {
kctl := e.Kubectl
cmKey := "id"
kubectlEnv := []string{
"KUBECONFIG=" + k.Kubeconfig(),
"KUBECONFIG=" + e.Kubeconfig,
}
cmCfg := KubectlConfig{
@@ -82,13 +87,27 @@ func Start(t *testing.T, opts ...Option) *Env {
}
}
env.id = m[cmKey]
return &env
return m[cmKey]
}
func (e *Env) ID() string {
return e.id
func (e *Env) DeleteTestID(t *testing.T) {
kctl := e.Kubectl
kubectlEnv := []string{
"KUBECONFIG=" + e.Kubeconfig,
}
cmCfg := KubectlConfig{
Env: kubectlEnv,
}
testInfoName := "test-info"
ctx, cancel := context.WithTimeout(context.Background(), 300*time.Second)
defer cancel()
if err := kctl.DeleteCM(ctx, testInfoName, cmCfg); err != nil {
t.Fatal(err)
}
}
func (e *Env) DockerBuild(t *testing.T, builds []DockerBuild) {
@@ -102,13 +121,13 @@ func (e *Env) DockerBuild(t *testing.T, builds []DockerBuild) {
}
}
func (e *Env) KindLoadImages(t *testing.T, prebuildImages []ContainerImage) {
func (e *Env) DockerPush(t *testing.T, images []ContainerImage) {
t.Helper()
ctx, cancel := context.WithTimeout(context.Background(), 300*time.Second)
defer cancel()
if err := e.kind.LoadImages(ctx, prebuildImages); err != nil {
if err := e.docker.Push(ctx, images); err != nil {
t.Fatal(err)
}
}
@@ -120,7 +139,7 @@ func (e *Env) KubectlApply(t *testing.T, path string, cfg KubectlConfig) {
defer cancel()
kubectlEnv := []string{
"KUBECONFIG=" + e.kind.Kubeconfig(),
"KUBECONFIG=" + e.Kubeconfig,
}
cfg.Env = append(kubectlEnv, cfg.Env...)
@@ -137,7 +156,7 @@ func (e *Env) KubectlWaitUntilDeployAvailable(t *testing.T, name string, cfg Kub
defer cancel()
kubectlEnv := []string{
"KUBECONFIG=" + e.kind.Kubeconfig(),
"KUBECONFIG=" + e.Kubeconfig,
}
cfg.Env = append(kubectlEnv, cfg.Env...)
@@ -154,7 +173,7 @@ func (e *Env) KubectlEnsureNS(t *testing.T, name string, cfg KubectlConfig) {
defer cancel()
kubectlEnv := []string{
"KUBECONFIG=" + e.kind.Kubeconfig(),
"KUBECONFIG=" + e.Kubeconfig,
}
cfg.Env = append(kubectlEnv, cfg.Env...)
@@ -171,7 +190,7 @@ func (e *Env) KubectlEnsureClusterRoleBindingServiceAccount(t *testing.T, bindin
defer cancel()
kubectlEnv := []string{
"KUBECONFIG=" + e.kind.Kubeconfig(),
"KUBECONFIG=" + e.Kubeconfig,
}
cfg.Env = append(kubectlEnv, cfg.Env...)
@@ -183,10 +202,6 @@ func (e *Env) KubectlEnsureClusterRoleBindingServiceAccount(t *testing.T, bindin
}
}
func (e *Env) Kubeconfig() string {
return e.kind.Kubeconfig()
}
func (e *Env) RunScript(t *testing.T, path string, cfg ScriptConfig) {
t.Helper()
@@ -234,7 +249,7 @@ type ContainerImage struct {
Repo, Tag string
}
func StartKind(t *testing.T, opts ...Option) *Kind {
func StartKind(t *testing.T, k8sMinorVer string, opts ...Option) *Kind {
t.Helper()
invalidChars := []string{"/"}
@@ -249,7 +264,7 @@ func StartKind(t *testing.T, opts ...Option) *Kind {
k.Dir = t.TempDir()
kk := &k
if err := kk.Start(context.Background()); err != nil {
if err := kk.Start(context.Background(), k8sMinorVer); err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
@@ -306,7 +321,7 @@ func (k *Kind) Kubeconfig() string {
return k.kubeconfig
}
func (k *Kind) Start(ctx context.Context) error {
func (k *Kind) Start(ctx context.Context, k8sMinorVer string) error {
getNodes, err := k.CombinedOutput(k.kindGetNodesCmd(ctx, k.Name))
if err != nil {
return err
@@ -320,6 +335,8 @@ func (k *Kind) Start(ctx context.Context) error {
return err
}
image := images[k8sMinorVer]
kindConfig := []byte(fmt.Sprintf(`kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: %s
@@ -327,8 +344,10 @@ networking:
apiServerAddress: 0.0.0.0
nodes:
- role: control-plane
image: %s
- role: worker
`, k.Name))
image: %s
`, k.Name, image, image))
if err := os.WriteFile(f.Name(), kindConfig, 0644); err != nil {
return err

View File

@@ -1,7 +1,7 @@
package testing
const (
ActionsCheckoutV2 = "actions/checkout@v2"
ActionsCheckout = "actions/checkout@v3"
)
type Workflow struct {
@@ -30,8 +30,9 @@ type InputSpec struct {
}
type Job struct {
RunsOn string `json:"runs-on"`
Steps []Step `json:"steps"`
RunsOn string `json:"runs-on"`
Container string `json:"container,omitempty"`
Steps []Step `json:"steps"`
}
type Step struct {