Compare commits

..

30 Commits

Author SHA1 Message Date
Yusuke Kuoka
4ede0c18d0 Fix the new ct chart lint error 2022-07-15 10:23:33 +09:00
Yusuke Kuoka
9091d9b756 chart: Bump version/appVersion to 0.20.2/0.25.2 2022-07-15 10:23:33 +09:00
renovate[bot]
a09c2564d9 fix(deps): update module github.com/bradleyfalzon/ghinstallation/v2 to v2.1.0 (#1637)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:20:42 +09:00
renovate[bot]
a555c90fd5 chore(deps): update dependency golang to v1.18.4 (#1639)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-15 10:20:29 +09:00
Yusuke Kuoka
38644cf4e8 Remove redundant flags from webhook-based autoscaler (#1630)
* Remove redundant flags from webhook-based autoscaler

Ref #623

* fixup! Remove redundant flags from webhook-based autoscaler
2022-07-15 09:58:30 +09:00
Jonathan Wiemers
23f357db10 Adds way to allow additional environment variables from secretKeyRef (#1565)
* adds additionalFullEnv to allow additional secret refs

* Update charts/actions-runner-controller/templates/deployment.yaml

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>

* adds examples into values.yaml

* fix

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-15 09:57:30 +09:00
Felipe Galindo Sanchez
584745b67d Minor improvements for runner groups
- Add group in runners columns
- Add constant for runner group and labels
2022-07-15 09:47:25 +09:00
AJ Schmidt
df9592dc99 docs: Update README.md (#1645) 2022-07-13 18:13:11 +01:00
Yusuke Kuoka
8071ac7066 Remove github-api-cache-duration flag and code (#1631)
This removes the flag and code for the legacy GitHub API cache. We already migrated to fully use the new HTTP cache based API cache functionality which had been added via #1127 and available since ARC 0.22.0. Since then, the legacy one had been no-op and therefore removing it is safe.

Ref #1412
2022-07-12 20:37:24 +09:00
toast-gear
3c33eca501 docs: remove superfluous file names 2022-07-12 09:45:51 +09:00
toast-gear
aa827474b2 docs: clearer wording 2022-07-12 09:45:51 +09:00
toast-gear
c75c9f9226 docs: use consistent wording 2022-07-12 09:45:51 +09:00
toast-gear
c09a04ec01 docs: add default label considerations 2022-07-12 09:45:51 +09:00
Yusuke Kuoka
618276e3d3 Enhance support for multi-tenancy (#1371)
This enhances every ARC controller and the various K8s custom resources so that the user can now configure a custom GitHub API credentials (that is different from the default one configured per the ARC instance).

Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/1067#issuecomment-1043716646
2022-07-12 09:45:00 +09:00
renovate[bot]
18dd89c884 chore(deps): update azure/setup-helm action to v3.1 (#1628)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2022-07-12 09:19:02 +09:00
k.bigwheel (kazufumi nishida)
98b17dc0a5 Fix the dind image to work with the latest entrypoint.sh (#1624)
Fixes #1621
2022-07-12 09:11:04 +09:00
Giovanni Barillari
c658dcfa6d fix #1621: add missing COPY statements to dind docker image 2022-07-11 20:44:35 +09:00
renovate[bot]
c4996d4bbd fix(deps): update module sigs.k8s.io/controller-runtime to v0.12.3 2022-07-11 10:52:14 +09:00
Callum Tait
7a3fa4f362 docs: correct the comparison
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-11 10:43:09 +09:00
toast-gear
1bfd743e69 docs: add pod exmaple too 2022-07-11 10:43:09 +09:00
toast-gear
734f3bd63a docs: put shell k8s commands back 2022-07-11 10:43:09 +09:00
toast-gear
409dc4c114 docs: remove ephemeral and simplify 2022-07-11 10:43:09 +09:00
toast-gear
4b9a6c6700 docs: remove runner kind 2022-07-11 10:43:09 +09:00
Yusuke Kuoka
86e1a4a8f3 Fix helm lint error and the unability to install the chart with the default values 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
544d620bc3 e2e: Ensure ARC is roll-updated on deployment even if the container image tag name does not change 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
1cfe1974c4 Add missing job-related permissions to runner pods with k8s container mode 2022-07-10 16:16:32 +09:00
Yusuke Kuoka
7e4b6ebd6d chart: Add rbac.allowGrantingKubernetesContainerModePermissions 2022-07-10 16:16:32 +09:00
Felipe Galindo Sanchez
11cb9b7882 feat: allow to discover runner statuses (#1268)
* feat: allow to discover runner statuses

* fix manifests

* Bump runner version to 2.289.1 which includes the hooks support

* Add feedback from review

* Update reference to newRunnerPod

* Fix TestNewRunnerPodFromRunnerController and make hooks file names job specific

* Fix additional TestNewRunnerPod test

* Cover additional feedback from review

* fix rbac manager role

* Add permissions to service account for container mode if not provided

* Rename flag to runner.statusUpdateHook.enabled and fix needsServiceAccount

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-07-10 15:11:29 +09:00
Tamás Kádár
10b88bf070 Fix typos in README (#1613) 2022-07-10 08:49:35 +09:00
Callum Tait
8b619e7c6f chore: bump helm chart (#1619) 2022-07-10 08:25:55 +09:00
55 changed files with 1365 additions and 218 deletions

View File

@@ -31,7 +31,7 @@ jobs:
       fetch-depth: 0
     - name: Set up Helm
-      uses: azure/setup-helm@v3.0
+      uses: azure/setup-helm@v3.1
       with:
         version: ${{ env.HELM_VERSION }}

View File

@@ -26,7 +26,7 @@ jobs:
       fetch-depth: 0
     - name: Set up Helm
-      uses: azure/setup-helm@v3.0
+      uses: azure/setup-helm@v3.1
       with:
         version: ${{ env.HELM_VERSION }}

View File

@@ -1,5 +1,5 @@
 # Build the manager binary
-FROM --platform=$BUILDPLATFORM golang:1.18.3 as builder
+FROM --platform=$BUILDPLATFORM golang:1.18.4 as builder
 WORKDIR /workspace

README.md
View File

@@ -29,8 +29,9 @@ ToC:
   - [Webhook Driven Scaling](#webhook-driven-scaling)
   - [Autoscaling to/from 0](#autoscaling-tofrom-0)
   - [Scheduled Overrides](#scheduled-overrides)
-  - [Runner with DinD](#runner-with-dind)
-  - [Runner with k8s jobs](#runner-with-k8s-jobs)
+  - [Alternative Runners](#alternative-runners)
+    - [Runner with DinD](#runner-with-dind)
+    - [Runner with k8s jobs](#runner-with-k8s-jobs)
 - [Additional Tweaks](#additional-tweaks)
   - [Custom Volume mounts](#custom-volume-mounts)
   - [Runner Labels](#runner-labels)
@@ -39,6 +40,7 @@ ToC:
   - [Using IRSA (IAM Roles for Service Accounts) in EKS](#using-irsa-iam-roles-for-service-accounts-in-eks)
   - [Software Installed in the Runner Image](#software-installed-in-the-runner-image)
   - [Using without cert-manager](#using-without-cert-manager)
+- [Multitenancy](#multitenancy)
 - [Troubleshooting](#troubleshooting)
 - [Contributing](#contributing)
@@ -256,7 +258,7 @@ You can deploy multiple controllers either in a single shared namespace, or in a
 If you plan on installing all instances of the controller stack into a single namespace there are a few things you need to do for this to work.
-1. All resources per stack must have a unique, in the case of Helm this can be done by giving each install a unique release name, or via the `fullnameOverride` properties.
+1. All resources per stack must have a unique name, in the case of Helm this can be done by giving each install a unique release name, or via the `fullnameOverride` properties.
 2. `authSecret.name` needs to be unique per stack when each stack is tied to runners in different GitHub organizations and repositories AND you want your GitHub credentials to be narrowly scoped.
 3. `leaderElectionId` needs to be unique per stack. If this is not unique to the stack the controller tries to race onto the leader election lock resulting in only one stack working concurrently. Your controller will be stuck with a log message something like this `attempting to acquire leader lease arc-controllers/actions-runner-controller...`
 4. The MutatingWebhookConfiguration in each stack must include a namespace selector for that stack's corresponding runner namespace, this is already configured in the helm chart.
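To make points 1 and 2 above concrete, a minimal sketch of per-stack Helm values is shown below. The install names `arc-1`/`arc-2` and secret names are illustrative, not taken from the chart's defaults:

```yaml
# values.arc-1.yaml (illustrative)
fullnameOverride: arc-1
authSecret:
  name: arc-1-github-auth
---
# values.arc-2.yaml (illustrative)
fullnameOverride: arc-2
authSecret:
  name: arc-2-github-auth
```

Each file would then be passed to its own `helm install` release in the shared namespace, giving every stack uniquely named resources and its own narrowly scoped credentials.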
@@ -270,52 +272,50 @@ Alternatively, you can install each controller stack into a unique namespace (re
 - The organization level
 - The enterprise level
-There are two ways to use this controller:
-- Manage runners one by one with `Runner`.
-- Manage a set of runners with `RunnerDeployment`.
+Runners can be deployed as 1 of 2 abstractions:
+- A `RunnerDeployment` (similar to k8s's `Deployments`, based on `Pods`)
+- A `RunnerSet` (based on k8s's `StatefulSets`)
+We go into details about the differences between the 2 later; initially let's look at how to deploy a basic `RunnerDeployment` at the 3 possible management hierarchies.
 ### Repository Runners
-To launch a single self-hosted runner, you need to create a manifest file that includes a `Runner` resource as follows. This example launches a self-hosted runner with name *example-runner* for the *actions-runner-controller/actions-runner-controller* repository.
+To launch a single self-hosted runner, you need to create a manifest file that includes a `RunnerDeployment` resource as follows. This example launches a self-hosted runner with name *example-runnerdeploy* for the *actions-runner-controller/actions-runner-controller* repository.
 ```yaml
-# runner.yaml
+# runnerdeployment.yaml
 apiVersion: actions.summerwind.dev/v1alpha1
-kind: Runner
+kind: RunnerDeployment
 metadata:
-  name: example-runner
+  name: example-runnerdeploy
 spec:
-  repository: example/myrepo
-  env: []
+  replicas: 1
+  template:
+    spec:
+      repository: mumoshu/actions-runner-controller-ci
 ```
 Apply the created manifest file to your Kubernetes.
 ```shell
-$ kubectl apply -f runner.yaml
-runner.actions.summerwind.dev/example-runner created
+$ kubectl apply -f runnerdeployment.yaml
+runnerdeployment.actions.summerwind.dev/example-runnerdeploy created
 ```
-You can see that the Runner resource has been created.
+You can see that 1 runner and its underlying pod has been created as specified by the `replicas: 1` attribute:
 ```shell
 $ kubectl get runners
 NAME                             REPOSITORY                             STATUS
-example-runner                   actions-runner-controller/actions-runner-controller   Running
-```
-You can also see that the runner pod has been running.
-```shell
+example-runnerdeploy2475h595fr   mumoshu/actions-runner-controller-ci   Running
 $ kubectl get pods
 NAME                             READY   STATUS    RESTARTS   AGE
-example-runner                   2/2     Running   0          1m
+example-runnerdeploy2475ht2qbr   2/2     Running   0          1m
 ```
-The runner you created has been registered to your repository.
-<img width="756" alt="Actions tab in your repository settings" src="https://user-images.githubusercontent.com/230145/73618667-8cbf9700-466c-11ea-80b6-c67e6d3f70e7.png">
+The runner you created has been registered directly to the defined repository; you should be able to see it in the settings of the repository.
 Now you can use your self-hosted runner. See the [official documentation](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-self-hosted-runners-in-a-workflow) on how to run a job with it.
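As a sketch of that last step, a minimal workflow that targets such a runner might look like the following. The workflow file name and job name are illustrative:

```yaml
# .github/workflows/selfhosted.yaml (illustrative)
name: CI
on: push
jobs:
  build:
    # `self-hosted` routes the job to any registered self-hosted runner
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - run: echo "running on a self-hosted runner"
```

Once committed to the repository the runner is registered to, pushes will queue this job onto the runner pod created above.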
@@ -324,13 +324,15 @@ Now you can use your self-hosted runner. See the [official documentation](https:
 To add the runner to an organization, you only need to replace the `repository` field with `organization`, so the runner will register itself to the organization.
 ```yaml
-# runner.yaml
 apiVersion: actions.summerwind.dev/v1alpha1
-kind: Runner
+kind: RunnerDeployment
 metadata:
-  name: example-org-runner
+  name: example-runnerdeploy
 spec:
-  organization: your-organization-name
+  replicas: 1
+  template:
+    spec:
+      organization: your-organization-name
 ```
 Now you can see the runner on the organization level (if you have organization owner permissions).
@@ -340,24 +342,22 @@ Now you can see the runner on the organization level (if you have organization o
 To add the runner to an enterprise, you only need to replace the `repository` field with `enterprise`, so the runner will register itself to the enterprise.
 ```yaml
-# runner.yaml
 apiVersion: actions.summerwind.dev/v1alpha1
-kind: Runner
+kind: RunnerDeployment
 metadata:
-  name: example-enterprise-runner
+  name: example-runnerdeploy
 spec:
-  enterprise: your-enterprise-name
+  replicas: 1
+  template:
+    spec:
+      enterprise: your-enterprise-name
 ```
 Now you can see the runner on the enterprise level (if you have enterprise access permissions).
 ### RunnerDeployments
-You can manage sets of runners instead of individually through the `RunnerDeployment` kind and its `replicas:` attribute. This kind is required for many of the advanced features.
-There are `RunnerReplicaSet` and `RunnerDeployment` kinds that corresponds to the `ReplicaSet` and `Deployment` kinds but for the `Runner` kind.
-You typically only need `RunnerDeployment` rather than `RunnerReplicaSet` as the former is for managing the latter.
+In our previous examples we were deploying a single runner via the `RunnerDeployment` kind. The amount of runners deployed can be statically set via the `replicas:` field; we can increase this value to deploy additional sets of runners instead:
 ```yaml
 # runnerdeployment.yaml
@@ -366,11 +366,11 @@ kind: RunnerDeployment
 metadata:
   name: example-runnerdeploy
 spec:
+  # This will deploy 2 runners now
   replicas: 2
   template:
     spec:
       repository: mumoshu/actions-runner-controller-ci
-      env: []
 ```
 Apply the manifest file to your cluster:
@@ -389,15 +389,13 @@ example-runnerdeploy2475h595fr mumoshu/actions-runner-controller-ci Running
 example-runnerdeploy2475ht2qbr   mumoshu/actions-runner-controller-ci   Running
 ```
 ### RunnerSets
 > This feature requires controller version => [v0.20.0](https://github.com/actions-runner-controller/actions-runner-controller/releases/tag/v0.20.0)
 _Ensure you see the limitations before using this kind!!!!!_
-For scenarios where you require the advantages of a `StatefulSet`, for example persistent storage, ARC implements a runner based on Kubernetes' `StatefulSets`, the `RunnerSet`.
-A basic `RunnerSet` would look like this:
+We can also deploy sets of runners as a `RunnerSet` the same way; a basic `RunnerSet` would look like this:
 ```yaml
 apiVersion: actions.summerwind.dev/v1alpha1
@@ -405,8 +403,7 @@ kind: RunnerSet
 metadata:
   name: example
 spec:
-  ephemeral: false
-  replicas: 2
+  replicas: 1
   repository: mumoshu/actions-runner-controller-ci
   # Other mandatory fields from StatefulSet
   selector:
@@ -437,8 +434,7 @@ kind: RunnerSet
 metadata:
   name: example
 spec:
-  ephemeral: false
-  replicas: 2
+  replicas: 1
   repository: mumoshu/actions-runner-controller-ci
   dockerdWithinRunnerContainer: true
   template:
@@ -1143,9 +1139,13 @@ The earlier entry is prioritized higher than later entries. So you usually defin
 A common use case for this may be to have 1 override to scale to 0 during the week outside of core business hours and another override to scale to 0 during all hours of the weekend.
-### Runner with DinD
-When using the default runner, the runner pod starts up 2 containers: runner and DinD (Docker-in-Docker). This might create issues if there's `LimitRange` set to namespace.
+### Alternative Runners
+ARC also offers a few alternative runner options.
+#### Runner with DinD
+When using the default runner, the runner pod starts up 2 containers: runner and DinD (Docker-in-Docker). ARC maintains an alternative all-in-one runner image with docker running in the same container as the runner. This may be preferred from a resource or complexity perspective or to be compliant with a `LimitRange` namespace configuration.
 ```yaml
 # dindrunnerdeployment.yaml
@@ -1163,9 +1163,7 @@ spec:
       env: []
 ```
-This also helps with resources, as you don't need to give resources separately to docker and runner.
-### Runner with K8s Jobs
+#### Runner with K8s Jobs
 When using the default runner, jobs that use a container will run in docker. This necessitates privileged mode, either on the runner pod or the sidecar container
@@ -1386,7 +1384,7 @@ spec:
   - name: tmp
     emptyDir:
       medium: Memory
-  emphemeral: true # recommended to not leak data between builds.
+  ephemeral: true # recommended to not leak data between builds.
 ```
 #### NVME SSD
@@ -1394,7 +1392,7 @@ spec:
 In this example we provide NVME backed storage for the workdir, docker sidecar and /tmp within the runner.
 Here we use a working example on GKE, which will provide the NVME disk at /mnt/disks/ssd0. We will be placing the respective volumes in subdirs here and in order to be able to run multiple runners we will use the pod name as a prefix for subdirectories. Also the disk will fill up over time and disk space will not be freed until the node is removed.
-**Beware** that running these persistent backend volumes **leave data behind** between 2 different jobs on the workdir and `/tmp` with `emphemeral: false`.
+**Beware** that running these persistent backend volumes **leave data behind** between 2 different jobs on the workdir and `/tmp` with `ephemeral: false`.
 ```yaml
 kind: RunnerDeployment
@@ -1435,7 +1433,7 @@ spec:
   - hostPath:
       path: /mnt/disks/ssd0
     name: tmp
-  emphemeral: true # VERY important. otherwise data inside the workdir and /tmp is not cleared between builds
+  ephemeral: true # VERY important. otherwise data inside the workdir and /tmp is not cleared between builds
 ```
 #### Docker image layers caching
@@ -1570,7 +1568,6 @@ jobs:
 When you have multiple kinds of self-hosted runners, you can distinguish between them using labels. In order to do so, you can specify one or more labels in your `Runner` or `RunnerDeployment` spec.
 ```yaml
-# runnerdeployment.yaml
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: RunnerDeployment
 metadata:
@@ -1592,7 +1589,10 @@ jobs:
     runs-on: custom-runner
 ```
-Note that if you specify `self-hosted` in your workflow, then this will run your job on _any_ self-hosted runner, regardless of the labels that they have.
+When using labels there are a few things to be aware of:
+1. `self-hosted` is implicit with every runner as this is an automatic label GitHub applies to any self-hosted runner, so ARC can treat all runners as having this label without it being explicitly defined. You do not need to explicitly define this label in your runner manifests (you can if you want though).
+2. In addition to the `self-hosted` label, GitHub also applies a few other [default](https://docs.github.com/en/actions/hosting-your-own-runners/using-self-hosted-runners-in-a-workflow#using-default-labels-to-route-jobs) labels to any self-hosted runner. The other default labels relate to the architecture of the runner and so can't be implicitly applied by ARC as ARC doesn't know if the runner is `linux` or `windows`, `x64` or `ARM64` etc. If you wish to use these labels in your workflows and have ARC scale runners accurately you must also add them to your runner manifests.
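To sketch point 2 above, a runner manifest that explicitly declares the architecture labels alongside a custom one might look like this (the name, repository, and label values are illustrative):

```yaml
# Illustrative: declare default architecture labels explicitly
# so ARC can match jobs that route on them.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: custom-runner-deploy
spec:
  replicas: 1
  template:
    spec:
      repository: mumoshu/actions-runner-controller-ci
      labels:
        - custom-runner
        - linux
        - x64
```

A workflow could then use `runs-on: [self-hosted, linux, x64, custom-runner]` and ARC would be able to scale these runners for it accurately.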
 ### Runner Groups
@@ -1601,7 +1601,6 @@ Runner groups can be used to limit which repositories are able to use the GitHub
 To add the runner to the group `NewGroup`, specify the group in your `Runner` or `RunnerDeployment` spec.
 ```yaml
-# runnerdeployment.yaml
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: RunnerDeployment
 metadata:
@@ -1744,6 +1743,64 @@ $ helm --upgrade install actions-runner-controller/actions-runner-controller \
   admissionWebHooks.caBundle=${CA_BUNDLE}
 ```
+### Multitenancy
+> This feature requires controller version => [v0.26.0](https://github.com/actions-runner-controller/actions-runner-controller/releases/tag/v0.26.0)
+In a large enterprise, there might be many GitHub organizations that require self-hosted runners. Previously, the only way to provide ARC-managed self-hosted runners in such an environment was [Deploying Multiple Controllers](#deploying-multiple-controllers), which incurs overhead because it requires one ARC installation per GitHub organization.
+With multitenancy, you can let ARC manage self-hosted runners across organizations. It's enabled by default and the only thing you need to start using it is to set the `spec.githubAPICredentialsFrom.secretRef.name` fields for the following resources:
+- `HorizontalRunnerAutoscaler`
+- `RunnerSet`
+Or the `spec.template.spec.githubAPICredentialsFrom.secretRef.name` field for the following resource:
+- `RunnerDeployment`
+> Although not explained above, `spec.githubAPICredentialsFrom` fields do exist in `Runner` and `RunnerReplicaSet`. A comparable pod annotation exists for the runner pod, too.
+> However, note that `Runner`, `RunnerReplicaSet` and runner pods are implementation details and are managed by `RunnerDeployment` and ARC.
+> Usually you don't need to manually set the fields for those resources.
+`githubAPICredentialsFrom.secretRef.name` should refer to the name of the Kubernetes secret that contains either PAT or GitHub App credentials that are used for GitHub API calls for the said resource.
+Usually, you should have a set of GitHub App credentials per GitHub organization and you would have a RunnerDeployment and a HorizontalRunnerAutoscaler per organization runner group. So, you might end up having the following resources for each organization:
+- 1 Kubernetes secret that contains GitHub App credentials
+- 1 RunnerDeployment/RunnerSet and 1 HorizontalRunnerAutoscaler per Runner Group
+And the RunnerDeployment/RunnerSet and HorizontalRunnerAutoscaler should have the same value for `spec.githubAPICredentialsFrom.secretRef.name`, which refers to the name of the Kubernetes secret.
+```yaml
+kind: Secret
+data:
+  github_app_id: ...
+  github_app_installation_id: ...
+  github_app_private_key: ...
+---
+kind: RunnerDeployment
+metadata:
+  namespace: org1-runners
+spec:
+  githubAPICredentialsFrom:
+    secretRef:
+      name: org1-github-app
+---
+kind: HorizontalRunnerAutoscaler
+metadata:
+  namespace: org1-runners
+spec:
+  githubAPICredentialsFrom:
+    secretRef:
+      name: org1-github-app
+```
+> Do note that, as shown in the above example, you usually set the same secret name in the `githubAPICredentialsFrom.secretRef.name` fields of both `RunnerDeployment` and `HorizontalRunnerAutoscaler`, so that GitHub API calls for the same set of runners share the specified credentials, regardless of when and which ARC component (`horizontalrunnerautoscaler-controller`, `runnerdeployment-controller`, `runnerreplicaset-controller`, `runner-controller` or `runnerpod-controller`) makes specific API calls.
+> Just don't be surprised you have to repeat `githubAPICredentialsFrom.secretRef.name` settings among two resources!
+Please refer to [Deploying Using GitHub App Authentication](#deploying-using-github-app-authentication) for how you could create the Kubernetes secret containing GitHub App credentials.
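A hedged sketch of the `org1-github-app` secret referenced above, written as a manifest rather than created via the CLI; all credential values are placeholders, and the key names mirror the partial example in the document:

```yaml
# Illustrative only: placeholder GitHub App credentials for one org.
apiVersion: v1
kind: Secret
metadata:
  name: org1-github-app
  namespace: org1-runners
type: Opaque
stringData:                      # stringData avoids manual base64 encoding
  github_app_id: "123456"        # placeholder
  github_app_installation_id: "7890123"  # placeholder
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```

Both the `RunnerDeployment` and the `HorizontalRunnerAutoscaler` for that org would then point their `githubAPICredentialsFrom.secretRef.name` at this secret.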
 # Troubleshooting
 See [troubleshooting guide](TROUBLESHOOTING.md) for solutions to various problems people have run into consistently.

View File

@@ -54,6 +54,7 @@ if [ "${tool}" == "helm" ]; then
     --set imagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
     --set image.actionsRunnerImagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
     --set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET} \
+    --set image.imagePullPolicy=${IMAGE_PULL_POLICY} \
     -f ${VALUES_FILE}
   set +v
 # To prevent `CustomResourceDefinition.apiextensions.k8s.io "runners.actions.summerwind.dev" is invalid: metadata.annotations: Too long: must have at most 262144 bytes`

View File

@@ -0,0 +1,82 @@
# USAGE:
# cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=default envsubst | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-mode-runner
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: runner-status-updater
rules:
- apiGroups: ["actions.summerwind.dev"]
resources: ["runners/status"]
verbs: ["get", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: runner
namespace: ${NAMESPACE}
---
# To verify it's working, try:
# kubectl auth can-i --as system:serviceaccount:default:runner get pod
# If incomplete, workflows and jobs would fail with an error message like:
# Error: Error: The Service account needs the following permissions [{"group":"","verbs":["get","list","create","delete"],"resource":"pods","subresource":""},{"group":"","verbs":["get","create"],"resource":"pods","subresource":"exec"},{"group":"","verbs":["get","list","watch"],"resource":"pods","subresource":"log"},{"group":"batch","verbs":["get","list","create","delete"],"resource":"jobs","subresource":""},{"group":"","verbs":["create","delete","get","list"],"resource":"secrets","subresource":""}] on the pod resource in the 'default' namespace. Please contact your self hosted runner administrator.
# Error: Process completed with exit code 1.
apiVersion: rbac.authorization.k8s.io/v1
# This role binding grants the runner service account the permissions
# defined in the k8s-mode-runner ClusterRole, scoped to ${NAMESPACE}.
kind: RoleBinding
metadata:
name: runner-k8s-mode-runner
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: runner
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: k8s-mode-runner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: runner-runner-status-updater
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: runner
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: runner-status-updater
apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: org-runnerdeploy-runner-work-dir
labels:
content: org-runnerdeploy-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

View File

@@ -43,6 +43,17 @@ spec:
   # Non-standard working directory
   #
   # workDir: "/"
+  # Uncomment the below to enable the kubernetes container mode
+  # See https://github.com/actions-runner-controller/actions-runner-controller#runner-with-k8s-jobs
+  containerMode: kubernetes
+  workVolumeClaimTemplate:
+    accessModes:
+      - ReadWriteOnce
+    storageClassName: "${NAME}-runner-work-dir"
+    resources:
+      requests:
+        storage: 10Gi
 ---
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: HorizontalRunnerAutoscaler

View File

@@ -5,6 +5,11 @@ imagePullSecrets:
image:
actionsRunnerImagePullSecrets:
- name:
runner:
statusUpdateHook:
enabled: true
rbac:
allowGrantingKubernetesContainerModePermissions: true
githubWebhookServer:
imagePullSecrets:
- name:

View File

@@ -60,6 +60,9 @@ type HorizontalRunnerAutoscalerSpec struct {
// The earlier a scheduled override is, the higher it is prioritized.
// +optional
ScheduledOverrides []ScheduledOverride `json:"scheduledOverrides,omitempty"`
// +optional
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type ScaleUpTrigger struct {

View File

@@ -76,6 +76,16 @@ type RunnerConfig struct {
// +optional
ContainerMode string `json:"containerMode,omitempty"`
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type GitHubAPICredentialsFrom struct {
SecretRef SecretReference `json:"secretRef,omitempty"`
}
type SecretReference struct {
Name string `json:"name"`
}
// RunnerPodSpec defines the desired pod spec fields of the runner pod
@@ -183,11 +193,6 @@ func (rs *RunnerSpec) Validate(rootPath *field.Path) field.ErrorList {
errList = append(errList, field.Invalid(rootPath.Child("workVolumeClaimTemplate"), rs.WorkVolumeClaimTemplate, err.Error()))
}
err = rs.validateIsServiceAccountNameSet()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("serviceAccountName"), rs.ServiceAccountName, err.Error()))
}
return errList
}
@@ -226,17 +231,6 @@ func (rs *RunnerSpec) validateWorkVolumeClaimTemplate() error {
return rs.WorkVolumeClaimTemplate.validate()
}
func (rs *RunnerSpec) validateIsServiceAccountNameSet() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.ServiceAccountName == "" {
return errors.New("service account name is required if container mode is kubernetes")
}
return nil
}
// RunnerStatus defines the observed state of Runner
type RunnerStatus struct {
// Turns true only if the runner pod is ready.
@@ -315,8 +309,10 @@ func (w *WorkVolumeClaimTemplate) V1VolumeMount(mountPath string) corev1.VolumeM
// +kubebuilder:printcolumn:JSONPath=".spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.group",name=Group,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// Runner is the Schema for the runners API

View File

@@ -90,6 +90,22 @@ func (in *CheckRunSpec) DeepCopy() *CheckRunSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubAPICredentialsFrom) DeepCopyInto(out *GitHubAPICredentialsFrom) {
*out = *in
out.SecretRef = in.SecretRef
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubAPICredentialsFrom.
func (in *GitHubAPICredentialsFrom) DeepCopy() *GitHubAPICredentialsFrom {
if in == nil {
return nil
}
out := new(GitHubAPICredentialsFrom)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubEventScaleUpTriggerSpec) DeepCopyInto(out *GitHubEventScaleUpTriggerSpec) {
*out = *in
@@ -231,6 +247,11 @@ func (in *HorizontalRunnerAutoscalerSpec) DeepCopyInto(out *HorizontalRunnerAuto
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerSpec.
@@ -425,6 +446,11 @@ func (in *RunnerConfig) DeepCopyInto(out *RunnerConfig) {
*out = new(string)
**out = **in
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerConfig.
@@ -1136,6 +1162,21 @@ func (in *ScheduledOverride) DeepCopy() *ScheduledOverride {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretReference) DeepCopyInto(out *SecretReference) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SecretReference.
func (in *SecretReference) DeepCopy() *SecretReference {
if in == nil {
return nil
}
out := new(SecretReference)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkVolumeClaimTemplate) DeepCopyInto(out *WorkVolumeClaimTemplate) {
*out = *in

View File

@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.20.0
+version: 0.20.2
# Used as the default manager tag value when no tag property is provided in the values.yaml
-appVersion: 0.25.0
+appVersion: 0.25.2
home: https://github.com/actions-runner-controller/actions-runner-controller

View File

@@ -73,11 +73,11 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `scope.watchNamespace` | Tells the controller and the github webhook server which namespace to watch if `scope.singleNamespace` is true | `Release.Namespace` (the default namespace of the helm chart). |
| `scope.singleNamespace` | Limit the controller to watch a single namespace | false |
| `certManagerEnabled` | Enable cert-manager. If disabled you must set admissionWebHooks.caBundle and create TLS secrets manually | true |
| `runner.statusUpdateHook.enabled` | Use custom RBAC for runners (role, role binding and service account), this will enable reporting runner statuses | false |
| `admissionWebHooks.caBundle` | Base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate | |
| `githubWebhookServer.logLevel` | Set the log level of the githubWebhookServer container | |
| `githubWebhookServer.replicaCount` | Set the number of webhook server pods | 1 |
| `githubWebhookServer.useRunnerGroupsVisibility` | Enable supporting runner groups with custom visibility. This will incur in extra API calls and may blow up your budget. Currently, you also need to set `githubWebhookServer.secret.enabled` to enable this feature. | false |
| `githubWebhookServer.syncPeriod` | Set the period in which the controller reconciles the resources | 10m |
| `githubWebhookServer.enabled` | Deploy the webhook server pod | false |
| `githubWebhookServer.secret.enabled` | Passes the webhook hook secret to the github-webhook-server | false |
| `githubWebhookServer.secret.create` | Deploy the webhook hook secret | false |

View File

@@ -61,6 +61,16 @@ spec:
type: integer
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer

View File

@@ -2415,6 +2415,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:

View File

@@ -2412,6 +2412,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:

View File

@@ -24,12 +24,18 @@ spec:
- jsonPath: .spec.repository
name: Repository
type: string
- jsonPath: .spec.group
name: Group
type: string
- jsonPath: .spec.labels
name: Labels
type: string
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .status.message
name: Message
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
@@ -2353,6 +2359,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:

View File

@@ -67,6 +67,16 @@ spec:
type: string
ephemeral:
type: boolean
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
image:

View File

@@ -58,15 +58,15 @@ spec:
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
{{- if .Values.githubAPICacheDuration }}
- "--github-api-cache-duration={{ .Values.githubAPICacheDuration }}"
{{- end }}
{{- if .Values.logLevel }}
- "--log-level={{ .Values.logLevel }}"
{{- end }}
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.runner.statusUpdateHook.enabled }}
- "--runner-status-update-hook"
{{- end }}
command:
- "/manager"
env:
@@ -118,10 +118,14 @@ spec:
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- if kindIs "slice" .Values.env }}
{{- toYaml .Values.env | nindent 8 }}
{{- else }}
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: manager
imagePullPolicy: {{ .Values.image.pullPolicy }}
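The `kindIs "slice"` branch added to this template accepts `env` either as a map of key/value pairs or as a full list of env var definitions. A sketch of the two equivalent values.yaml shapes; the variable names and secret name are illustrative:

```yaml
# Map form: rendered into name/value pairs by the range loop.
env:
  http_proxy: "proxy.example.com:8080"

# List form (alternative): passed through verbatim via toYaml, which
# allows valueFrom sources such as secretKeyRef.
# env:
#   - name: GITHUB_APP_INSTALLATION_ID
#     valueFrom:
#       secretKeyRef:
#         name: some-secret-name
#         key: some_key_in_the_secret
```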

View File

@@ -39,7 +39,6 @@ spec:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
- "--sync-period={{ .Values.githubWebhookServer.syncPeriod }}"
{{- if .Values.githubWebhookServer.logLevel }}
- "--log-level={{ .Values.githubWebhookServer.logLevel }}"
{{- end }}

View File

@@ -258,3 +258,64 @@ rules:
- get
- list
- watch
{{- if .Values.runner.statusUpdateHook.enabled }}
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- get
{{- end }}
{{- if .Values.rbac.allowGrantingKubernetesContainerModePermissions }}
{{/* These permissions are required by ARC to create RBAC resources for the runner pod to use the kubernetes container mode. */}}
{{/* See https://github.com/actions-runner-controller/actions-runner-controller/pull/1268/files#r917331632 */}}
- apiGroups:
- ""
resources:
- pods/exec
verbs:
- create
- get
- apiGroups:
- ""
resources:
- pods/log
verbs:
- get
- list
- watch
- apiGroups:
- "batch"
resources:
- jobs
verbs:
- get
- list
- create
- delete
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
{{- end }}

View File

@@ -15,12 +15,6 @@ enableLeaderElection: true
# Must be unique if more than one controller installed onto the same namespace.
#leaderElectionId: "actions-runner-controller"
# DEPRECATED: This has been removed as unnecessary in #1192
# The controller tries its best not to repeat the duplicate GitHub API call
# within this duration.
# Defaults to syncPeriod - 10s.
#githubAPICacheDuration: 30s
# The URL of your GitHub Enterprise server, if you're using one.
#githubEnterpriseServerURL: https://github.example.com
@@ -67,6 +61,18 @@ imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
runner:
statusUpdateHook:
enabled: false
rbac:
{}
# # This allows ARC to dynamically create a ServiceAccount and a Role for each Runner pod that uses "kubernetes" container mode,
# # by extending ARC's manager role to have the same permissions required by the pod that runs the runner agent in "kubernetes" container mode.
# # Without this, Kubernetes blocks ARC from creating the role, to prevent a privilege escalation.
# # See https://github.com/actions-runner-controller/actions-runner-controller/pull/1268/files#r917327010
# allowGrantingKubernetesContainerModePermissions: true
serviceAccount:
# Specifies whether a service account should be created
create: true
@@ -143,10 +149,20 @@ priorityClassName: ""
env:
{}
# specify additional environment variables for the controller pod.
# It's possible to specify either key value pairs e.g.:
# http_proxy: "proxy.com:8080"
# https_proxy: "proxy.com:8080"
# no_proxy: ""
# or a list of complete environment variable definitions e.g.:
# - name: GITHUB_APP_INSTALLATION_ID
# valueFrom:
# secretKeyRef:
# key: some_key_in_the_secret
# name: some-secret-name
# optional: true
## specify additional volumes to mount in the manager container, this can be used
## to specify additional storage of material or to inject files from ConfigMaps
## into the running container
@@ -175,7 +191,6 @@ admissionWebHooks:
githubWebhookServer:
enabled: false
replicaCount: 1
syncPeriod: 10m
useRunnerGroupsVisibility: false
secret:
enabled: false

View File

@@ -69,10 +69,8 @@ func main() {
watchNamespace string
-enableLeaderElection bool
-syncPeriod time.Duration
logLevel string
queueLimit int
ghClient *github.Client
)
@@ -89,9 +87,6 @@ func main() {
flag.StringVar(&webhookAddr, "webhook-addr", ":8000", "The address the metric endpoint binds to.")
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&watchNamespace, "watch-namespace", "", "The namespace to watch for HorizontalRunnerAutoscaler's to scale on Webhook. Set to empty for letting it watch for all namespaces.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set to a lower value like 10 minute, because this corresponds to the minimum time to react on demand change")
flag.StringVar(&logLevel, "log-level", logging.LogLevelDebug, `The verbosity of the logging. Valid values are "debug", "info", "warn", "error". Defaults to "debug".`)
flag.IntVar(&queueLimit, "queue-limit", controllers.DefaultQueueLimit, `The maximum length of the scale operation queue. The scale opration is enqueued per every matching webhook event, and the server returns a 500 HTTP status when the queue was already full on enqueue attempt.`)
flag.StringVar(&webhookSecretToken, "github-webhook-secret-token", "", "The personal access token of GitHub.")
@@ -144,10 +139,10 @@ func main() {
setupLog.Info("GitHub client is not initialized. Runner groups with custom visibility are not supported. If needed, please provide GitHub authentication. This will incur in extra GitHub API calls")
}
syncPeriod := 10 * time.Minute
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
SyncPeriod: &syncPeriod,
LeaderElection: enableLeaderElection,
Namespace: watchNamespace,
MetricsBindAddress: metricsAddr,
Port: 9443,

View File

@@ -61,6 +61,16 @@ spec:
type: integer
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer

View File

@@ -2415,6 +2415,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:

View File

@@ -2412,6 +2412,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:

View File

@@ -24,12 +24,18 @@ spec:
- jsonPath: .spec.repository
name: Repository
type: string
- jsonPath: .spec.group
name: Group
type: string
- jsonPath: .spec.labels
name: Labels
type: string
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .status.message
name: Message
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
@@ -2353,6 +2359,16 @@ spec:
- name
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
hostAliases:

View File

@@ -67,6 +67,16 @@ spec:
type: string
ephemeral:
type: boolean
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
group:
type: string
image:

View File

@@ -258,3 +258,27 @@ rules:
- get
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- get

View File

@@ -9,6 +9,7 @@ import (
"strings"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
arcgithub "github.com/actions-runner-controller/actions-runner-controller/github"
"github.com/google/go-github/v45/github"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -21,7 +22,7 @@ const (
defaultScaleDownFactor = 0.7
)
-func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
+func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(ghc *arcgithub.Client, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
if hra.Spec.MinReplicas == nil {
return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing minReplicas", hra.Namespace, hra.Name)
} else if hra.Spec.MaxReplicas == nil {
@@ -48,9 +49,9 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(st scaleTa
switch primaryMetricType {
case v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns:
-suggested, err = r.suggestReplicasByQueuedAndInProgressWorkflowRuns(st, hra, &primaryMetric)
+suggested, err = r.suggestReplicasByQueuedAndInProgressWorkflowRuns(ghc, st, hra, &primaryMetric)
case v1alpha1.AutoscalingMetricTypePercentageRunnersBusy:
-suggested, err = r.suggestReplicasByPercentageRunnersBusy(st, hra, primaryMetric)
+suggested, err = r.suggestReplicasByPercentageRunnersBusy(ghc, st, hra, primaryMetric)
default:
return nil, fmt.Errorf("validating autoscaling metrics: unsupported metric type %q", primaryMetric)
}
@@ -83,11 +84,10 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestDesiredReplicas(st scaleTa
)
}
-return r.suggestReplicasByQueuedAndInProgressWorkflowRuns(st, hra, &fallbackMetric)
+return r.suggestReplicasByQueuedAndInProgressWorkflowRuns(ghc, st, hra, &fallbackMetric)
}
-func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgressWorkflowRuns(st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics *v1alpha1.MetricSpec) (*int, error) {
+func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgressWorkflowRuns(ghc *arcgithub.Client, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics *v1alpha1.MetricSpec) (*int, error) {
var repos [][]string
repoID := st.repo
if repoID == "" {
@@ -126,7 +126,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
opt := github.ListWorkflowJobsOptions{ListOptions: github.ListOptions{PerPage: 50}}
var allJobs []*github.WorkflowJob
for {
-jobs, resp, err := r.GitHubClient.Actions.ListWorkflowJobs(context.TODO(), user, repoName, runID, &opt)
+jobs, resp, err := ghc.Actions.ListWorkflowJobs(context.TODO(), user, repoName, runID, &opt)
if err != nil {
r.Log.Error(err, "Error listing workflow jobs")
return //err
@@ -184,7 +184,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
for _, repo := range repos {
user, repoName := repo[0], repo[1]
-workflowRuns, err := r.GitHubClient.ListRepositoryWorkflowRuns(context.TODO(), user, repoName)
+workflowRuns, err := ghc.ListRepositoryWorkflowRuns(context.TODO(), user, repoName)
if err != nil {
return nil, err
}
@@ -226,7 +226,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
return &necessaryReplicas, nil
}
-func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunnersBusy(st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics v1alpha1.MetricSpec) (*int, error) {
+func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunnersBusy(ghc *arcgithub.Client, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, metrics v1alpha1.MetricSpec) (*int, error) {
ctx := context.Background()
scaleUpThreshold := defaultScaleUpThreshold
scaleDownThreshold := defaultScaleDownThreshold
@@ -295,7 +295,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByPercentageRunner
)
// ListRunners will return all runners managed by GitHub - not restricted to ns
-runners, err := r.GitHubClient.ListRunners(
+runners, err := ghc.ListRunners(
ctx,
enterprise,
organization,

View File

@@ -330,7 +330,6 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
h := &HorizontalRunnerAutoscalerReconciler{
Log: log,
GitHubClient: client,
Scheme: scheme,
DefaultScaleDownDelay: DefaultScaleDownDelay,
}
@@ -379,7 +378,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
st := h.scaleTargetFromRD(context.Background(), rd)
-got, err := h.computeReplicasWithCache(log, metav1Now.Time, st, hra, minReplicas)
+got, err := h.computeReplicasWithCache(client, log, metav1Now.Time, st, hra, minReplicas)
if err != nil {
if tc.err == "" {
t.Fatalf("unexpected error: expected none, got %v", err)
@@ -720,7 +719,6 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
h := &HorizontalRunnerAutoscalerReconciler{ h := &HorizontalRunnerAutoscalerReconciler{
Log: log, Log: log,
Scheme: scheme, Scheme: scheme,
GitHubClient: client,
DefaultScaleDownDelay: DefaultScaleDownDelay, DefaultScaleDownDelay: DefaultScaleDownDelay,
} }
@@ -781,7 +779,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
st := h.scaleTargetFromRD(context.Background(), rd) st := h.scaleTargetFromRD(context.Background(), rd)
got, err := h.computeReplicasWithCache(log, metav1Now.Time, st, hra, minReplicas) got, err := h.computeReplicasWithCache(client, log, metav1Now.Time, st, hra, minReplicas)
if err != nil { if err != nil {
if tc.err == "" { if tc.err == "" {
t.Fatalf("unexpected error: expected none, got %v", err) t.Fatalf("unexpected error: expected none, got %v", err)


@@ -24,7 +24,6 @@ import (
 	corev1 "k8s.io/api/core/v1"
-	"github.com/actions-runner-controller/actions-runner-controller/github"
 	"github.com/go-logr/logr"
 	kerrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/types"
@@ -38,6 +37,7 @@ import (
 	"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
 	"github.com/actions-runner-controller/actions-runner-controller/controllers/metrics"
+	arcgithub "github.com/actions-runner-controller/actions-runner-controller/github"
 )
 const (
@@ -47,11 +47,10 @@ const (
 // HorizontalRunnerAutoscalerReconciler reconciles a HorizontalRunnerAutoscaler object
 type HorizontalRunnerAutoscalerReconciler struct {
 	client.Client
-	GitHubClient *github.Client
+	GitHubClient *MultiGitHubClient
 	Log logr.Logger
 	Recorder record.EventRecorder
 	Scheme *runtime.Scheme
-	CacheDuration time.Duration
 	DefaultScaleDownDelay time.Duration
 	Name string
 }
@@ -73,6 +72,8 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(ctx context.Context, re
 	}
 	if !hra.ObjectMeta.DeletionTimestamp.IsZero() {
+		r.GitHubClient.DeinitForHRA(&hra)
+
 		return ctrl.Result{}, nil
 	}
@@ -310,7 +311,12 @@ func (r *HorizontalRunnerAutoscalerReconciler) reconcile(ctx context.Context, re
 		return ctrl.Result{}, err
 	}
-	newDesiredReplicas, err := r.computeReplicasWithCache(log, now, st, hra, minReplicas)
+	ghc, err := r.GitHubClient.InitForHRA(context.Background(), &hra)
+	if err != nil {
+		return ctrl.Result{}, err
+	}
+
+	newDesiredReplicas, err := r.computeReplicasWithCache(ghc, log, now, st, hra, minReplicas)
 	if err != nil {
 		r.Recorder.Event(&hra, corev1.EventTypeNormal, "RunnerAutoscalingFailure", err.Error())
@@ -461,10 +467,10 @@ func (r *HorizontalRunnerAutoscalerReconciler) getMinReplicas(log logr.Logger, n
 	return minReplicas, active, upcoming, nil
 }
-func (r *HorizontalRunnerAutoscalerReconciler) computeReplicasWithCache(log logr.Logger, now time.Time, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, minReplicas int) (int, error) {
+func (r *HorizontalRunnerAutoscalerReconciler) computeReplicasWithCache(ghc *arcgithub.Client, log logr.Logger, now time.Time, st scaleTarget, hra v1alpha1.HorizontalRunnerAutoscaler, minReplicas int) (int, error) {
 	var suggestedReplicas int
-	v, err := r.suggestDesiredReplicas(st, hra)
+	v, err := r.suggestDesiredReplicas(ghc, st, hra)
 	if err != nil {
 		return 0, err
 	}

@@ -99,12 +99,14 @@ func SetupIntegrationTest(ctx2 context.Context) *testEnvironment {
 		return fmt.Sprintf("%s%s", ns.Name, name)
 	}
+	multiClient := NewMultiGitHubClient(mgr.GetClient(), env.ghClient)
+
 	runnerController := &RunnerReconciler{
 		Client: mgr.GetClient(),
 		Scheme: scheme.Scheme,
 		Log: logf.Log,
 		Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
-		GitHubClient: env.ghClient,
+		GitHubClient: multiClient,
 		RunnerImage: "example/runner:test",
 		DockerImage: "example/docker:test",
 		Name: controllerName("runner"),
@@ -116,12 +118,11 @@ func SetupIntegrationTest(ctx2 context.Context) *testEnvironment {
 	Expect(err).NotTo(HaveOccurred(), "failed to setup runner controller")
 	replicasetController := &RunnerReplicaSetReconciler{
 		Client: mgr.GetClient(),
 		Scheme: scheme.Scheme,
 		Log: logf.Log,
 		Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
-		GitHubClient: env.ghClient,
 		Name: controllerName("runnerreplicaset"),
 	}
 	err = replicasetController.SetupWithManager(mgr)
 	Expect(err).NotTo(HaveOccurred(), "failed to setup runnerreplicaset controller")
@@ -137,13 +138,12 @@ func SetupIntegrationTest(ctx2 context.Context) *testEnvironment {
 	Expect(err).NotTo(HaveOccurred(), "failed to setup runnerdeployment controller")
 	autoscalerController := &HorizontalRunnerAutoscalerReconciler{
 		Client: mgr.GetClient(),
 		Scheme: scheme.Scheme,
 		Log: logf.Log,
-		GitHubClient: env.ghClient,
+		GitHubClient: multiClient,
 		Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
-		CacheDuration: 1 * time.Second,
 		Name: controllerName("horizontalrunnerautoscaler"),
 	}
 	err = autoscalerController.SetupWithManager(mgr)
 	Expect(err).NotTo(HaveOccurred(), "failed to setup autoscaler controller")


@@ -0,0 +1,389 @@
package controllers
import (
"context"
"crypto/sha1"
"encoding/base64"
"encoding/hex"
"fmt"
"sort"
"strconv"
"sync"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/github"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
// The API creds secret annotation is added by the runner controller or the runnerset controller according to runner.spec.githubAPICredentialsFrom.secretRef.name,
// so that the runner pod controller can share the same GitHub API credentials and the instance of the GitHub API client with the upstream controllers.
annotationKeyGitHubAPICredsSecret = annotationKeyPrefix + "github-api-creds-secret"
)
type runnerOwnerRef struct {
// kind is either StatefulSet or Runner, and populated via the owner reference in the runner pod controller or via the reconciliation target's kind in
// the runnerset and runner controllers.
kind string
ns, name string
}
type secretRef struct {
ns, name string
}
// savedClient is a cache entry that contains the client for a specific set of credentials,
// like a PAT or a pair of key and cert.
// The `hash` is part of the savedClient rather than the key because we keep only the client for the latest creds,
// in case the operator updated the k8s secret containing the credentials.
type savedClient struct {
hash string
// refs is the map of all the objects that references this client, used for reference counting to gc
// the client if unneeded.
refs map[runnerOwnerRef]struct{}
*github.Client
}
type resourceReader interface {
Get(context.Context, types.NamespacedName, client.Object) error
}
type MultiGitHubClient struct {
mu sync.Mutex
client resourceReader
githubClient *github.Client
	// The saved client is freed once all its dependents disappear, or once the contents of the secret change.
	// We track dependents via a Go map embedded within the savedClient struct. Each dependent is checked on its respective Kubernetes finalizer,
	// so that we won't miss any dependent's termination.
	// A change in the secret is detected using the hash of its contents.
clients map[secretRef]savedClient
}
func NewMultiGitHubClient(client resourceReader, githubClient *github.Client) *MultiGitHubClient {
return &MultiGitHubClient{
client: client,
githubClient: githubClient,
clients: map[secretRef]savedClient{},
}
}
// InitForRunnerPod sets up and returns the *github.Client for the pod.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForRunnerPod(ctx context.Context, pod *corev1.Pod) (*github.Client, error) {
	// These 3 default values are used only when the user created the pod directly, not via Runner, RunnerReplicaSet, RunnerDeployment, or RunnerSet resources.
ref := refFromRunnerPod(pod)
secretName := pod.Annotations[annotationKeyGitHubAPICredsSecret]
// kind can be any of Pod, Runner, RunnerReplicaSet, RunnerDeployment, or RunnerSet depending on which custom resource the user directly created.
return c.initClientWithSecretName(ctx, pod.Namespace, secretName, ref)
}
// InitForRunner sets up and returns the *github.Client for the runner.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForRunner(ctx context.Context, r *v1alpha1.Runner) (*github.Client, error) {
var secretName string
if r.Spec.GitHubAPICredentialsFrom != nil {
secretName = r.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
	// These 3 default values are used only when the user created the runner resource directly, not via RunnerReplicaSet, RunnerDeployment, or RunnerSet resources.
ref := refFromRunner(r)
if ref.ns != r.Namespace {
return nil, fmt.Errorf("referencing github api creds secret from owner in another namespace is not supported yet")
}
// kind can be any of Runner, RunnerReplicaSet, or RunnerDeployment depending on which custom resource the user directly created.
return c.initClientWithSecretName(ctx, r.Namespace, secretName, ref)
}
// InitForRunnerSet sets up and returns the *github.Client for the runner set.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForRunnerSet(ctx context.Context, rs *v1alpha1.RunnerSet) (*github.Client, error) {
ref := refFromRunnerSet(rs)
var secretName string
if rs.Spec.GitHubAPICredentialsFrom != nil {
secretName = rs.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
return c.initClientWithSecretName(ctx, rs.Namespace, secretName, ref)
}
// InitForHRA sets up and returns the *github.Client for the HorizontalRunnerAutoscaler.
// In case the object (like RunnerDeployment) does not request a custom client, it returns the default client.
func (c *MultiGitHubClient) InitForHRA(ctx context.Context, hra *v1alpha1.HorizontalRunnerAutoscaler) (*github.Client, error) {
ref := refFromHorizontalRunnerAutoscaler(hra)
var secretName string
if hra.Spec.GitHubAPICredentialsFrom != nil {
secretName = hra.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
return c.initClientWithSecretName(ctx, hra.Namespace, secretName, ref)
}
func (c *MultiGitHubClient) DeinitForRunnerPod(p *corev1.Pod) {
secretName := p.Annotations[annotationKeyGitHubAPICredsSecret]
c.derefClient(p.Namespace, secretName, refFromRunnerPod(p))
}
func (c *MultiGitHubClient) DeinitForRunner(r *v1alpha1.Runner) {
var secretName string
if r.Spec.GitHubAPICredentialsFrom != nil {
secretName = r.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
c.derefClient(r.Namespace, secretName, refFromRunner(r))
}
func (c *MultiGitHubClient) DeinitForRunnerSet(rs *v1alpha1.RunnerSet) {
var secretName string
if rs.Spec.GitHubAPICredentialsFrom != nil {
secretName = rs.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
c.derefClient(rs.Namespace, secretName, refFromRunnerSet(rs))
}
func (c *MultiGitHubClient) deinitClientForRunnerReplicaSet(rs *v1alpha1.RunnerReplicaSet) {
c.derefClient(rs.Namespace, rs.Spec.Template.Spec.GitHubAPICredentialsFrom.SecretRef.Name, refFromRunnerReplicaSet(rs))
}
func (c *MultiGitHubClient) deinitClientForRunnerDeployment(rd *v1alpha1.RunnerDeployment) {
c.derefClient(rd.Namespace, rd.Spec.Template.Spec.GitHubAPICredentialsFrom.SecretRef.Name, refFromRunnerDeployment(rd))
}
func (c *MultiGitHubClient) DeinitForHRA(hra *v1alpha1.HorizontalRunnerAutoscaler) {
var secretName string
if hra.Spec.GitHubAPICredentialsFrom != nil {
secretName = hra.Spec.GitHubAPICredentialsFrom.SecretRef.Name
}
c.derefClient(hra.Namespace, secretName, refFromHorizontalRunnerAutoscaler(hra))
}
func (c *MultiGitHubClient) initClientForSecret(secret *corev1.Secret, dependent *runnerOwnerRef) (*savedClient, error) {
secRef := secretRef{
ns: secret.Namespace,
name: secret.Name,
}
cliRef := c.clients[secRef]
var ks []string
for k := range secret.Data {
ks = append(ks, k)
}
sort.SliceStable(ks, func(i, j int) bool { return ks[i] < ks[j] })
hash := sha1.New()
for _, k := range ks {
hash.Write(secret.Data[k])
}
hashStr := hex.EncodeToString(hash.Sum(nil))
if cliRef.hash != hashStr {
delete(c.clients, secRef)
conf, err := secretDataToGitHubClientConfig(secret.Data)
if err != nil {
return nil, err
}
cli, err := conf.NewClient()
if err != nil {
return nil, err
}
cliRef = savedClient{
hash: hashStr,
refs: map[runnerOwnerRef]struct{}{},
Client: cli,
}
c.clients[secRef] = cliRef
}
if dependent != nil {
c.clients[secRef].refs[*dependent] = struct{}{}
}
return &cliRef, nil
}
func (c *MultiGitHubClient) initClientWithSecretName(ctx context.Context, ns, secretName string, runRef *runnerOwnerRef) (*github.Client, error) {
c.mu.Lock()
defer c.mu.Unlock()
if secretName == "" {
return c.githubClient, nil
}
secRef := secretRef{
ns: ns,
name: secretName,
}
if _, ok := c.clients[secRef]; !ok {
c.clients[secRef] = savedClient{}
}
var sec corev1.Secret
if err := c.client.Get(ctx, types.NamespacedName{Namespace: ns, Name: secretName}, &sec); err != nil {
return nil, err
}
savedClient, err := c.initClientForSecret(&sec, runRef)
if err != nil {
return nil, err
}
return savedClient.Client, nil
}
func (c *MultiGitHubClient) derefClient(ns, secretName string, dependent *runnerOwnerRef) {
c.mu.Lock()
defer c.mu.Unlock()
secRef := secretRef{
ns: ns,
name: secretName,
}
if dependent != nil {
delete(c.clients[secRef].refs, *dependent)
}
cliRef := c.clients[secRef]
if dependent == nil || len(cliRef.refs) == 0 {
delete(c.clients, secRef)
}
}
func decodeBase64(s []byte) (string, error) {
enc := base64.RawStdEncoding
dbuf := make([]byte, enc.DecodedLen(len(s)))
n, err := enc.Decode(dbuf, []byte(s))
if err != nil {
return "", err
}
return string(dbuf[:n]), nil
}
func secretDataToGitHubClientConfig(data map[string][]byte) (*github.Config, error) {
var (
conf github.Config
err error
)
conf.URL, err = decodeBase64(data["github_url"])
if err != nil {
return nil, err
}
conf.UploadURL, err = decodeBase64(data["github_upload_url"])
if err != nil {
return nil, err
}
conf.EnterpriseURL, err = decodeBase64(data["github_enterprise_url"])
if err != nil {
return nil, err
}
conf.RunnerGitHubURL, err = decodeBase64(data["github_runner_url"])
if err != nil {
return nil, err
}
conf.Token, err = decodeBase64(data["github_token"])
if err != nil {
return nil, err
}
appID, err := decodeBase64(data["github_app_id"])
if err != nil {
return nil, err
}
conf.AppID, err = strconv.ParseInt(appID, 10, 64)
if err != nil {
return nil, err
}
instID, err := decodeBase64(data["github_app_installation_id"])
if err != nil {
return nil, err
}
conf.AppInstallationID, err = strconv.ParseInt(instID, 10, 64)
if err != nil {
return nil, err
}
conf.AppPrivateKey, err = decodeBase64(data["github_app_private_key"])
if err != nil {
return nil, err
}
return &conf, nil
}
func refFromRunnerDeployment(rd *v1alpha1.RunnerDeployment) *runnerOwnerRef {
return &runnerOwnerRef{
kind: rd.Kind,
ns: rd.Namespace,
name: rd.Name,
}
}
func refFromRunnerReplicaSet(rs *v1alpha1.RunnerReplicaSet) *runnerOwnerRef {
return &runnerOwnerRef{
kind: rs.Kind,
ns: rs.Namespace,
name: rs.Name,
}
}
func refFromRunner(r *v1alpha1.Runner) *runnerOwnerRef {
return &runnerOwnerRef{
kind: r.Kind,
ns: r.Namespace,
name: r.Name,
}
}
func refFromRunnerPod(po *corev1.Pod) *runnerOwnerRef {
return &runnerOwnerRef{
kind: po.Kind,
ns: po.Namespace,
name: po.Name,
}
}
func refFromRunnerSet(rs *v1alpha1.RunnerSet) *runnerOwnerRef {
return &runnerOwnerRef{
kind: rs.Kind,
ns: rs.Namespace,
name: rs.Name,
}
}
func refFromHorizontalRunnerAutoscaler(hra *v1alpha1.HorizontalRunnerAutoscaler) *runnerOwnerRef {
return &runnerOwnerRef{
kind: hra.Kind,
ns: hra.Namespace,
name: hra.Name,
}
}
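The credential-hashing scheme in `initClientForSecret` above can be sketched in isolation. This is a hedged standalone sketch (the name `hashSecretData` is hypothetical, not part of ARC): it sorts the secret's keys before feeding the values into SHA-1, so the digest is independent of Go's randomized map iteration order, and any credential rotation changes the digest and evicts the cached client.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"sort"
)

// hashSecretData mirrors the digest computed in initClientForSecret:
// values are hashed in sorted-key order so the result is deterministic
// even though Go map iteration order is randomized.
func hashSecretData(data map[string][]byte) string {
	ks := make([]string, 0, len(data))
	for k := range data {
		ks = append(ks, k)
	}
	sort.Strings(ks)

	h := sha1.New()
	for _, k := range ks {
		h.Write(data[k])
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := map[string][]byte{"github_token": []byte("t1"), "github_url": []byte("u1")}
	b := map[string][]byte{"github_url": []byte("u1"), "github_token": []byte("t1")}

	// Same contents, different insertion order: digests match.
	fmt.Println(hashSecretData(a) == hashSecretData(b)) // true

	// Rotating a credential changes the digest, which invalidates the cached client.
	b["github_token"] = []byte("t2")
	fmt.Println(hashSecretData(a) == hashSecretData(b)) // false
}
```

Note that only the values enter the digest (in key order); the keys themselves are not hashed.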

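One subtlety in `secretDataToGitHubClientConfig` above: the fields are decoded with `base64.RawStdEncoding`, which implies the values stored in the secret are expected to be unpadded standard base64 (a padded value fails to decode). A minimal sketch of that behavior, reusing the same helper:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeBase64 matches the helper above: RawStdEncoding means the stored
// value must be standard base64 *without* trailing '=' padding.
func decodeBase64(s []byte) (string, error) {
	enc := base64.RawStdEncoding
	dbuf := make([]byte, enc.DecodedLen(len(s)))
	n, err := enc.Decode(dbuf, s)
	if err != nil {
		return "", err
	}
	return string(dbuf[:n]), nil
}

func main() {
	// "ghp_example" encoded as unpadded base64.
	v, err := decodeBase64([]byte("Z2hwX2V4YW1wbGU"))
	fmt.Println(v, err)

	// The same value with '=' padding fails under RawStdEncoding.
	_, err = decodeBase64([]byte("Z2hwX2V4YW1wbGU="))
	fmt.Println(err != nil) // true
}
```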

@@ -10,7 +10,9 @@ import (
 	"k8s.io/apimachinery/pkg/api/resource"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/types"
 	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
+	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 func newWorkGenericEphemeralVolume(t *testing.T, storageReq string) corev1.Volume {
@@ -125,6 +127,10 @@ func TestNewRunnerPod(t *testing.T) {
 	{
 		Name: "RUNNER_EPHEMERAL",
 		Value: "true",
 	},
+	{
+		Name: "RUNNER_STATUS_UPDATE_HOOK",
+		Value: "false",
+	},
 	{
 		Name: "DOCKER_HOST",
 		Value: "tcp://localhost:2376",
@@ -255,6 +261,10 @@ func TestNewRunnerPod(t *testing.T) {
 	{
 		Name: "RUNNER_EPHEMERAL",
 		Value: "true",
 	},
+	{
+		Name: "RUNNER_STATUS_UPDATE_HOOK",
+		Value: "false",
+	},
 },
 VolumeMounts: []corev1.VolumeMount{
 	{
@@ -333,6 +343,10 @@ func TestNewRunnerPod(t *testing.T) {
 	{
 		Name: "RUNNER_EPHEMERAL",
 		Value: "true",
 	},
+	{
+		Name: "RUNNER_STATUS_UPDATE_HOOK",
+		Value: "false",
+	},
 },
 VolumeMounts: []corev1.VolumeMount{
 	{
@@ -515,7 +529,7 @@ func TestNewRunnerPod(t *testing.T) {
 	for i := range testcases {
 		tc := testcases[i]
 		t.Run(tc.description, func(t *testing.T) {
-			got, err := newRunnerPod(tc.template, tc.config, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL)
+			got, err := newRunnerPod(tc.template, tc.config, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL, false)
 			require.NoError(t, err)
 			require.Equal(t, tc.want, got)
 		})
@@ -624,6 +638,10 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
 	{
 		Name: "RUNNER_EPHEMERAL",
 		Value: "true",
 	},
+	{
+		Name: "RUNNER_STATUS_UPDATE_HOOK",
+		Value: "false",
+	},
 	{
 		Name: "DOCKER_HOST",
 		Value: "tcp://localhost:2376",
@@ -769,6 +787,10 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
 	{
 		Name: "RUNNER_EPHEMERAL",
 		Value: "true",
 	},
+	{
+		Name: "RUNNER_STATUS_UPDATE_HOOK",
+		Value: "false",
+	},
 	{
 		Name: "RUNNER_NAME",
 		Value: "runner",
@@ -866,6 +888,10 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
 	{
 		Name: "RUNNER_EPHEMERAL",
 		Value: "true",
 	},
+	{
+		Name: "RUNNER_STATUS_UPDATE_HOOK",
+		Value: "false",
+	},
 	{
 		Name: "RUNNER_NAME",
 		Value: "runner",
@@ -1105,13 +1131,20 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
 	for i := range testcases {
 		tc := testcases[i]
+		rr := &testResourceReader{
+			objects: map[types.NamespacedName]client.Object{},
+		}
+
+		multiClient := NewMultiGitHubClient(rr, &github.Client{GithubBaseURL: githubBaseURL})
+
 		t.Run(tc.description, func(t *testing.T) {
 			r := &RunnerReconciler{
 				RunnerImage: defaultRunnerImage,
 				RunnerImagePullSecrets: defaultRunnerImagePullSecrets,
 				DockerImage: defaultDockerImage,
 				DockerRegistryMirror: defaultDockerRegistryMirror,
-				GitHubClient: &github.Client{GithubBaseURL: githubBaseURL},
+				GitHubClient: multiClient,
 				Scheme: scheme,
 			}
 			got, err := r.newPod(tc.runner)


@@ -6,7 +6,6 @@ import (
 	"net/http"
 	"time"
-	"github.com/actions-runner-controller/actions-runner-controller/github"
 	"github.com/go-logr/logr"
 	"gomodules.xyz/jsonpatch/v2"
 	admissionv1 "k8s.io/api/admission/v1"
@@ -29,7 +28,7 @@ type PodRunnerTokenInjector struct {
 	Name string
 	Log logr.Logger
 	Recorder record.EventRecorder
-	GitHubClient *github.Client
+	GitHubClient *MultiGitHubClient
 	decoder *admission.Decoder
 }
@@ -66,7 +65,12 @@ func (t *PodRunnerTokenInjector) Handle(ctx context.Context, req admission.Reque
 		return newEmptyResponse()
 	}
-	rt, err := t.GitHubClient.GetRegistrationToken(context.Background(), enterprise, org, repo, pod.Name)
+	ghc, err := t.GitHubClient.InitForRunnerPod(ctx, &pod)
+	if err != nil {
+		return admission.Errored(http.StatusInternalServerError, err)
+	}
+
+	rt, err := ghc.GetRegistrationToken(context.Background(), enterprise, org, repo, pod.Name)
 	if err != nil {
 		t.Log.Error(err, "Failed to get new registration token")
 		return admission.Errored(http.StatusInternalServerError, err)


@@ -20,6 +20,7 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"reflect"
 	"strconv"
 	"strings"
 	"time"
@@ -35,10 +36,10 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/reconcile"
 	corev1 "k8s.io/api/core/v1"
+	rbacv1 "k8s.io/api/rbac/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
-	"github.com/actions-runner-controller/actions-runner-controller/github"
 )
 const (
@@ -51,6 +52,8 @@ const (
 	EnvVarOrg = "RUNNER_ORG"
 	EnvVarRepo = "RUNNER_REPO"
+	EnvVarGroup = "RUNNER_GROUP"
+	EnvVarLabels = "RUNNER_LABELS"
 	EnvVarEnterprise = "RUNNER_ENTERPRISE"
 	EnvVarEphemeral = "RUNNER_EPHEMERAL"
 	EnvVarTrue = "true"
@@ -62,7 +65,7 @@ type RunnerReconciler struct {
 	Log logr.Logger
 	Recorder record.EventRecorder
 	Scheme *runtime.Scheme
-	GitHubClient *github.Client
+	GitHubClient *MultiGitHubClient
 	RunnerImage string
 	RunnerImagePullSecrets []string
 	DockerImage string
@@ -70,8 +73,8 @@ type RunnerReconciler struct {
 	Name string
 	RegistrationRecheckInterval time.Duration
 	RegistrationRecheckJitter time.Duration
+	UseRunnerStatusUpdateHook bool
 	UnregistrationRetryDelay time.Duration
 }
 // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners,verbs=get;list;watch;create;update;patch;delete
@@ -81,6 +84,9 @@ type RunnerReconciler struct {
 // +kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch;delete
 // +kubebuilder:rbac:groups=core,resources=pods/finalizers,verbs=get;list;watch;create;update;patch;delete
 // +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
+// +kubebuilder:rbac:groups=core,resources=serviceaccounts,verbs=create;delete;get
+// +kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=roles,verbs=create;delete;get
+// +kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=rolebindings,verbs=create;delete;get
 func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
 	log := r.Log.WithValues("runner", req.NamespacedName)
@@ -116,6 +122,8 @@ func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
 		return r.processRunnerDeletion(runner, ctx, log, nil)
 	}
+	r.GitHubClient.DeinitForRunner(&runner)
+
 	return r.processRunnerDeletion(runner, ctx, log, &pod)
 }
@@ -135,7 +143,7 @@ func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
 	ready := runnerPodReady(&pod)
-	if runner.Status.Phase != phase || runner.Status.Ready != ready {
+	if (runner.Status.Phase != phase || runner.Status.Ready != ready) && !r.UseRunnerStatusUpdateHook || runner.Status.Phase == "" && r.UseRunnerStatusUpdateHook {
 		if pod.Status.Phase == corev1.PodRunning {
 			// Seeing this message, you can expect the runner to become `Running` soon.
 			log.V(1).Info(
@@ -256,6 +264,96 @@ func (r *RunnerReconciler) processRunnerCreation(ctx context.Context, runner v1a
 		return ctrl.Result{}, err
 	}
needsServiceAccount := runner.Spec.ServiceAccountName == "" && (r.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes")
if needsServiceAccount {
serviceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
}
if res := r.createObject(ctx, serviceAccount, serviceAccount.ObjectMeta, &runner, log); res != nil {
return *res, nil
}
rules := []rbacv1.PolicyRule{}
if r.UseRunnerStatusUpdateHook {
rules = append(rules, []rbacv1.PolicyRule{
{
APIGroups: []string{"actions.summerwind.dev"},
Resources: []string{"runners/status"},
Verbs: []string{"get", "update", "patch"},
ResourceNames: []string{runner.ObjectMeta.Name},
},
}...)
}
if runner.Spec.ContainerMode == "kubernetes" {
// Permissions based on https://github.com/actions/runner-container-hooks/blob/main/packages/k8s/README.md
rules = append(rules, []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"pods"},
Verbs: []string{"get", "list", "create", "delete"},
},
{
APIGroups: []string{""},
Resources: []string{"pods/exec"},
Verbs: []string{"get", "create"},
},
{
APIGroups: []string{""},
Resources: []string{"pods/log"},
Verbs: []string{"get", "list", "watch"},
},
{
APIGroups: []string{"batch"},
Resources: []string{"jobs"},
Verbs: []string{"get", "list", "create", "delete"},
},
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{"get", "list", "create", "delete"},
},
}...)
}
role := &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
Rules: rules,
}
if res := r.createObject(ctx, role, role.ObjectMeta, &runner, log); res != nil {
return *res, nil
}
roleBinding := &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Kind: "Role",
Name: runner.ObjectMeta.Name,
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
},
},
}
if res := r.createObject(ctx, roleBinding, roleBinding.ObjectMeta, &runner, log); res != nil {
return *res, nil
}
}
 	if err := r.Create(ctx, &newPod); err != nil {
 		if kerrors.IsAlreadyExists(err) {
 			// Gracefully handle pod-already-exists errors due to informer cache delay.
@@ -278,6 +376,27 @@ func (r *RunnerReconciler) processRunnerCreation(ctx context.Context, runner v1a
 	return ctrl.Result{}, nil
 }
func (r *RunnerReconciler) createObject(ctx context.Context, obj client.Object, meta metav1.ObjectMeta, runner *v1alpha1.Runner, log logr.Logger) *ctrl.Result {
kind := strings.Split(reflect.TypeOf(obj).String(), ".")[1]
if err := ctrl.SetControllerReference(runner, obj, r.Scheme); err != nil {
log.Error(err, fmt.Sprintf("Could not add owner reference to %s %s. %s", kind, meta.Name, err.Error()))
return &ctrl.Result{Requeue: true}
}
if err := r.Create(ctx, obj); err != nil {
if kerrors.IsAlreadyExists(err) {
log.Info(fmt.Sprintf("Failed to create %s %s as it already exists. Reusing existing %s", kind, meta.Name, kind))
r.Recorder.Event(runner, corev1.EventTypeNormal, fmt.Sprintf("%sReused", kind), fmt.Sprintf("Reused %s '%s'", kind, meta.Name))
return nil
}
log.Error(err, fmt.Sprintf("Retrying as failed to create %s %s resource", kind, meta.Name))
return &ctrl.Result{Requeue: true}
}
r.Recorder.Event(runner, corev1.EventTypeNormal, fmt.Sprintf("%sCreated", kind), fmt.Sprintf("Created %s '%s'", kind, meta.Name))
log.Info(fmt.Sprintf("Created %s", kind), "name", meta.Name)
return nil
}
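The `createObject` helper above derives the event and kind name from the object's reflected type name. A small standalone sketch of that trick (the `Role` and `ServiceAccount` types here are stand-ins for the real API types, and `kindOf` is a hypothetical name for the inline expression):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// Stand-ins for the Kubernetes API types; only the type name matters here.
type Role struct{}
type ServiceAccount struct{}

// kindOf mirrors the extraction in createObject: reflect.TypeOf on a pointer
// yields a string like "*main.Role", and the token after the dot is the kind.
func kindOf(obj interface{}) string {
	return strings.Split(reflect.TypeOf(obj).String(), ".")[1]
}

func main() {
	fmt.Println(kindOf(&Role{}))           // Role
	fmt.Println(kindOf(&ServiceAccount{})) // ServiceAccount
}
```

This relies on the type's package name containing no dot, which holds for the Kubernetes API packages (`v1`, `rbacv1`, and so on) used by the controller.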
func (r *RunnerReconciler) updateRegistrationToken(ctx context.Context, runner v1alpha1.Runner) (bool, error) { func (r *RunnerReconciler) updateRegistrationToken(ctx context.Context, runner v1alpha1.Runner) (bool, error) {
if runner.IsRegisterable() { if runner.IsRegisterable() {
return false, nil return false, nil
@@ -285,7 +404,12 @@ func (r *RunnerReconciler) updateRegistrationToken(ctx context.Context, runner v
log := r.Log.WithValues("runner", runner.Name)
ghc, err := r.GitHubClient.InitForRunner(ctx, &runner)
if err != nil {
return false, err
}
rt, err := ghc.GetRegistrationToken(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
// An error can be a permanent, permission issue like the below:
// POST https://api.github.com/enterprises/YOUR_ENTERPRISE/actions/runners/registration-token: 403 Resource not accessible by integration []
@@ -325,6 +449,11 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
labels[k] = v
}
ghc, err := r.GitHubClient.InitForRunner(context.Background(), &runner)
if err != nil {
return corev1.Pod{}, err
}
// This implies that...
//
// (1) We recreate the runner pod whenever the runner has changes in:
@@ -348,7 +477,7 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
filterLabels(runner.ObjectMeta.Labels, LabelKeyRunnerTemplateHash),
runner.ObjectMeta.Annotations,
runner.Spec,
ghc.GithubBaseURL,
// Token change should trigger replacement.
// We need to include this explicitly here because
// runner.Spec does not contain the possibly updated token stored in the
@@ -426,7 +555,7 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
}
}
pod, err := newRunnerPodWithContainerMode(runner.Spec.ContainerMode, template, runner.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, ghc.GithubBaseURL, r.UseRunnerStatusUpdateHook)
if err != nil {
return pod, err
}
@@ -474,9 +603,13 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
if runnerSpec.NodeSelector != nil {
pod.Spec.NodeSelector = runnerSpec.NodeSelector
}
if runnerSpec.ServiceAccountName != "" {
pod.Spec.ServiceAccountName = runnerSpec.ServiceAccountName
} else if r.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes" {
pod.Spec.ServiceAccountName = runner.ObjectMeta.Name
}
if runnerSpec.AutomountServiceAccountToken != nil {
pod.Spec.AutomountServiceAccountToken = runnerSpec.AutomountServiceAccountToken
}
@@ -589,7 +722,7 @@ func runnerHookEnvs(pod *corev1.Pod) ([]corev1.EnvVar, error) {
}, nil
}
func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string, useRunnerStatusUpdateHook bool) (corev1.Pod, error) {
var (
privileged bool = true
dockerdInRunner bool = runnerSpec.DockerdWithinRunnerContainer != nil && *runnerSpec.DockerdWithinRunnerContainer
@@ -609,6 +742,9 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
// This label selector is used by default when rd.Spec.Selector is empty.
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyRunner, "")
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyPodMutation, LabelValuePodMutation)
if runnerSpec.GitHubAPICredentialsFrom != nil {
template.ObjectMeta.Annotations = CloneAndAddLabel(template.ObjectMeta.Annotations, annotationKeyGitHubAPICredsSecret, runnerSpec.GitHubAPICredentialsFrom.SecretRef.Name)
}
workDir := runnerSpec.WorkDir
if workDir == "" {
@@ -638,11 +774,11 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
Value: runnerSpec.Enterprise,
},
{
Name: EnvVarLabels,
Value: strings.Join(runnerSpec.Labels, ","),
},
{
Name: EnvVarGroup,
Value: runnerSpec.Group,
},
{
@@ -665,6 +801,10 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
Name: EnvVarEphemeral,
Value: fmt.Sprintf("%v", ephemeral),
},
{
Name: "RUNNER_STATUS_UPDATE_HOOK",
Value: fmt.Sprintf("%v", useRunnerStatusUpdateHook),
},
}
var seLinuxOptions *corev1.SELinuxOptions
@@ -962,8 +1102,8 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
return *pod, nil
}
func newRunnerPod(template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string, useRunnerStatusUpdateHookEphemeralRole bool) (corev1.Pod, error) {
return newRunnerPodWithContainerMode("", template, runnerSpec, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL, useRunnerStatusUpdateHookEphemeralRole)
}
func (r *RunnerReconciler) SetupWithManager(mgr ctrl.Manager) error {


@@ -32,8 +32,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
corev1 "k8s.io/api/core/v1"
"github.com/actions-runner-controller/actions-runner-controller/github"
)
// RunnerPodReconciler reconciles a Runner object
@@ -42,7 +40,7 @@ type RunnerPodReconciler struct {
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *MultiGitHubClient
Name string
RegistrationRecheckInterval time.Duration
RegistrationRecheckJitter time.Duration
@@ -97,6 +95,11 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
}
}
ghc, err := r.GitHubClient.InitForRunnerPod(ctx, &runnerPod)
if err != nil {
return ctrl.Result{}, err
}
if runnerPod.ObjectMeta.DeletionTimestamp.IsZero() {
finalizers, added := addFinalizer(runnerPod.ObjectMeta.Finalizers, runnerPodFinalizerName)
@@ -148,7 +151,7 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
// In a standard scenario, the upstream controller, like runnerset-controller, ensures this runner to be gracefully stopped before the deletion timestamp is set.
// But for the case that the user manually deleted it for whatever reason,
// we have to ensure it to gracefully stop now.
updatedPod, res, err := tickRunnerGracefulStop(ctx, r.unregistrationRetryDelay(), log, ghc, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
if res != nil {
return *res, err
}
@@ -164,6 +167,8 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
log.V(2).Info("Removed finalizer")
r.GitHubClient.DeinitForRunnerPod(updatedPod)
return ctrl.Result{}, nil
}
@@ -202,7 +207,7 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, nil
}
po, res, err := ensureRunnerPodRegistered(ctx, log, ghc, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
if res != nil {
return *res, err
}
@@ -216,7 +221,7 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
//
// In a standard scenario, ARC starts the unregistration process before marking the pod for deletion at all,
// so that it isn't subject to terminationGracePeriod and can safely take hours to finish its work.
_, res, err := tickRunnerGracefulStop(ctx, r.unregistrationRetryDelay(), log, ghc, r.Client, enterprise, org, repo, runnerPod.Name, &runnerPod)
if res != nil {
return *res, err
}


@@ -32,17 +32,15 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/actions-runner-controller/actions-runner-controller/api/v1alpha1"
"github.com/actions-runner-controller/actions-runner-controller/github"
)
// RunnerReplicaSetReconciler reconciles a Runner object
type RunnerReplicaSetReconciler struct {
client.Client
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *github.Client
Name string
}
const (


@@ -52,15 +52,13 @@ func SetupTest(ctx2 context.Context) *corev1.Namespace {
runnersList = fake.NewRunnersList()
server = runnersList.GetServer()
ghClient := newGithubClient(server)
controller := &RunnerReplicaSetReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: ghClient,
Name: "runnerreplicaset-" + ns.Name,
}
err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")


@@ -45,12 +45,13 @@ type RunnerSetReconciler struct {
Recorder record.EventRecorder
Scheme *runtime.Scheme
CommonRunnerLabels []string
GitHubClient *MultiGitHubClient
RunnerImage string
RunnerImagePullSecrets []string
DockerImage string
DockerRegistryMirror string
UseRunnerStatusUpdateHook bool
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnersets,verbs=get;list;watch;create;update;patch;delete
@@ -80,6 +81,8 @@ func (r *RunnerSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
}
if !runnerSet.ObjectMeta.DeletionTimestamp.IsZero() {
r.GitHubClient.DeinitForRunnerSet(runnerSet)
return ctrl.Result{}, nil
}
@@ -97,7 +100,7 @@ func (r *RunnerSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, nil
}
desiredStatefulSet, err := r.newStatefulSet(ctx, runnerSet)
if err != nil {
r.Recorder.Event(runnerSet, corev1.EventTypeNormal, "RunnerAutoscalingFailure", err.Error())
@@ -185,7 +188,7 @@ func getRunnerSetSelector(runnerSet *v1alpha1.RunnerSet) *metav1.LabelSelector {
var LabelKeyPodMutation = "actions-runner-controller/inject-registration-token"
var LabelValuePodMutation = "true"
func (r *RunnerSetReconciler) newStatefulSet(ctx context.Context, runnerSet *v1alpha1.RunnerSet) (*appsv1.StatefulSet, error) {
runnerSetWithOverrides := *runnerSet.Spec.DeepCopy()
runnerSetWithOverrides.Labels = append(runnerSetWithOverrides.Labels, r.CommonRunnerLabels...)
@@ -221,7 +224,14 @@ func (r *RunnerSetReconciler) newStatefulSet(runnerSet *v1alpha1.RunnerSet) (*ap
template.ObjectMeta.Labels = CloneAndAddLabel(template.ObjectMeta.Labels, LabelKeyRunnerSetName, runnerSet.Name)
ghc, err := r.GitHubClient.InitForRunnerSet(ctx, runnerSet)
if err != nil {
return nil, err
}
githubBaseURL := ghc.GithubBaseURL
pod, err := newRunnerPodWithContainerMode(runnerSet.Spec.RunnerConfig.ContainerMode, template, runnerSet.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, githubBaseURL, r.UseRunnerStatusUpdateHook)
if err != nil {
return nil, err
}


@@ -0,0 +1,31 @@
package controllers
import (
"context"
"errors"
"reflect"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type testResourceReader struct {
objects map[types.NamespacedName]client.Object
}
func (r *testResourceReader) Get(_ context.Context, nsName types.NamespacedName, obj client.Object) error {
ret, ok := r.objects[nsName]
if !ok {
return &kerrors.StatusError{ErrStatus: metav1.Status{Reason: metav1.StatusReasonNotFound}}
}
v := reflect.ValueOf(obj)
if v.Kind() != reflect.Ptr {
return errors.New("obj must be a pointer")
}
v.Elem().Set(reflect.ValueOf(ret).Elem())
return nil
}


@@ -0,0 +1,35 @@
package controllers
import (
"context"
"testing"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
func TestResourceReader(t *testing.T) {
rr := &testResourceReader{
objects: map[types.NamespacedName]client.Object{
{Namespace: "default", Name: "sec1"}: &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "sec1",
},
Data: map[string][]byte{
"foo": []byte("bar"),
},
},
},
}
var sec corev1.Secret
err := rr.Get(context.Background(), types.NamespacedName{Namespace: "default", Name: "sec1"}, &sec)
require.NoError(t, err)
require.Equal(t, []byte("bar"), sec.Data["foo"])
}
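The fake reader's `Get` works by reflecting the stored object into the caller's pointer. The same copy-through-a-pointer pattern in isolation (the `copyInto`/`secret` names here are illustrative, not part of ARC):

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// copyInto sets *dst to *src via reflection, mirroring how the fake
// resource reader hands a stored object back through a client.Object pointer.
func copyInto(dst, src interface{}) error {
	v := reflect.ValueOf(dst)
	if v.Kind() != reflect.Ptr {
		return errors.New("dst must be a pointer")
	}
	// Dereference both sides and copy the underlying struct value.
	v.Elem().Set(reflect.ValueOf(src).Elem())
	return nil
}

type secret struct{ Data map[string]string }

func main() {
	stored := &secret{Data: map[string]string{"foo": "bar"}}
	var out secret
	if err := copyInto(&out, stored); err != nil {
		panic(err)
	}
	fmt.Println(out.Data["foo"]) // prints "bar"
}
```

This is why the test above can declare `var sec corev1.Secret` on the stack and still receive the stored object's fields.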

go.mod

@@ -3,7 +3,7 @@ module github.com/actions-runner-controller/actions-runner-controller
go 1.18
require (
github.com/bradleyfalzon/ghinstallation/v2 v2.1.0
github.com/davecgh/go-spew v1.1.1
github.com/go-logr/logr v1.2.3
github.com/google/go-cmp v0.5.8
@@ -22,7 +22,7 @@ require (
k8s.io/api v0.24.2
k8s.io/apimachinery v0.24.2
k8s.io/client-go v0.24.2
sigs.k8s.io/controller-runtime v0.12.3
sigs.k8s.io/yaml v1.3.0
)
@@ -40,7 +40,7 @@ require (
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.1 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect

go.sum

@@ -84,6 +84,8 @@ github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnweb
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/bradleyfalzon/ghinstallation/v2 v2.0.4 h1:tXKVfhE7FcSkhkv0UwkLvPDeZ4kz6OXd0PKPlFqf81M=
github.com/bradleyfalzon/ghinstallation/v2 v2.0.4/go.mod h1:B40qPqJxWE0jDZgOR1JmaMy+4AY1eBP+IByOvqyAKp0=
github.com/bradleyfalzon/ghinstallation/v2 v2.1.0 h1:5+NghM1Zred9Z078QEZtm28G/kfDfZN/92gkDlLwGVA=
github.com/bradleyfalzon/ghinstallation/v2 v2.1.0/go.mod h1:Xg3xPRN5Mcq6GDqeUVhFbjEWMb4JHCyWEeeBGEYQoTU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
@@ -181,6 +183,8 @@ github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.0.0 h1:RAqyYixv1p7uEnocuy8P1nru5wprCh/MH2BIlW5z5/o=
github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-jwt/jwt/v4 v4.4.1 h1:pC5DB52sCeK48Wlb9oPcdhnjkz1TKt1D/P7WKJ0kUcQ=
github.com/golang-jwt/jwt/v4 v4.4.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -1043,6 +1047,8 @@ sigs.k8s.io/controller-runtime v0.11.2 h1:H5GTxQl0Mc9UjRJhORusqfJCIjBO8UtUxGggCw
sigs.k8s.io/controller-runtime v0.11.2/go.mod h1:P6QCzrEjLaZGqHsfd+os7JQ+WFZhvB8MRFsn4dWF7O4=
sigs.k8s.io/controller-runtime v0.12.2 h1:nqV02cvhbAj7tbt21bpPpTByrXGn2INHRsi39lXy9sE=
sigs.k8s.io/controller-runtime v0.12.2/go.mod h1:qKsk4WE6zW2Hfj0G4v10EnNB2jMG1C+NTb8h+DwCoU0=
sigs.k8s.io/controller-runtime v0.12.3 h1:FCM8xeY/FI8hoAfh/V4XbbYMY20gElh9yh+A98usMio=
sigs.k8s.io/controller-runtime v0.12.3/go.mod h1:qKsk4WE6zW2Hfj0G4v10EnNB2jMG1C+NTb8h+DwCoU0=
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 h1:fD1pz4yfdADVNfFmcP2aBEtudwUQ1AlLnRBALr33v3s=
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNzagwnNoseA6OxSUutVw05NhYDRs=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 h1:kDi4JBNAsJWfz1aEXhO8Jg87JJaPNLh5tIzYHgStQ9Y=

main.go

@@ -68,14 +68,14 @@ func main() {
err error
ghClient *github.Client
metricsAddr string
enableLeaderElection bool
runnerStatusUpdateHook bool
leaderElectionId string
port int
syncPeriod time.Duration
gitHubAPICacheDuration time.Duration
defaultScaleDownDelay time.Duration
runnerImage string
runnerImagePullSecrets stringSlice
@@ -112,7 +112,7 @@ func main() {
flag.StringVar(&c.BasicauthUsername, "github-basicauth-username", c.BasicauthUsername, "Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API")
flag.StringVar(&c.BasicauthPassword, "github-basicauth-password", c.BasicauthPassword, "Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API")
flag.StringVar(&c.RunnerGitHubURL, "runner-github-url", c.RunnerGitHubURL, "GitHub URL to be used by runners during registration")
flag.BoolVar(&runnerStatusUpdateHook, "runner-status-update-hook", false, "Use custom RBAC for runners (role, role binding and service account).")
flag.DurationVar(&defaultScaleDownDelay, "default-scale-down-delay", controllers.DefaultScaleDownDelay, "The approximate delay for a scale down followed by a scale up, used to prevent flapping (down->up->down->... loop)")
flag.IntVar(&port, "port", 9443, "The port to which the admission webhook endpoint should bind")
flag.DurationVar(&syncPeriod, "sync-period", 1*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled.")
@@ -147,13 +147,19 @@ func main() {
os.Exit(1)
}
multiClient := controllers.NewMultiGitHubClient(
mgr.GetClient(),
ghClient,
)
runnerReconciler := &controllers.RunnerReconciler{
Client: mgr.GetClient(),
Log: log.WithName("runner"),
Scheme: mgr.GetScheme(),
GitHubClient: multiClient,
DockerImage: dockerImage,
DockerRegistryMirror: dockerRegistryMirror,
UseRunnerStatusUpdateHook: runnerStatusUpdateHook,
// Defaults for self-hosted runner containers
RunnerImage: runnerImage,
RunnerImagePullSecrets: runnerImagePullSecrets,
@@ -165,10 +171,9 @@ func main() {
}
runnerReplicaSetReconciler := &controllers.RunnerReplicaSetReconciler{
Client: mgr.GetClient(),
Log: log.WithName("runnerreplicaset"),
Scheme: mgr.GetScheme(),
GitHubClient: ghClient,
}
if err = runnerReplicaSetReconciler.SetupWithManager(mgr); err != nil {
@@ -195,27 +200,20 @@ func main() {
CommonRunnerLabels: commonRunnerLabels,
DockerImage: dockerImage,
DockerRegistryMirror: dockerRegistryMirror,
GitHubClient: multiClient,
// Defaults for self-hosted runner containers
RunnerImage: runnerImage,
RunnerImagePullSecrets: runnerImagePullSecrets,
UseRunnerStatusUpdateHook: runnerStatusUpdateHook,
}
if err = runnerSetReconciler.SetupWithManager(mgr); err != nil {
log.Error(err, "unable to create controller", "controller", "RunnerSet")
os.Exit(1)
}
if gitHubAPICacheDuration == 0 {
gitHubAPICacheDuration = syncPeriod - 10*time.Second
}
if gitHubAPICacheDuration < 0 {
gitHubAPICacheDuration = 0
}
log.Info(
"Initializing actions-runner-controller",
"github-api-cache-duration", gitHubAPICacheDuration,
"default-scale-down-delay", defaultScaleDownDelay,
"sync-period", syncPeriod,
"default-runner-image", runnerImage,
@@ -230,8 +228,7 @@ func main() {
Client: mgr.GetClient(),
Log: log.WithName("horizontalrunnerautoscaler"),
Scheme: mgr.GetScheme(),
GitHubClient: multiClient,
CacheDuration: gitHubAPICacheDuration,
DefaultScaleDownDelay: defaultScaleDownDelay,
}
@@ -239,7 +236,7 @@ func main() {
Client: mgr.GetClient(),
Log: log.WithName("runnerpod"),
Scheme: mgr.GetScheme(),
GitHubClient: multiClient,
}
runnerPersistentVolumeReconciler := &controllers.RunnerPersistentVolumeReconciler{
@@ -290,7 +287,7 @@ func main() {
injector := &controllers.PodRunnerTokenInjector{
Client: mgr.GetClient(),
GitHubClient: multiClient,
Log: ctrl.Log.WithName("webhook").WithName("PodRunnerTokenInjector"),
}
if err = injector.SetupWithManager(mgr); err != nil {


@@ -98,10 +98,13 @@ RUN mkdir /opt/hostedtoolcache \
# We place the scripts in `/usr/bin` so that users who extend this image can
# override them with scripts of the same name placed in `/usr/local/bin`.
COPY entrypoint.sh logger.bash startup.sh update-status /usr/bin/
COPY supervisor/ /etc/supervisor/conf.d/
RUN chmod +x /usr/bin/startup.sh /usr/bin/entrypoint.sh
+# Configure hooks folder structure.
+COPY hooks /etc/arc/hooks/
 # arch command on OS X reports "i386" for Intel CPUs regardless of bitness
 RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
 && if [ "$ARCH" = "arm64" ]; then export ARCH=aarch64 ; fi \


@@ -116,7 +116,10 @@ RUN mkdir /opt/hostedtoolcache \
 # We place the scripts in `/usr/bin` so that users who extend this image can
 # override them with scripts of the same name placed in `/usr/local/bin`.
-COPY entrypoint.sh logger.bash /usr/bin/
+COPY entrypoint.sh logger.bash update-status /usr/bin/
+# Configure hooks folder structure.
+COPY hooks /etc/arc/hooks/
 ENV HOME=/home/runner
 # Add the Python "User Script Directory" to the PATH


@@ -4,6 +4,13 @@ source logger.bash
 RUNNER_ASSETS_DIR=${RUNNER_ASSETS_DIR:-/runnertmp}
 RUNNER_HOME=${RUNNER_HOME:-/runner}
+# Let the GitHub runner execute these hooks. These environment variables are used by GitHub's Runner as described here:
+# https://github.com/actions/runner/blob/main/docs/adrs/1751-runner-job-hooks.md
+# Scripts referenced in the ACTIONS_RUNNER_HOOK_ environment variables must end in .sh or .ps1
+# to be valid hook scripts; otherwise GitHub will fail to run the hook.
+export ACTIONS_RUNNER_HOOK_JOB_STARTED=/etc/arc/hooks/job-started.sh
+export ACTIONS_RUNNER_HOOK_JOB_COMPLETED=/etc/arc/hooks/job-completed.sh
 if [ ! -z "${STARTUP_DELAY_IN_SECONDS}" ]; then
   log.notice "Delaying startup by ${STARTUP_DELAY_IN_SECONDS} seconds"
   sleep ${STARTUP_DELAY_IN_SECONDS}
@@ -77,6 +84,8 @@ if [ "${DISABLE_RUNNER_UPDATE:-}" == "true" ]; then
 log.debug 'Passing --disableupdate to config.sh to disable automatic runner updates.'
 fi
+update-status "Registering"
 retries_left=10
 while [[ ${retries_left} -gt 0 ]]; do
   log.debug 'Configuring the runner.'
@@ -155,4 +164,5 @@ unset RUNNER_NAME RUNNER_REPO RUNNER_TOKEN STARTUP_DELAY_IN_SECONDS DISABLE_WAIT
 if [ -z "${UNITTEST:-}" ]; then
   mapfile -t env </etc/environment
 fi
+update-status "Idle"
 exec env -- "${env[@]}" ./run.sh


@@ -0,0 +1,4 @@
+#!/usr/bin/env bash
+set -u
+exec update-status Idle

runner/hooks/job-completed.sh Executable file

@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+set -Eeuo pipefail
+# shellcheck source=runner/logger.bash
+source logger.bash
+log.debug "Running ARC Job Completed Hooks"
+for hook in /etc/arc/hooks/job-completed.d/*; do
+  log.debug "Running hook: $hook"
+  "$hook" "$@"
+done
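`job-completed.sh` above dispatches to every executable under `/etc/arc/hooks/job-completed.d/`. A user-supplied drop-in hook might look like this (hypothetical example; the file name and log path are illustrative, not part of this change):

```shell
#!/usr/bin/env bash
# Hypothetical drop-in for /etc/arc/hooks/job-completed.d/50-log-completion:
# append a completion record for the job that just finished on this runner.
set -Eeuo pipefail
echo "job completed on runner ${HOSTNAME:-unknown} at $(date -u +%FT%TZ)" >> /tmp/arc-job-log
```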


@@ -0,0 +1,4 @@
+#!/usr/bin/env bash
+set -u
+exec update-status Running "Run $GITHUB_RUN_ID from $GITHUB_REPOSITORY"


@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+set -Eeuo pipefail
+# shellcheck source=runner/logger.bash
+source logger.bash
+log.debug "Running ARC Job Started Hooks"
+for hook in /etc/arc/hooks/job-started.d/*; do
+  log.debug "Running hook: $hook"
+  "$hook" "$@"
+done

runner/update-status Executable file

@@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+set -Eeuo pipefail
+if [[ ${1:-} == '' ]]; then
+  # shellcheck source=runner/logger.bash
+  source logger.bash
+  log.error "Missing required argument -- '<phase>'"
+  exit 64
+fi
+if [[ ${RUNNER_STATUS_UPDATE_HOOK:-false} == true ]]; then
+  apiserver=https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}
+  serviceaccount=/var/run/secrets/kubernetes.io/serviceaccount
+  namespace=$(cat ${serviceaccount}/namespace)
+  token=$(cat ${serviceaccount}/token)
+  phase=$1
+  shift
+  jq -n --arg phase "$phase" --arg message "${*:-}" '.status.phase = $phase | .status.message = $message' | curl \
+    --cacert ${serviceaccount}/ca.crt \
+    --data @- \
+    --noproxy '*' \
+    --header "Content-Type: application/merge-patch+json" \
+    --header "Authorization: Bearer ${token}" \
+    --show-error \
+    --silent \
+    --request PATCH \
+    "${apiserver}/apis/actions.summerwind.dev/v1alpha1/namespaces/${namespace}/runners/${HOSTNAME}/status" \
+    1>&-
+fi
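For reference, the `jq -n` invocation in `update-status` builds a JSON merge-patch body like the one below before piping it to curl (a minimal sketch, assuming `jq` is installed; the run ID and repository are made-up values):

```shell
# Reproduce the merge-patch body that update-status PATCHes to the
# Runner resource's /status subresource (compact output for readability).
jq -nc --arg phase "Running" --arg message "Run 42 from example/repo" \
  '.status.phase = $phase | .status.message = $message'
# → {"status":{"phase":"Running","message":"Run 42 from example/repo"}}
```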


@@ -105,6 +105,10 @@ func TestE2E(t *testing.T) {
 }
 t.Run("RunnerSets", func(t *testing.T) {
+	if os.Getenv("ARC_E2E_SKIP_RUNNERSETS") != "" {
+		t.Skip("RunnerSets test has been skipped due to ARC_E2E_SKIP_RUNNERSETS")
+	}
 	var (
 		testID string
 	)
@@ -250,6 +254,7 @@ type env struct {
 dockerdWithinRunnerContainer bool
 remoteKubeconfig string
 imagePullSecretName string
+imagePullPolicy string
 vars vars
 VerifyTimeout time.Duration
@@ -363,6 +368,12 @@ func initTestEnv(t *testing.T, k8sMinorVer string, vars vars) *env {
 e.imagePullSecretName = testing.Getenv(t, "ARC_E2E_IMAGE_PULL_SECRET_NAME", "")
 e.vars = vars
+if e.remoteKubeconfig != "" {
+	e.imagePullPolicy = "Always"
+} else {
+	e.imagePullPolicy = "IfNotPresent"
+}
 if e.remoteKubeconfig == "" {
 	e.Kind = testing.StartKind(t, k8sMinorVer, testing.Preload(images...))
 	e.Env.Kubeconfig = e.Kind.Kubeconfig()
@@ -453,6 +464,7 @@ func (e *env) installActionsRunnerController(t *testing.T, repo, tag, testID str
 "NAME=" + repo,
 "VERSION=" + tag,
 "IMAGE_PULL_SECRET=" + e.imagePullSecretName,
+"IMAGE_PULL_POLICY=" + e.imagePullPolicy,
 }
 if e.useApp {