Compare commits

...

73 Commits

Author SHA1 Message Date
Francesco Renzi
a5acb44ae0 Convert two more tests 2023-05-15 12:54:02 +01:00
Francesco Renzi
8243fa121c use TestMain 2023-03-23 15:58:50 +00:00
Francesco Renzi
93b6e7fa31 One more test, in parallel 2023-03-22 12:57:57 +00:00
Francesco Renzi
065655be7e Experimenting with no ginkgo/gomega 2023-03-22 11:54:15 +00:00
Bassem Dghaidi
f49d08e4bc Update 2022-12-05-adding-labels-k8s-resources.md (#2420) 2023-03-17 06:39:56 -04:00
Tingluo Huang
064039afc0 Ignore extra dind container when containerMode.type=dind. (#2418) 2023-03-17 09:26:51 +01:00
Nikola Jokic
e5d8d65396 Introduce ADR change for adding labels to our resources (#2407)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-16 11:02:42 -04:00
Bassem Dghaidi
c465ace8fb Update the values.yaml sample for improved clarity (#2416) 2023-03-16 11:02:18 -04:00
Tingluo Huang
34f3878829 Fix helm chart rendering errors. (#2414) 2023-03-16 09:21:43 -04:00
Tingluo Huang
44c3931d8e Adding e2e workflows to test dind, kube mode and proxy (#2412) 2023-03-15 12:17:11 -04:00
Tingluo Huang
08acb1b831 Get RunnerScaleSet based on both RunnerGroupId and Name. (#2413) 2023-03-15 11:10:09 -04:00
Tingluo Huang
40811ebe0e Support the controller watching a single namespace. (#2374) 2023-03-14 10:52:25 -04:00
github-actions[bot]
3417c5a3a8 Update runner to version 2.303.0 (#2411)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-03-14 15:41:03 +01:00
Bassem Dghaidi
172faa883c Fix GITHUB_TOKEN permissions (#2410) 2023-03-14 10:38:04 -04:00
Tingluo Huang
9e6c7d019f Delay role/rolebinding creation to gha-runner-scale-set installation time (#2363) 2023-03-14 09:45:44 -04:00
Bassem Dghaidi
9fbcafa703 Fix canary image tag name (#2409) 2023-03-14 09:29:10 -04:00
Tingluo Huang
2bf83d0d7f Remove list/watch secrets permission from the manager cluster role. (#2276) 2023-03-14 09:23:14 -04:00
Bassem Dghaidi
19d30dea5f Add docker buildx pre-requisites (#2408) 2023-03-14 09:22:38 -04:00
Bassem Dghaidi
6c66c1633f Prevent releases on wrong tag name (#2406) 2023-03-14 09:13:25 -04:00
Bassem Dghaidi
e55708588b Add gha-runner-scale-set-controller canary build (#2405) 2023-03-14 09:12:53 -04:00
Tingluo Huang
261d4371b5 Update E2E test workflow. (#2395) 2023-03-14 09:00:07 -04:00
Tingluo Huang
bd9f32e354 Create separate chart validation workflow for gha-* charts. (#2393)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-13 12:44:54 -04:00
Nikola Jokic
babbfc77d5 Surface EphemeralRunnerSet stats to AutoscalingRunnerSet (#2382) 2023-03-13 16:16:28 +01:00
Bassem Dghaidi
322df79617 Delete renovate.json5 (#2397) 2023-03-13 08:39:07 -04:00
Bassem Dghaidi
1c7c6639ed Fix wrong file name in the workflow (#2394) 2023-03-13 06:56:21 -04:00
Hamish Forbes
bcaac39a2e feat(actionsmetrics): Add owner and workflow_name labels to workflow job metrics (#2225) 2023-03-13 10:50:36 +09:00
Milas Bowman
af625dd1cb Upgrade to Docker Engine v20.10.23 (#2328)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-13 10:29:40 +09:00
Bassem Dghaidi
44969659df Add upgrade steps (#2392)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-10 12:14:00 -05:00
Nikola Jokic
a5f98dea75 Refactor main.go and introduce make run-scaleset to be able to run manager locally (#2337) 2023-03-10 18:05:51 +01:00
Francesco Renzi
1d24d3b00d Prepare 0.3.0 release (#2388)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-10 10:28:07 -05:00
Ava Stancu
9994d3aa60 replaced nonexistent variable with correct one for tag (#2390) 2023-03-10 16:57:35 +02:00
Bassem Dghaidi
a2ea12e93c Fix test's quotes issue (#2389)
Co-authored-by: Francesco Renzi <rentziass@gmail.com>
2023-03-10 09:22:19 -05:00
Tingluo Huang
d7b589bed5 Helm chart react changes for the new runner image. (#2348) 2023-03-10 11:18:21 +00:00
Ava Stancu
4f293c6f79 Build local image and load to kind cluster (#2378) 2023-03-10 13:16:07 +02:00
Francesco Renzi
c569304271 Add support for self-signed CA certificates (#2268)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-09 17:23:32 +00:00
Tingluo Huang
068f987238 Update permission ADR based on prototype. (#2383)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-09 12:18:53 -05:00
Tingluo Huang
a462ecbe79 Trim slash for configure URL. (#2381) 2023-03-09 09:02:05 -05:00
Nikola Jokic
c5d6842d5f Update gomega with new ginkgo version (#2373) 2023-03-07 12:05:25 +01:00
dependabot[bot]
947bc8ab5b chore(deps): bump github.com/onsi/ginkgo/v2 from 2.7.0 to 2.9.0 (#2369)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:27:54 +01:00
dependabot[bot]
9d5c6e85c5 chore(deps): bump k8s.io/client-go from 0.26.1 to 0.26.2 (#2370)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:27:23 +01:00
dependabot[bot]
2420a40c02 chore(deps): bump golang.org/x/net from 0.7.0 to 0.8.0 (#2368)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:24:20 +01:00
dependabot[bot]
b3e7c723d2 chore(deps): bump github.com/golang-jwt/jwt/v4 from 4.4.1 to 4.5.0 (#2367)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 11:23:37 +01:00
dependabot[bot]
2e36db52c3 chore(deps): bump github.com/gruntwork-io/terratest from 0.41.9 to 0.41.11 (#2335)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 07:39:31 +09:00
dependabot[bot]
5d41609bea chore(deps): bump github.com/teambition/rrule-go from 1.8.0 to 1.8.2 (#2230)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 07:38:53 +09:00
Francesco Renzi
e289fe43d4 Apply proxy settings from environment in listener (#2366)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-06 19:21:22 +00:00
Piotr Palka
91fddca3f7 Fix webhook server logging (#2320)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-06 14:20:46 -05:00
Tingluo Huang
befe4cee0a ADR for limiting cluster role permissions on Secrets. (#2275) 2023-03-03 13:05:51 -05:00
Yusuke Kuoka
548acdf05c Correct and simplify a sentence in the scheduled overrides doc (#2323) 2023-03-03 09:18:07 -05:00
Chris Patterson
41f2ca3ed9 Adding parameter to configure the runner set name. (#2279)
Co-authored-by: TingluoHuang <TingluoHuang@github.com>
2023-03-03 08:36:14 -05:00
Bassem Dghaidi
00996ec799 Upgrading & pinning action versions (#2346) 2023-03-03 06:00:18 -05:00
Ava Stancu
893833fdd5 Added e2e workflow trigger on master push and on PRs (#2356) 2023-03-03 05:55:02 -05:00
github-actions[bot]
7f3eef8761 Update runner to version 2.302.1 (#2294)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-03-03 05:43:03 -05:00
Francesco Renzi
40c905f25d Simplify the setup of controller tests (#2352) 2023-03-02 18:55:49 +00:00
Nikola Jokic
2984de912c Split listener pod label to avoid long names issue (#2341) 2023-03-02 17:25:50 +01:00
dependabot[bot]
1df06a69d7 bump golang.org/x/net from 0.5.0 to 0.7.0 (#2299)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 10:41:18 +01:00
Nikola Jokic
be47190d4c Chart naming validation on AutoscalingRunnerSet install (#2347)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
Co-authored-by: Bassem Dghaidi <Link-@github.com>
2023-03-02 10:35:55 +01:00
Tingluo Huang
e8d8c6f357 Make CT test to install charts in the right order. (#2350) 2023-03-02 03:16:40 -05:00
Ava Stancu
0c091f59b6 Matrix jobs workflow path update (#2349) 2023-03-02 00:10:34 +02:00
Bassem Dghaidi
a4751b74e0 Update trigger events for validate-chart (#2342) 2023-03-01 10:55:08 -05:00
Bassem Dghaidi
adad3d5530 Rename actions-runner-controller-2 and auto-scaling-runner-set helm charts (#2333)
Co-authored-by: Ava S <avastancu@github.com>
2023-03-01 07:16:03 -05:00
Ava Stancu
70156e3fea Added space before backslash on the multi line command (#2340) 2023-03-01 11:43:17 +02:00
Alex Williams
69abd51f30 Ensure that EffectiveTime is updated on webhook scale down (#2258)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 08:27:37 +09:00
dhawalseth
73e35b1dc6 chart: Create actionsmetrics.secrets.yaml (#2208)
Co-authored-by: Dhawal Seth <dseth@linkedin.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 08:19:58 +09:00
dependabot[bot]
c4178d5633 chore(deps): bump github.com/stretchr/testify from 1.8.0 to 1.8.2 (#2336)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-01 07:21:24 +09:00
dependabot[bot]
edf924106b chore(deps): bump sigs.k8s.io/controller-runtime from 0.14.1 to 0.14.4 (#2261)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-01 07:19:47 +09:00
Milas Bowman
34ebbf74d1 Upgrade Docker Compose to v2.16.0 (#2327) 2023-03-01 07:18:13 +09:00
Ava Stancu
a9af82ec78 Change e2e config url (#2338) 2023-02-28 14:26:01 -05:00
Ava Stancu
b5e9e14244 Added org for getting the workflow token job as it errored without (#2334) 2023-02-27 23:30:40 +02:00
Ava Stancu
910269aa11 Avastancu/arc e2e test linux vm (#2285) 2023-02-27 16:36:15 +02:00
Yusuke Kuoka
149cf47c83 Fix actions-metrics-server segfault issue (#2325) 2023-02-27 07:34:29 +09:00
Kirill Bilchenko
ec3afef00d Add repository name and full name for prometheus labels in actions metrics (#2218)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-02-25 16:02:22 +09:00
Dimitar
7d0918b6d5 Allow custom graceful termination and loadBalancerSourceRanges for the githubwebhook service (#2305)
Co-authored-by: Dimitar Hristov <dimitar.hristov@skyscanner.net>
2023-02-25 14:18:29 +09:00
João Carlos Ferra de Almeida
678eafcd67 [Docs] Fix typo (#2314) 2023-02-24 07:19:51 -05:00
144 changed files with 11351 additions and 5983 deletions


@@ -0,0 +1,64 @@
name: 'Setup ARC E2E Test Action'
description: 'Build the controller image, create a minikube cluster, load the image, and exchange the ARC configure token.'
inputs:
github-app-id:
description: 'GitHub App ID used to exchange for an access token'
required: true
github-app-pk:
description: "GitHub App private key used to exchange for an access token"
required: true
github-app-org:
description: 'The organization the GitHub App is installed on'
required: true
docker-image-name:
description: "Local docker image name for building"
required: true
docker-image-tag:
description: "Tag of ARC Docker image for building"
required: true
outputs:
token:
description: 'Token used to configure ARC'
value: ${{steps.config-token.outputs.token}}
runs:
using: "composite"
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
# Pinning v0.9.1 for Buildx and BuildKit v0.10.6:
# BuildKit v0.11 has a bug causing intermittent
# failures when pushing images to GHCR
version: v0.9.1
driver-opts: image=moby/buildkit:v0.10.6
- name: Build controller image
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=${{inputs.docker-image-name}}
VERSION=${{inputs.docker-image-tag}}
tags: |
${{inputs.docker-image-name}}:${{inputs.docker-image-tag}}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Create minikube cluster and load image
shell: bash
run: |
minikube start
minikube image load ${{inputs.docker-image-name}}:${{inputs.docker-image-tag}}
- name: Get configure token
id: config-token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ inputs.github-app-id }}
application_private_key: ${{ inputs.github-app-pk }}
organization: ${{ inputs.github-app-org }}
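For reference, a minimal sketch of a job consuming this composite action; the action path, secret names, org, image name, and tag are taken from the e2e workflow below, and the final step is purely illustrative:

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Builds the controller image, starts minikube, loads the image,
      # and exchanges the GitHub App credentials for an installation token
      - uses: ./.github/actions/setup-arc-e2e
        id: setup
        with:
          github-app-id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
          github-app-pk: ${{ secrets.ACTIONS_ACCESS_PK }}
          github-app-org: actions-runner-controller
          docker-image-name: arc-test-image
          docker-image-tag: dev
      # Illustrative consumer of the action's token output
      - name: Check the exchanged token
        env:
          ARC_TOKEN: ${{ steps.setup.outputs.token }}
        run: test -n "$ARC_TOKEN" && echo "Received an installation token"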


@@ -1,43 +0,0 @@
{
"extends": ["config:base"],
"labels": ["dependencies"],
"packageRules": [
{
// automatically merge updates of actions/runner
"matchPackageNames": ["actions/runner"],
"extractVersion": "^v(?<version>.*)$",
"automerge": true
}
],
"regexManagers": [
{
// use https://github.com/actions/runner/releases
"fileMatch": [
".github/workflows/runners.yaml"
],
"matchStrings": ["RUNNER_VERSION: +(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
},
{
"fileMatch": [
"runner/Makefile",
"Makefile"
],
"matchStrings": ["RUNNER_VERSION \\?= +(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
},
{
"fileMatch": [
"runner/actions-runner.ubuntu-20.04.dockerfile",
"runner/actions-runner.ubuntu-22.04.dockerfile",
"runner/actions-runner-dind.ubuntu-20.04.dockerfile",
"runner/actions-runner-dind-rootless.ubuntu-20.04.dockerfile"
],
"matchStrings": ["RUNNER_VERSION=+(?<currentValue>.*?)\\n"],
"depNameTemplate": "actions/runner",
"datasourceTemplate": "github-releases"
}
]
}
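For context, the first removed regexManager matched a pin like the snippet below in .github/workflows/runners.yaml (value illustrative); the other two matched the "RUNNER_VERSION ?= ..." assignments in the Makefiles and the "RUNNER_VERSION=..." lines in the runner dockerfiles, as the Makefile diff further down shows:

# .github/workflows/runners.yaml (illustrative value)
env:
  RUNNER_VERSION: 2.303.0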


@@ -1,4 +1,4 @@
name: ARC-REUSABLE-WORKFLOW
name: ARC Reusable Workflow
on:
workflow_dispatch:
inputs:

.github/workflows/e2e-test-linux-vm.yaml

@@ -0,0 +1,734 @@
name: CI ARC E2E Linux VM Test
on:
push:
branches:
- master
pull_request:
branches:
- master
workflow_dispatch:
inputs:
target_org:
description: The org of the test repository.
required: true
default: actions-runner-controller
target_repo:
description: The repository to install the ARC.
required: true
default: arc_e2e_test_dummy
env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: arc_e2e_test_dummy
IMAGE_NAME: "arc-test-image"
IMAGE_VERSION: "dev"
jobs:
default-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1)) # increment attempts so the 10-iteration timeout above can trigger
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
single-namespace-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
kubectl create namespace arc-runners
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
--set flags.watchSingleNamespace=arc-runners \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
dind-mode-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: arc-test-dind-workflow.yaml
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="dind" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
kubernetes-mode-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-kubernetes-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
echo "Install openebs/dynamic-localpv-provisioner"
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="kubernetes" \
--set containerMode.kubernetesModeWorkVolumeClaim.accessModes={"ReadWriteOnce"} \
--set containerMode.kubernetesModeWorkVolumeClaim.storageClassName="openebs-hostpath" \
--set containerMode.kubernetesModeWorkVolumeClaim.resources.requests.storage="1Gi" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
auth-proxy-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--name squid \
--publish 3128:3128 \
huangtingluo/squid-proxy:latest
kubectl create namespace arc-runners
kubectl create secret generic proxy-auth \
--namespace=arc-runners \
--from-literal=username=github \
--from-literal=password='actions'
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set proxy.https.credentialSecretRef="proxy-auth" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
anonymous-proxy-setup:
runs-on: ubuntu-latest
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--name squid \
--publish 3128:3128 \
ubuntu/squid:latest
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
exit 1
fi
count=$((count+1))
sleep 1
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{ env.WORKFLOW_FILE }}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n arc-runners -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF


@@ -20,4 +20,4 @@ jobs:
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.49.0
version: v1.51.1


@@ -29,6 +29,10 @@ jobs:
release-controller:
name: Release
runs-on: ubuntu-latest
# gha-runner-scale-set has its own release workflow.
# We don't want to publish a new actions-runner-controller image
# when we release gha-runner-scale-set.
if: ${{ !startsWith(github.event.inputs.release_tag_name, 'gha-runner-scale-set-') }}
steps:
- name: Checkout
uses: actions/checkout@v3


@@ -8,35 +8,47 @@ on:
- master
paths-ignore:
- '**.md'
- '.github/actions/**'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/e2e-test-dispatch-workflow.yaml'
- '.github/workflows/e2e-test-linux-vm.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/publish-runner-scale-set.yaml'
- '.github/workflows/release-runners.yaml'
- '.github/workflows/run-codeql.yaml'
- '.github/workflows/run-first-interaction.yaml'
- '.github/workflows/run-stale.yaml'
- '.github/workflows/update-runners.yaml'
- '.github/workflows/validate-arc.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/validate-gha-chart.yaml'
- '.github/workflows/validate-runners.yaml'
- '.github/dependabot.yml'
- '.github/RELEASE_NOTE_TEMPLATE.md'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: read
packages: write
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
jobs:
canary-build:
name: Build and Publish Canary Image
legacy-canary-build:
name: Build and Publish Legacy Canary Image
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -68,3 +80,50 @@ jobs:
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml)" >> $GITHUB_STEP_SUMMARY
canary-build:
name: Build and Publish gha-runner-scale-set-controller Canary Image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Normalization is needed because upper case characters are not allowed in the repository name
# and the short sha is needed for image tagging
- name: Resolve parameters
id: resolve_parameters
run: |
echo "INFO: Resolving short sha"
echo "short_sha=$(git rev-parse --short ${{ github.ref }})" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: latest
# Unstable builds - run at your own risk
- name: Build and Push
uses: docker/build-push-action@v3
with:
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=canary-"${{ github.ref }}"
push: ${{ env.PUSH_TO_REGISTRIES }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -10,8 +10,8 @@ on:
- 'charts/**'
- '.github/workflows/publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!charts/actions-runner-controller-2/**'
- '!charts/auto-scaling-runner-set/**'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
- '!**.md'
workflow_dispatch:


@@ -1,4 +1,4 @@
name: Publish ARC 2
name: Publish Runner Scale Set Controller Charts
on:
workflow_dispatch:
@@ -18,13 +18,13 @@ on:
required: true
type: boolean
default: false
publish_actions_runner_controller_2_chart:
description: 'Publish new helm chart for actions-runner-controller-2'
publish_gha_runner_scale_set_controller_chart:
description: 'Publish new helm chart for gha-runner-scale-set-controller'
required: true
type: boolean
default: false
publish_auto_scaling_runner_set_chart:
description: 'Publish new helm chart for auto-scaling-runner-set'
publish_gha_runner_scale_set_chart:
description: 'Publish new helm chart for gha-runner-scale-set'
required: true
type: boolean
default: false
@@ -87,14 +87,14 @@ jobs:
build-args: VERSION=${{ inputs.release_tag_name }}
push: ${{ inputs.push_to_registries }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-2:${{ inputs.release_tag_name }}
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-2:${{ inputs.release_tag_name }}-${{ steps.resolve_parameters.outputs.short_sha }}
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Job summary
run: |
echo "The [publish-arc2](https://github.com/actions/actions-runner-controller/blob/main/.github/workflows/publish-arc2.yaml) workflow run was completed successfully!" >> $GITHUB_STEP_SUMMARY
echo "The [publish-runner-scale-set.yaml](https://github.com/actions/actions-runner-controller/blob/main/.github/workflows/publish-runner-scale-set.yaml) workflow run was completed successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
@@ -103,10 +103,10 @@ jobs:
echo "- Push to registries: ${{ inputs.push_to_registries }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-arc-2:
if: ${{ inputs.publish_actions_runner_controller_2_chart == true }}
publish-helm-chart-gha-runner-scale-set-controller:
if: ${{ inputs.publish_gha_runner_scale_set_controller_chart == true }}
needs: build-push-image
name: Publish Helm chart for actions-runner-controller-2
name: Publish Helm chart for gha-runner-scale-set-controller
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -129,31 +129,32 @@ jobs:
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
uses: azure/setup-helm@v3.3
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for actions-runner-controller-2
- name: Publish new helm chart for gha-runner-scale-set-controller
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
ACTIONS_RUNNER_CONTROLLER_2_CHART_VERSION_TAG=$(cat charts/actions-runner-controller-2/Chart.yaml | grep version: | cut -d " " -f 2)
echo "ACTIONS_RUNNER_CONTROLLER_2_CHART_VERSION_TAG=${ACTIONS_RUNNER_CONTROLLER_2_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/actions-runner-controller-2/ --version="${ACTIONS_RUNNER_CONTROLLER_2_CHART_VERSION_TAG}"
helm push actions-runner-controller-2-"${ACTIONS_RUNNER_CONTROLLER_2_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set-controller/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set-controller/ --version="${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-controller-"${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for actions-runner-controller-2 published successfully!" >> $GITHUB_STEP_SUMMARY
echo "New helm chart for gha-runner-scale-set-controller published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- Actions-Runner-Controller-2 Chart version: ${{ env.ACTIONS_RUNNER_CONTROLLER_2_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set-controller Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-auto-scaling-runner-set:
if: ${{ inputs.publish_auto_scaling_runner_set_chart == true }}
publish-helm-chart-gha-runner-scale-set:
if: ${{ inputs.publish_gha_runner_scale_set_chart == true }}
needs: build-push-image
name: Publish Helm chart for auto-scaling-runner-set
name: Publish Helm chart for gha-runner-scale-set
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -176,24 +177,25 @@ jobs:
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
uses: azure/setup-helm@v3.3
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for auto-scaling-runner-set
- name: Publish new helm chart for gha-runner-scale-set
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
AUTO_SCALING_RUNNER_SET_CHART_VERSION_TAG=$(cat charts/auto-scaling-runner-set/Chart.yaml | grep version: | cut -d " " -f 2)
echo "AUTO_SCALING_RUNNER_SET_CHART_VERSION_TAG=${AUTO_SCALING_RUNNER_SET_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/auto-scaling-runner-set/ --version="${AUTO_SCALING_RUNNER_SET_CHART_VERSION_TAG}"
helm push auto-scaling-runner-set-"${AUTO_SCALING_RUNNER_SET_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set/ --version="${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-"${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for auto-scaling-runner-set published successfully!" >> $GITHUB_STEP_SUMMARY
echo "New helm chart for gha-runner-scale-set published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- Auto-Scaling-Runner-Set Chart version: ${{ env.AUTO_SCALING_RUNNER_SET_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY


@@ -17,7 +17,7 @@ env:
PUSH_TO_REGISTRIES: true
TARGET_ORG: actions-runner-controller
TARGET_WORKFLOW: release-runners.yaml
DOCKER_VERSION: 20.10.21
DOCKER_VERSION: 20.10.23
RUNNER_CONTAINER_HOOKS_VERSION: 0.2.0
jobs:


@@ -77,6 +77,7 @@ jobs:
permissions:
pull-requests: write
contents: write
actions: write
env:
GH_TOKEN: ${{ github.token }}
CURRENT_VERSION: ${{ needs.check_versions.outputs.current_version }}
@@ -93,6 +94,7 @@ jobs:
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" .github/workflows/e2e-test-linux-vm.yaml
- name: Commit changes
run: |


@@ -1,12 +1,24 @@
name: Validate Helm Chart
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
@@ -26,7 +38,8 @@ jobs:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.4
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}


@@ -0,0 +1,134 @@
name: Validate Helm Chart (gha-runner-scale-set-controller and gha-runner-scale-set)
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.16.1
HELM_VERSION: v3.8.0
permissions:
contents: read
jobs:
validate-chart:
name: Lint Chart
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Set up latest version chart-testing
run: |
echo 'deb [trusted=yes] https://repo.goreleaser.com/apt/ /' | sudo tee /etc/apt/sources.list.d/goreleaser.list
sudo apt update
sudo apt install goreleaser
git clone https://github.com/helm/chart-testing
cd chart-testing
unset CT_CONFIG_DIR
goreleaser build --clean --skip-validate
./dist/chart-testing_linux_amd64_v1/ct version
echo 'Adding ct directory to PATH...'
echo "$RUNNER_TEMP/chart-testing/dist/chart-testing_linux_amd64_v1" >> "$GITHUB_PATH"
echo 'Setting CT_CONFIG_DIR...'
echo "CT_CONFIG_DIR=$RUNNER_TEMP/chart-testing/etc" >> "$GITHUB_ENV"
working-directory: ${{ runner.temp }}
- name: Run chart-testing (list-changed)
id: list-changed
run: |
ct version
changed=$(ct list-changed --config charts/.ci/ct-config-gha.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config-gha.yaml
- name: Set up docker buildx
uses: docker/setup-buildx-action@v2
if: steps.list-changed.outputs.changed == 'true'
with:
version: latest
- name: Build controller image
uses: docker/build-push-action@v3
if: steps.list-changed.outputs.changed == 'true'
with:
file: Dockerfile
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=test-arc
VERSION=dev
tags: |
test-arc:dev
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Create kind cluster
uses: helm/kind-action@v1.4.0
if: steps.list-changed.outputs.changed == 'true'
with:
cluster_name: chart-testing
- name: Load image into cluster
if: steps.list-changed.outputs.changed == 'true'
run: |
export DOCKER_IMAGE_NAME=test-arc
export VERSION=dev
export IMG_RESULT=load
make docker-buildx
kind load docker-image test-arc:dev --name chart-testing
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: |
ct install --config charts/.ci/ct-config-gha.yaml


@@ -39,7 +39,8 @@ RUN --mount=target=. \
go build -trimpath -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=${VERSION}'" -o /out/manager main.go && \
go build -trimpath -ldflags="-s -w" -o /out/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener && \
go build -trimpath -ldflags="-s -w" -o /out/github-webhook-server ./cmd/githubwebhookserver && \
go build -trimpath -ldflags="-s -w" -o /out/actions-metrics-server ./cmd/actionsmetricsserver
go build -trimpath -ldflags="-s -w" -o /out/actions-metrics-server ./cmd/actionsmetricsserver && \
go build -trimpath -ldflags="-s -w" -o /out/sleep ./cmd/sleep
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
@@ -51,6 +52,7 @@ COPY --from=builder /out/manager .
COPY --from=builder /out/github-webhook-server .
COPY --from=builder /out/actions-metrics-server .
COPY --from=builder /out/github-runnerscaleset-listener .
COPY --from=builder /out/sleep .
USER 65532:65532
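The hunk above adds a fifth binary, `sleep`, to the build stage and copies it into the distroless image alongside the others. The same `-trimpath` and `-s -w` flags strip build paths and debug symbols to keep the binaries small. As a local sanity check, the new target can be built with the same flags (a sketch; `out/` is an arbitrary output directory):

```bash
# Build the new sleep helper exactly as the Dockerfile's build stage does:
go build -trimpath -ldflags="-s -w" -o out/sleep ./cmd/sleep
```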

Makefile
View File

@@ -5,7 +5,7 @@ else
endif
DOCKER_USER ?= $(shell echo ${DOCKER_IMAGE_NAME} | cut -d / -f1)
VERSION ?= dev
RUNNER_VERSION ?= 2.301.1
RUNNER_VERSION ?= 2.303.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
@@ -73,7 +73,7 @@ GO_TEST_ARGS ?= -short
# Run tests
test: generate fmt vet manifests shellcheck
go test $(GO_TEST_ARGS) ./... -coverprofile cover.out
go test $(GO_TEST_ARGS) `go list ./... | grep -v ./test_e2e_arc` -coverprofile cover.out
go test -fuzz=Fuzz -fuzztime=10s -run=Fuzz* ./controllers/actions.summerwind.net
test-with-deps: kube-apiserver etcd kubectl
@@ -92,9 +92,14 @@ manager: generate fmt vet
run: generate fmt vet manifests
go run ./main.go
run-scaleset: generate fmt vet
CONTROLLER_MANAGER_POD_NAMESPACE=default \
CONTROLLER_MANAGER_CONTAINER_IMAGE="${DOCKER_IMAGE_NAME}:${VERSION}" \
go run ./main.go --auto-scaling-runner-set-only
# Install CRDs into a cluster
install: manifests
kustomize build config/crd | kubectl apply -f -
kustomize build config/crd | kubectl apply --server-side -f -
# Uninstall CRDs from a cluster
uninstall: manifests
@@ -103,7 +108,7 @@ uninstall: manifests
# Deploy controller in the configured Kubernetes cluster in ~/.kube/config
deploy: manifests
cd config/manager && kustomize edit set image controller=${DOCKER_IMAGE_NAME}:${VERSION}
kustomize build config/default | kubectl apply -f -
kustomize build config/default | kubectl apply --server-side -f -
# Generate manifests e.g. CRD, RBAC etc.
manifests: manifests-gen-crds chart-crds
@@ -113,66 +118,71 @@ manifests-gen-crds: controller-gen yq
for YAMLFILE in config/crd/bases/actions*.yaml; do \
$(YQ) '.spec.preserveUnknownFields = false' --inplace "$$YAMLFILE" ; \
done
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-type
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-map-keys
manifests-gen-crds-fix: DELETE_KEY ?=
manifests-gen-crds-fix:
#runners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.containers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.dockerdContainerResources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
#runnerreplicasets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
#runnerdeployments
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
#runnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumeClaimTemplates.items.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumeClaimTemplates.items.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
#autoscalingrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
#ephemeralrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.containers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
# ephemeralrunners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.containers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.x-kubernetes-list-type)' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
chart-crds:
cp config/crd/bases/*.yaml charts/actions-runner-controller/crds/
cp config/crd/bases/actions.github.com_autoscalingrunnersets.yaml charts/actions-runner-controller-2/crds/
cp config/crd/bases/actions.github.com_autoscalinglisteners.yaml charts/actions-runner-controller-2/crds/
cp config/crd/bases/actions.github.com_ephemeralrunnersets.yaml charts/actions-runner-controller-2/crds/
cp config/crd/bases/actions.github.com_ephemeralrunners.yaml charts/actions-runner-controller-2/crds/
cp config/crd/bases/actions.github.com_autoscalingrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_autoscalinglisteners.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunners.yaml charts/gha-runner-scale-set-controller/crds/
rm charts/actions-runner-controller/crds/actions.github.com_autoscalingrunnersets.yaml
rm charts/actions-runner-controller/crds/actions.github.com_autoscalinglisteners.yaml
rm charts/actions-runner-controller/crds/actions.github.com_ephemeralrunnersets.yaml
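Two of the Makefile changes above deserve a note. First, `install` and `deploy` switch to server-side apply, which sidesteps the size limit on the `kubectl.kubernetes.io/last-applied-configuration` annotation that large generated CRDs can exceed with client-side apply. Second, the new `run-scaleset` target runs the manager locally in scale-set-only mode. A usage sketch (image name and version are illustrative defaults):

```bash
# Apply the CRDs server-side, as the updated install target does:
kustomize build config/crd | kubectl apply --server-side -f -

# Run the manager locally against the scale-set CRDs only:
DOCKER_IMAGE_NAME=test-arc VERSION=dev make run-scaleset
```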

View File

@@ -35,7 +35,7 @@ else
echo 'Skipped deploying secret "github-webhook-server". Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
fi
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ]; then
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ] && [ -z "${CREATE_SECRETS_USING_HELM}" ]; then
kubectl -n actions-runner-system delete secret \
actions-metrics-server || :
kubectl -n actions-runner-system create secret generic \
@@ -69,6 +69,14 @@ if [ "${tool}" == "helm" ]; then
flags+=( --set githubWebhookServer.logFormat=${LOG_FORMAT})
flags+=( --set actionsMetricsServer.logFormat=${LOG_FORMAT})
fi
if [ -n "${CREATE_SECRETS_USING_HELM}" ]; then
if [ -z "${WEBHOOK_GITHUB_TOKEN}" ]; then
echo 'Failed deploying secret "actions-metrics-server" using helm. Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
exit 1
fi
flags+=( --set actionsMetricsServer.secret.create=true)
flags+=( --set actionsMetricsServer.secret.github_token=${WEBHOOK_GITHUB_TOKEN})
fi
set -vx
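With this change, secret creation can be delegated to helm instead of `kubectl create secret`. A hypothetical invocation of the deploy script (only the two environment variables come from the diff above; the script path is assumed for illustration):

```bash
# Let helm render the actions-metrics-server secret instead of kubectl:
export WEBHOOK_GITHUB_TOKEN='<github-token>'  # required, otherwise the script exits 1
export CREATE_SECRETS_USING_HELM=1
./acceptance/deploy.sh                        # path assumed for illustration
```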

View File

@@ -1,35 +1,52 @@
# ADR 0001: Produce the runner image for the scaleset client
# ADR 2022-10-17: Produce the runner image for the scaleset client
**Date**: 2022-10-17
**Status**: Done
# Breaking Changes
We aim to provide a similar experience (as close as possible) between self-hosted and GitHub-hosted runners. To achieve this, we are making the following changes to align our self-hosted runner container image with the Ubuntu runners managed by GitHub.
Here are the changes:
- We created a USER `runner(1001)` and a GROUP `docker(123)`
- `sudo` has been installed on the image and the `runner` user will be a passwordless sudoer.
- The runner binary was placed under `/home/runner/` and launched using `/home/runner/run.sh`
- The runner's work directory is `/home/runner/_work`
- `$HOME` will point to `/home/runner`
- The container image user will be the `runner(1001)`
The latest Dockerfile can be found at: https://github.com/actions/runner/blob/main/images/Dockerfile
# Context
user can bring their own runner images, the contract we have are:
- It must have a runner binary under /actions-runner (/actions-runner/run.sh exists)
- The WORKDIR is set to /actions-runner
- If the user inside the container is root, the ENV RUNNER_ALLOW_RUNASROOT should be set to 1
The existing ARC runner images will not work with the new ARC mode out-of-box for the following reason:
- The current runner image requires caller to pass runner configure info, ex: URL and Config Token
- The current runner image has the runner binary under /runner
However, since we expose the raw runner Pod spec to our user, advanced user can modify the helm values.yaml to make everything lines up properly.
users can bring their own runner images, the contract we require is:
- It must have a runner binary under `/actions-runner`, i.e. `/actions-runner/run.sh` exists
- The `WORKDIR` is set to `/actions-runner`
- If the user inside the container is root, the environment variable `RUNNER_ALLOW_RUNASROOT` should be set to `1`
The existing [ARC runner images](https://github.com/orgs/actions-runner-controller/packages?tab=packages&q=actions-runner) will not work with the new ARC mode out of the box for the following reasons:
- The current runner image requires the caller to pass runner configuration info, ex: URL and Config Token
- The current runner image has the runner binary under `/runner`, which violates the contract described above
- The current runner image requires a special entrypoint script in order to work around some volume mount limitations for setting up DinD.
Since we expose the raw runner PodSpec to our end users, they can modify the helm `values.yaml` to adjust the runner container to their needs.
# Guiding Principles
- The image build is separated into two stages.
## The first stage (build)
- Reuses the same base image, so it is faster to build.
- Installs utilities needed to download assets (`runner` and `runner-container-hooks`).
- Downloads the runner and stores it into `/actions-runner` directory.
- Downloads the runner-container-hooks and stores it into `/actions-runner/k8s` directory.
- You can use build arguments to control the runner version, the target platform and runner container hooks version.
Preview (the published runner image might vary):
```Dockerfile
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0 as build
@@ -51,6 +68,7 @@ RUN curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-c
```
## The main image:
- Copies assets from the build stage to `/actions-runner`
- Does not provide an entrypoint. The entrypoint should be set within the container definition.
@@ -64,6 +82,7 @@ COPY --from=build /actions-runner .
```
## Example of pod spec with the init container copying assets
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: <name>
spec:
  containers:
  - name: runner
    image: <image>
    command: ["/runner/run.sh"]
    volumeMounts:
    - name: runner
      mountPath: /runner
  initContainers:
  - name: setup
    image: <image>
    command: ["sh", "-c", "cp -r /actions-runner/* /runner/"]
    volumeMounts:
    - name: runner
      mountPath: /runner
  volumes:
  - name: runner
    emptyDir: {}
```
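To see the contract in action, one can apply a pod following the spec above and confirm that the init container copied the runner into the shared volume before the runner container started. A sketch, assuming the pod was saved as `runner-pod.yaml` and named `runner-demo`:

```bash
# Apply the pod spec above (with a concrete name and image filled in),
# then verify the init container populated the shared emptyDir volume:
kubectl apply -f runner-pod.yaml
kubectl wait --for=condition=Ready pod/runner-demo --timeout=120s
kubectl exec runner-demo -c runner -- ls /runner/run.sh
```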

View File

@@ -1,4 +1,4 @@
# ADR 0003: Lifetime of RunnerScaleSet on Service
# ADR 2022-10-27: Lifetime of RunnerScaleSet on Service
**Date**: 2022-10-27
@@ -12,8 +12,9 @@ The `RunnerScaleSet` object will represent a set of homogeneous self-hosted runn
A `RunnerScaleSet` client (ARC) needs to communicate with the Actions service via HTTP long-poll in a certain protocol to get a workflow job successfully landed on one of its homogeneous self-hosted runners.
In this ADR, we discuss the following within the context of actions-runner-controller's new scaling mode:
- Who creates a RunnerScaleSet on the service, and how?
- Who deletes a RunnerScaleSet on the service, and how?
- What will happen to all the runners and jobs when the deletion happens?
@@ -30,18 +31,19 @@ In this ADR, I want to discuss the following within the context of actions-runne
- When the user patches an existing `AutoScalingRunnerSet`'s RunnerScaleSet-related property, ex: `runnerGroupName`, `runnerWorkDir`, the controller needs to make an HTTP PATCH call to the `_apis/runtime/runnerscalesets/2` endpoint in order to update the object on the service.
- We will put the deployed `AutoScalingRunnerSet` resource in an error state when the user tries to patch the resource with a different `githubConfigUrl`
> Basically, you can't move a deployed `AutoScalingRunnerSet` across GitHub entities, repoA->repoB, repoA->OrgC, etc.
> We evaluated blocking the change upfront instead of erroring at runtime, but decided not to go down this route because it forces us to re-introduce admission webhooks (which require cert-manager).
## RunnerScaleSet deletion
- `AutoScalingRunnerSet` custom resource controller will delete the `RunnerScaleSet` object in the Actions service on any `AutoScalingRunnerSet` resource deletion.
> `AutoScalingRunnerSet` deletion will contain several steps:
>
> - Stop the listener app so that no new jobs come in and no more scaling up/down happens.
> - Request scale down to 0
> - Force stop all runners
> - Wait for the scale down to 0
> - Delete the `RunnerScaleSet` object from the service via REST API
- The deletion is via REST API on Actions service `DELETE _apis/runtime/runnerscalesets/1`
- The deletion needs to use the runner registration token (admin).
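Expressed as plain HTTP, the lifecycle calls described in this ADR look roughly like the following. This is illustrative only: the endpoint shapes come from the ADR, while the host variable and bearer-token auth are assumptions:

```bash
# Update RunnerScaleSet properties (e.g. runnerGroupName) on the service:
curl -sf -X PATCH \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"runnerGroupName":"new-group"}' \
  "${ACTIONS_URL}/_apis/runtime/runnerscalesets/2"

# Delete the RunnerScaleSet once all runners are drained:
curl -sf -X DELETE \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  "${ACTIONS_URL}/_apis/runtime/runnerscalesets/1"
```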

View File

@@ -1,4 +1,5 @@
# ADR 0004: Technical detail about actions-runner-controller repository transfer
# ADR 2022-11-04: Technical detail about actions-runner-controller repository transfer
**Date**: 2022-11-04
**Status**: Done
@@ -8,16 +9,17 @@
As part of ARC Private Beta: Repository Migration & Open Sourcing Process, we have decided to transfer the current [actions-runner-controller repository](https://github.com/actions-runner-controller/actions-runner-controller) into the [Actions org](https://github.com/actions).
**Goals:**
- A clear signal that GitHub will start taking over ARC and provide support.
- Since we are going to deprecate the existing auto-scale mode in ARC at some point, we want to have a clear separation between the legacy mode (not supported) and the new mode (supported).
- Avoid disrupting users as much as we can: existing ARC users will not notice any difference after the repository transfer; they can keep upgrading to the newer version of ARC and keep using the legacy mode.
**Challenges**
- The original creator's name (`summerwind`) is all over the place, including some critical parts of ARC:
- The k8s user resource API's full name is `actions.summerwind.dev/v1alpha1/RunnerDeployment`; renaming it to `actions.github.com` is a breaking change and will force the user to rebuild their entire k8s cluster.
- All docker images around ARC (controller + default runner) are published to [dockerhub/summerwind](https://hub.docker.com/u/summerwind)
- The helm chart for ARC is currently hosted on [GitHub pages](https://actions-runner-controller.github.io/actions-runner-controller) for https://github.com/actions-runner-controller/actions-runner-controller; moving the repository means we will break users who install ARC via the helm chart
# Decisions
@@ -27,8 +29,9 @@ As part of ARC Private Beta: Repository Migration & Open Sourcing Process, we ha
- For any new resource API we are going to add, those will be named properly under GitHub, ex: `actions.github.com/v1alpha1/AutoScalingRunnerSet`
Benefits:
- A clear separation from existing ARC:
- Easy for the support engineer to triage incoming tickets and figure out whether we need to support the user's use case
- We won't break existing users when they upgrade to a newer version of ARC after the repository transfer
Based on the spike done by `@nikola-jokic`, we have confidence that we can host multiple resources with different API names under the same repository, and the published ARC controller can handle both resources properly.

View File

@@ -1,8 +1,8 @@
# ADR 0007: Adding labels to our resources
# ADR 2022-12-05: Adding labels to our resources
**Date**: 2022-12-05
**Status**: Done
**Status**: Superseded [^1]
## Context
@@ -20,12 +20,15 @@ Assuming standard logging that would allow us to get all ARC logs by running
```bash
kubectl logs -l 'app.kubernetes.io/part-of=actions-runner-controller'
```
which would be very useful for development to begin with.
The proposal is to add these sets of labels to the pods ARC creates:
#### controller-manager
Labels to be set by the Helm chart:
```yaml
metadata:
labels:
@@ -35,7 +38,9 @@ metadata:
```
#### Listener
Labels to be set by controller at creation:
```yaml
metadata:
labels:
@@ -43,7 +48,7 @@ metadata:
app.kubernetes.io/component: runner-scale-set-listener
app.kubernetes.io/version: "x.x.x"
actions.github.com/scale-set-name: scale-set-name # this corresponds to metadata.name as set for AutoscalingRunnerSet
# the following labels are to be extracted from the config URL
actions.github.com/enterprise: enterprise
actions.github.com/organization: organization
@@ -51,7 +56,9 @@ metadata:
```
#### Runner
Labels to be set by controller at creation:
```yaml
metadata:
labels:
@@ -78,3 +85,5 @@ Or for example if they're having problems specifically with runners:
This way users don't have to understand ARC moving parts but we still have a
way to target them specifically if we need to.
[^1]: Superseded by [ADR 2023-04-14](2023-04-14-adding-labels-k8s-resources.md)

View File

@@ -1,4 +1,5 @@
# ADR 0008: Pick the right runner to scale down
# ADR 2022-12-27: Pick the right runner to scale down
**Date**: 2022-12-27
**Status**: Done
@@ -7,35 +8,37 @@
- A custom resource `EphemeralRunnerSet` manages a set of `EphemeralRunner` custom resources
- The `EphemeralRunnerSet` has `Replicas` in its `Spec`, and the responsibility of the `EphemeralRunnerSet_controller` is to reconcile a given `EphemeralRunnerSet` to have
the same amount of `EphemeralRunners` as the `Spec.Replicas` defined.
- This means the `EphemeralRunnerSet_controller` will scale up the `EphemeralRunnerSet` by creating more `EphemeralRunners` when `Spec.Replicas` is higher than the current amount of `EphemeralRunners`.
- This also means the `EphemeralRunnerSet_controller` will scale down the `EphemeralRunnerSet` by finding existing `EphemeralRunners` to delete when `Spec.Replicas` is less than the current amount of `EphemeralRunners`.
This ADR is about how we can find the right existing `EphemeralRunner` to delete when we need to scale down.
## Current approach
1. `EphemeralRunnerSet_controller` figures out how many `EphemeralRunners` it needs to delete, ex: scaling down from 10 to 2 means we need to delete 8 `EphemeralRunners`
2. `EphemeralRunnerSet_controller` finds all `EphemeralRunners` that are in the `Running` or `Pending` phase.
> `Pending` means the `EphemeralRunner` is probably still being created and a runner has not yet been configured with the Actions service.
> `Running` means the `EphemeralRunner` is created and a runner has probably been configured with the Actions service; the runner may sit there idle,
> or may be actively running a workflow job. We don't have a clear answer for it from the ARC side. (Actions service knows it for sure)
3. `EphemeralRunnerSet_controller` makes an HTTP DELETE request to the Actions service for each `EphemeralRunner` from the previous step and asks the Actions service to delete the runner via `RunnerId`.
(The `RunnerId` is generated after the runner registers with the Actions service, and stored on `EphemeralRunner.Status.RunnerId`)
> - The HTTP DELETE request looks like the following:
> `DELETE https://pipelines.actions.githubusercontent.com/WoxlUxJHrKEzIp4Nz3YmrmLlZBonrmj9xCJ1lrzcJ9ZsD1Tnw7/_apis/distributedtask/pools/0/agents/1024`
> The Actions service will return 2 types of responses:
>
> 1. 204 (No Content): The runner with Id 1024 has been successfully removed from the service, or the runner with Id 1024 doesn't exist.
> 2. 400 (Bad Request) with a JSON body that contains an error message like `JobStillRunningException`: The service can't remove this runner at this point since it has been assigned to a job request; the client won't be able to remove the runner until the runner finishes its currently assigned job request.
4. `EphemeralRunnerSet_controller` will ignore any deletion error from runners that are still running a job, and keep trying deletion until the number of `204` responses equals the number of `EphemeralRunners` it needs to delete.
## The problem with the current approach
@@ -68,6 +71,7 @@ this would be a big `NO` from a security point of view since we may not trust th
The nature of the k8s controller-runtime means we might reconcile the resource based on stale cache data.
I think our goal for the solution should be:
- Reduce wasteful HTTP requests on a scale-down as much as we can.
- We can accept that we might make 1 or 2 wasteful requests to the Actions service, but we can't accept making 5/10+ of them.
- See if we can meet feature parity with what the RunnerJobHook supports without compromising any security concerns.
@@ -77,9 +81,11 @@ a simple thought is how about we somehow attach some info to the `EphemeralRunne
How about we send this info from the service to the auto-scaling-listener via the existing HTTP long-poll
and let the listener patch the `EphemeralRunner.Status` to indicate it's running a job?
> The listener is normally in a separate namespace with elevated permission and it's something we can trust.
Changes:
- Introduce a new message type `JobStarted` (in addition to the existing `JobAvailable/JobAssigned/JobCompleted`) on the service side; the message is sent when a runner of the `RunnerScaleSet` gets assigned to a job, and
`RequestId`, `RunnerId`, and `RunnerName` will be included in the message.
- Add `RequestId (int)` to `EphemeralRunner.Status`, this will indicate which job the runner is running.
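The DELETE call and its two response codes discussed in this ADR can be exercised directly; a sketch (URL shape from the ADR, auth header and variables assumed) that surfaces the status code the controller branches on:

```bash
# 204: runner removed (or already gone); 400: JobStillRunningException,
# i.e. the runner is still assigned to a job and can't be deleted yet.
curl -s -o /dev/null -w '%{http_code}\n' -X DELETE \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  "${ACTIONS_URL}/_apis/distributedtask/pools/0/agents/${RUNNER_ID}"
```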

View File

@@ -1,4 +1,6 @@
# Automate updating runner version
# ADR 2023-02-02: Automate updating runner version
**Date**: 2023-02-02
**Status**: Proposed
@@ -16,6 +18,7 @@ version is updated (and this is currently done manually).
We can have another workflow running on a cadence (hourly seems sensible) and checking for new runner
releases, creating a PR updating `RUNNER_VERSION` in:
- `.github/workflows/release-runners.yaml`
- `Makefile`
- `runner/Makefile`
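The check itself is simple to sketch in shell (GitHub's public releases API and `jq` are assumed here; the `sed` expression matches the `RUNNER_VERSION ?=` line shown in the Makefile diff above):

```bash
# Compare the latest actions/runner release against the Makefile's pin:
latest=$(curl -sf https://api.github.com/repos/actions/runner/releases/latest \
  | jq -r '.tag_name' | sed 's/^v//')
current=$(sed -n 's/^RUNNER_VERSION ?= //p' Makefile)
if [ "$latest" != "$current" ]; then
  echo "Runner $latest available (currently $current); open a version-bump PR."
fi
```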

View File

@@ -0,0 +1,138 @@
# ADR 2023-02-10: Limit Permissions for Service Accounts in Actions-Runner-Controller
**Date**: 2023-02-10
**Status**: Pending
## Context
- `actions-runner-controller` is a Kubernetes CRD (with controller) built using https://github.com/kubernetes-sigs/controller-runtime
- [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) has a default cache-based k8s API `client.Reader` to make querying the k8s API server more efficient.
- The cache-based API client requires cluster scope `list` and `watch` permissions for any resource the controller may query.
- This document is scoped to the AutoscalingRunnerSet CRD and its controller.
## Service accounts and their role binding in actions-runner-controller
There are 3 service accounts involved in a working `AutoscalingRunnerSet`-based `actions-runner-controller`:
1. Service account for each Ephemeral runner Pod
This should have the lowest privilege (no `RoleBinding` nor `ClusterRoleBinding`) by default; in the case of `containerMode=kubernetes`, it will get certain write permissions with a `RoleBinding` that limits them to a single namespace.
> References:
>
> - ./charts/gha-runner-scale-set/templates/no_permission_serviceaccount.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_role.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_role_binding.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_serviceaccount.yaml
2. Service account for AutoScalingListener Pod
This has a `RoleBinding` to a single namespace with a `Role` that has permission to `PATCH` `EphemeralRunnerSet` and `EphemeralRunner`.
3. Service account for the controller manager
Since the CRD controller is a singleton installed in the cluster that manages the CRD across multiple namespaces by default, the service account of the controller manager pod has a `ClusterRoleBinding` to a `ClusterRole` with broader permissions.
The current `ClusterRole` has the following permissions:
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingListeners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `Pods` (with `Status` sub-resource)
- **Get/List/Create/Delete/Update/Patch/Watch on `Secrets`**
- Get/List/Create/Delete/Update/Patch/Watch on `Roles`
- Get/List/Create/Delete/Update/Patch/Watch on `RoleBindings`
- Get/List/Create/Delete/Update/Patch/Watch on `ServiceAccounts`
> Full list can be found at: https://github.com/actions/actions-runner-controller/blob/facae69e0b189d3b5dd659f36df8a829516d2896/charts/actions-runner-controller-2/templates/manager_role.yaml
## Limit cluster role permission on Secrets
The cluster scope `List` `Secrets` permission might be a blocker to adopting `actions-runner-controller` for certain customers, as they may have restrictions in their cluster that simply don't allow any service account to have cluster scope `List Secrets` permission.
To help these customers and improve security for `actions-runner-controller` in general, we will try to limit the `ClusterRole` permission of the controller manager's service account down to the following:
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingListeners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunners` (with `Status` and `Finalizer` sub-resource)
- List/Watch on `Pods`
- List/Watch on `Roles`
- List/Watch on `RoleBindings`
- List/Watch on `ServiceAccounts`
> We will change the default cache-based client to bypass cache on reading `Secrets` and `ConfigMaps`(ConfigMap is used when you configure `githubServerTLS`), so we can eliminate the need for `List` and `Watch` `Secrets` permission in cluster scope.
Introduce a new `Role` for the controller and a `RoleBinding` binding the `Role` to the controller's `ServiceAccount` in the namespace the controller is deployed in. This role will grant the controller's service account the required permissions to work with `AutoScalingListeners` in the controller namespace.
- Get/Create/Delete on `Pods`
- Get on `Pods/status`
- Get/Create/Delete/Update/Patch on `Secrets`
- Get/Create/Delete/Update/Patch on `ServiceAccounts`
The `Role` and `RoleBinding` creation will happen during the `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller`.
During `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller`, we will store the controller's service account info as labels on the controller `Deployment`.
Ex:
```yaml
actions.github.com/controller-service-account-namespace: {{ .Release.Namespace }}
actions.github.com/controller-service-account-name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
```
Introduce a new `Role` per `AutoScalingRunnerSet` installation and a `RoleBinding` binding the `Role` to the controller's `ServiceAccount` in the namespace that each `AutoScalingRunnerSet` is deployed in, with the following permissions:
- Get/Create/Delete/Update/Patch/List on `Secrets`
- Create/Delete on `Pods`
- Get on `Pods/status`
- Get/Create/Delete/Update/Patch on `Roles`
- Get/Create/Delete/Update/Patch on `RoleBindings`
- Get on `ConfigMaps`
The `Role` and `RoleBinding` creation will happen during `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set` to grant the controller's service account the required permissions to operate in the namespace the `AutoScalingRunnerSet` is deployed in.
The `gha-runner-scale-set` helm chart will try to find the `Deployment` of the controller using `helm lookup`, and get the service account info from the labels of the controller `Deployment` (`actions.github.com/controller-service-account-namespace` and `actions.github.com/controller-service-account-name`).
The `gha-runner-scale-set` helm chart will use this service account to properly render the `RoleBinding` template.
The `gha-runner-scale-set` helm chart will also allow customers to explicitly provide the controller service account info, in case the `helm lookup` couldn't locate the right controller `Deployment`.
New sections in `values.yaml` of `gha-runner-scale-set`:
```yaml
## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
controllerServiceAccount:
  namespace: arc-system
  name: test-arc-gha-runner-scale-set-controller
```
## Install ARC to only watch/react resources in a single namespace
In case the user doesn't want to have any `ClusterRole`, they can choose to install the `actions-runner-controller` in a mode that only requires a `Role` with `RoleBinding` in a particular namespace.
In this mode, the `actions-runner-controller` will only be able to watch the `AutoScalingRunnerSet` resource in a single namespace.
If you want to deploy multiple `AutoScalingRunnerSets` into different namespaces, you will need to install `actions-runner-controller` in this mode multiple times as well, and have each installation watch the namespace into which you want to deploy an `AutoScalingRunnerSet`.
You will install `actions-runner-controller` with something like `helm install arc --namespace arc-system --set watchSingleNamespace=test-namespace oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller` (the `test-namespace` namespace needs to be created first).
You will deploy the `AutoScalingRunnerSet` with something like `helm install demo --namespace test-namespace oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set`
In this mode, you will end up with a manager `Role` that has all Get/List/Create/Delete/Update/Patch/Watch permissions on resources we need, and a `RoleBinding` to bind the `Role` with the controller `ServiceAccount` in the watched single namespace and the controller namespace, ex: `test-namespace` and `arc-system` in the above example.
The downsides of this mode:
- When you have multiple controllers deployed, they will still use the same version of the CRD, so you will need to make sure every controller you deploy is the same version.
- You can't mix an installation of `actions-runner-controller` in this mode (watchSingleNamespace) with the regular installation mode (watchAllClusterNamespaces) in your cluster.

View File

@@ -0,0 +1,89 @@
# ADR 2023-04-14: Adding labels to our resources
**Date**: 2023-04-14
**Status**: Done [^1]
## Context
Users need to provide us with logs so that we can help support and troubleshoot their issues. We need a way for our users to filter and retrieve the logs we need.
## Proposal
A good start would be a catch-all label to get all ARC-related logs: one of the
[recommended labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
is `app.kubernetes.io/part-of`, and we can set it for all ARC components
to `gha-runner-scale-set-controller`.
Assuming standard logging, that would allow us to get all ARC logs by running
```bash
kubectl logs -l 'app.kubernetes.io/part-of=gha-runner-scale-set-controller'
```
which would be very useful for development to begin with.
The proposal is to add these sets of labels to the pods ARC creates:
#### controller-manager
Labels to be set by the Helm chart:
```yaml
metadata:
  labels:
    app.kubernetes.io/part-of: gha-runner-scale-set-controller
    app.kubernetes.io/component: controller-manager
    app.kubernetes.io/version: "x.x.x"
```
#### Listener
Labels to be set by controller at creation:
```yaml
metadata:
  labels:
    app.kubernetes.io/part-of: gha-runner-scale-set-controller
    app.kubernetes.io/component: runner-scale-set-listener
    app.kubernetes.io/version: "x.x.x"
    actions.github.com/scale-set-name: scale-set-name # this corresponds to metadata.name as set for AutoscalingRunnerSet
    # the following labels are extracted from the config URL
    actions.github.com/enterprise: enterprise
    actions.github.com/organization: organization
    actions.github.com/repository: repository
```
#### Runner
Labels to be set by controller at creation:
```yaml
metadata:
  labels:
    app.kubernetes.io/part-of: gha-runner-scale-set-controller
    app.kubernetes.io/component: runner
    app.kubernetes.io/version: "x.x.x"
    actions.github.com/scale-set-name: scale-set-name # this corresponds to metadata.name as set for AutoscalingRunnerSet
    actions.github.com/runner-name: runner-name
    actions.github.com/runner-group-name: runner-group-name
    # the following labels are extracted from the config URL
    actions.github.com/enterprise: enterprise
    actions.github.com/organization: organization
    actions.github.com/repository: repository
```
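With these labels in place, support could also target the runner pods of one scale set; for example (the scale set name `my-scale-set` is hypothetical):
```bash
kubectl get pods -l 'actions.github.com/scale-set-name=my-scale-set,app.kubernetes.io/component=runner'
```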
This would allow us to ask users:
> Can you please send us the logs coming from pods labelled 'app.kubernetes.io/part-of=gha-runner-scale-set-controller'?
Or, for example, if they're having problems specifically with runners:
> Can you please send us the logs coming from pods labelled 'app.kubernetes.io/component=runner'?
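Assuming the labels above are applied as proposed, those asks translate directly into label selectors:
```bash
# All ARC logs
kubectl logs -l 'app.kubernetes.io/part-of=gha-runner-scale-set-controller'
# Logs from runner pods only
kubectl logs -l 'app.kubernetes.io/component=runner'
```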
This way users don't have to understand ARC's moving parts, but we still have a way to target them specifically if we need to.
[^1]: [ADR 2022-12-05](2022-12-05-adding-labels-k8s-resources.md)

View File

@@ -6,13 +6,13 @@
## Context
*What is the issue or background knowledge necessary for future readers
to understand why this ADR was written?*
_What is the issue or background knowledge necessary for future readers
to understand why this ADR was written?_
## Decision
**What** is the change being proposed? / **How** will it be implemented?*
_**What** is the change being proposed? **How** will it be implemented?_
## Consequences
*What becomes easier or more difficult to do because of this change?*
_What becomes easier or more difficult to do because of this change?_

View File

@@ -57,6 +57,9 @@ type AutoscalingListenerSpec struct {
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
}
// AutoscalingListenerStatus defines the observed state of AutoscalingListener

View File

@@ -17,6 +17,7 @@ limitations under the License.
package v1alpha1
import (
"crypto/x509"
"fmt"
"net/http"
"net/url"
@@ -32,10 +33,14 @@ import (
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".spec.minRunners",name=Minimum Runners,type=number
//+kubebuilder:printcolumn:JSONPath=".spec.maxRunners",name=Maximum Runners,type=number
//+kubebuilder:printcolumn:JSONPath=".status.currentRunners",name=Current Runners,type=number
//+kubebuilder:printcolumn:JSONPath=".spec.minRunners",name=Minimum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".spec.maxRunners",name=Maximum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.currentRunners",name=Current Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.state",name=State,type=string
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// AutoscalingRunnerSet is the Schema for the autoscalingrunnersets API
type AutoscalingRunnerSet struct {
@@ -57,6 +62,9 @@ type AutoscalingRunnerSetSpec struct {
// +optional
RunnerGroup string `json:"runnerGroup,omitempty"`
// +optional
RunnerScaleSetName string `json:"runnerScaleSetName,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
@@ -77,7 +85,44 @@ type AutoscalingRunnerSetSpec struct {
type GitHubServerTLSConfig struct {
// Required
RootCAsConfigMapRef string `json:"certConfigMapRef,omitempty"`
CertificateFrom *TLSCertificateSource `json:"certificateFrom,omitempty"`
}
func (c *GitHubServerTLSConfig) ToCertPool(keyFetcher func(name, key string) ([]byte, error)) (*x509.CertPool, error) {
if c.CertificateFrom == nil {
return nil, fmt.Errorf("certificateFrom not specified")
}
if c.CertificateFrom.ConfigMapKeyRef == nil {
return nil, fmt.Errorf("configMapKeyRef not specified")
}
cert, err := keyFetcher(c.CertificateFrom.ConfigMapKeyRef.Name, c.CertificateFrom.ConfigMapKeyRef.Key)
if err != nil {
return nil, fmt.Errorf(
"failed to fetch key %q in configmap %q: %w",
c.CertificateFrom.ConfigMapKeyRef.Key,
c.CertificateFrom.ConfigMapKeyRef.Name,
err,
)
}
systemPool, err := x509.SystemCertPool()
if err != nil {
return nil, fmt.Errorf("failed to get system cert pool: %w", err)
}
pool := systemPool.Clone()
if !pool.AppendCertsFromPEM(cert) {
return nil, fmt.Errorf("failed to parse certificate")
}
return pool, nil
}
type TLSCertificateSource struct {
// Required
ConfigMapKeyRef *corev1.ConfigMapKeySelector `json:"configMapKeyRef,omitempty"`
}
type ProxyConfig struct {
@@ -187,14 +232,22 @@ type ProxyServerConfig struct {
// AutoscalingRunnerSetStatus defines the observed state of AutoscalingRunnerSet
type AutoscalingRunnerSetStatus struct {
// +optional
CurrentRunners int `json:"currentRunners,omitempty"`
CurrentRunners int `json:"currentRunners"`
// +optional
State string `json:"state,omitempty"`
State string `json:"state"`
// EphemeralRunner counts separated by the stage ephemeral runners are in, taken from the EphemeralRunnerSet
//+optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
func (ars *AutoscalingRunnerSet) ListenerSpecHash() string {
type listenerSpec = AutoscalingRunnerSetSpec
arsSpec := ars.Spec.DeepCopy()
spec := arsSpec
return hash.ComputeTemplateHash(&spec)
@@ -205,6 +258,7 @@ func (ars *AutoscalingRunnerSet) RunnerSetSpecHash() string {
GitHubConfigUrl string
GitHubConfigSecret string
RunnerGroup string
RunnerScaleSetName string
Proxy *ProxyConfig
GitHubServerTLS *GitHubServerTLSConfig
Template corev1.PodTemplateSpec
@@ -213,6 +267,7 @@ func (ars *AutoscalingRunnerSet) RunnerSetSpecHash() string {
GitHubConfigUrl: ars.Spec.GitHubConfigUrl,
GitHubConfigSecret: ars.Spec.GitHubConfigSecret,
RunnerGroup: ars.Spec.RunnerGroup,
RunnerScaleSetName: ars.Spec.RunnerScaleSetName,
Proxy: ars.Spec.Proxy,
GitHubServerTLS: ars.Spec.GitHubServerTLS,
Template: ars.Spec.Template,

View File

@@ -31,13 +31,27 @@ type EphemeralRunnerSetSpec struct {
// EphemeralRunnerSetStatus defines the observed state of EphemeralRunnerSet
type EphemeralRunnerSetStatus struct {
// CurrentReplicas is the number of currently running EphemeralRunner resources being managed by this EphemeralRunnerSet.
CurrentReplicas int `json:"currentReplicas,omitempty"`
CurrentReplicas int `json:"currentReplicas"`
// EphemeralRunner counts separated by the stage ephemeral runners are in
// +optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name="DesiredReplicas",type="integer"
// +kubebuilder:printcolumn:JSONPath=".status.currentReplicas", name="CurrentReplicas",type="integer"
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// EphemeralRunnerSet is the Schema for the ephemeralrunnersets API
type EphemeralRunnerSet struct {
metav1.TypeMeta `json:",inline"`

View File

@@ -0,0 +1,105 @@
package v1alpha1_test
import (
"crypto/tls"
"crypto/x509"
"net/http"
"os"
"path/filepath"
"testing"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/github/actions/testserver"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
)
func TestGitHubServerTLSConfig_ToCertPool(t *testing.T) {
t.Run("returns an error if CertificateFrom not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: nil,
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "certificateFrom not specified")
})
t.Run("returns an error if CertificateFrom.ConfigMapKeyRef not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{},
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "configMapKeyRef not specified")
})
t.Run("returns a valid cert pool with correct configuration", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{
ConfigMapKeyRef: &v1.ConfigMapKeySelector{
LocalObjectReference: v1.LocalObjectReference{
Name: "name",
},
Key: "key",
},
},
}
certsFolder := filepath.Join(
"../../../",
"github",
"actions",
"testdata",
)
fetcher := func(name, key string) ([]byte, error) {
cert, err := os.ReadFile(filepath.Join(certsFolder, "rootCA.crt"))
require.NoError(t, err)
pool := x509.NewCertPool()
ok := pool.AppendCertsFromPEM(cert)
assert.True(t, ok)
return cert, nil
}
pool, err := c.ToCertPool(fetcher)
require.NoError(t, err)
require.NotNil(t, pool)
// can be used to communicate with a server
serverSuccessfullyCalled := false
server := testserver.NewUnstarted(t, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
serverSuccessfullyCalled = true
w.WriteHeader(http.StatusOK)
}))
cert, err := tls.LoadX509KeyPair(
filepath.Join(certsFolder, "server.crt"),
filepath.Join(certsFolder, "server.key"),
)
require.NoError(t, err)
server.TLS = &tls.Config{Certificates: []tls.Certificate{cert}}
server.StartTLS()
client := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
},
},
}
_, err = client.Get(server.URL)
assert.NoError(t, err)
assert.True(t, serverSuccessfullyCalled)
})
}

View File

@@ -98,6 +98,11 @@ func (in *AutoscalingListenerSpec) DeepCopyInto(out *AutoscalingListenerSpec) {
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerSpec.
@@ -195,7 +200,7 @@ func (in *AutoscalingRunnerSetSpec) DeepCopyInto(out *AutoscalingRunnerSetSpec)
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
**out = **in
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
if in.MaxRunners != nil {
@@ -395,7 +400,7 @@ func (in *EphemeralRunnerSpec) DeepCopyInto(out *EphemeralRunnerSpec) {
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
**out = **in
(*in).DeepCopyInto(*out)
}
in.PodTemplateSpec.DeepCopyInto(&out.PodTemplateSpec)
}
@@ -435,6 +440,11 @@ func (in *EphemeralRunnerStatus) DeepCopy() *EphemeralRunnerStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubServerTLSConfig) DeepCopyInto(out *GitHubServerTLSConfig) {
*out = *in
if in.CertificateFrom != nil {
in, out := &in.CertificateFrom, &out.CertificateFrom
*out = new(TLSCertificateSource)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubServerTLSConfig.
@@ -491,3 +501,23 @@ func (in *ProxyServerConfig) DeepCopy() *ProxyServerConfig {
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TLSCertificateSource) DeepCopyInto(out *TLSCertificateSource) {
*out = *in
if in.ConfigMapKeyRef != nil {
in, out := &in.ConfigMapKeyRef, &out.ConfigMapKeyRef
*out = new(v1.ConfigMapKeySelector)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TLSCertificateSource.
func (in *TLSCertificateSource) DeepCopy() *TLSCertificateSource {
if in == nil {
return nil
}
out := new(TLSCertificateSource)
in.DeepCopyInto(out)
return out
}

View File

@@ -0,0 +1,9 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/gha-runner-scale-set-controller
- charts/gha-runner-scale-set
skip-clean-up: true

View File

@@ -1,6 +1,7 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
all: true
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/actions-runner-controller

View File

@@ -1,97 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller-2.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller-2.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "actions-runner-controller-2.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "actions-runner-controller-2.labels" -}}
helm.sh/chart: {{ include "actions-runner-controller-2.chart" . }}
{{ include "actions-runner-controller-2.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/part-of: {{ .Chart.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller-2.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller-2.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller-2.serviceAccountName" -}}
{{- if eq .Values.serviceAccount.name "default"}}
{{- fail "serviceAccount.name cannot be set to 'default'" }}
{{- end }}
{{- if .Values.serviceAccount.create }}
{{- default (include "actions-runner-controller-2.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- if not .Values.serviceAccount.name }}
{{- fail "serviceAccount.name must be set if serviceAccount.create is false" }}
{{- else }}
{{- .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller-2.managerRoleName" -}}
{{- include "actions-runner-controller-2.fullname" . }}-manager-role
{{- end }}
{{- define "actions-runner-controller-2.managerRoleBinding" -}}
{{- include "actions-runner-controller-2.fullname" . }}-manager-rolebinding
{{- end }}
{{- define "actions-runner-controller-2.leaderElectionRoleName" -}}
{{- include "actions-runner-controller-2.fullname" . }}-leader-election-role
{{- end }}
{{- define "actions-runner-controller-2.leaderElectionRoleBinding" -}}
{{- include "actions-runner-controller-2.fullname" . }}-leader-election-rolebinding
{{- end }}
{{- define "actions-runner-controller-2.imagePullSecretsNames" -}}
{{- $names := list }}
{{- range $k, $v := . }}
{{- $names = append $names $v.name }}
{{- end }}
{{- $names | join ","}}
{{- end }}

View File

@@ -1,15 +0,0 @@
{{- if gt (int (default 1 .Values.replicaCount)) 1 -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "actions-runner-controller-2.leaderElectionRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "actions-runner-controller-2.leaderElectionRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller-2.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller-2.managerRoleBinding" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller-2.managerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller-2.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -1,533 +0,0 @@
package tests
import (
"os"
"path/filepath"
"strings"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v2"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
)
type Chart struct {
Version string `yaml:"version"`
AppVersion string `yaml:"appVersion"`
}
func TestTemplate_CreateServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-arc-actions-runner-controller-2", serviceAccount.Name)
assert.Equal(t, "bar", string(serviceAccount.Annotations["foo"]))
}
func TestTemplate_CreateServiceAccount_OverwriteName(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "overwritten-name",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "overwritten-name", serviceAccount.Name)
assert.Equal(t, "bar", string(serviceAccount.Annotations["foo"]))
}
func TestTemplate_CreateServiceAccount_CannotUseDefaultServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "default",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
assert.ErrorContains(t, err, "serviceAccount.name cannot be set to 'default'", "We should get an error because the default service account cannot be used")
}
func TestTemplate_NotCreateServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.name": "overwritten-name",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
assert.ErrorContains(t, err, "could not find template templates/serviceaccount.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_NotCreateServiceAccount_ServiceAccountNotSet(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
assert.ErrorContains(t, err, "serviceAccount.name must be set if serviceAccount.create is false", "We should get an error because the default service account cannot be used")
}
func TestTemplate_CreateManagerRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_role.yaml"})
var managerRole rbacv1.ClusterRole
helm.UnmarshalK8SYaml(t, output, &managerRole)
assert.Empty(t, managerRole.Namespace, "ClusterRole should not have a namespace")
assert.Equal(t, "test-arc-actions-runner-controller-2-manager-role", managerRole.Name)
assert.Equal(t, 17, len(managerRole.Rules))
}
func TestTemplate_ManagerRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_role_binding.yaml"})
var managerRoleBinding rbacv1.ClusterRoleBinding
helm.UnmarshalK8SYaml(t, output, &managerRoleBinding)
assert.Empty(t, managerRoleBinding.Namespace, "ClusterRoleBinding should not have a namespace")
assert.Equal(t, "test-arc-actions-runner-controller-2-manager-rolebinding", managerRoleBinding.Name)
assert.Equal(t, "test-arc-actions-runner-controller-2-manager-role", managerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-actions-runner-controller-2", managerRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerRoleBinding.Subjects[0].Namespace)
}
func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"image.tag": "dev",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-actions-runner-controller-2", deployment.Name)
assert.Equal(t, "actions-runner-controller-2-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "actions-runner-controller-2", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "actions-runner-controller-2", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "actions-runner-controller-2", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 0)
assert.Equal(t, "test-arc-actions-runner-controller-2", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Nil(t, deployment.Spec.Template.Spec.SecurityContext)
assert.Empty(t, deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, 10, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 0)
assert.Nil(t, deployment.Spec.Template.Spec.Affinity)
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 0)
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner-controller-2:dev", deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 2)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAME", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, "metadata.name", deployment.Spec.Template.Spec.Containers[0].Env[0].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Resources)
assert.Nil(t, deployment.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"labels.foo": "bar",
"labels.github": "actions",
"replicaCount": "1",
"image.pullPolicy": "Always",
"image.tag": "dev",
"imagePullSecrets[0].name": "dockerhub",
"nameOverride": "actions-runner-controller-2-override",
"fullnameOverride": "actions-runner-controller-2-fullname-override",
"serviceAccount.name": "actions-runner-controller-2-sa",
"podAnnotations.foo": "bar",
"podSecurityContext.fsGroup": "1000",
"securityContext.runAsUser": "1000",
"securityContext.runAsNonRoot": "true",
"resources.limits.cpu": "500m",
"nodeSelector.foo": "bar",
"tolerations[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator": "bar",
"priorityClassName": "test-priority-class",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "actions-runner-controller-2-fullname-override", deployment.Name)
assert.Equal(t, "actions-runner-controller-2-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "actions-runner-controller-2-override", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, "bar", deployment.Labels["foo"])
assert.Equal(t, "actions", deployment.Labels["github"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "actions-runner-controller-2-override", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "actions-runner-controller-2-override", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "bar", deployment.Spec.Template.Annotations["foo"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 1)
assert.Equal(t, "dockerhub", deployment.Spec.Template.Spec.ImagePullSecrets[0].Name)
assert.Equal(t, "actions-runner-controller-2-sa", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.SecurityContext.FSGroup)
assert.Equal(t, "test-priority-class", deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, 10, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 1)
assert.Equal(t, "bar", deployment.Spec.Template.Spec.NodeSelector["foo"])
assert.NotNil(t, deployment.Spec.Template.Spec.Affinity.NodeAffinity)
assert.Equal(t, "foo", deployment.Spec.Template.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Key)
assert.Equal(t, "bar", string(deployment.Spec.Template.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Operator))
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 1)
assert.Equal(t, "foo", deployment.Spec.Template.Spec.Tolerations[0].Key)
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner-controller-2:dev", deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullAlways, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAME", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, "metadata.name", deployment.Spec.Template.Spec.Containers[0].Env[0].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "500m", deployment.Spec.Template.Spec.Containers[0].Resources.Limits.Cpu().String())
assert.True(t, *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsNonRoot)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsUser)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_EnableLeaderElectionRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/leader_election_role.yaml"})
var leaderRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &leaderRole)
assert.Equal(t, "test-arc-actions-runner-controller-2-leader-election-role", leaderRole.Name)
assert.Equal(t, namespaceName, leaderRole.Namespace)
}
func TestTemplate_EnableLeaderElectionRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/leader_election_role_binding.yaml"})
var leaderRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &leaderRoleBinding)
assert.Equal(t, "test-arc-actions-runner-controller-2-leader-election-rolebinding", leaderRoleBinding.Name)
assert.Equal(t, namespaceName, leaderRoleBinding.Namespace)
assert.Equal(t, "test-arc-actions-runner-controller-2-leader-election-role", leaderRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-actions-runner-controller-2", leaderRoleBinding.Subjects[0].Name)
}
func TestTemplate_EnableLeaderElection(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
"image.tag": "dev",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-actions-runner-controller-2", deployment.Name)
assert.Equal(t, int32(2), *deployment.Spec.Replicas)
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner-controller-2:dev", deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 4)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--enable-leader-election", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--leader-election-id=test-arc-actions-runner-controller-2", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[3])
}
func TestTemplate_ControllerDeployment_ForwardImagePullSecrets(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../actions-runner-controller-2")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"imagePullSecrets[0].name": "dockerhub",
"imagePullSecrets[1].name": "ghcr",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub,ghcr", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
}

View File

@@ -15,7 +15,7 @@ spec:
metadata:
{{- with .Values.actionsMetricsServer.podAnnotations }}
annotations:
kubectl.kubernetes.io/default-logs-container: "github-webhook-server"
kubectl.kubernetes.io/default-container: "actions-metrics-server"
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
@@ -45,7 +45,7 @@ spec:
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.actionsMetricsServer.logFormat }}
{{- if .Values.actionsMetricsServer.logFormat }}
- "--log-format={{ .Values.actionsMetricsServer.logFormat }}"
{{- end }}
command:
@@ -74,25 +74,25 @@ spec:
valueFrom:
secretKeyRef:
key: github_token
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_PRIVATE_KEY
valueFrom:
secretKeyRef:
key: github_app_private_key
name: {{ include "actions-runner-controller.githubWebhookServerSecretName" . }}
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
{{- if .Values.authSecret.github_basicauth_username }}
- name: GITHUB_BASICAUTH_USERNAME

View File

@@ -0,0 +1,28 @@
{{- if .Values.actionsMetricsServer.enabled }}
{{- if .Values.actionsMetricsServer.secret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.actionsMetricsServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.actionsMetricsServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_id }}
github_app_id: {{ .Values.actionsMetricsServer.secret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_installation_id }}
github_app_installation_id: {{ .Values.actionsMetricsServer.secret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_private_key }}
github_app_private_key: {{ .Values.actionsMetricsServer.secret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_token }}
github_token: {{ .Values.actionsMetricsServer.secret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -51,11 +51,17 @@ spec:
{{- if .Values.githubWebhookServer.queueLimit }}
- "--queue-limit={{ .Values.githubWebhookServer.queueLimit }}"
{{- end }}
{{- if .Values.githubWebhookServer.logFormat }}
{{- if .Values.githubWebhookServer.logFormat }}
- "--log-format={{ .Values.githubWebhookServer.logFormat }}"
{{- end }}
command:
- "/github-webhook-server"
{{- if .Values.githubWebhookServer.lifecycle }}
{{- with .Values.githubWebhookServer.lifecycle }}
lifecycle:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
@@ -148,7 +154,7 @@ spec:
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
terminationGracePeriodSeconds: {{ .Values.githubWebhookServer.terminationGracePeriodSeconds }}
{{- with .Values.githubWebhookServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

View File

@@ -23,4 +23,10 @@ spec:
{{- end }}
selector:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
{{- if .Values.githubWebhookServer.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- range $ip := .Values.githubWebhookServer.service.loadBalancerSourceRanges }}
- {{ $ip -}}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -240,6 +240,7 @@ githubWebhookServer:
protocol: TCP
name: http
#nodePort: someFixedPortForUseWithTerraformCdkCfnEtc
loadBalancerSourceRanges: []
ingress:
enabled: false
ingressClassName: ""
@@ -276,6 +277,8 @@ githubWebhookServer:
# minAvailable: 1
# maxUnavailable: 3
# queueLimit: 100
terminationGracePeriodSeconds: 10
lifecycle: {}
actionsMetrics:
serviceAnnotations: {}

View File

@@ -1,330 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "auto-scaling-runner-set.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "auto-scaling-runner-set.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "auto-scaling-runner-set.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "auto-scaling-runner-set.labels" -}}
helm.sh/chart: {{ include "auto-scaling-runner-set.chart" . }}
{{ include "auto-scaling-runner-set.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "auto-scaling-runner-set.selectorLabels" -}}
app.kubernetes.io/name: {{ include "auto-scaling-runner-set.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- define "auto-scaling-runner-set.githubsecret" -}}
{{- if kindIs "string" .Values.githubConfigSecret }}
{{- if not (empty .Values.githubConfigSecret) }}
{{- .Values.githubConfigSecret }}
{{- else}}
{{- fail "Values.githubConfigSecret is required for setting auth with GitHub server." }}
{{- end }}
{{- else }}
{{- include "auto-scaling-runner-set.fullname" . }}-github-secret
{{- end }}
{{- end }}
{{- define "auto-scaling-runner-set.noPermissionServiceAccountName" -}}
{{- include "auto-scaling-runner-set.fullname" . }}-no-permission-service-account
{{- end }}
{{- define "auto-scaling-runner-set.kubeModeRoleName" -}}
{{- include "auto-scaling-runner-set.fullname" . }}-kube-mode-role
{{- end }}
{{- define "auto-scaling-runner-set.kubeModeServiceAccountName" -}}
{{- include "auto-scaling-runner-set.fullname" . }}-kube-mode-service-account
{{- end }}
{{- define "auto-scaling-runner-set.dind-init-container" -}}
{{- range $i, $val := .Values.template.spec.containers -}}
{{- if eq $val.name "runner" -}}
image: {{ $val.image }}
{{- if $val.imagePullSecrets }}
imagePullSecrets:
{{ $val.imagePullSecrets | toYaml -}}
{{- end }}
command: ["cp"]
args: ["-r", "-v", "/actions-runner/externals/.", "/actions-runner/tmpDir/"]
volumeMounts:
- name: dind-externals
mountPath: /actions-runner/tmpDir
{{- end }}
{{- end }}
{{- end }}
{{- define "auto-scaling-runner-set.dind-container" -}}
image: docker:dind
securityContext:
privileged: true
volumeMounts:
- name: work
mountPath: /actions-runner/_work
- name: dind-cert
mountPath: /certs/client
- name: dind-externals
mountPath: /actions-runner/externals
{{- end }}
{{- define "auto-scaling-runner-set.dind-volume" -}}
- name: dind-cert
emptyDir: {}
- name: dind-externals
emptyDir: {}
{{- end }}
{{- define "auto-scaling-runner-set.dind-work-volume" -}}
{{- $createWorkVolume := 1 }}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if eq $volume.name "work" }}
{{- $createWorkVolume = 0 -}}
- name: work
{{- range $key, $val := $volume }}
{{- if ne $key "name" }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if eq $createWorkVolume 1 }}
- name: work
emptyDir: {}
{{- end }}
{{- end }}
{{- define "auto-scaling-runner-set.kubernetes-mode-work-volume" -}}
{{- $createWorkVolume := 1 }}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if eq $volume.name "work" }}
{{- $createWorkVolume = 0 -}}
- name: work
{{- range $key, $val := $volume }}
{{- if ne $key "name" }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if eq $createWorkVolume 1 }}
- name: work
ephemeral:
volumeClaimTemplate:
spec:
{{- .Values.containerMode.kubernetesModeWorkVolumeClaim | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- define "auto-scaling-runner-set.non-work-volumes" -}}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if ne $volume.name "work" }}
- name: {{ $volume.name }}
{{- range $key, $val := $volume }}
{{- if ne $key "name" }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "auto-scaling-runner-set.non-runner-containers" -}}
{{- range $i, $container := .Values.template.spec.containers -}}
{{- if ne $container.name "runner" -}}
- name: {{ $container.name }}
{{- range $key, $val := $container }}
{{- if ne $key "name" }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "auto-scaling-runner-set.dind-runner-container" -}}
{{- range $i, $container := .Values.template.spec.containers -}}
{{- if eq $container.name "runner" -}}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- $setDockerHost := 1 }}
{{- $setDockerTlsVerify := 1 }}
{{- $setDockerCertPath := 1 }}
{{- $setRunnerWaitDocker := 1 }}
env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "DOCKER_HOST" }}
{{- $setDockerHost = 0 -}}
{{- end }}
{{- if eq $env.name "DOCKER_TLS_VERIFY" }}
{{- $setDockerTlsVerify = 0 -}}
{{- end }}
{{- if eq $env.name "DOCKER_CERT_PATH" }}
{{- $setDockerCertPath = 0 -}}
{{- end }}
{{- if eq $env.name "RUNNER_WAIT_FOR_DOCKER_IN_SECONDS" }}
{{- $setRunnerWaitDocker = 0 -}}
{{- end }}
- name: {{ $env.name }}
{{- range $envKey, $envVal := $env }}
{{- if ne $envKey "name" }}
{{ $envKey }}: {{ $envVal | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if $setDockerHost }}
- name: DOCKER_HOST
value: tcp://localhost:2376
{{- end }}
{{- if $setDockerTlsVerify }}
- name: DOCKER_TLS_VERIFY
value: "1"
{{- end }}
{{- if $setDockerCertPath }}
- name: DOCKER_CERT_PATH
value: /certs/client
{{- end }}
{{- if $setRunnerWaitDocker }}
- name: RUNNER_WAIT_FOR_DOCKER_IN_SECONDS
value: "120"
{{- end }}
{{- $mountWork := 1 }}
{{- $mountDindCert := 1 }}
volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "work" }}
{{- $mountWork = 0 -}}
{{- end }}
{{- if eq $volMount.name "dind-cert" }}
{{- $mountDindCert = 0 -}}
{{- end }}
- name: {{ $volMount.name }}
{{- range $mountKey, $mountVal := $volMount }}
{{- if ne $mountKey "name" }}
{{ $mountKey }}: {{ $mountVal | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if $mountWork }}
- name: work
mountPath: /actions-runner/_work
{{- end }}
{{- if $mountDindCert }}
- name: dind-cert
mountPath: /certs/client
readOnly: true
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "auto-scaling-runner-set.kubernetes-mode-runner-container" -}}
{{- range $i, $container := .Values.template.spec.containers -}}
{{- if eq $container.name "runner" -}}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- $setContainerHooks := 1 }}
{{- $setPodName := 1 }}
{{- $setRequireJobContainer := 1 }}
env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "ACTIONS_RUNNER_CONTAINER_HOOKS" }}
{{- $setContainerHooks = 0 -}}
{{- end }}
{{- if eq $env.name "ACTIONS_RUNNER_POD_NAME" }}
{{- $setPodName = 0 -}}
{{- end }}
{{- if eq $env.name "ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER" }}
{{- $setRequireJobContainer = 0 -}}
{{- end }}
- name: {{ $env.name }}
{{- range $envKey, $envVal := $env }}
{{- if ne $envKey "name" }}
{{ $envKey }}: {{ $envVal | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if $setContainerHooks }}
- name: ACTIONS_RUNNER_CONTAINER_HOOKS
value: /actions-runner/k8s/index.js
{{- end }}
{{- if $setPodName }}
- name: ACTIONS_RUNNER_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
{{- end }}
{{- if $setRequireJobContainer }}
- name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
value: "true"
{{- end }}
{{- $mountWork := 1 }}
volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "work" }}
{{- $mountWork = 0 -}}
{{- end }}
- name: {{ $volMount.name }}
{{- range $mountKey, $mountVal := $volMount }}
{{- if ne $mountKey "name" }}
{{ $mountKey }}: {{ $mountVal | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if $mountWork }}
- name: work
mountPath: /actions-runner/_work
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,782 +0,0 @@
package tests
import (
"path/filepath"
"strings"
"testing"
v1alpha1 "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
)
func TestTemplateRenderedGitHubSecretWithGitHubToken(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/githubsecret.yaml"})
var githubSecret corev1.Secret
helm.UnmarshalK8SYaml(t, output, &githubSecret)
assert.Equal(t, namespaceName, githubSecret.Namespace)
assert.Equal(t, "test-runners-auto-scaling-runner-set-github-secret", githubSecret.Name)
assert.Equal(t, "gh_token12345", string(githubSecret.Data["github_token"]))
assert.Equal(t, "actions.github.com/secret-protection", githubSecret.Finalizers[0])
}
func TestTemplateRenderedGitHubSecretWithGitHubApp(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "10",
"githubConfigSecret.github_app_installation_id": "100",
"githubConfigSecret.github_app_private_key": "private_key",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/githubsecret.yaml"})
var githubSecret corev1.Secret
helm.UnmarshalK8SYaml(t, output, &githubSecret)
assert.Equal(t, namespaceName, githubSecret.Namespace)
assert.Equal(t, "10", string(githubSecret.Data["github_app_id"]))
assert.Equal(t, "100", string(githubSecret.Data["github_app_installation_id"]))
assert.Equal(t, "private_key", string(githubSecret.Data["github_app_private_key"]))
}
func TestTemplateRenderedGitHubSecretErrorWithMissingAuthInput(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "",
"githubConfigSecret.github_token": "",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/githubsecret.yaml"})
require.Error(t, err)
assert.ErrorContains(t, err, "provide .Values.githubConfigSecret.github_token or .Values.githubConfigSecret.github_app_id")
}
func TestTemplateRenderedGitHubSecretErrorWithMissingAppInput(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "10",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/githubsecret.yaml"})
require.Error(t, err)
assert.ErrorContains(t, err, "provide .Values.githubConfigSecret.github_app_installation_id and .Values.githubConfigSecret.github_app_private_key")
}
func TestTemplateNotRenderedGitHubSecretWithPredefinedSecret(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secret",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/githubsecret.yaml"})
assert.ErrorContains(t, err, "could not find template templates/githubsecret.yaml in chart", "secret should not be rendered since a pre-defined secret is provided")
}
func TestTemplateRenderedSetServiceAccountToNoPermission(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/no_permission_serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-runners-auto-scaling-runner-set-no-permission-service-account", serviceAccount.Name)
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, "test-runners-auto-scaling-runner-set-no-permission-service-account", ars.Spec.Template.Spec.ServiceAccountName)
}
func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"containerMode.type": "kubernetes",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/kube_mode_serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-runners-auto-scaling-runner-set-kube-mode-service-account", serviceAccount.Name)
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/kube_mode_role.yaml"})
var role rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &role)
assert.Equal(t, namespaceName, role.Namespace)
assert.Equal(t, "test-runners-auto-scaling-runner-set-kube-mode-role", role.Name)
assert.Len(t, role.Rules, 5, "kube mode role should have 5 rules")
assert.Equal(t, "pods", role.Rules[0].Resources[0])
assert.Equal(t, "pods/exec", role.Rules[1].Resources[0])
assert.Equal(t, "pods/log", role.Rules[2].Resources[0])
assert.Equal(t, "jobs", role.Rules[3].Resources[0])
assert.Equal(t, "secrets", role.Rules[4].Resources[0])
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/kube_mode_role_binding.yaml"})
var roleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &roleBinding)
assert.Equal(t, namespaceName, roleBinding.Namespace)
assert.Equal(t, "test-runners-auto-scaling-runner-set-kube-mode-role", roleBinding.Name)
assert.Len(t, roleBinding.Subjects, 1)
assert.Equal(t, "test-runners-auto-scaling-runner-set-kube-mode-service-account", roleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, roleBinding.Subjects[0].Namespace)
assert.Equal(t, "test-runners-auto-scaling-runner-set-kube-mode-role", roleBinding.RoleRef.Name)
assert.Equal(t, "Role", roleBinding.RoleRef.Kind)
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, "test-runners-auto-scaling-runner-set-kube-mode-service-account", ars.Spec.Template.Spec.ServiceAccountName)
}
func TestTemplateRenderedUserProvideSetServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"template.spec.serviceAccountName": "test-service-account",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/no_permission_serviceaccount.yaml"})
assert.ErrorContains(t, err, "could not find template templates/no_permission_serviceaccount.yaml in chart", "no permission service account should not be rendered")
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, "test-service-account", ars.Spec.Template.Spec.ServiceAccountName)
}
func TestTemplateRenderedAutoScalingRunnerSet(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "auto-scaling-runner-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-auto-scaling-runner-set-github-secret", ars.Spec.GitHubConfigSecret)
assert.Empty(t, ars.Spec.RunnerGroup, "RunnerGroup should be empty")
assert.Nil(t, ars.Spec.MinRunners, "MinRunners should be nil")
assert.Nil(t, ars.Spec.MaxRunners, "MaxRunners should be nil")
assert.Nil(t, ars.Spec.Proxy, "Proxy should be nil")
assert.Nil(t, ars.Spec.GitHubServerTLS, "GitHubServerTLS should be nil")
assert.NotNil(t, ars.Spec.Template.Spec, "Template.Spec should not be nil")
assert.Len(t, ars.Spec.Template.Spec.Containers, 1, "Template.Spec should have 1 container")
assert.Equal(t, "runner", ars.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner:latest", ars.Spec.Template.Spec.Containers[0].Image)
}
func TestTemplateRenderedAutoScalingRunnerSet_ProvideMetadata(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"template.metadata.labels.test1": "test1",
"template.metadata.labels.test2": "test2",
"template.metadata.annotations.test3": "test3",
"template.metadata.annotations.test4": "test4",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.NotNil(t, ars.Spec.Template.Labels, "Template.Spec.Labels should not be nil")
assert.Equal(t, "test1", ars.Spec.Template.Labels["test1"], "Template.Spec.Labels should have test1")
assert.Equal(t, "test2", ars.Spec.Template.Labels["test2"], "Template.Spec.Labels should have test2")
assert.NotNil(t, ars.Spec.Template.Annotations, "Template.Spec.Annotations should not be nil")
assert.Equal(t, "test3", ars.Spec.Template.Annotations["test3"], "Template.Spec.Annotations should have test3")
assert.Equal(t, "test4", ars.Spec.Template.Annotations["test4"], "Template.Spec.Annotations should have test4")
assert.NotNil(t, ars.Spec.Template.Spec, "Template.Spec should not be nil")
assert.Len(t, ars.Spec.Template.Spec.Containers, 1, "Template.Spec should have 1 container")
assert.Equal(t, "runner", ars.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner:latest", ars.Spec.Template.Spec.Containers[0].Image)
}
func TestTemplateRenderedAutoScalingRunnerSet_MaxRunnersValidationError(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"maxRunners": "-1",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
require.Error(t, err)
assert.ErrorContains(t, err, "maxRunners has to be greater or equal to 0")
}
func TestTemplateRenderedAutoScalingRunnerSet_MinRunnersValidationError(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"maxRunners": "1",
"minRunners": "-1",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
require.Error(t, err)
assert.ErrorContains(t, err, "minRunners has to be greater or equal to 0")
}
func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidationError(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"maxRunners": "0",
"minRunners": "1",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
require.Error(t, err)
assert.ErrorContains(t, err, "maxRunners has to be greater or equal to minRunners")
}
func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidationSameValue(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"maxRunners": "0",
"minRunners": "0",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, 0, *ars.Spec.MinRunners, "MinRunners should be 0")
assert.Equal(t, 0, *ars.Spec.MaxRunners, "MaxRunners should be 0")
}
func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidation_OnlyMin(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"minRunners": "5",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, 5, *ars.Spec.MinRunners, "MinRunners should be 5")
assert.Nil(t, ars.Spec.MaxRunners, "MaxRunners should be nil")
}
func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidation_OnlyMax(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"maxRunners": "5",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, 5, *ars.Spec.MaxRunners, "MaxRunners should be 5")
assert.Nil(t, ars.Spec.MinRunners, "MinRunners should be nil")
}
func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunners_FromValuesFile(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
testValuesPath, err := filepath.Abs("../tests/values.yaml")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
ValuesFiles: []string{testValuesPath},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, 5, *ars.Spec.MinRunners, "MinRunners should be 5")
assert.Equal(t, 10, *ars.Spec.MaxRunners, "MaxRunners should be 10")
}
func TestTemplateRenderedAutoScalingRunnerSet_EnableDinD(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"containerMode.type": "dind",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "auto-scaling-runner-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-auto-scaling-runner-set-github-secret", ars.Spec.GitHubConfigSecret)
assert.Empty(t, ars.Spec.RunnerGroup, "RunnerGroup should be empty")
assert.Nil(t, ars.Spec.MinRunners, "MinRunners should be nil")
assert.Nil(t, ars.Spec.MaxRunners, "MaxRunners should be nil")
assert.Nil(t, ars.Spec.Proxy, "Proxy should be nil")
assert.Nil(t, ars.Spec.GitHubServerTLS, "GitHubServerTLS should be nil")
assert.NotNil(t, ars.Spec.Template.Spec, "Template.Spec should not be nil")
assert.Len(t, ars.Spec.Template.Spec.InitContainers, 1, "Template.Spec should have 1 init container")
assert.Equal(t, "init-dind-externals", ars.Spec.Template.Spec.InitContainers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner:latest", ars.Spec.Template.Spec.InitContainers[0].Image)
assert.Equal(t, "cp", ars.Spec.Template.Spec.InitContainers[0].Command[0])
assert.Equal(t, "-r -v /actions-runner/externals/. /actions-runner/tmpDir/", strings.Join(ars.Spec.Template.Spec.InitContainers[0].Args, " "))
assert.Len(t, ars.Spec.Template.Spec.Containers, 2, "Template.Spec should have 2 containers")
assert.Equal(t, "runner", ars.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner:latest", ars.Spec.Template.Spec.Containers[0].Image)
assert.Len(t, ars.Spec.Template.Spec.Containers[0].Env, 4, "The runner container should have 4 env vars, DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH and RUNNER_WAIT_FOR_DOCKER_IN_SECONDS")
assert.Equal(t, "DOCKER_HOST", ars.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, "tcp://localhost:2376", ars.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "DOCKER_TLS_VERIFY", ars.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "1", ars.Spec.Template.Spec.Containers[0].Env[1].Value)
assert.Equal(t, "DOCKER_CERT_PATH", ars.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "/certs/client", ars.Spec.Template.Spec.Containers[0].Env[2].Value)
assert.Equal(t, "RUNNER_WAIT_FOR_DOCKER_IN_SECONDS", ars.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "120", ars.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Len(t, ars.Spec.Template.Spec.Containers[0].VolumeMounts, 2, "The runner container should have 2 volume mounts, dind-cert and work")
assert.Equal(t, "work", ars.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/actions-runner/_work", ars.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
assert.False(t, ars.Spec.Template.Spec.Containers[0].VolumeMounts[0].ReadOnly)
assert.Equal(t, "dind-cert", ars.Spec.Template.Spec.Containers[0].VolumeMounts[1].Name)
assert.Equal(t, "/certs/client", ars.Spec.Template.Spec.Containers[0].VolumeMounts[1].MountPath)
assert.True(t, ars.Spec.Template.Spec.Containers[0].VolumeMounts[1].ReadOnly)
assert.Equal(t, "dind", ars.Spec.Template.Spec.Containers[1].Name)
assert.Equal(t, "docker:dind", ars.Spec.Template.Spec.Containers[1].Image)
assert.True(t, *ars.Spec.Template.Spec.Containers[1].SecurityContext.Privileged)
assert.Len(t, ars.Spec.Template.Spec.Containers[1].VolumeMounts, 3, "The dind container should have 3 volume mounts, work, dind-cert and dind-externals")
assert.Equal(t, "work", ars.Spec.Template.Spec.Containers[1].VolumeMounts[0].Name)
assert.Equal(t, "/actions-runner/_work", ars.Spec.Template.Spec.Containers[1].VolumeMounts[0].MountPath)
assert.Equal(t, "dind-cert", ars.Spec.Template.Spec.Containers[1].VolumeMounts[1].Name)
assert.Equal(t, "/certs/client", ars.Spec.Template.Spec.Containers[1].VolumeMounts[1].MountPath)
assert.Equal(t, "dind-externals", ars.Spec.Template.Spec.Containers[1].VolumeMounts[2].Name)
assert.Equal(t, "/actions-runner/externals", ars.Spec.Template.Spec.Containers[1].VolumeMounts[2].MountPath)
}
func TestTemplateRenderedAutoScalingRunnerSet_EnableKubernetesMode(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"containerMode.type": "kubernetes",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "auto-scaling-runner-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-auto-scaling-runner-set-github-secret", ars.Spec.GitHubConfigSecret)
assert.Empty(t, ars.Spec.RunnerGroup, "RunnerGroup should be empty")
assert.Nil(t, ars.Spec.MinRunners, "MinRunners should be nil")
assert.Nil(t, ars.Spec.MaxRunners, "MaxRunners should be nil")
assert.Nil(t, ars.Spec.Proxy, "Proxy should be nil")
assert.Nil(t, ars.Spec.GitHubServerTLS, "GitHubServerTLS should be nil")
assert.NotNil(t, ars.Spec.Template.Spec, "Template.Spec should not be nil")
assert.Len(t, ars.Spec.Template.Spec.Containers, 1, "Template.Spec should have 1 container")
assert.Equal(t, "runner", ars.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/actions-runner:latest", ars.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, "ACTIONS_RUNNER_CONTAINER_HOOKS", ars.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, "/actions-runner/k8s/index.js", ars.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "ACTIONS_RUNNER_POD_NAME", ars.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER", ars.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "true", ars.Spec.Template.Spec.Containers[0].Env[2].Value)
assert.Len(t, ars.Spec.Template.Spec.Volumes, 1, "Template.Spec should have 1 volume")
assert.Equal(t, "work", ars.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, ars.Spec.Template.Spec.Volumes[0].Ephemeral, "Template.Spec should have 1 ephemeral volume")
}
func TestTemplateRenderedAutoScalingRunnerSet_UsePredefinedSecret(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "auto-scaling-runner-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "pre-defined-secrets", ars.Spec.GitHubConfigSecret)
}
func TestTemplateRenderedAutoScalingRunnerSet_ErrorOnEmptyPredefinedSecret(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
require.Error(t, err)
assert.ErrorContains(t, err, "Values.githubConfigSecret is required for setting auth with GitHub server")
}
func TestTemplateRenderedWithProxy(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
"proxy.http.url": "http://proxy.example.com",
"proxy.http.credentialSecretRef": "http-secret",
"proxy.https.url": "https://proxy.example.com",
"proxy.https.credentialSecretRef": "https-secret",
"proxy.noProxy": "{example.com,example.org}",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
require.NotNil(t, ars.Spec.Proxy)
require.NotNil(t, ars.Spec.Proxy.HTTP)
assert.Equal(t, "http://proxy.example.com", ars.Spec.Proxy.HTTP.Url)
assert.Equal(t, "http-secret", ars.Spec.Proxy.HTTP.CredentialSecretRef)
require.NotNil(t, ars.Spec.Proxy.HTTPS)
assert.Equal(t, "https://proxy.example.com", ars.Spec.Proxy.HTTPS.Url)
assert.Equal(t, "https-secret", ars.Spec.Proxy.HTTPS.CredentialSecretRef)
require.NotNil(t, ars.Spec.Proxy.NoProxy)
require.Len(t, ars.Spec.Proxy.NoProxy, 2)
assert.Contains(t, ars.Spec.Proxy.NoProxy, "example.com")
assert.Contains(t, ars.Spec.Proxy.NoProxy, "example.org")
}
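
Every test in the file above re-derives the chart path, release name and common auth values by hand. One way to cut that boilerplate — a sketch only, not part of this diff; the helper names and cached chart path are illustrative — is a TestMain plus a small options constructor:
package tests
import (
"fmt"
"os"
"path/filepath"
"strings"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/random"
)
var chartPath string
func TestMain(m *testing.M) {
var err error
// Resolve the chart path once for the whole package instead of per test.
chartPath, err = filepath.Abs("../../auto-scaling-runner-set")
if err != nil {
fmt.Fprintln(os.Stderr, "resolve chart path:", err)
os.Exit(1)
}
os.Exit(m.Run())
}
// newOptions builds helm.Options with a unique namespace and the common
// auth values, merging in any per-test overrides.
func newOptions(extra map[string]string) *helm.Options {
values := map[string]string{
"githubConfigUrl":                 "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
}
for k, v := range extra {
values[k] = v
}
return &helm.Options{
SetValues:      values,
KubectlOptions: k8s.NewKubectlOptions("", "", "test-"+strings.ToLower(random.UniqueId())),
}
}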

@@ -1,5 +1,5 @@
apiVersion: v2
name: actions-runner-controller-2
name: gha-runner-scale-set-controller
description: A Helm chart for installing the actions-runner-controller CRDs
# A chart can be either an 'application' or a 'library' chart.
@@ -15,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.2.0"
appVersion: "0.3.0"
home: https://github.com/actions/actions-runner-controller
@@ -30,4 +30,4 @@ sources:
maintainers:
- name: actions
url: https://github.com/actions
url: https://github.com/actions

@@ -0,0 +1,5 @@
# Set the following to dummy values.
# This is only useful in CI
image:
repository: test-arc
tag: dev
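
The file above gives CI a concrete image.repository/image.tag pair so the controller chart can be rendered without real registry coordinates. A sketch of how such a values file plugs into terratest via ValuesFiles; the test name and relative path of the CI file are illustrative:
package tests
import (
"path/filepath"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/stretchr/testify/require"
)
func TestRenderDeploymentWithCIValues(t *testing.T) {
t.Parallel()
chartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
ciValues, err := filepath.Abs("../ci/ci-values.yaml") // illustrative location of the file above
require.NoError(t, err)
options := &helm.Options{
ValuesFiles:    []string{ciValues},
KubectlOptions: k8s.NewKubectlOptions("", "", "test-ci"),
}
// image.repository and image.tag from the CI file feed the manager container image.
helm.RenderTemplate(t, options, chartPath, "test-arc", []string{"templates/deployment.yaml"})
}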

@@ -55,6 +55,28 @@ spec:
githubConfigUrl:
description: Required
type: string
githubServerTLS:
properties:
certificateFrom:
description: Required
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
image:
description: Required
type: string

@@ -17,16 +17,28 @@ spec:
- additionalPrinterColumns:
- jsonPath: .spec.minRunners
name: Minimum Runners
type: number
type: integer
- jsonPath: .spec.maxRunners
name: Maximum Runners
type: number
type: integer
- jsonPath: .status.currentRunners
name: Current Runners
type: number
type: integer
- jsonPath: .status.state
name: State
type: string
- jsonPath: .status.pendingEphemeralRunners
name: Pending Runners
type: integer
- jsonPath: .status.runningEphemeralRunners
name: Running Runners
type: integer
- jsonPath: .status.finishedEphemeralRunners
name: Finished Runners
type: integer
- jsonPath: .status.deletingEphemeralRunners
name: Deleting Runners
type: integer
name: v1alpha1
schema:
openAPIV3Schema:
@@ -51,9 +63,25 @@ spec:
type: string
githubServerTLS:
properties:
certConfigMapRef:
certificateFrom:
description: Required
type: string
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
maxRunners:
minimum: 0
@@ -86,6 +114,8 @@ spec:
type: object
runnerGroup:
type: string
runnerScaleSetName:
type: string
template:
description: Required
properties:
@@ -4288,6 +4318,12 @@ spec:
properties:
currentRunners:
type: integer
failedEphemeralRunners:
type: integer
pendingEphemeralRunners:
type: integer
runningEphemeralRunners:
type: integer
state:
type: string
type: object
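
With the certificateFrom.configMapKeyRef schema above, a custom CA for a GitHub Enterprise Server host is referenced by ConfigMap name and key. A hedged terratest sketch of setting it through the runner-set chart — the host, ConfigMap name and key are placeholders, and it assumes the Go API types mirror the CRD field names shown here:
package tests
import (
"path/filepath"
"testing"
v1alpha1 "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGitHubServerTLSFromConfigMap(t *testing.T) {
t.Parallel()
helmChartPath, err := filepath.Abs("../../auto-scaling-runner-set")
require.NoError(t, err)
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl":                 "https://ghes.example.com/my-org",
"githubConfigSecret.github_token": "gh_token12345",
"githubServerTLS.certificateFrom.configMapKeyRef.name": "root-ca",
"githubServerTLS.certificateFrom.configMapKeyRef.key":  "ca.crt",
},
KubectlOptions: k8s.NewKubectlOptions("", "", "test-tls"),
}
output := helm.RenderTemplate(t, options, helmChartPath, "test-runners", []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
require.NotNil(t, ars.Spec.GitHubServerTLS, "TLS config should be forwarded into the spec")
require.NotNil(t, ars.Spec.GitHubServerTLS.CertificateFrom)
assert.Equal(t, "root-ca", ars.Spec.GitHubServerTLS.CertificateFrom.ConfigMapKeyRef.Name)
assert.Equal(t, "ca.crt", ars.Spec.GitHubServerTLS.CertificateFrom.ConfigMapKeyRef.Key)
}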

@@ -64,9 +64,25 @@ spec:
type: string
githubServerTLS:
properties:
certConfigMapRef:
certificateFrom:
description: Required
type: string
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
metadata:
description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'

@@ -21,6 +21,18 @@ spec:
- jsonPath: .status.currentReplicas
name: CurrentReplicas
type: integer
- jsonPath: .status.pendingEphemeralRunners
name: Pending Runners
type: integer
- jsonPath: .status.runningEphemeralRunners
name: Running Runners
type: integer
- jsonPath: .status.finishedEphemeralRunners
name: Finished Runners
type: integer
- jsonPath: .status.deletingEphemeralRunners
name: Deleting Runners
type: integer
name: v1alpha1
schema:
openAPIV3Schema:
@@ -46,9 +58,25 @@ spec:
type: string
githubServerTLS:
properties:
certConfigMapRef:
certificateFrom:
description: Required
type: string
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
metadata:
description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
@@ -4280,6 +4308,14 @@ spec:
currentReplicas:
description: CurrentReplicas is the number of currently running EphemeralRunner resources being managed by this EphemeralRunnerSet.
type: integer
failedEphemeralRunners:
type: integer
pendingEphemeralRunners:
type: integer
runningEphemeralRunners:
type: integer
required:
- currentReplicas
type: object
type: object
served: true

@@ -0,0 +1,113 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "gha-runner-scale-set-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gha-runner-scale-set-controller.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "gha-runner-scale-set-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "gha-runner-scale-set-controller.labels" -}}
helm.sh/chart: {{ include "gha-runner-scale-set-controller.chart" . }}
{{ include "gha-runner-scale-set-controller.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "gha-runner-scale-set-controller.selectorLabels" -}}
app.kubernetes.io/name: {{ include "gha-runner-scale-set-controller.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "gha-runner-scale-set-controller.serviceAccountName" -}}
{{- if eq .Values.serviceAccount.name "default"}}
{{- fail "serviceAccount.name cannot be set to 'default'" }}
{{- end }}
{{- if .Values.serviceAccount.create }}
{{- default (include "gha-runner-scale-set-controller.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- if not .Values.serviceAccount.name }}
{{- fail "serviceAccount.name must be set if serviceAccount.create is false" }}
{{- else }}
{{- .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set-controller.managerClusterRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-cluster-role
{{- end }}
{{- define "gha-runner-scale-set-controller.managerClusterRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-cluster-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-single-namespace-role
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-single-namespace-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.managerListenerRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-listener-role
{{- end }}
{{- define "gha-runner-scale-set-controller.managerListenerRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-listener-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.leaderElectionRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election-role
{{- end }}
{{- define "gha-runner-scale-set-controller.leaderElectionRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election-rolebinding
{{- end }}
{{- define "gha-runner-scale-set-controller.imagePullSecretsNames" -}}
{{- $names := list }}
{{- range $k, $v := . }}
{{- $names = append $names $v.name }}
{{- end }}
{{- $names | join ","}}
{{- end }}

@@ -1,15 +1,20 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller-2.fullname" . }}
name: {{ include "gha-runner-scale-set-controller.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller-2.labels" . | nindent 4 }}
{{- include "gha-runner-scale-set-controller.labels" . | nindent 4 }}
actions.github.com/controller-service-account-namespace: {{ .Release.Namespace }}
actions.github.com/controller-service-account-name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
{{- if .Values.flags.watchSingleNamespace }}
actions.github.com/controller-watch-single-namespace: {{ .Values.flags.watchSingleNamespace }}
{{- end }}
spec:
replicas: {{ default 1 .Values.replicaCount }}
selector:
matchLabels:
{{- include "actions-runner-controller-2.selectorLabels" . | nindent 6 }}
{{- include "gha-runner-scale-set-controller.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
@@ -18,16 +23,16 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
app.kubernetes.io/part-of: actions-runner-controller
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/component: controller-manager
app.kubernetes.io/version: {{ .Chart.Version }}
{{- include "actions-runner-controller-2.selectorLabels" . | nindent 8 }}
{{- include "gha-runner-scale-set-controller.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller-2.serviceAccountName" . }}
serviceAccountName: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
@@ -43,21 +48,22 @@ spec:
- "--auto-scaling-runner-set-only"
{{- if gt (int (default 1 .Values.replicaCount)) 1 }}
- "--enable-leader-election"
- "--leader-election-id={{ include "actions-runner-controller-2.fullname" . }}"
- "--leader-election-id={{ include "gha-runner-scale-set-controller.fullname" . }}"
{{- end }}
{{- with .Values.imagePullSecrets }}
- "--auto-scaler-image-pull-secrets={{ include "actions-runner-controller-2.imagePullSecretsNames" . }}"
- "--auto-scaler-image-pull-secrets={{ include "gha-runner-scale-set-controller.imagePullSecretsNames" . }}"
{{- end }}
{{- with .Values.flags.logLevel }}
- "--log-level={{ . }}"
{{- end }}
{{- with .Values.flags.watchSingleNamespace }}
- "--watch-single-namespace={{ . }}"
{{- end }}
command:
- "/manager"
env:
- name: CONTROLLER_MANAGER_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CONTROLLER_MANAGER_CONTAINER_IMAGE
value: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
- name: CONTROLLER_MANAGER_POD_NAMESPACE
valueFrom:
fieldRef:
@@ -98,4 +104,4 @@ spec:
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
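
The --auto-scaler-image-pull-secrets flag above is built by the imagePullSecretsNames helper, which collects the name of each entry in .Values.imagePullSecrets and joins them with commas. A sketch of the round trip; the two secret names are placeholders:
package tests
import (
"path/filepath"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
)
func TestDeploymentJoinsImagePullSecretNames(t *testing.T) {
t.Parallel()
chartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
options := &helm.Options{
SetValues: map[string]string{
"imagePullSecrets[0].name": "ghcr-pull",
"imagePullSecrets[1].name": "internal-pull",
},
KubectlOptions: k8s.NewKubectlOptions("", "", "test-pull"),
}
output := helm.RenderTemplate(t, options, chartPath, "test-arc", []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
// {{ $v.name }} of each entry, joined with "," by the helper.
assert.Contains(t, deployment.Spec.Template.Spec.Containers[0].Args,
"--auto-scaler-image-pull-secrets=ghcr-pull,internal-pull")
}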

@@ -1,9 +1,9 @@
{{- if gt (int (default 1 .Values.replicaCount)) 1 -}}
{{- if gt (int (default 1 .Values.replicaCount)) 1 }}
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "actions-runner-controller-2.leaderElectionRoleName" . }}
name: {{ include "gha-runner-scale-set-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: ["coordination.k8s.io"]

@@ -0,0 +1,15 @@
{{- if gt (int (default 1 .Values.replicaCount)) 1 }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.leaderElectionRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.leaderElectionRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
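
Both the leader-election Role above and this RoleBinding sit behind the same replicaCount guard as the --enable-leader-election flag in the deployment, so a single-replica install creates neither. A sketch of asserting both sides of the guard; the test name and template file name are assumptions, and an empty guarded render surfaces as helm's "could not find template" error, just as in the chart tests elsewhere in this diff:
package tests
import (
"path/filepath"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLeaderElectionOnlyWithMultipleReplicas(t *testing.T) {
t.Parallel()
chartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
kubectl := k8s.NewKubectlOptions("", "", "test-le")
// replicaCount defaults to 1, so the guarded template must render nothing.
single := &helm.Options{KubectlOptions: kubectl}
_, err = helm.RenderTemplateE(t, single, chartPath, "test-arc", []string{"templates/leader_election_role_binding.yaml"})
assert.Error(t, err, "role binding should not render for a single replica")
// With two replicas the Role, RoleBinding and --enable-leader-election all appear.
multi := &helm.Options{
SetValues:      map[string]string{"replicaCount": "2"},
KubectlOptions: kubectl,
}
helm.RenderTemplate(t, multi, chartPath, "test-arc", []string{"templates/leader_election_role_binding.yaml"})
}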

@@ -1,7 +1,8 @@
{{- if empty .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller-2.managerRoleName" . }}
name: {{ include "gha-runner-scale-set-controller.managerClusterRoleName" . }}
rules:
- apiGroups:
- actions.github.com
@@ -20,6 +21,7 @@ rules:
resources:
- autoscalingrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
@@ -54,6 +56,7 @@ rules:
resources:
- autoscalinglisteners/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
@@ -92,13 +95,8 @@ rules:
resources:
- ephemeralrunners/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
@@ -112,38 +110,13 @@ rules:
resources:
- pods
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/status
verbs:
- get
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- watch
- update
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
@@ -151,10 +124,6 @@ rules:
resources:
- rolebindings
verbs:
- create
- delete
- get
- update
- list
- watch
- apiGroups:
@@ -162,9 +131,6 @@ rules:
resources:
- roles
verbs:
- create
- delete
- get
- update
- list
- watch
- watch
{{- end }}

@@ -0,0 +1,14 @@
{{- if empty .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerClusterRoleBinding" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "gha-runner-scale-set-controller.managerClusterRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

@@ -0,0 +1,40 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.managerListenerRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- apiGroups:
- ""
resources:
- pods/status
verbs:
- get
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- patch
- update
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- patch
- update

@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerListenerRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.managerListenerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

@@ -0,0 +1,84 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners/finalizers
verbs:
- patch
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- list
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets
verbs:
- list
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets
verbs:
- list
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners
verbs:
- list
- watch
{{- end }}

@@ -0,0 +1,15 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

@@ -0,0 +1,117 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
namespace: {{ .Values.flags.watchSingleNamespace }}
rules:
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalingrunnersets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunners/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- autoscalinglisteners
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- list
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- list
- watch
{{- end }}
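
Together with the controller-namespace Role earlier, this watch-namespace Role completes single-namespace mode: the manager keeps full control of listeners next to itself, gets CRUD on runner sets, ephemeral runner sets and ephemeral runners only inside .Values.flags.watchSingleNamespace, and the ClusterRole pair is skipped entirely. A sketch of checking where each Role lands, using the template file names from the chart tests below; the namespace values are placeholders:
package tests
import (
"path/filepath"
"strings"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
rbacv1 "k8s.io/api/rbac/v1"
)
func TestSingleNamespaceRolesLandInTheRightNamespaces(t *testing.T) {
t.Parallel()
chartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
controllerNs := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues:      map[string]string{"flags.watchSingleNamespace": "demo-runners"},
KubectlOptions: k8s.NewKubectlOptions("", "", controllerNs),
}
// The controller-side Role stays in the release namespace.
output := helm.RenderTemplate(t, options, chartPath, "test-arc", []string{"templates/manager_single_namespace_controller_role.yaml"})
var controllerRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &controllerRole)
assert.Equal(t, controllerNs, controllerRole.Namespace)
// The watch-side Role lands in the watched namespace.
output = helm.RenderTemplate(t, options, chartPath, "test-arc", []string{"templates/manager_single_namespace_watch_role.yaml"})
var watchRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &watchRole)
assert.Equal(t, "demo-runners", watchRole.Namespace)
// The cluster-scoped pair is guarded by `empty .Values.flags.watchSingleNamespace`.
_, err = helm.RenderTemplateE(t, options, chartPath, "test-arc", []string{"templates/manager_cluster_role.yaml"})
assert.Error(t, err, "cluster role should be skipped in single-namespace mode")
}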

@@ -0,0 +1,15 @@
{{- if .Values.flags.watchSingleNamespace }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" . }}
namespace: {{ .Values.flags.watchSingleNamespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

@@ -1,11 +1,11 @@
{{- if .Values.serviceAccount.create -}}
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller-2.serviceAccountName" . }}
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller-2.labels" . | nindent 4 }}
{{- include "gha-runner-scale-set-controller.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}

@@ -0,0 +1,824 @@
package tests
import (
"os"
"path/filepath"
"strings"
"testing"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v2"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
)
type Chart struct {
Version string `yaml:"version"`
AppVersion string `yaml:"appVersion"`
}
func TestTemplate_CreateServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", serviceAccount.Name)
assert.Equal(t, "bar", string(serviceAccount.Annotations["foo"]))
}
func TestTemplate_CreateServiceAccount_OverwriteName(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "overwritten-name",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
var serviceAccount corev1.ServiceAccount
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "overwritten-name", serviceAccount.Name)
assert.Equal(t, "bar", string(serviceAccount.Annotations["foo"]))
}
func TestTemplate_CreateServiceAccount_CannotUseDefaultServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "default",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
assert.ErrorContains(t, err, "serviceAccount.name cannot be set to 'default'", "We should get an error because the default service account cannot be used")
}
func TestTemplate_NotCreateServiceAccount(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.name": "overwritten-name",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/serviceaccount.yaml"})
assert.ErrorContains(t, err, "could not find template templates/serviceaccount.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_NotCreateServiceAccount_ServiceAccountNotSet(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.annotations.foo": "bar",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
assert.ErrorContains(t, err, "serviceAccount.name must be set if serviceAccount.create is false", "We should get an error because the default service account cannot be used")
}
func TestTemplate_CreateManagerClusterRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role.yaml"})
var managerClusterRole rbacv1.ClusterRole
helm.UnmarshalK8SYaml(t, output, &managerClusterRole)
assert.Empty(t, managerClusterRole.Namespace, "ClusterRole should not have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-role", managerClusterRole.Name)
assert.Equal(t, 15, len(managerClusterRole.Rules))
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_controller_role.yaml in chart", "We should get an error because the template should be skipped")
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_watch_role.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_ManagerClusterRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role_binding.yaml"})
var managerClusterRoleBinding rbacv1.ClusterRoleBinding
helm.UnmarshalK8SYaml(t, output, &managerClusterRoleBinding)
assert.Empty(t, managerClusterRoleBinding.Namespace, "ClusterRoleBinding should not have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-rolebinding", managerClusterRoleBinding.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-role", managerClusterRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerClusterRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerClusterRoleBinding.Subjects[0].Namespace)
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role_binding.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_controller_role_binding.yaml in chart", "We should get an error because the template should be skipped")
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role_binding.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_watch_role_binding.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_CreateManagerListenerRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_listener_role.yaml"})
var managerListenerRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerListenerRole)
assert.Equal(t, namespaceName, managerListenerRole.Namespace, "Role should have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-role", managerListenerRole.Name)
assert.Equal(t, 4, len(managerListenerRole.Rules))
assert.Equal(t, "pods", managerListenerRole.Rules[0].Resources[0])
assert.Equal(t, "pods/status", managerListenerRole.Rules[1].Resources[0])
assert.Equal(t, "secrets", managerListenerRole.Rules[2].Resources[0])
assert.Equal(t, "serviceaccounts", managerListenerRole.Rules[3].Resources[0])
}
func TestTemplate_ManagerListenerRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_listener_role_binding.yaml"})
var managerListenerRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerListenerRoleBinding)
assert.Equal(t, namespaceName, managerListenerRoleBinding.Namespace, "RoleBinding should have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-rolebinding", managerListenerRoleBinding.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-role", managerListenerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerListenerRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerListenerRoleBinding.Subjects[0].Namespace)
}
func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"image.tag": "dev",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, namespaceName, deployment.Labels["actions.github.com/controller-service-account-namespace"])
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.NotContains(t, deployment.Labels, "actions.github.com/controller-watch-single-namespace")
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 0)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Nil(t, deployment.Spec.Template.Spec.SecurityContext)
assert.Empty(t, deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 0)
assert.Nil(t, deployment.Spec.Template.Spec.Affinity)
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 0)
managerImage := "ghcr.io/actions/gha-runner-scale-set-controller:dev"
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 2)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Resources)
assert.Nil(t, deployment.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"labels.foo": "bar",
"labels.github": "actions",
"replicaCount": "1",
"image.pullPolicy": "Always",
"image.tag": "dev",
"imagePullSecrets[0].name": "dockerhub",
"nameOverride": "gha-runner-scale-set-controller-override",
"fullnameOverride": "gha-runner-scale-set-controller-fullname-override",
"serviceAccount.name": "gha-runner-scale-set-controller-sa",
"podAnnotations.foo": "bar",
"podSecurityContext.fsGroup": "1000",
"securityContext.runAsUser": "1000",
"securityContext.runAsNonRoot": "true",
"resources.limits.cpu": "500m",
"nodeSelector.foo": "bar",
"tolerations[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator": "bar",
"priorityClassName": "test-priority-class",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "gha-runner-scale-set-controller-fullname-override", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "bar", deployment.Labels["foo"])
assert.Equal(t, "actions", deployment.Labels["github"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "bar", deployment.Spec.Template.Annotations["foo"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 1)
assert.Equal(t, "dockerhub", deployment.Spec.Template.Spec.ImagePullSecrets[0].Name)
assert.Equal(t, "gha-runner-scale-set-controller-sa", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.SecurityContext.FSGroup)
assert.Equal(t, "test-priority-class", deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 1)
assert.Equal(t, "bar", deployment.Spec.Template.Spec.NodeSelector["foo"])
assert.NotNil(t, deployment.Spec.Template.Spec.Affinity.NodeAffinity)
assert.Equal(t, "foo", deployment.Spec.Template.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Key)
assert.Equal(t, "bar", string(deployment.Spec.Template.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Operator))
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 1)
assert.Equal(t, "foo", deployment.Spec.Template.Spec.Tolerations[0].Key)
managerImage := "ghcr.io/actions/gha-runner-scale-set-controller:dev"
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullAlways, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "500m", deployment.Spec.Template.Spec.Containers[0].Resources.Limits.Cpu().String())
assert.True(t, *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsNonRoot)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsUser)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
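// Expressed as a values file instead of --set flags, the overrides exercised by
// TestTemplate_ControllerDeployment_Customize correspond roughly to the YAML
// below (a sketch for orientation only; entries mirror the SetValues map above,
// with the affinity term omitted for brevity):
//
//	labels: {foo: bar, github: actions}
//	replicaCount: 1
//	image: {pullPolicy: Always, tag: dev}
//	imagePullSecrets: [{name: dockerhub}]
//	nameOverride: gha-runner-scale-set-controller-override
//	fullnameOverride: gha-runner-scale-set-controller-fullname-override
//	serviceAccount: {name: gha-runner-scale-set-controller-sa}
//	podAnnotations: {foo: bar}
//	podSecurityContext: {fsGroup: 1000}
//	securityContext: {runAsUser: 1000, runAsNonRoot: true}
//	resources: {limits: {cpu: 500m}}
//	nodeSelector: {foo: bar}
//	tolerations: [{key: foo}]
//	priorityClassName: test-priority-class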
func TestTemplate_EnableLeaderElectionRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/leader_election_role.yaml"})
var leaderRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &leaderRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-role", leaderRole.Name)
assert.Equal(t, namespaceName, leaderRole.Namespace)
}
func TestTemplate_EnableLeaderElectionRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/leader_election_role_binding.yaml"})
var leaderRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &leaderRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-rolebinding", leaderRoleBinding.Name)
assert.Equal(t, namespaceName, leaderRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-role", leaderRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", leaderRoleBinding.Subjects[0].Name)
}
func TestTemplate_EnableLeaderElection(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"replicaCount": "2",
"image.tag": "dev",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, int32(2), *deployment.Spec.Replicas)
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/gha-runner-scale-set-controller:dev", deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 4)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--enable-leader-election", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--leader-election-id=test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[3])
}
func TestTemplate_ControllerDeployment_ForwardImagePullSecrets(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"imagePullSecrets[0].name": "dockerhub",
"imagePullSecrets[1].name": "ghcr",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub,ghcr", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
}
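// The same forwarding can be driven from a values file; the two entries below
// collapse into the single comma-separated flag asserted above (a sketch):
//
//	imagePullSecrets:
//	  - name: dockerhub
//	  - name: ghcr
//
// which renders --auto-scaler-image-pull-secrets=dockerhub,ghcr on the manager container.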
func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"image.tag": "dev",
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, namespaceName, deployment.Labels["actions.github.com/controller-service-account-namespace"])
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.Equal(t, "demo", deployment.Labels["actions.github.com/controller-watch-single-namespace"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 0)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Nil(t, deployment.Spec.Template.Spec.SecurityContext)
assert.Empty(t, deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
assert.Len(t, deployment.Spec.Template.Spec.Volumes, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Volumes[0].Name)
assert.NotNil(t, deployment.Spec.Template.Spec.Volumes[0].EmptyDir)
assert.Len(t, deployment.Spec.Template.Spec.NodeSelector, 0)
assert.Nil(t, deployment.Spec.Template.Spec.Affinity)
assert.Len(t, deployment.Spec.Template.Spec.Tolerations, 0)
managerImage := "ghcr.io/actions/gha-runner-scale-set-controller:dev"
assert.Len(t, deployment.Spec.Template.Spec.Containers, 1)
assert.Equal(t, "manager", deployment.Spec.Template.Spec.Containers[0].Name)
assert.Equal(t, "ghcr.io/actions/gha-runner-scale-set-controller:dev", deployment.Spec.Template.Spec.Containers[0].Image)
assert.Equal(t, corev1.PullIfNotPresent, deployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--watch-single-namespace=demo", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Resources)
assert.Nil(t, deployment.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
assert.Equal(t, "tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].Name)
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_WatchSingleNamespace_NotCreateManagerClusterRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_cluster_role.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_WatchSingleNamespace_NotManagerClusterRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"serviceAccount.create": "true",
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_cluster_role_binding.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_cluster_role_binding.yaml in chart", "We should get an error because the template should be skipped")
}
func TestTemplate_CreateManagerSingleNamespaceRole(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role.yaml"})
var managerSingleNamespaceControllerRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceControllerRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceControllerRole.Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRole.Namespace)
assert.Equal(t, 10, len(managerSingleNamespaceControllerRole.Rules))
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role.yaml"})
var managerSingleNamespaceWatchRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceWatchRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceWatchRole.Name)
assert.Equal(t, "demo", managerSingleNamespaceWatchRole.Namespace)
assert.Equal(t, 13, len(managerSingleNamespaceWatchRole.Rules))
}
func TestTemplate_ManagerSingleNamespaceRoleBinding(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role_binding.yaml"})
var managerSingleNamespaceControllerRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceControllerRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-rolebinding", managerSingleNamespaceControllerRoleBinding.Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceControllerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerSingleNamespaceControllerRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRoleBinding.Subjects[0].Namespace)
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role_binding.yaml"})
var managerSingleNamespaceWatchRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceWatchRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-rolebinding", managerSingleNamespaceWatchRoleBinding.Name)
assert.Equal(t, "demo", managerSingleNamespaceWatchRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceWatchRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerSingleNamespaceWatchRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerSingleNamespaceWatchRoleBinding.Subjects[0].Namespace)
}

View File

@@ -1,15 +1,15 @@
-# Default values for actions-runner-controller-2.
+# Default values for gha-runner-scale-set-controller.
 # This is a YAML-formatted file.
 # Declare variables to be passed into your templates.
 labels: {}
 # leaderElection will be enabled when replicaCount > 1,
 # so only one replica will be in charge of reconciliation at a given time.
-# leaderElectionId will be set to {{ define actions-runner-controller-2.fullname }}.
+# leaderElectionId will be set to {{ define gha-runner-scale-set-controller.fullname }}.
 replicaCount: 1
 image:
-  repository: "ghcr.io/actions/actions-runner-controller-2"
+  repository: "ghcr.io/actions/gha-runner-scale-set-controller"
   pullPolicy: IfNotPresent
   # Overrides the image tag whose default is the chart appVersion.
   tag: ""
@@ -68,3 +68,7 @@ flags:
   # Log level can be set here with one of the following values: "debug", "info", "warn", "error".
   # Defaults to "debug".
   logLevel: "debug"
+  # Restricts the controller to only watch resources in the desired namespace.
+  # Defaults to watching all namespaces when unset.
+  # watchSingleNamespace: ""
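Read together with the replicaCount comment above, a hedged override enabling both leader election and single-namespace watching could look like this (the namespace "demo" is illustrative):

replicaCount: 2            # replicaCount > 1 turns on leader election
flags:
  logLevel: "debug"
  watchSingleNamespace: "demo"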

View File

@@ -1,5 +1,5 @@
 apiVersion: v2
-name: auto-scaling-runner-set
+name: gha-runner-scale-set
 description: A Helm chart for deploying an AutoScalingRunnerSet
 # A chart can be either an 'application' or a 'library' chart.
@@ -15,13 +15,13 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.2.0
+version: 0.3.0
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to
 # follow Semantic Versioning. They should reflect the version the application is using.
 # It is recommended to use it with quotes.
-appVersion: "0.2.0"
+appVersion: "0.3.0"
 home: https://github.com/actions/dev-arc
@@ -30,4 +30,4 @@ sources:
 maintainers:
   - name: actions
-    url: https://github.com/actions
\ No newline at end of file
+    url: https://github.com/actions

View File

@@ -3,4 +3,4 @@
 githubConfigUrl: https://github.com/actions/actions-runner-controller
 githubConfigSecret:
-  github_token: test
\ No newline at end of file
+  github_token: test

View File

@@ -0,0 +1,551 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "gha-runner-scale-set.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gha-runner-scale-set.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "gha-runner-scale-set.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "gha-runner-scale-set.labels" -}}
helm.sh/chart: {{ include "gha-runner-scale-set.chart" . }}
{{ include "gha-runner-scale-set.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "gha-runner-scale-set.selectorLabels" -}}
app.kubernetes.io/name: {{ include "gha-runner-scale-set.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- define "gha-runner-scale-set.githubsecret" -}}
{{- if kindIs "string" .Values.githubConfigSecret }}
{{- if not (empty .Values.githubConfigSecret) }}
{{- .Values.githubConfigSecret }}
{{- else}}
{{- fail "Values.githubConfigSecret is required for setting auth with GitHub server." }}
{{- end }}
{{- else }}
{{- include "gha-runner-scale-set.fullname" . }}-github-secret
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.noPermissionServiceAccountName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-no-permission-service-account
{{- end }}
{{- define "gha-runner-scale-set.kubeModeRoleName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-role
{{- end }}
{{- define "gha-runner-scale-set.kubeModeServiceAccountName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-service-account
{{- end }}
{{- define "gha-runner-scale-set.dind-init-container" -}}
{{- range $i, $val := .Values.template.spec.containers }}
{{- if eq $val.name "runner" }}
image: {{ $val.image }}
command: ["cp"]
args: ["-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
volumeMounts:
  - name: dind-externals
    mountPath: /home/runner/tmpDir
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.dind-container" -}}
image: docker:dind
securityContext:
  privileged: true
volumeMounts:
  - name: work
    mountPath: /home/runner/_work
  - name: dind-cert
    mountPath: /certs/client
  - name: dind-externals
    mountPath: /home/runner/externals
{{- end }}
{{- define "gha-runner-scale-set.dind-volume" -}}
- name: dind-cert
  emptyDir: {}
- name: dind-externals
  emptyDir: {}
{{- end }}
{{- define "gha-runner-scale-set.tls-volume" -}}
- name: github-server-tls-cert
  configMap:
    name: {{ .certificateFrom.configMapKeyRef.name }}
    items:
      - key: {{ .certificateFrom.configMapKeyRef.key }}
        path: {{ .certificateFrom.configMapKeyRef.key }}
{{- end }}
{{- define "gha-runner-scale-set.dind-work-volume" -}}
{{- $createWorkVolume := 1 }}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if eq $volume.name "work" }}
{{- $createWorkVolume = 0 }}
- {{ $volume | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- if eq $createWorkVolume 1 }}
- name: work
  emptyDir: {}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.kubernetes-mode-work-volume" -}}
{{- $createWorkVolume := 1 }}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if eq $volume.name "work" }}
{{- $createWorkVolume = 0 }}
- {{ $volume | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- if eq $createWorkVolume 1 }}
- name: work
  ephemeral:
    volumeClaimTemplate:
      spec:
{{- .Values.containerMode.kubernetesModeWorkVolumeClaim | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.non-work-volumes" -}}
{{- range $i, $volume := .Values.template.spec.volumes }}
{{- if ne $volume.name "work" }}
- {{ $volume | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.non-runner-containers" -}}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if ne $container.name "runner" }}
- {{ $container | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.non-runner-non-dind-containers" -}}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if and (ne $container.name "runner") (ne $container.name "dind") }}
- {{ $container | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.dind-runner-container" -}}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if eq $container.name "runner" }}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
{{ $key }}: {{ $val | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- $setDockerHost := 1 }}
{{- $setDockerTlsVerify := 1 }}
{{- $setDockerCertPath := 1 }}
{{- $setRunnerWaitDocker := 1 }}
{{- $setNodeExtraCaCerts := 0 }}
{{- $setRunnerUpdateCaCerts := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $setNodeExtraCaCerts = 1 }}
{{- $setRunnerUpdateCaCerts = 1 }}
{{- end }}
env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "DOCKER_HOST" }}
{{- $setDockerHost = 0 }}
{{- end }}
{{- if eq $env.name "DOCKER_TLS_VERIFY" }}
{{- $setDockerTlsVerify = 0 }}
{{- end }}
{{- if eq $env.name "DOCKER_CERT_PATH" }}
{{- $setDockerCertPath = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_WAIT_FOR_DOCKER_IN_SECONDS" }}
{{- $setRunnerWaitDocker = 0 }}
{{- end }}
{{- if eq $env.name "NODE_EXTRA_CA_CERTS" }}
{{- $setNodeExtraCaCerts = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_UPDATE_CA_CERTS" }}
{{- $setRunnerUpdateCaCerts = 0 }}
{{- end }}
- {{ $env | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $setDockerHost }}
- name: DOCKER_HOST
  value: tcp://localhost:2376
{{- end }}
{{- if $setDockerTlsVerify }}
- name: DOCKER_TLS_VERIFY
  value: "1"
{{- end }}
{{- if $setDockerCertPath }}
- name: DOCKER_CERT_PATH
  value: /certs/client
{{- end }}
{{- if $setRunnerWaitDocker }}
- name: RUNNER_WAIT_FOR_DOCKER_IN_SECONDS
  value: "120"
{{- end }}
{{- if $setNodeExtraCaCerts }}
- name: NODE_EXTRA_CA_CERTS
  value: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
{{- end }}
{{- if $setRunnerUpdateCaCerts }}
- name: RUNNER_UPDATE_CA_CERTS
  value: "1"
{{- end }}
{{- $mountWork := 1 }}
{{- $mountDindCert := 1 }}
{{- $mountGitHubServerTLS := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $mountGitHubServerTLS = 1 }}
{{- end }}
volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "work" }}
{{- $mountWork = 0 }}
{{- end }}
{{- if eq $volMount.name "dind-cert" }}
{{- $mountDindCert = 0 }}
{{- end }}
{{- if eq $volMount.name "github-server-tls-cert" }}
{{- $mountGitHubServerTLS = 0 }}
{{- end }}
- {{ $volMount | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $mountWork }}
- name: work
  mountPath: /home/runner/_work
{{- end }}
{{- if $mountDindCert }}
- name: dind-cert
  mountPath: /certs/client
  readOnly: true
{{- end }}
{{- if $mountGitHubServerTLS }}
- name: github-server-tls-cert
  mountPath: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
  subPath: {{ $tlsConfig.certificateFrom.configMapKeyRef.key }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.kubernetes-mode-runner-container" -}}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if eq $container.name "runner" }}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
{{ $key }}: {{ $val | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- $setContainerHooks := 1 }}
{{- $setPodName := 1 }}
{{- $setRequireJobContainer := 1 }}
{{- $setNodeExtraCaCerts := 0 }}
{{- $setRunnerUpdateCaCerts := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $setNodeExtraCaCerts = 1 }}
{{- $setRunnerUpdateCaCerts = 1 }}
{{- end }}
env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "ACTIONS_RUNNER_CONTAINER_HOOKS" }}
{{- $setContainerHooks = 0 }}
{{- end }}
{{- if eq $env.name "ACTIONS_RUNNER_POD_NAME" }}
{{- $setPodName = 0 }}
{{- end }}
{{- if eq $env.name "ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER" }}
{{- $setRequireJobContainer = 0 }}
{{- end }}
{{- if eq $env.name "NODE_EXTRA_CA_CERTS" }}
{{- $setNodeExtraCaCerts = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_UPDATE_CA_CERTS" }}
{{- $setRunnerUpdateCaCerts = 0 }}
{{- end }}
- {{ $env | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $setContainerHooks }}
- name: ACTIONS_RUNNER_CONTAINER_HOOKS
  value: /home/runner/k8s/index.js
{{- end }}
{{- if $setPodName }}
- name: ACTIONS_RUNNER_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
{{- end }}
{{- if $setRequireJobContainer }}
- name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
  value: "true"
{{- end }}
{{- if $setNodeExtraCaCerts }}
- name: NODE_EXTRA_CA_CERTS
  value: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
{{- end }}
{{- if $setRunnerUpdateCaCerts }}
- name: RUNNER_UPDATE_CA_CERTS
  value: "1"
{{- end }}
{{- $mountWork := 1 }}
{{- $mountGitHubServerTLS := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $mountGitHubServerTLS = 1 }}
{{- end }}
volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "work" }}
{{- $mountWork = 0 }}
{{- end }}
{{- if eq $volMount.name "github-server-tls-cert" }}
{{- $mountGitHubServerTLS = 0 }}
{{- end }}
- {{ $volMount | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- if $mountWork }}
- name: work
  mountPath: /home/runner/_work
{{- end }}
{{- if $mountGitHubServerTLS }}
- name: github-server-tls-cert
  mountPath: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
  subPath: {{ $tlsConfig.certificateFrom.configMapKeyRef.key }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.default-mode-runner-containers" -}}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- range $i, $container := .Values.template.spec.containers }}
{{- if ne $container.name "runner" }}
- {{ $container | toYaml | nindent 2 }}
{{- else }}
- name: {{ $container.name }}
{{- range $key, $val := $container }}
{{- if and (ne $key "env") (ne $key "volumeMounts") (ne $key "name") }}
  {{ $key }}: {{ $val | toYaml | nindent 4 }}
{{- end }}
{{- end }}
{{- $setNodeExtraCaCerts := 0 }}
{{- $setRunnerUpdateCaCerts := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $setNodeExtraCaCerts = 1 }}
{{- $setRunnerUpdateCaCerts = 1 }}
{{- end }}
  env:
{{- with $container.env }}
{{- range $i, $env := . }}
{{- if eq $env.name "NODE_EXTRA_CA_CERTS" }}
{{- $setNodeExtraCaCerts = 0 }}
{{- end }}
{{- if eq $env.name "RUNNER_UPDATE_CA_CERTS" }}
{{- $setRunnerUpdateCaCerts = 0 }}
{{- end }}
    - {{ $env | toYaml | nindent 6 }}
{{- end }}
{{- end }}
{{- if $setNodeExtraCaCerts }}
    - name: NODE_EXTRA_CA_CERTS
      value: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
{{- end }}
{{- if $setRunnerUpdateCaCerts }}
    - name: RUNNER_UPDATE_CA_CERTS
      value: "1"
{{- end }}
{{- $mountGitHubServerTLS := 0 }}
{{- if $tlsConfig.runnerMountPath }}
{{- $mountGitHubServerTLS = 1 }}
{{- end }}
  volumeMounts:
{{- with $container.volumeMounts }}
{{- range $i, $volMount := . }}
{{- if eq $volMount.name "github-server-tls-cert" }}
{{- $mountGitHubServerTLS = 0 }}
{{- end }}
    - {{ $volMount | toYaml | nindent 6 }}
{{- end }}
{{- end }}
{{- if $mountGitHubServerTLS }}
    - name: github-server-tls-cert
      mountPath: {{ clean (print $tlsConfig.runnerMountPath "/" $tlsConfig.certificateFrom.configMapKeyRef.key) }}
      subPath: {{ $tlsConfig.certificateFrom.configMapKeyRef.key }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.managerRoleName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-manager-role
{{- end }}
{{- define "gha-runner-scale-set.managerRoleBinding" -}}
{{- include "gha-runner-scale-set.fullname" . }}-manager-role-binding
{{- end }}
{{- define "gha-runner-scale-set.managerServiceAccountName" -}}
{{- $searchControllerDeployment := 1 }}
{{- if .Values.controllerServiceAccount }}
{{- if .Values.controllerServiceAccount.name }}
{{- $searchControllerDeployment = 0 }}
{{- .Values.controllerServiceAccount.name }}
{{- end }}
{{- end }}
{{- if eq $searchControllerDeployment 1 }}
{{- $multiNamespacesCounter := 0 }}
{{- $singleNamespaceCounter := 0 }}
{{- $controllerDeployment := dict }}
{{- $singleNamespaceControllerDeployments := dict }}
{{- $managerServiceAccountName := "" }}
{{- range $index, $deployment := (lookup "apps/v1" "Deployment" "" "").items }}
{{- if kindIs "map" $deployment.metadata.labels }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-runner-scale-set-controller" }}
{{- if hasKey $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace" }}
{{- $singleNamespaceCounter = add $singleNamespaceCounter 1 }}
{{- $_ := set $singleNamespaceControllerDeployments (get $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace") $deployment}}
{{- else }}
{{- $multiNamespacesCounter = add $multiNamespacesCounter 1 }}
{{- $controllerDeployment = $deployment }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if and (eq $multiNamespacesCounter 0) (eq $singleNamespaceCounter 0) }}
{{- fail "No gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if and (gt $multiNamespacesCounter 0) (gt $singleNamespaceCounter 0) }}
{{- fail "Found both gha-runner-scale-set-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if gt $multiNamespacesCounter 1 }}
{{- fail "More than one gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if eq $multiNamespacesCounter 1 }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountName = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-name") }}
{{- end }}
{{- else if gt $singleNamespaceCounter 0 }}
{{- if hasKey $singleNamespaceControllerDeployments .Release.Namespace }}
{{- $controllerDeployment = get $singleNamespaceControllerDeployments .Release.Namespace }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountName = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-name") }}
{{- end }}
{{- else }}
{{- fail "No gha-runner-scale-set-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- end }}
{{- if eq $managerServiceAccountName "" }}
{{- fail "No service account name found for gha-runner-scale-set-controller deployment using label (actions.github.com/controller-service-account-name), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- $managerServiceAccountName }}
{{- end }}
{{- end }}
{{- define "gha-runner-scale-set.managerServiceAccountNamespace" -}}
{{- $searchControllerDeployment := 1 }}
{{- if .Values.controllerServiceAccount }}
{{- if .Values.controllerServiceAccount.namespace }}
{{- $searchControllerDeployment = 0 }}
{{- .Values.controllerServiceAccount.namespace }}
{{- end }}
{{- end }}
{{- if eq $searchControllerDeployment 1 }}
{{- $multiNamespacesCounter := 0 }}
{{- $singleNamespaceCounter := 0 }}
{{- $controllerDeployment := dict }}
{{- $singleNamespaceControllerDeployments := dict }}
{{- $managerServiceAccountNamespace := "" }}
{{- range $index, $deployment := (lookup "apps/v1" "Deployment" "" "").items }}
{{- if kindIs "map" $deployment.metadata.labels }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-runner-scale-set-controller" }}
{{- if hasKey $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace" }}
{{- $singleNamespaceCounter = add $singleNamespaceCounter 1 }}
{{- $_ := set $singleNamespaceControllerDeployments (get $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace") $deployment}}
{{- else }}
{{- $multiNamespacesCounter = add $multiNamespacesCounter 1 }}
{{- $controllerDeployment = $deployment }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if and (eq $multiNamespacesCounter 0) (eq $singleNamespaceCounter 0) }}
{{- fail "No gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if and (gt $multiNamespacesCounter 0) (gt $singleNamespaceCounter 0) }}
{{- fail "Found both gha-runner-scale-set-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if gt $multiNamespacesCounter 1 }}
{{- fail "More than one gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if eq $multiNamespacesCounter 1 }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountNamespace = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-namespace") }}
{{- end }}
{{- else if gt $singleNamespaceCounter 0 }}
{{- if hasKey $singleNamespaceControllerDeployments .Release.Namespace }}
{{- $controllerDeployment = get $singleNamespaceControllerDeployments .Release.Namespace }}
{{- with $controllerDeployment.metadata }}
{{- $managerServiceAccountNamespace = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-namespace") }}
{{- end }}
{{- else }}
{{- fail "No gha-runner-scale-set-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- end }}
{{- if eq $managerServiceAccountNamespace "" }}
{{- fail "No service account namespace found for gha-runner-scale-set-controller deployment using label (actions.github.com/controller-service-account-namespace), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- $managerServiceAccountNamespace }}
{{- end }}
{{- end }}
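All of the fail branches above point at the same escape hatch: skip deployment discovery by naming the controller's service account explicitly. An illustrative override for the gha-runner-scale-set chart (both values are assumptions about a concrete install, not chart defaults):

controllerServiceAccount:
  namespace: arc-systems                          # namespace the controller runs in (assumed)
  name: test-arc-gha-runner-scale-set-controller  # its service account name (assumed)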

View File

@@ -1,33 +1,56 @@
 apiVersion: actions.github.com/v1alpha1
 kind: AutoscalingRunnerSet
 metadata:
+  {{- if or (not .Release.Name) (gt (len .Release.Name) 45) }}
+  {{ fail "Name must have up to 45 characters" }}
+  {{- end }}
+  {{- if gt (len .Release.Namespace) 63 }}
+  {{ fail "Namespace must have up to 63 characters" }}
+  {{- end }}
   name: {{ .Release.Name }}
   namespace: {{ .Release.Namespace }}
   labels:
-    {{- include "auto-scaling-runner-set.labels" . | nindent 4 }}
+    {{- include "gha-runner-scale-set.labels" . | nindent 4 }}
 spec:
-  githubConfigUrl: {{ required ".Values.githubConfigUrl is required" .Values.githubConfigUrl }}
-  githubConfigSecret: {{ include "auto-scaling-runner-set.githubsecret" . }}
+  githubConfigUrl: {{ required ".Values.githubConfigUrl is required" (trimSuffix "/" .Values.githubConfigUrl) }}
+  githubConfigSecret: {{ include "gha-runner-scale-set.githubsecret" . }}
   {{- with .Values.runnerGroup }}
   runnerGroup: {{ . }}
   {{- end }}
   {{- with .Values.runnerScaleSetName }}
   runnerScaleSetName: {{ . }}
   {{- end }}
+  {{- if .Values.githubServerTLS }}
+  githubServerTLS:
+    {{- with .Values.githubServerTLS.certificateFrom }}
+    certificateFrom:
+      configMapKeyRef:
+        name: {{ .configMapKeyRef.name }}
+        key: {{ .configMapKeyRef.key }}
+    {{- end }}
+  {{- end }}
+  {{- if .Values.proxy }}
+  proxy:
+    {{- if .Values.proxy.http }}
+    http:
+      url: {{ .Values.proxy.http.url }}
+      {{- if .Values.proxy.http.credentialSecretRef }}
+      credentialSecretRef: {{ .Values.proxy.http.credentialSecretRef }}
+      {{ end }}
+    {{- end }}
+    {{- end }}
+    {{- if .Values.proxy.https }}
+    https:
+      url: {{ .Values.proxy.https.url }}
+      {{- if .Values.proxy.https.credentialSecretRef }}
+      credentialSecretRef: {{ .Values.proxy.https.credentialSecretRef }}
+      {{ end }}
+    {{- end }}
+    {{- end }}
+    {{- if and .Values.proxy.noProxy (kindIs "slice" .Values.proxy.noProxy) }}
+    noProxy: {{ .Values.proxy.noProxy | toYaml | nindent 6 }}
+    {{ end }}
+    {{ end }}
+  {{- end }}
+  {{- end }}
   {{- if and (or (kindIs "int64" .Values.minRunners) (kindIs "float64" .Values.minRunners)) (or (kindIs "int64" .Values.maxRunners) (kindIs "float64" .Values.maxRunners)) }}
   {{- if gt .Values.minRunners .Values.maxRunners }}
@@ -68,41 +91,50 @@ spec:
       {{- end }}
      {{- end }}
      {{- if eq .Values.containerMode.type "kubernetes" }}
-      serviceAccountName: {{ default (include "auto-scaling-runner-set.kubeModeServiceAccountName" .) .Values.template.spec.serviceAccountName }}
+      serviceAccountName: {{ default (include "gha-runner-scale-set.kubeModeServiceAccountName" .) .Values.template.spec.serviceAccountName }}
      {{- else }}
-      serviceAccountName: {{ default (include "auto-scaling-runner-set.noPermissionServiceAccountName" .) .Values.template.spec.serviceAccountName }}
+      serviceAccountName: {{ default (include "gha-runner-scale-set.noPermissionServiceAccountName" .) .Values.template.spec.serviceAccountName }}
      {{- end }}
      {{- if or .Values.template.spec.initContainers (eq .Values.containerMode.type "dind") }}
      initContainers:
      {{- if eq .Values.containerMode.type "dind" }}
      - name: init-dind-externals
-        {{- include "auto-scaling-runner-set.dind-init-container" . | nindent 8 }}
+        {{- include "gha-runner-scale-set.dind-init-container" . | nindent 8 }}
      {{- end }}
      {{- with .Values.template.spec.initContainers }}
      {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- end }}
-      containers: 
+      containers:
      {{- if eq .Values.containerMode.type "dind" }}
      - name: runner
-        {{- include "auto-scaling-runner-set.dind-runner-container" . | nindent 8 }}
+        {{- include "gha-runner-scale-set.dind-runner-container" . | nindent 8 }}
      - name: dind
-        {{- include "auto-scaling-runner-set.dind-container" . | nindent 8 }}
-      {{- include "auto-scaling-runner-set.non-runner-containers" . | nindent 6 }}
+        {{- include "gha-runner-scale-set.dind-container" . | nindent 8 }}
+      {{- include "gha-runner-scale-set.non-runner-non-dind-containers" . | nindent 6 }}
      {{- else if eq .Values.containerMode.type "kubernetes" }}
      - name: runner
-        {{- include "auto-scaling-runner-set.kubernetes-mode-runner-container" . | nindent 8 }}
-      {{- include "auto-scaling-runner-set.non-runner-containers" . | nindent 6 }}
+        {{- include "gha-runner-scale-set.kubernetes-mode-runner-container" . | nindent 8 }}
+      {{- include "gha-runner-scale-set.non-runner-containers" . | nindent 6 }}
      {{- else }}
-      {{ .Values.template.spec.containers | toYaml | nindent 6 }}
+      {{- include "gha-runner-scale-set.default-mode-runner-containers" . | nindent 6 }}
      {{- end }}
-      {{- if or .Values.template.spec.volumes (eq .Values.containerMode.type "dind") (eq .Values.containerMode.type "kubernetes") }}
-      volumes:
-      {{- if eq .Values.containerMode.type "dind" }}
-      {{- include "auto-scaling-runner-set.dind-volume" . | nindent 6 }}
-      {{- include "auto-scaling-runner-set.dind-work-volume" . | nindent 6 }}
-      {{- else if eq .Values.containerMode.type "kubernetes" }}
-      {{- include "auto-scaling-runner-set.kubernetes-mode-work-volume" . | nindent 6 }}
+      {{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
+      {{- if or .Values.template.spec.volumes (eq .Values.containerMode.type "dind") (eq .Values.containerMode.type "kubernetes") $tlsConfig.runnerMountPath }}
+      volumes:
+      {{- if $tlsConfig.runnerMountPath }}
+      {{- include "gha-runner-scale-set.tls-volume" $tlsConfig | nindent 6 }}
+      {{- end }}
-      {{- include "auto-scaling-runner-set.non-work-volumes" . | nindent 6 }}
-      {{- end }}
+      {{- if eq .Values.containerMode.type "dind" }}
+      {{- include "gha-runner-scale-set.dind-volume" . | nindent 6 }}
+      {{- include "gha-runner-scale-set.dind-work-volume" . | nindent 6 }}
+      {{- include "gha-runner-scale-set.non-work-volumes" . | nindent 6 }}
+      {{- else if eq .Values.containerMode.type "kubernetes" }}
+      {{- include "gha-runner-scale-set.kubernetes-mode-work-volume" . | nindent 6 }}
+      {{- include "gha-runner-scale-set.non-work-volumes" . | nindent 6 }}
+      {{- else }}
+      {{- with .Values.template.spec.volumes }}
+      {{- toYaml . | nindent 6 }}
+      {{- end }}
+      {{- end }}
      {{- end }}


@@ -2,10 +2,10 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "auto-scaling-runner-set.githubsecret" . }}
name: {{ include "gha-runner-scale-set.githubsecret" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "auto-scaling-runner-set.labels" . | nindent 4 }}
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
finalizers:
- actions.github.com/secret-protection
data:


@@ -3,7 +3,7 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "auto-scaling-runner-set.kubeModeRoleName" . }}
name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]


@@ -2,14 +2,14 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "auto-scaling-runner-set.kubeModeRoleName" . }}
name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "auto-scaling-runner-set.kubeModeRoleName" . }}
name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "auto-scaling-runner-set.kubeModeServiceAccountName" . }}
name: {{ include "gha-runner-scale-set.kubeModeServiceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}


@@ -2,8 +2,8 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "auto-scaling-runner-set.kubeModeServiceAccountName" . }}
name: {{ include "gha-runner-scale-set.kubeModeServiceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "auto-scaling-runner-set.labels" . | nindent 4 }}
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,59 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set.managerRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- apiGroups:
- ""
resources:
- pods/status
verbs:
- get
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- patch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- create
- delete
- get
- patch
- update
{{- if .Values.githubServerTLS }}
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
{{- end }}
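A hedged reading of the conditional above: the get permission on configmaps is only granted when githubServerTLS is configured, presumably so the manager can read the CA certificate from the config map referenced by certificateFrom.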


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set.managerRoleBinding" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set.managerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set.managerServiceAccountName" . | nindent 4 }}
namespace: {{ include "gha-runner-scale-set.managerServiceAccountNamespace" . | nindent 4 }}


@@ -2,8 +2,8 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "auto-scaling-runner-set.noPermissionServiceAccountName" . }}
name: {{ include "gha-runner-scale-set.noPermissionServiceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "auto-scaling-runner-set.labels" . | nindent 4 }}
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
{{- end }}

File diff suppressed because it is too large


@@ -2,4 +2,7 @@ githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
maxRunners: 10
minRunners: 5
minRunners: 5
controllerServiceAccount:
name: "arc"
namespace: "arc-system"


@@ -0,0 +1,19 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: other
image: other-image:latest
volumes:
- name: foo
emptyDir: {}
- name: bar
emptyDir: {}
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: dind


@@ -0,0 +1,31 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
env:
- name: DOCKER_HOST
value: tcp://localhost:9999
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
volumes:
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: dind


@@ -0,0 +1,46 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
env:
- name: SOME_ENV
value: SOME_VALUE
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
- name: other
image: other-image:latest
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
volumes:
- name: work
hostPath:
path: /data
type: Directory
dnsPolicy: "None"
dnsConfig:
nameservers:
- 192.0.2.1
containerMode:
type: none


@@ -0,0 +1,12 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
dnsPolicy: "None"
dnsConfig:
nameservers:
- 192.0.2.1


@@ -0,0 +1,17 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: other
image: other-image:latest
volumes:
- name: foo
emptyDir: {}
- name: bar
emptyDir: {}
- name: work
hostPath:
path: /data
type: Directory


@@ -0,0 +1,19 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: other
image: other-image:latest
volumes:
- name: foo
emptyDir: {}
- name: bar
emptyDir: {}
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: kubernetes


@@ -0,0 +1,31 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
containers:
- name: runner
image: runner-image:latest
env:
- name: ACTIONS_RUNNER_CONTAINER_HOOKS
value: /k8s/index.js
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: work
mountPath: /work
- name: others
mountPath: /others
resources:
limits:
memory: "64Mi"
cpu: "250m"
volumes:
- name: work
hostPath:
path: /data
type: Directory
containerMode:
type: kubernetes


@@ -4,7 +4,7 @@ githubConfigUrl: ""
## githubConfigSecret is the k8s secret to use when authenticating with the GitHub API.
## You can choose to use a GitHub App or a PAT token
githubConfigSecret:
githubConfigSecret:
### GitHub Apps Configuration
## NOTE: IDs MUST be strings, use quotes
#github_app_id: ""
@@ -13,7 +13,7 @@ githubConfigSecret:
### GitHub PAT Configuration
github_token: ""
## If you have a pre-defined Kubernetes secret in the same namespace where the auto-scaling-runner-set is going to be deployed,
## If you have a pre-defined Kubernetes secret in the same namespace where the gha-runner-scale-set is going to be deployed,
## you can also reference it via `githubConfigSecret: pre-defined-secret`.
## You need to make sure your predefined secret has all the required secret data set properly.
## For a pre-defined secret using GitHub PAT, the secret needs to be created like this:
@@ -44,26 +44,54 @@ githubConfigSecret:
# runnerGroup: "default"
## template is the PodSpec for each runner Pod
template:
spec:
containers:
- name: runner
image: ghcr.io/actions/actions-runner:latest
command: ["/actions-runner/run.sh"]
## name of the runner scale set to create. Defaults to the helm release name
# runnerScaleSetName: ""
## A self-signed CA certificate for communication with the GitHub server can be
## provided using a config map key selector. If `runnerMountPath` is set, for
## each runner pod ARC will:
## - create a `github-server-tls-cert` volume containing the certificate
## specified in `certificateFrom`
## - mount that volume on path `runnerMountPath`/{certificate name}
## - set NODE_EXTRA_CA_CERTS environment variable to that same path
## - set RUNNER_UPDATE_CA_CERTS environment variable to "1" (as of version
## 2.303.0 this will instruct the runner to reload certificates on the host)
##
## If any of the above have already been set by the user in the runner pod
## template, ARC will honor them and not overwrite them.
## Example configuration:
#
# githubServerTLS:
# certificateFrom:
# configMapKeyRef:
# name: config-map-name
# key: ca.pem
# runnerMountPath: /usr/local/share/ca-certificates/
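# Putting the example above into concrete terms: with that configuration each
# runner pod would gain roughly the following (a sketch assembled from the
# description above; the subPath projection is an assumption, the chart only
# promises a file at {runnerMountPath}/{certificate name}):
#
# volumes:
#   - name: github-server-tls-cert
#     configMap:
#       name: config-map-name
# containers:
#   - name: runner
#     volumeMounts:
#       - name: github-server-tls-cert
#         mountPath: /usr/local/share/ca-certificates/ca.pem
#         subPath: ca.pem                # assumed projection of the configmap key
#     env:
#       - name: NODE_EXTRA_CA_CERTS
#         value: /usr/local/share/ca-certificates/ca.pem
#       - name: RUNNER_UPDATE_CA_CERTS
#         value: "1"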
containerMode:
type: "" ## type can be set to dind or kubernetes
## the following is required when containerMode.type=kubernetes
# kubernetesModeWorkVolumeClaim:
# accessModes: ["ReadWriteOnce"]
# # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to dynamically provision volumes with storageClassName: openebs-hostpath
# storageClassName: "dynamic-blob-storage"
# resources:
# requests:
# storage: 1Gi
## template is the PodSpec for each runner Pod
template:
## template.spec will be modified if you change the container mode
## with containerMode.type=dind, we will populate the template.spec with the following pod spec
## template:
## spec:
## initContainers:
## - name: initExternalsInternalVolume
## - name: init-dind-externals
## image: ghcr.io/actions/actions-runner:latest
## command: ["cp", "-r", "-v", "/actions-runner/externals/.", "/actions-runner/tmpDir/"]
## command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
## volumeMounts:
## - name: externalsInternal
## mountPath: /actions-runner/tmpDir
## - name: dind-externals
## mountPath: /home/runner/tmpDir
## containers:
## - name: runner
## image: ghcr.io/actions/actions-runner:latest
@@ -75,9 +103,9 @@ containerMode:
## - name: DOCKER_CERT_PATH
## value: /certs/client
## volumeMounts:
## - name: workingDirectoryInternal
## mountPath: /actions-runner/_work
## - name: dinDInternal
## - name: work
## mountPath: /home/runner/_work
## - name: dind-cert
## mountPath: /certs/client
## readOnly: true
## - name: dind
@@ -85,18 +113,18 @@ containerMode:
## securityContext:
## privileged: true
## volumeMounts:
## - mountPath: /certs/client
## name: dinDInternal
## - mountPath: /actions-runner/_work
## name: workingDirectoryInternal
## - mountPath: /actions-runner/externals
## name: externalsInternal
## - name: work
## mountPath: /home/runner/_work
## - name: dind-cert
## mountPath: /certs/client
## - name: dind-externals
## mountPath: /home/runner/externals
## volumes:
## - name: dinDInternal
## - name: work
## emptyDir: {}
## - name: workingDirectoryInternal
## - name: dind-cert
## emptyDir: {}
## - name: externalsInternal
## - name: dind-externals
## emptyDir: {}
######################################################################################################
## with containerMode.type=kubernetes, we will populate the template.spec with the following pod spec
@@ -107,7 +135,7 @@ containerMode:
## image: ghcr.io/actions/actions-runner:latest
## env:
## - name: ACTIONS_RUNNER_CONTAINER_HOOKS
## value: /actions-runner/k8s/index.js
## value: /home/runner/k8s/index.js
## - name: ACTIONS_RUNNER_POD_NAME
## valueFrom:
## fieldRef:
@@ -116,7 +144,7 @@ containerMode:
## value: "true"
## volumeMounts:
## - name: work
## mountPath: /actions-runner/_work
## mountPath: /home/runner/_work
## volumes:
## - name: work
## ephemeral:
@@ -127,13 +155,18 @@ containerMode:
## resources:
## requests:
## storage: 1Gi
spec:
containers:
- name: runner
image: ghcr.io/actions/actions-runner:latest
command: ["/home/runner/run.sh"]
## the following is required when containerMode.type=kubernetes
kubernetesModeWorkVolumeClaim:
accessModes: ["ReadWriteOnce"]
# For testing, use https://github.com/rancher/local-path-provisioner to dynamically provision volumes
# TODO: remove before release
storageClassName: "dynamic-blob-storage"
resources:
requests:
storage: 1Gi
## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following values
## to help it complete the RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
# namespace: arc-system
# name: test-arc-gha-runner-scale-set-controller


@@ -18,7 +18,10 @@ package main
import (
"context"
"crypto/x509"
"fmt"
"net/http"
"net/url"
"os"
"os/signal"
"syscall"
@@ -28,6 +31,7 @@ import (
"github.com/actions/actions-runner-controller/logging"
"github.com/go-logr/logr"
"github.com/kelseyhightower/envconfig"
"golang.org/x/net/http/httpproxy"
)
type RunnerScaleSetListenerConfig struct {
@@ -41,6 +45,7 @@ type RunnerScaleSetListenerConfig struct {
MaxRunners int `split_words:"true"`
MinRunners int `split_words:"true"`
RunnerScaleSetId int `split_words:"true"`
ServerRootCA string `split_words:"true"`
}
func main() {
@@ -84,11 +89,11 @@ func run(rc RunnerScaleSetListenerConfig, logger logr.Logger) error {
}
}
actionsServiceClient, err := actions.NewClient(
rc.ConfigureUrl,
actionsServiceClient, err := newActionsClientFromConfig(
rc,
creds,
actions.WithUserAgent(fmt.Sprintf("actions-runner-controller/%s", build.Version)),
actions.WithLogger(logger),
actions.WithUserAgent(fmt.Sprintf("actions-runner-controller/%s", build.Version)),
)
if err != nil {
return fmt.Errorf("failed to create an Actions Service client: %w", err)
@@ -155,3 +160,26 @@ func validateConfig(config *RunnerScaleSetListenerConfig) error {
return nil
}
func newActionsClientFromConfig(config RunnerScaleSetListenerConfig, creds *actions.ActionsAuth, options ...actions.ClientOption) (*actions.Client, error) {
if config.ServerRootCA != "" {
systemPool, err := x509.SystemCertPool()
if err != nil {
return nil, fmt.Errorf("failed to load system cert pool: %w", err)
}
pool := systemPool.Clone()
ok := pool.AppendCertsFromPEM([]byte(config.ServerRootCA))
if !ok {
return nil, fmt.Errorf("failed to parse root certificate")
}
options = append(options, actions.WithRootCAs(pool))
}
proxyFunc := httpproxy.FromEnvironment().ProxyFunc()
options = append(options, actions.WithProxy(func(req *http.Request) (*url.URL, error) {
return proxyFunc(req.URL)
}))
return actions.NewClient(config.ConfigureUrl, creds, options...)
}


@@ -1,10 +1,20 @@
package main
import (
"context"
"crypto/tls"
"fmt"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/actions/actions-runner-controller/github/actions/testserver"
)
func TestConfigValidationMinMax(t *testing.T) {
@@ -90,3 +100,154 @@ func TestConfigValidationConfigUrl(t *testing.T) {
assert.ErrorContains(t, err, "GitHubConfigUrl is not provided", "Expected error about missing ConfigureUrl")
}
func TestCustomerServerRootCA(t *testing.T) {
ctx := context.Background()
certsFolder := filepath.Join(
"../../",
"github",
"actions",
"testdata",
)
certPath := filepath.Join(certsFolder, "server.crt")
keyPath := filepath.Join(certsFolder, "server.key")
serverCalledSuccessfully := false
server := testserver.NewUnstarted(t, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
serverCalledSuccessfully = true
w.WriteHeader(http.StatusOK)
w.Write([]byte(`{"count": 0}`))
}))
cert, err := tls.LoadX509KeyPair(certPath, keyPath)
require.NoError(t, err)
server.TLS = &tls.Config{Certificates: []tls.Certificate{cert}}
server.StartTLS()
var certsString string
rootCA, err := os.ReadFile(filepath.Join(certsFolder, "rootCA.crt"))
require.NoError(t, err)
certsString = string(rootCA)
intermediate, err := os.ReadFile(filepath.Join(certsFolder, "intermediate.pem"))
require.NoError(t, err)
certsString = certsString + string(intermediate)
config := RunnerScaleSetListenerConfig{
ConfigureUrl: server.ConfigURLForOrg("myorg"),
ServerRootCA: certsString,
}
creds := &actions.ActionsAuth{
Token: "token",
}
client, err := newActionsClientFromConfig(config, creds)
require.NoError(t, err)
_, err = client.GetRunnerScaleSet(ctx, 1, "test")
require.NoError(t, err)
assert.True(t, serverCalledSuccessfully)
}
func TestProxySettings(t *testing.T) {
t.Run("http", func(t *testing.T) {
wentThroughProxy := false
proxy := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
wentThroughProxy = true
}))
t.Cleanup(func() {
proxy.Close()
})
prevProxy := os.Getenv("http_proxy")
os.Setenv("http_proxy", proxy.URL)
defer os.Setenv("http_proxy", prevProxy)
config := RunnerScaleSetListenerConfig{
ConfigureUrl: "https://github.com/org/repo",
}
creds := &actions.ActionsAuth{
Token: "token",
}
client, err := newActionsClientFromConfig(config, creds)
require.NoError(t, err)
req, err := http.NewRequest(http.MethodGet, "http://example.com", nil)
require.NoError(t, err)
_, err = client.Do(req)
require.NoError(t, err)
assert.True(t, wentThroughProxy)
})
t.Run("https", func(t *testing.T) {
wentThroughProxy := false
proxy := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
wentThroughProxy = true
}))
t.Cleanup(func() {
proxy.Close()
})
prevProxy := os.Getenv("https_proxy")
os.Setenv("https_proxy", proxy.URL)
defer os.Setenv("https_proxy", prevProxy)
config := RunnerScaleSetListenerConfig{
ConfigureUrl: "https://github.com/org/repo",
}
creds := &actions.ActionsAuth{
Token: "token",
}
client, err := newActionsClientFromConfig(config, creds, actions.WithRetryMax(0))
require.NoError(t, err)
req, err := http.NewRequest(http.MethodGet, "https://example.com", nil)
require.NoError(t, err)
_, err = client.Do(req)
// proxy doesn't support https
assert.Error(t, err)
assert.True(t, wentThroughProxy)
})
t.Run("no_proxy", func(t *testing.T) {
wentThroughProxy := false
proxy := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
wentThroughProxy = true
}))
t.Cleanup(func() {
proxy.Close()
})
prevProxy := os.Getenv("http_proxy")
os.Setenv("http_proxy", proxy.URL)
defer os.Setenv("http_proxy", prevProxy)
prevNoProxy := os.Getenv("no_proxy")
os.Setenv("no_proxy", "example.com")
defer os.Setenv("no_proxy", prevNoProxy)
config := RunnerScaleSetListenerConfig{
ConfigureUrl: "https://github.com/org/repo",
}
creds := &actions.ActionsAuth{
Token: "token",
}
client, err := newActionsClientFromConfig(config, creds)
require.NoError(t, err)
req, err := http.NewRequest(http.MethodGet, "http://example.com", nil)
require.NoError(t, err)
_, err = client.Do(req)
require.NoError(t, err)
assert.False(t, wentThroughProxy)
})
}


@@ -124,7 +124,7 @@ func main() {
if watchNamespace == "" {
logger.Info("-watch-namespace is empty. HorizontalRunnerAutoscalers in all the namespaces are watched, cached, and considered as scale targets.")
} else {
logger.Info("-watch-namespace is %q. Only HorizontalRunnerAutoscalers in %q are watched, cached, and considered as scale targets.")
logger.Info("-watch-namespace is %q. Only HorizontalRunnerAutoscalers in %q are watched, cached, and considered as scale targets.", watchNamespace, watchNamespace)
}
ctrl.SetLogger(logger)

cmd/sleep/main.go

@@ -0,0 +1,33 @@
/*
Copyright 2021 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"flag"
"fmt"
"time"
)
var Seconds int
func main() {
fmt.Printf("sleeping for %d seconds\n", Seconds)
time.Sleep(time.Duration(Seconds) * time.Second)
fmt.Println("done sleeping")
}
func init() {
flag.IntVar(&Seconds, "seconds", 60, "Number of seconds to sleep")
flag.Parse()
}
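A quick usage note for the helper above (flag name taken from its init function): go run ./cmd/sleep -seconds 10 prints the banner, sleeps for ten seconds, then prints "done sleeping"; omitting the flag falls back to the 60-second default.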


@@ -55,6 +55,28 @@ spec:
githubConfigUrl:
description: Required
type: string
githubServerTLS:
properties:
certificateFrom:
description: Required
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
image:
description: Required
type: string


@@ -17,16 +17,28 @@ spec:
- additionalPrinterColumns:
- jsonPath: .spec.minRunners
name: Minimum Runners
type: number
type: integer
- jsonPath: .spec.maxRunners
name: Maximum Runners
type: number
type: integer
- jsonPath: .status.currentRunners
name: Current Runners
type: number
type: integer
- jsonPath: .status.state
name: State
type: string
- jsonPath: .status.pendingEphemeralRunners
name: Pending Runners
type: integer
- jsonPath: .status.runningEphemeralRunners
name: Running Runners
type: integer
- jsonPath: .status.finishedEphemeralRunners
name: Finished Runners
type: integer
- jsonPath: .status.deletingEphemeralRunners
name: Deleting Runners
type: integer
name: v1alpha1
schema:
openAPIV3Schema:
@@ -51,9 +63,25 @@ spec:
type: string
githubServerTLS:
properties:
certConfigMapRef:
certificateFrom:
description: Required
type: string
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
maxRunners:
minimum: 0
@@ -86,6 +114,8 @@ spec:
type: object
runnerGroup:
type: string
runnerScaleSetName:
type: string
template:
description: Required
properties:
@@ -4288,6 +4318,12 @@ spec:
properties:
currentRunners:
type: integer
failedEphemeralRunners:
type: integer
pendingEphemeralRunners:
type: integer
runningEphemeralRunners:
type: integer
state:
type: string
type: object


@@ -64,9 +64,25 @@ spec:
type: string
githubServerTLS:
properties:
certConfigMapRef:
certificateFrom:
description: Required
type: string
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
metadata:
description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'


@@ -21,6 +21,18 @@ spec:
- jsonPath: .status.currentReplicas
name: CurrentReplicas
type: integer
- jsonPath: .status.pendingEphemeralRunners
name: Pending Runners
type: integer
- jsonPath: .status.runningEphemeralRunners
name: Running Runners
type: integer
- jsonPath: .status.finishedEphemeralRunners
name: Finished Runners
type: integer
- jsonPath: .status.deletingEphemeralRunners
name: Deleting Runners
type: integer
name: v1alpha1
schema:
openAPIV3Schema:
@@ -46,9 +58,25 @@ spec:
type: string
githubServerTLS:
properties:
certConfigMapRef:
certificateFrom:
description: Required
type: string
properties:
configMapKeyRef:
description: Required
properties:
key:
description: The key to select.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the ConfigMap or its key must be defined
type: boolean
required:
- key
type: object
type: object
type: object
metadata:
description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata'
@@ -4280,6 +4308,14 @@ spec:
currentReplicas:
description: CurrentReplicas is the number of currently running EphemeralRunner resources being managed by this EphemeralRunnerSet.
type: integer
failedEphemeralRunners:
type: integer
pendingEphemeralRunners:
type: integer
runningEphemeralRunners:
type: integer
required:
- currentReplicas
type: object
type: object
served: true


@@ -0,0 +1,10 @@
source:
kind: Deployment
name: controller-manager
fieldPath: spec.template.spec.containers.[name=manager].image
targets:
- select:
kind: Deployment
name: controller-manager
fieldPaths:
- spec.template.spec.containers.[name=manager].env.[name=CONTROLLER_MANAGER_CONTAINER_IMAGE].value
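For context: this kustomize replacement copies the manager container's image field from the controller-manager Deployment into its CONTROLLER_MANAGER_CONTAINER_IMAGE environment variable at render time; that is why the Deployment patch further down can carry a bare placeholder value for that variable.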


@@ -6,3 +6,6 @@ images:
- name: controller
newName: summerwind/actions-runner-controller
newTag: dev
replacements:
- path: env-replacement.yaml


@@ -50,10 +50,8 @@ spec:
optional: true
- name: GITHUB_APP_PRIVATE_KEY
value: /etc/actions-runner-controller/github_app_private_key
- name: CONTROLLER_MANAGER_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CONTROLLER_MANAGER_CONTAINER_IMAGE
value: CONTROLLER_MANAGER_CONTAINER_IMAGE
- name: CONTROLLER_MANAGER_POD_NAMESPACE
valueFrom:
fieldRef:

Some files were not shown because too many files have changed in this diff