Compare commits

...

58 Commits

Author SHA1 Message Date
Yusuke Kuoka
8a5fb6ccb7 Bump chart version to v0.23.3 for ARC v0.27.4 (#2577) 2023-05-12 09:10:59 -04:00
github-actions[bot]
e930ba6e98 Updates: container-hooks to v0.3.1 (#2580)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-12 05:55:09 -04:00
Bassem Dghaidi
5ba3805a3f Fix update runners scheduled workflow to check for container-hooks upgrades (#2576) 2023-05-12 05:52:24 -04:00
kahirokunn
f798cddca1 docs: use INSTALLATION_NAME (#2552)
Signed-off-by: kahirokunn <okinakahiro@gmail.com>
2023-05-10 10:39:54 -04:00
Y. Luis
367ee46122 Fixed scaling runners doc link (#2474)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-05-09 14:45:18 -04:00
Seonghyeon Cho
f4a318fca6 docs: Update github docs links under /managing-self-hosted-runners (#2554)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-05-09 14:43:15 -04:00
Bassem Dghaidi
4ee21cb24b Add link to walkthrough video on youtube (#2570) 2023-05-08 15:24:32 -04:00
Yusuke Kuoka
102c9e1afa Update "People" section in README (#2537)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-05-04 08:04:42 -04:00
Nikola Jokic
73e676f951 Check release tag version and chart versions during the release process (#2524)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-05-03 11:53:42 +02:00
github-actions[bot]
41ebb43c65 Update runner to version 2.304.0 (#2543)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-28 10:05:46 -04:00
mspasoje
aa50b62c01 Fix for GHES when authorized through GitHub App with GITHUB_URL instead of GITHUB_ENTERPRISE_URL (#2464)
Ref #2457
2023-04-27 13:53:22 +09:00
Alex Williams
942f773fef Update helm chart to support actions metrics graceful termination (#2498)
# Summary

- add lifecycle, terminationGracePeriodSeconds, and loadBalancerSourceRanges to the metrics server
- these were missed when copying from the other webhook server
- the original PR that added them to the other webhook server is https://github.com/actions/actions-runner-controller/pull/2305

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-04-27 13:50:31 +09:00
Thomas B
21722a5de8 Add CR and CRB to the helm chart (#2504)
In response to https://github.com/actions/actions-runner-controller/issues/2212, the ARC Helm chart is missing the ClusterRole and ClusterRoleBinding for the ActionsMetricsServer, resulting in missing permissions.

This also fixes the labels of the ActionsMetricsServer Service, as it is selected by the ServiceMonitor using those labels.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-04-27 13:33:48 +09:00
argokasper
a2d4b95b79 Fix GET validation for lowercase http methods (#2497)
Some requests send the method in lowercase (verified with curl, and it is the default for AWS ALB health check requests), but the Go HTTP library constant MethodGet is uppercase.
2023-04-27 13:22:41 +09:00
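A minimal Go sketch of the kind of case-insensitive check such a fix implies (illustrative only, not the actual ARC code): `strings.EqualFold` accepts a lowercase `get` that a strict comparison against `http.MethodGet` would reject.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// isGet reports whether the request method is GET, regardless of case.
// http.MethodGet is the uppercase constant "GET", so a strict equality
// check rejects the lowercase "get" some clients send.
func isGet(r *http.Request) bool {
	return strings.EqualFold(r.Method, http.MethodGet)
}

func main() {
	r := &http.Request{Method: "get"}
	fmt.Println(r.Method == http.MethodGet) // false: strict comparison misses it
	fmt.Println(isGet(r))                   // true: case-insensitive check accepts it
}
```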
Thilo Uttendorfer
04fb9f4fa1 Fix the default version of kube-rbac-proxy in the docs (#2535) 2023-04-27 13:16:12 +09:00
Paul Brousseau
8304b80955 docs: minor correction for actions metrics server secret (#2542)
Aligning docs with what the Helm chart produces
2023-04-27 13:15:49 +09:00
Nuru
9bd4025e9c Stricter filtering of check run completion events (#2520)
I observed that 100% of canceled jobs in my runner pool were not causing scale-down events. This PR fixes that.

The problem was caused by #2119.

#2119 ignores certain webhook events in order to fix #2118. However, it overdoes it and filters out valid job cancellation events. This PR uses stricter filtering and adds visibility for future troubleshooting (a rough sketch of such a check follows the example payload below).

<details><summary>Example cancellation event</summary>

This is the redacted top portion of a valid cancellation event my runner pool received and ignored.

```json
{
  "action": "completed",
  "workflow_job": {
    "id": 12848997134,
    "run_id": 4738060033,
    "workflow_name": "slack-notifier",
    "head_branch": "auto-update/slack-notifier-0.5.1",
    "run_url": "https://api.github.com/repos/nuru/<redacted>/actions/runs/4738060033",
    "run_attempt": 1,
    "node_id": "CR_kwDOB8Xtbc8AAAAC_dwjDg",
    "head_sha": "55bada8f3d0d3e12a510a1bf34d0c3e169b65f89",
    "url": "https://api.github.com/repos/nuru/<redacted>/actions/jobs/12848997134",
    "html_url": "https://github.com/nuru/<redacted>/actions/runs/4738060033/jobs/8411515430",
    "status": "completed",
    "conclusion": "cancelled",
    "created_at": "2023-04-19T00:03:12Z",
    "started_at": "2023-04-19T00:03:42Z",
    "completed_at": "2023-04-19T00:03:42Z",
    "name": "build (arm64)",
    "steps": [

    ],
    "check_run_url": "https://api.github.com/repos/nuru/<redacted>/check-runs/12848997134",
    "labels": [
      "self-hosted",
      "arm64"
    ],
    "runner_id": 0,
    "runner_name": "",
    "runner_group_id": 0,
    "runner_group_name": ""
  },
```

</details>
2023-04-27 13:15:23 +09:00
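A hedged Go sketch of the stricter check described above (field names follow the GitHub `workflow_job` payload; the predicate itself is illustrative, not the PR's actual logic): a completed job should count toward scale-down even when it was cancelled before a runner picked it up, as in the example payload.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// workflowJobEvent models the few fields of a workflow_job webhook payload
// that matter for a scale-down decision (field names follow the GitHub payload).
type workflowJobEvent struct {
	Action      string `json:"action"`
	WorkflowJob struct {
		Status     string   `json:"status"`
		Conclusion string   `json:"conclusion"`
		RunnerName string   `json:"runner_name"`
		Labels     []string `json:"labels"`
	} `json:"workflow_job"`
}

// shouldScaleDown is a hypothetical predicate: any completed job, including a
// cancelled one that never had a runner assigned, should release capacity.
func shouldScaleDown(e workflowJobEvent) bool {
	return e.Action == "completed" && e.WorkflowJob.Status == "completed"
}

func main() {
	payload := []byte(`{"action":"completed","workflow_job":{"status":"completed","conclusion":"cancelled","runner_name":"","labels":["self-hosted","arm64"]}}`)

	var e workflowJobEvent
	if err := json.Unmarshal(payload, &e); err != nil {
		panic(err)
	}
	fmt.Printf("scale down: %v (conclusion=%s)\n", shouldScaleDown(e), e.WorkflowJob.Conclusion)
}
```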
Yusuke Kuoka
94c089c407 Revert docker.sock path to /var/run/docker.sock (#2536)
Starting with ARC v0.27.2, we changed the `docker.sock` path from `/var/run/docker.sock` to `/var/run/docker/docker.sock`. That ended up breaking some container-based actions because the `docker.sock` path is hard-coded in various places.

Even `actions/runner` appears to use `/var/run/docker.sock` for building container-based actions and for service containers.

Anyway, this fixes that by moving the sock file back to the previous location.
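For context, a small illustrative Go sketch (not ARC code) of why the hard-coded path bites: clients that honour `DOCKER_HOST` can be pointed anywhere, but tooling that ignores it falls back to the conventional `/var/run/docker.sock` location.

```go
package main

import (
	"fmt"
	"os"
)

// defaultDockerSocket is the endpoint most Docker tooling assumes when
// DOCKER_HOST is unset; hard-coded references to this path are why moving
// the socket to /var/run/docker/docker.sock broke container-based actions.
const defaultDockerSocket = "unix:///var/run/docker.sock"

// dockerHost resolves the Docker endpoint the way many clients do:
// prefer DOCKER_HOST, otherwise fall back to the conventional socket path.
func dockerHost() string {
	if host := os.Getenv("DOCKER_HOST"); host != "" {
		return host
	}
	return defaultDockerSocket
}

func main() {
	fmt.Println(dockerHost())
}
```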

Once this gets merged, users who are stuck at ARC v0.27.1 (having previously upgraded to 0.27.2 or 0.27.3 and then reverted back due to #2519) should be able to upgrade to the upcoming v0.27.4.

Resolves #2519
Resolves #2538
2023-04-27 13:06:35 +09:00
Nikola Jokic
9859bbc7f2 Use build.Version to check if resource version is a mismatch (#2521)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-04-24 10:40:15 +02:00
Thomas
c1e2c4ef9d docs: Fix typo for automatic runner scaling (#2375) 2023-04-21 11:15:53 +09:00
Edgar Kalinovski
2ee15dbca3 Add description for "dockerRegistryMirror" key (#2488) 2023-04-21 11:10:55 +09:00
Sam Greening
a4cf626410 Revert actions-runner-controller image tag in kustomization to latest (#2522) 2023-04-21 10:59:34 +09:00
cavila-evoliq
58f4b6ff2d Update ubuntu-22.04 Dockerfile to add python user script dir (#2508) 2023-04-18 08:26:14 +09:00
Bassem Dghaidi
22fbd10bd3 Fix the path of the index.yaml in job summary (#2515) 2023-04-17 14:09:56 -04:00
Yusuke Kuoka
52b97139b6 Bump chart version to v0.23.2 for ARC v0.27.3 (#2514)
Ref #2490
2023-04-17 09:00:57 -04:00
Yusuke Kuoka
3e0bc3f7be Fix docker.sock permission error for non-dind Ubuntu 20.04 runners since v0.27.2 (#2499)
#2490 has been happening since v0.27.2 for non-dind runners based on Ubuntu 20.04 runner images. It does not affect Ubuntu 22.04 non-dind runners (i.e. runners with dockerd sidecars) or Ubuntu 20.04/22.04 dind runners (i.e. runners without dockerd sidecars). However, presuming many folks are still using Ubuntu 20.04 runners and non-dind runners, we should fix it.

This change tries to fix it by defaulting to the docker group id 1001 used by Ubuntu 20.04 runners, and using gid 121 for Ubuntu 22.04 runners. We use the image tag to determine which Ubuntu version the runner is based on. The algorithm is simple: we assume the runner is Ubuntu-22.04-based if the image tag contains "22.04".

This might be a breaking change for folks who have already upgraded to Ubuntu 22.04 runners using their own custom runner images. Note again: we rely on the image tag to detect Ubuntu 22.04 runner images and use the proper docker gid, so folks using our official Ubuntu 22.04 runner images are not affected. Since it is a breaking change anyway, I have added a remedy:

ARC gets a new flag, `--docker-gid`, which defaults to `1001` but can be set to `121` or whatever gid the operator/admin prefers. For example, set `--docker-gid=121` if you are using your own custom runner image that is based on Ubuntu 22.04 but whose image tag does not contain "22.04".
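A rough Go sketch of the described gid selection (function and variable names are made up; only the "22.04" substring check, the 1001/121 values, and the `--docker-gid` flag come from the description above):

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// dockerGID mirrors the described behaviour: the official Ubuntu 22.04 runner
// images use docker gid 121 and are detected by their image tag; anything else
// falls back to the configured gid (the --docker-gid flag, default 1001).
func dockerGID(imageTag, fallbackGID string) string {
	if strings.Contains(imageTag, "22.04") {
		return "121"
	}
	return fallbackGID
}

func main() {
	gid := flag.String("docker-gid", "1001", "docker gid to use when the image tag does not indicate Ubuntu 22.04")
	flag.Parse()

	for _, tag := range []string{"ubuntu-20.04", "ubuntu-22.04", "my-custom-22.04-image", "my-custom-tag"} {
		fmt.Printf("%-22s -> gid %s\n", tag, dockerGID(tag, *gid))
	}
}
```

Running it with `--docker-gid=121` shows the remedy for a custom Ubuntu 22.04 image whose tag does not contain "22.04": tag detection misses it, so the fallback gid carries the correct value.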

Fixes #2490
2023-04-17 21:30:41 +09:00
Nikola Jokic
ba1ac0990b Reordering methods and constants so they are easier to look up (#2501) 2023-04-12 09:50:23 +02:00
Nikola Jokic
76fe43e8e0 Update limit manager role permissions ADR (#2500)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-11 16:25:43 +02:00
Nikola Jokic
8869ad28bb Fix e2e tests infinite looping when waiting for resources (#2496)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-10 21:03:02 +02:00
Nikola Jokic
b86af190f7 Extend manager roles to accept ephemeralrunnerset/finalizers (#2493) 2023-04-10 08:49:32 +02:00
Bassem Dghaidi
1a491cbfe5 Fix the publish chart workflow (#2489)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-04-06 08:01:48 -04:00
Yusuke Kuoka
087f20fd5d Fix chart publishing workflow (#2487) 2023-04-05 12:20:12 -04:00
Hidetake Iwata
a880114e57 chart: Bump version to 0.23.1 (#2483) 2023-04-05 22:39:29 +09:00
Nikola Jokic
e80bc21fa5 gha-runner-scale-set 0.4.0 release (#2467)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-04-05 08:56:27 -04:00
Tingluo Huang
56754094ea Remove deprecated method. (#2481) 2023-04-04 15:15:11 -04:00
Tingluo Huang
8fa4520376 Treat .ghe.com domain as hosted environment (#2480)
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-04-04 14:43:45 -04:00
Nikola Jokic
a804bf8b00 Add ImagePullPolicy to the AutoscalingListener, configurable through Manager env (#2477) 2023-04-04 19:07:20 +02:00
Nikola Jokic
5dea6db412 Fix helm uninstall cleanup by adding finalizers and cleaning them from the controller (#2433)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-03 21:06:12 +02:00
Bassem Dghaidi
2a0b770a63 Add troubleshooting advice (#2456) 2023-04-03 07:01:15 -04:00
Stewart Thomson
a7ef871248 Check if appID and instID are non-empty before attempting to parseInt (#2463) 2023-04-03 09:06:59 +09:00
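A tiny hedged Go sketch of that guard (identifier names are illustrative): `strconv.ParseInt` errors on an empty string, so checking for emptiness first lets an unset app or installation id be treated as "not configured" rather than surfacing a parse failure.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseOptionalID returns (0, false, nil) when the value is unset, the parsed
// id when it is numeric, and an error only for a non-empty, non-numeric value.
func parseOptionalID(s string) (int64, bool, error) {
	if s == "" {
		return 0, false, nil
	}
	id, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, false, fmt.Errorf("invalid id %q: %w", s, err)
	}
	return id, true, nil
}

func main() {
	for _, v := range []string{"", "12345", "abc"} {
		id, ok, err := parseOptionalID(v)
		fmt.Printf("%q -> id=%d set=%v err=%v\n", v, id, ok, err)
	}
}
```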
Tingluo Huang
e45e4c53f1 Add E2E test to assert self-signed CA support. (#2458) 2023-03-31 10:31:25 -04:00
Yusuke Kuoka
a608abd124 actions-metrics: Do our best not to fail the whole event processing on no API creds (#2459) 2023-03-31 20:42:25 +09:00
Bassem Dghaidi
02d9add322 Fix bug preventing env variables from being specified (#2450)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-30 09:40:28 -04:00
Yusuke Kuoka
f5ac134787 Fix chart publishing workflow to not throw away releases between the latest and 0.21.0 (#2453)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-03-30 05:46:29 -04:00
Yusuke Kuoka
42abad5def chart: Bump version to 0.23.0 (#2449) 2023-03-30 10:10:18 +09:00
Milas Bowman
514b7da742 Install Docker Compose v2 as a Docker CLI plugin (#2326)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-29 10:40:10 +09:00
Francesco Renzi
c8e3bb5ec3 Remove containerMode from values (#2442) 2023-03-28 10:16:38 +01:00
Milas Bowman
878c9b8b49 runner: Use Docker socket via shared emptyDir instead of TCP/mTLS (#2324)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 11:29:16 +09:00
Jonathan Wiemers
4536707af6 chart: Allow webhook server env to be set individually (#2377)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 11:18:07 +09:00
Waldek Herka
13802c5a6d chart: Restricting the RBAC rules on secrets (#2265)
Co-authored-by: Waldek Herka <wherka-ama@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 08:43:33 +09:00
cskinfill
362fa5d52e crd: Add enterprise, organization, repository, and runner labels to runnerdeployments print columns (#2310)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 08:43:01 +09:00
Zane Hala
65184f1ed8 chart: Allow customization of admission webhook timeout (#2398)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-03-28 08:42:20 +09:00
Bassem Dghaidi
c23e31123c Housekeeping: move adrs/ to docs/ and update status (#2443)
Co-authored-by: Francesco Renzi <rentziass@github.com>
2023-03-27 10:38:27 -04:00
Nikola Jokic
56e1c62ac2 Add labels to autoscaling runner set subresources to allow easier inspection (#2391)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-27 11:19:34 +02:00
Bassem Dghaidi
64cedff2b4 Delete e2e-test-dispatch-workflow.yaml (#2441) 2023-03-24 07:11:57 -04:00
Bassem Dghaidi
37f93b794e Enhance quickstart troubleshooting guidelines (#2435) 2023-03-23 11:40:58 -04:00
Francesco Renzi
dc833e57a0 Add new workflows (#2423) 2023-03-23 14:39:37 +00:00
Tingluo Huang
5228aded87 Update e2e workflow (#2430) 2023-03-21 14:11:47 -04:00
112 changed files with 3815 additions and 1297 deletions

View File

@@ -0,0 +1,160 @@
name: 'Execute and Assert ARC E2E Test Action'
description: 'Queue the E2E test workflow and assert that the workflow run succeeds'
inputs:
auth-token:
description: 'GitHub access token to queue workflow run'
required: true
repo-owner:
description: "The repository owner name that has the test workflow file, ex: actions"
required: true
repo-name:
description: "The repository name that has the test workflow file, ex: test"
required: true
workflow-file:
description: 'The file name of the workflow yaml, ex: test.yml'
required: true
arc-name:
description: 'The name of the configured gha-runner-scale-set'
required: true
arc-namespace:
description: 'The namespace of the configured gha-runner-scale-set'
required: true
arc-controller-namespace:
description: 'The namespace of the configured gha-runner-scale-set-controller'
required: true
runs:
using: "composite"
steps:
- name: Queue test workflow
shell: bash
id: queue_workflow
run: |
queue_time=`date +%FT%TZ`
echo "queue_time=$queue_time" >> $GITHUB_OUTPUT
curl -X POST https://api.github.com/repos/${{inputs.repo-owner}}/${{inputs.repo-name}}/actions/workflows/${{inputs.workflow-file}}/dispatches \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${{inputs.auth-token}}" \
-d '{"ref": "main", "inputs": { "arc_name": "${{inputs.arc-name}}" } }'
- name: Fetch workflow run & job ids
uses: actions/github-script@v6
id: query_workflow
with:
script: |
// Try to find the workflow run triggered by the previous step using the workflow_dispatch event.
// - Find recently created workflow runs in the test repository
// - For each workflow run, list its workflow job and see if the job's labels contain `inputs.arc-name`
// - Since the inputs.arc-name should be unique per e2e workflow run, once we find the job with the label, we find the workflow that we just triggered.
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms))
}
const owner = '${{inputs.repo-owner}}'
const repo = '${{inputs.repo-name}}'
const workflow_id = '${{inputs.workflow-file}}'
let workflow_run_id = 0
let workflow_job_id = 0
let workflow_run_html_url = ""
let count = 0
while (count++<12) {
await sleep(10 * 1000);
let listRunResponse = await github.rest.actions.listWorkflowRuns({
owner: owner,
repo: repo,
workflow_id: workflow_id,
created: '>${{steps.queue_workflow.outputs.queue_time}}'
})
if (listRunResponse.data.total_count > 0) {
console.log(`Found some new workflow runs for ${workflow_id}`)
for (let i = 0; i<listRunResponse.data.total_count; i++) {
let workflowRun = listRunResponse.data.workflow_runs[i]
console.log(`Check if workflow run ${workflowRun.id} is triggered by us.`)
let listJobResponse = await github.rest.actions.listJobsForWorkflowRun({
owner: owner,
repo: repo,
run_id: workflowRun.id
})
console.log(`Workflow run ${workflowRun.id} has ${listJobResponse.data.total_count} jobs.`)
if (listJobResponse.data.total_count > 0) {
for (let j = 0; j<listJobResponse.data.total_count; j++) {
let workflowJob = listJobResponse.data.jobs[j]
console.log(`Check if workflow job ${workflowJob.id} is triggered by us.`)
console.log(JSON.stringify(workflowJob.labels));
if (workflowJob.labels.includes('${{inputs.arc-name}}')) {
console.log(`Workflow job ${workflowJob.id} (Run id: ${workflowJob.run_id}) is triggered by us.`)
workflow_run_id = workflowJob.run_id
workflow_job_id = workflowJob.id
workflow_run_html_url = workflowRun.html_url
break
}
}
}
if (workflow_job_id > 0) {
break;
}
}
}
if (workflow_job_id > 0) {
break;
}
}
if (workflow_job_id == 0) {
core.setFailed(`Can't find a workflow run and workflow job triggered by 'runs-on: ${{inputs.arc-name}}'`)
} else {
core.setOutput('workflow_run', workflow_run_id);
core.setOutput('workflow_job', workflow_job_id);
core.setOutput('workflow_run_url', workflow_run_html_url);
}
- name: Generate summary about the triggered workflow run
shell: bash
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Triggered workflow run** |
|:--------------------------:|
| ${{steps.query_workflow.outputs.workflow_run_url}} |
EOF
- name: Wait for workflow to finish successfully
uses: actions/github-script@v6
with:
script: |
// Wait 5 minutes and make sure the workflow run we triggered completed with result 'success'
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms))
}
const owner = '${{inputs.repo-owner}}'
const repo = '${{inputs.repo-name}}'
const workflow_run_id = ${{steps.query_workflow.outputs.workflow_run}}
const workflow_job_id = ${{steps.query_workflow.outputs.workflow_job}}
let count = 0
while (count++<10) {
await sleep(30 * 1000);
let getRunResponse = await github.rest.actions.getWorkflowRun({
owner: owner,
repo: repo,
run_id: workflow_run_id
})
console.log(`${getRunResponse.data.html_url}: ${getRunResponse.data.status} (${getRunResponse.data.conclusion})`);
if (getRunResponse.data.status == 'completed') {
if ( getRunResponse.data.conclusion == 'success') {
console.log(`Workflow run finished properly.`)
return
} else {
core.setFailed(`The triggered workflow run finished with result ${getRunResponse.data.conclusion}`)
return
}
}
}
core.setFailed(`The triggered workflow run didn't finish properly using ${{inputs.arc-name}}`)
- name: Gather logs and cleanup
shell: bash
if: always()
run: |
helm uninstall ${{ inputs.arc-name }} --namespace ${{inputs.arc-namespace}} --debug
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n ${{inputs.arc-name}} -l app.kubernetes.io/instance=${{ inputs.arc-name }}
kubectl logs deployment/arc-gha-runner-scale-set-controller -n ${{inputs.arc-controller-namespace}}

View File

@@ -2,21 +2,21 @@ name: 'Setup ARC E2E Test Action'
description: 'Build controller image, create kind cluster, load the image, and exchange ARC configure token.'
inputs:
github-app-id:
app-id:
description: 'GitHub App Id for exchange access token'
required: true
github-app-pk:
app-pk:
description: "GitHub App private key for exchange access token"
required: true
github-app-org:
description: 'The organization the GitHub App has installed on'
required: true
docker-image-name:
image-name:
description: "Local docker image name for building"
required: true
docker-image-tag:
image-tag:
description: "Tag of ARC Docker image for building"
required: true
target-org:
description: "The test organization for ARC e2e test"
required: true
outputs:
token:
@@ -42,23 +42,22 @@ runs:
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=${{inputs.docker-image-name}}
VERSION=${{inputs.docker-image-tag}}
DOCKER_IMAGE_NAME=${{inputs.image-name}}
VERSION=${{inputs.image-tag}}
tags: |
${{inputs.docker-image-name}}:${{inputs.docker-image-tag}}
cache-from: type=gha
cache-to: type=gha,mode=max
${{inputs.image-name}}:${{inputs.image-tag}}
no-cache: true
- name: Create minikube cluster and load image
shell: bash
run: |
minikube start
minikube image load ${{inputs.docker-image-name}}:${{inputs.docker-image-tag}}
minikube image load ${{inputs.image-name}}:${{inputs.image-tag}}
- name: Get configure token
id: config-token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ inputs.github-app-id }}
application_private_key: ${{ inputs.github-app-pk }}
organization: ${{ inputs.github-app-org }}
application_id: ${{ inputs.app-id }}
application_private_key: ${{ inputs.app-pk }}
organization: ${{ inputs.target-org}}

View File

@@ -1,16 +0,0 @@
name: ARC Reusable Workflow
on:
workflow_dispatch:
inputs:
date_time:
description: 'Datetime for runner name uniqueness, format: %Y-%m-%d-%H-%M-%S-%3N, example: 2023-02-14-13-00-16-791'
required: true
jobs:
arc-runner-job:
strategy:
fail-fast: false
matrix:
job: [1, 2, 3]
runs-on: arc-runner-${{ inputs.date_time }}
steps:
- run: echo "Hello World!" >> $GITHUB_STEP_SUMMARY

View File

@@ -8,52 +8,36 @@ on:
branches:
- master
workflow_dispatch:
inputs:
target_org:
description: The org of the test repository.
required: true
default: actions-runner-controller
target_repo:
description: The repository to install the ARC.
required: true
default: arc_e2e_test_dummy
permissions:
contents: read
env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: arc_e2e_test_dummy
IMAGE_NAME: "arc-test-image"
IMAGE_VERSION: "dev"
IMAGE_VERSION: "0.4.0"
jobs:
default-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
@@ -72,11 +56,12 @@ jobs:
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
@@ -85,88 +70,63 @@ jobs:
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
single-namespace-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
@@ -187,11 +147,12 @@ jobs:
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
@@ -200,88 +161,63 @@ jobs:
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
dind-mode-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: arc-test-dind-workflow.yaml
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
@@ -300,11 +236,12 @@ jobs:
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
@@ -313,11 +250,11 @@ jobs:
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="dind" \
./charts/gha-runner-scale-set \
@@ -325,81 +262,61 @@ jobs:
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
kubernetes-mode-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-kubernetes-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
echo "Install openebs/dynamic-localpv-provisioner"
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace
helm install arc \
--namespace "arc-systems" \
--create-namespace \
@@ -414,29 +331,26 @@ jobs:
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl wait --timeout=30s --for=condition=ready pod -n openebs -l name=openebs-localpv-provisioner
- name: Install gha-runner-scale-set
id: install_arc
run: |
echo "Install openebs/dynamic-localpv-provisioner"
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="kubernetes" \
--set containerMode.kubernetesModeWorkVolumeClaim.accessModes={"ReadWriteOnce"} \
@@ -447,77 +361,52 @@ jobs:
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
auth-proxy-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
@@ -536,11 +425,12 @@ jobs:
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
@@ -558,11 +448,11 @@ jobs:
--namespace=arc-runners \
--from-literal=username=github \
--from-literal=password='actions'
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set proxy.https.credentialSecretRef="proxy-auth" \
@@ -572,77 +462,52 @@ jobs:
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{env.WORKFLOW_FILE}}"
go test ./test_e2e_arc -v
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
anonymous-proxy-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
- name: Resolve inputs
id: resolved_inputs
run: |
TARGET_ORG="${{env.TARGET_ORG}}"
TARGET_REPO="${{env.TARGET_REPO}}"
if [ ! -z "${{inputs.target_org}}" ]; then
TARGET_ORG="${{inputs.target_org}}"
fi
if [ ! -z "${{inputs.target_repo}}" ]; then
TARGET_REPO="${{inputs.target_repo}}"
fi
echo "TARGET_ORG=$TARGET_ORG" >> $GITHUB_OUTPUT
echo "TARGET_REPO=$TARGET_REPO" >> $GITHUB_OUTPUT
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
github-app-id: ${{secrets.ACTIONS_ACCESS_APP_ID}}
github-app-pk: ${{secrets.ACTIONS_ACCESS_PK}}
github-app-org: ${{steps.resolved_inputs.outputs.TARGET_ORG}}
docker-image-name: ${{env.IMAGE_NAME}}
docker-image-tag: ${{env.IMAGE_VERSION}}
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
@@ -661,11 +526,12 @@ jobs:
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
@@ -678,11 +544,11 @@ jobs:
--name squid \
--publish 3128:3128 \
ubuntu/squid:latest
ARC_NAME=arc-runner-${{github.job}}-$(date +'%M-%S')-$(($RANDOM % 100 + 1))
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}" \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
@@ -691,44 +557,149 @@ jobs:
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 10 ]; then
echo "Timeout waiting for listener pod with label auto-scaling-runner-set-name=$ARC_NAME"
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l auto-scaling-runner-set-name=$ARC_NAME
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC scales pods up and down
id: test
run: |
export GITHUB_TOKEN="${{ steps.setup.outputs.token }}"
export ARC_NAME="${{ steps.install_arc.outputs.ARC_NAME }}"
export WORKFLOW_FILE="${{ env.WORKFLOW_FILE }}"
go test ./test_e2e_arc -v
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
- name: Uninstall gha-runner-scale-set
if: always() && steps.install_arc.outcome == 'success'
run: |
helm uninstall ${{ steps.install_arc.outputs.ARC_NAME }} --namespace arc-runners
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n demo -l app.kubernetes.io/instance=${{ steps.install_arc.outputs.ARC_NAME }}
self-signed-ca-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- name: Dump gha-runner-scale-set-controller logs
if: always() && steps.install_arc_controller.outcome == 'success'
run: |
kubectl logs deployment/arc-gha-runner-scale-set-controller -n arc-systems
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Job summary
if: always() && steps.install_arc.outcome == 'success'
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Outcome** | ${{ steps.test.outcome }} |
|----------------|--------------------------------------------- |
| **References** | [Test workflow runs](https://github.com/${{ steps.resolved_inputs.outputs.TARGET_ORG }}/${{steps.resolved_inputs.outputs.TARGET_REPO}}/actions/workflows/${{ env.WORKFLOW_FILE }}) |
EOF
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--rm \
--name mitmproxy \
--publish 8080:8080 \
-v ${{ github.workspace }}/mitmproxy:/home/mitmproxy/.mitmproxy \
mitmproxy/mitmproxy:latest \
mitmdump
count=0
while true; do
if [ -f "${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem" ]; then
echo "CA cert generated"
cat ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for mitmproxy generate its CA cert"
exit 1
fi
sleep 1
count=$((count+1))
done
sudo cp ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt
sudo chown runner ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt
kubectl create namespace arc-runners
kubectl -n arc-runners create configmap ca-cert --from-file="${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt"
kubectl -n arc-runners get configmap ca-cert -o yaml
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:8080" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
--set "githubServerTLS.certificateFrom.configMapKeyRef.name=ca-cert" \
--set "githubServerTLS.certificateFrom.configMapKeyRef.key=mitmproxy-ca-cert.crt" \
--set "githubServerTLS.runnerMountPath=/usr/local/share/ca-certificates/" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"

.github/workflows/go.yaml (vendored, new file, 80 changed lines)
View File

@@ -0,0 +1,80 @@
name: Go
on:
push:
branches:
- master
paths:
- '.github/workflows/go.yaml'
- '**.go'
- 'go.mod'
- 'go.sum'
pull_request:
paths:
- '.github/workflows/go.yaml'
- '**.go'
- 'go.mod'
- 'go.sum'
permissions:
contents: read
jobs:
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: fmt
run: go fmt ./...
- name: Check diff
run: git diff --exit-code
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.51.1
generate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: Generate
run: make generate
- name: Check diff
run: git diff --exit-code
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
- run: make manifests
- name: Check diff
run: git diff --exit-code
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
- name: Run go tests
run: |
go test -short `go list ./... | grep -v ./test_e2e_arc`

View File

@@ -1,23 +0,0 @@
name: golangci-lint
on:
push:
branches:
- master
pull_request:
permissions:
contents: read
pull-requests: read
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: 1.19
- uses: actions/checkout@v3
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.51.1

View File

@@ -14,13 +14,19 @@ on:
- '!charts/gha-runner-scale-set/**'
- '!**.md'
workflow_dispatch:
inputs:
force:
description: 'Force publish even if the chart version is not bumped'
type: boolean
required: true
default: false
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.8.0
permissions:
contents: read
contents: write
jobs:
lint-chart:
@@ -45,20 +51,12 @@ jobs:
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score - --ignore-test pod-networkpolicy --ignore-test deployment-has-poddisruptionbudget --ignore-test deployment-has-host-podantiaffinity --ignore-test container-security-context --ignore-test pod-probes --ignore-test container-image-tag --enable-optional-test container-security-context-privileged --enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
python-version: '3.11'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
@@ -98,9 +96,12 @@ jobs:
NEW_CHART_VERSION=$(echo "$CHART_TEXT" | grep version: | cut -d ' ' -f 2)
RELEASE_LIST=$(curl -fs https://api.github.com/repos/${{ github.repository }}/releases | jq .[].tag_name | grep actions-runner-controller | cut -d '"' -f 2 | cut -d '-' -f 4)
LATEST_RELEASED_CHART_VERSION=$(echo $RELEASE_LIST | cut -d ' ' -f 1)
echo "CHART_VERSION_IN_MASTER=$NEW_CHART_VERSION" >> $GITHUB_ENV
echo "LATEST_CHART_VERSION=$LATEST_RELEASED_CHART_VERSION" >> $GITHUB_ENV
if [[ $NEW_CHART_VERSION != $LATEST_RELEASED_CHART_VERSION ]]; then
# Always publish if force is true
if [[ $NEW_CHART_VERSION != $LATEST_RELEASED_CHART_VERSION || "${{ inputs.force }}" == "true" ]]; then
echo "publish=true" >> $GITHUB_OUTPUT
else
echo "publish=false" >> $GITHUB_OUTPUT
@@ -170,13 +171,14 @@ jobs:
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--index-path ${{ github.workspace }}/index.yaml \
--push \
--pages-branch 'gh-pages' \
--pages-index-path 'index.yaml'
# Chart Releaser was never intended to publish to a different repo;
# this workaround moves the index.yaml to the target repo
# where the GitHub Pages site is hosted
- name: Checkout pages repository
- name: Checkout target repository
uses: actions/checkout@v3
with:
repository: ${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}
@@ -188,7 +190,7 @@ jobs:
run: |
cp ${{ github.workspace }}/index.yaml ${{ env.CHART_TARGET_REPO }}/actions-runner-controller/index.yaml
- name: Commit and push
- name: Commit and push to target repository
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
@@ -202,4 +204,4 @@ jobs:
echo "New helm chart has been published" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- New [index.yaml](https://github.com/${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}/tree/main/actions-runner-controller) pushed" >> $GITHUB_STEP_SUMMARY
echo "- New [index.yaml](https://github.com/${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}/tree/master/actions-runner-controller) pushed" >> $GITHUB_STEP_SUMMARY

View File

@@ -46,6 +46,13 @@ jobs:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Check chart versions
# Binary version and chart versions need to match:
# during an upgrade, the controller will try to clean up
# resources carrying older versions that should have been cleaned up
# as part of the upgrade process (see the sketch after this hunk)
run: ./hack/check-gh-chart-versions.sh ${{ inputs.release_tag_name }}
- name: Resolve parameters
id: resolve_parameters
run: |
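hack/check-gh-chart-versions.sh itself is not shown in this diff; a minimal sketch of such a check, assuming the release tag embeds the chart's appVersion and the chart lives at charts/gha-runner-scale-set/Chart.yaml:
#!/usr/bin/env bash
# Sketch only: fail the release if the tag does not match the chart's appVersion.
set -euo pipefail
RELEASE_TAG_NAME="$1"
CHART_APP_VERSION="$(grep '^appVersion:' charts/gha-runner-scale-set/Chart.yaml | cut -d '"' -f 2)"
if [[ "$RELEASE_TAG_NAME" != *"$CHART_APP_VERSION"* ]]; then
  echo "Release tag '$RELEASE_TAG_NAME' does not match chart appVersion '$CHART_APP_VERSION'" >&2
  exit 1
fi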

View File

@@ -1,4 +1,4 @@
name: Runners
name: Release Runner Images
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
@@ -18,7 +18,6 @@ env:
TARGET_ORG: actions-runner-controller
TARGET_WORKFLOW: release-runners.yaml
DOCKER_VERSION: 20.10.23
RUNNER_CONTAINER_HOOKS_VERSION: 0.2.0
jobs:
build-runners:
@@ -27,10 +26,12 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Get runner version
id: runner_version
id: versions
run: |
version=$(echo -n $(cat runner/VERSION))
echo runner_version=$version >> $GITHUB_OUTPUT
runner_current_version="$(echo -n $(cat runner/VERSION | grep 'RUNNER_VERSION=' | cut -d '=' -f2))"
container_hooks_current_version="$(echo -n $(cat runner/VERSION | grep 'RUNNER_CONTAINER_HOOKS_VERSION=' | cut -d '=' -f2))"
echo runner_version=$runner_current_version >> $GITHUB_OUTPUT
echo container_hooks_version=$container_hooks_current_version >> $GITHUB_OUTPUT
- name: Get Token
id: get_workflow_token
@@ -42,7 +43,8 @@ jobs:
- name: Trigger Build And Push Runner Images To Registries
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
RUNNER_VERSION: ${{ steps.versions.outputs.runner_version }}
CONTAINER_HOOKS_VERSION: ${{ steps.versions.outputs.container_hooks_version }}
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
@@ -51,20 +53,21 @@ jobs:
gh workflow run ${{ env.TARGET_WORKFLOW }} -R ${{ env.TARGET_ORG }}/releases \
-f runner_version=${{ env.RUNNER_VERSION }} \
-f docker_version=${{ env.DOCKER_VERSION }} \
-f runner_container_hooks_version=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }} \
-f runner_container_hooks_version=${{ env.CONTAINER_HOOKS_VERSION }} \
-f sha='${{ github.sha }}' \
-f push_to_registries=${{ env.PUSH_TO_REGISTRIES }}
- name: Job summary
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
RUNNER_VERSION: ${{ steps.versions.outputs.runner_version }}
CONTAINER_HOOKS_VERSION: ${{ steps.versions.outputs.container_hooks_version }}
run: |
echo "The [release-runners.yaml](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/release-runners.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- runner_version: ${{ env.RUNNER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- docker_version: ${{ env.DOCKER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- runner_container_hooks_version: ${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- runner_container_hooks_version: ${{ env.CONTAINER_HOOKS_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- sha: ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
echo "- push_to_registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY

View File

@@ -16,21 +16,34 @@ jobs:
env:
GH_TOKEN: ${{ github.token }}
outputs:
current_version: ${{ steps.versions.outputs.current_version }}
latest_version: ${{ steps.versions.outputs.latest_version }}
runner_current_version: ${{ steps.runner_versions.outputs.runner_current_version }}
runner_latest_version: ${{ steps.runner_versions.outputs.runner_latest_version }}
container_hooks_current_version: ${{ steps.container_hooks_versions.outputs.container_hooks_current_version }}
container_hooks_latest_version: ${{ steps.container_hooks_versions.outputs.container_hooks_latest_version }}
steps:
- uses: actions/checkout@v3
- name: Get current and latest versions
id: versions
- name: Get runner current and latest versions
id: runner_versions
run: |
CURRENT_VERSION=$(echo -n $(cat runner/VERSION))
CURRENT_VERSION="$(echo -n $(cat runner/VERSION | grep 'RUNNER_VERSION=' | cut -d '=' -f2))"
echo "Current version: $CURRENT_VERSION"
echo current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
echo runner_current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
echo runner_latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
- name: Get container-hooks current and latest versions
id: container_hooks_versions
run: |
CURRENT_VERSION="$(echo -n $(cat runner/VERSION | grep 'RUNNER_CONTAINER_HOOKS_VERSION=' | cut -d '=' -f2))"
echo "Current version: $CURRENT_VERSION"
echo container_hooks_current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner-container-hooks | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo container_hooks_latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
# check_pr checks if a PR for the same update already exists. It only runs if
# runner latest version != our current version. If no existing PR is found,
@@ -38,7 +51,7 @@ jobs:
check_pr:
runs-on: ubuntu-latest
needs: check_versions
if: needs.check_versions.outputs.current_version != needs.check_versions.outputs.latest_version
if: needs.check_versions.outputs.runner_current_version != needs.check_versions.outputs.runner_latest_version || needs.check_versions.outputs.container_hooks_current_version != needs.check_versions.outputs.container_hooks_latest_version
outputs:
pr_name: ${{ steps.pr_name.outputs.pr_name }}
env:
@@ -46,16 +59,35 @@ jobs:
steps:
- name: debug
run:
echo ${{ needs.check_versions.outputs.current_version }}
echo ${{ needs.check_versions.outputs.latest_version }}
echo "RUNNER_CURRENT_VERSION=${{ needs.check_versions.outputs.runner_current_version }}"
echo "RUNNER_LATEST_VERSION=${{ needs.check_versions.outputs.runner_latest_version }}"
echo "CONTAINER_HOOKS_CURRENT_VERSION=${{ needs.check_versions.outputs.container_hooks_current_version }}"
echo "CONTAINER_HOOKS_LATEST_VERSION=${{ needs.check_versions.outputs.container_hooks_latest_version }}"
- uses: actions/checkout@v3
- name: PR Name
id: pr_name
env:
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
RUNNER_CURRENT_VERSION: ${{ needs.check_versions.outputs.runner_current_version }}
RUNNER_LATEST_VERSION: ${{ needs.check_versions.outputs.runner_latest_version }}
CONTAINER_HOOKS_CURRENT_VERSION: ${{ needs.check_versions.outputs.container_hooks_current_version }}
CONTAINER_HOOKS_LATEST_VERSION: ${{ needs.check_versions.outputs.container_hooks_latest_version }}
# Generate a PR name with the following title:
# Updates: runner to v2.304.0 and container-hooks to v0.3.1
run: |
PR_NAME="Update runner to version ${LATEST_VERSION}"
RUNNER_MESSAGE="runner to v${RUNNER_LATEST_VERSION}"
CONTAINER_HOOKS_MESSAGE="container-hooks to v${CONTAINER_HOOKS_LATEST_VERSION}"
PR_NAME="Updates:"
if [ "$RUNNER_CURRENT_VERSION" != "$RUNNER_LATEST_VERSION" ]
then
PR_NAME="$PR_NAME $RUNNER_MESSAGE"
fi
if [ "$CONTAINER_HOOKS_CURRENT_VERSION" != "$CONTAINER_HOOKS_LATEST_VERSION" ]
then
PR_NAME="$PR_NAME $CONTAINER_HOOKS_MESSAGE"
fi
result=$(gh pr list --search "$PR_NAME" --json number --jq ".[].number" --limit 1)
if [ -z "$result" ]
@@ -80,21 +112,29 @@ jobs:
actions: write
env:
GH_TOKEN: ${{ github.token }}
CURRENT_VERSION: ${{ needs.check_versions.outputs.current_version }}
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
RUNNER_CURRENT_VERSION: ${{ needs.check_versions.outputs.runner_current_version }}
RUNNER_LATEST_VERSION: ${{ needs.check_versions.outputs.runner_latest_version }}
CONTAINER_HOOKS_CURRENT_VERSION: ${{ needs.check_versions.outputs.container_hooks_current_version }}
CONTAINER_HOOKS_LATEST_VERSION: ${{ needs.check_versions.outputs.container_hooks_latest_version }}
PR_NAME: ${{ needs.check_pr.outputs.pr_name }}
steps:
- uses: actions/checkout@v3
- name: New branch
run: git checkout -b update-runner-$LATEST_VERSION
run: git checkout -b update-runner-"$(date +%Y-%m-%d)"
- name: Update files
run: |
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/VERSION
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" .github/workflows/e2e-test-linux-vm.yaml
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" runner/VERSION
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" runner/Makefile
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" Makefile
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" runner/VERSION
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" runner/Makefile
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" Makefile
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" test/e2e/e2e_test.go
- name: Commit changes
run: |

View File

@@ -1,60 +0,0 @@
name: Validate ARC
on:
pull_request:
branches:
- master
paths-ignore:
- '**.md'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/publish-canary.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/runners.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/validate-entrypoint.yaml'
- '.github/renovate.*'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
permissions:
contents: read
jobs:
test-controller:
name: Test ARC
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set-up Go
uses: actions/setup-go@v3
with:
go-version: '1.19'
check-latest: false
- uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
- name: Run tests
run: |
make test
- name: Verify manifests are up-to-date
run: |
make manifests
git diff --exit-code

.gitignore
View File

@@ -35,3 +35,5 @@ bin
.DS_STORE
/test-assets
/.tools

View File

@@ -5,7 +5,7 @@ else
endif
DOCKER_USER ?= $(shell echo ${DOCKER_IMAGE_NAME} | cut -d / -f1)
VERSION ?= dev
RUNNER_VERSION ?= 2.303.0
RUNNER_VERSION ?= 2.304.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
@@ -202,7 +202,7 @@ generate: controller-gen
# Run shellcheck on runner scripts
shellcheck: shellcheck-install
$(TOOLS_PATH)/shellcheck --shell bash --source-path runner runner/*.sh
$(TOOLS_PATH)/shellcheck --shell bash --source-path runner runner/*.sh hack/*.sh
docker-buildx:
export DOCKER_CLI_EXPERIMENTAL=enabled ;\

View File

@@ -6,17 +6,14 @@
## People
`actions-runner-controller` is an open-source project currently developed and maintained in collaboration with maintainers @mumoshu and @toast-gear, various [contributors](https://github.com/actions/actions-runner-controller/graphs/contributors), and the [awesome community](https://github.com/actions/actions-runner-controller/discussions), mostly in their spare time.
`actions-runner-controller` is an open-source project currently developed and maintained in collaboration with the GitHub Actions team, external maintainers @mumoshu and @toast-gear, various [contributors](https://github.com/actions/actions-runner-controller/graphs/contributors), and the [awesome community](https://github.com/actions/actions-runner-controller/discussions).
If you think the project is awesome and it's becoming a basis for your important business, consider [sponsoring us](https://github.com/sponsors/actions-runner-controller)!
If you think the project is awesome and is adding value to your business, please consider directly sponsoring [community maintainers](https://github.com/sponsors/actions-runner-controller) and individual contributors via GitHub Sponsors.
If you are already the employer of one of the contributors, sponsoring via GitHub Sponsors might not be an option. Please support them in other ways!
We don't currently have [any sponsors dedicated to this project yet](https://github.com/sponsors/actions-runner-controller).
However, [HelloFresh](https://www.hellofreshgroup.com/en/) has recently started sponsoring @mumoshu for this project along with his other works. A part of their sponsorship will enable @mumoshu to add an E2E test to keep ARC even more reliable on AWS. Thank you for your sponsorship!
[<img src="https://user-images.githubusercontent.com/22009/170898715-07f02941-35ec-418b-8cd4-251b422fa9ac.png" width="219" height="71" />](https://careers.hellofresh.com/)
See [the sponsorship dashboard](https://github.com/sponsors/actions-runner-controller) for the former and the current sponsors.
## Status

View File

@@ -61,6 +61,9 @@ if [ "${tool}" == "helm" ]; then
flags+=( --set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set actionsMetricsServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
fi
if [ "${WATCH_NAMESPACE}" != "" ]; then
flags+=( --set watchNamespace=${WATCH_NAMESPACE} --set singleNamespace=true)
fi
if [ "${CHART_VERSION}" != "" ]; then
flags+=( --version ${CHART_VERSION})
fi
@@ -69,6 +72,9 @@ if [ "${tool}" == "helm" ]; then
flags+=( --set githubWebhookServer.logFormat=${LOG_FORMAT})
flags+=( --set actionsMetricsServer.logFormat=${LOG_FORMAT})
fi
if [ "${ADMISSION_WEBHOOKS_TIMEOUT}" != "" ]; then
flags+=( --set admissionWebHooks.timeoutSeconds=${ADMISSION_WEBHOOKS_TIMEOUT})
fi
if [ -n "${CREATE_SECRETS_USING_HELM}" ]; then
if [ -z "${WEBHOOK_GITHUB_TOKEN}" ]; then
echo 'Failed deploying secret "actions-metrics-server" using helm. Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
@@ -77,6 +83,10 @@ if [ "${tool}" == "helm" ]; then
flags+=( --set actionsMetricsServer.secret.create=true)
flags+=( --set actionsMetricsServer.secret.github_token=${WEBHOOK_GITHUB_TOKEN})
fi
if [ -n "${GITHUB_WEBHOOK_SERVER_ENV_NAME}" ] && [ -n "${GITHUB_WEBHOOK_SERVER_ENV_VALUE}" ]; then
flags+=( --set githubWebhookServer.env[0].name=${GITHUB_WEBHOOK_SERVER_ENV_NAME})
flags+=( --set githubWebhookServer.env[0].value=${GITHUB_WEBHOOK_SERVER_ENV_VALUE})
fi
set -vx
@@ -92,6 +102,7 @@ if [ "${tool}" == "helm" ]; then
--set githubWebhookServer.podAnnotations.test-id=${TEST_ID} \
--set actionsMetricsServer.podAnnotations.test-id=${TEST_ID} \
${flags[@]} --set image.imagePullPolicy=${IMAGE_PULL_POLICY} \
--set image.dindSidecarRepositoryAndTag=${DIND_SIDECAR_REPOSITORY_AND_TAG} \
-f ${VALUES_FILE}
set +v
# To prevent `CustomResourceDefinition.apiextensions.k8s.io "runners.actions.summerwind.dev" is invalid: metadata.annotations: Too long: must have at most 262144 bytes`

View File

@@ -6,6 +6,10 @@ OP=${OP:-apply}
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
# See https://github.com/actions/actions-runner-controller/issues/2123
kubectl delete secret generic docker-config || :
kubectl create secret generic docker-config --from-file .dockerconfigjson=<(jq -M 'del(.aliases)' $HOME/.docker/config.json) --type=kubernetes.io/dockerconfigjson || :
cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=${RUNNER_NAMESPACE} envsubst | kubectl apply -f -
if [ -n "${TEST_REPO}" ]; then

View File

@@ -95,6 +95,24 @@ spec:
# that part is created by dockerd.
mountPath: /home/runner/.local
readOnly: false
# See https://github.com/actions/actions-runner-controller/issues/2123
# Be sure to omit the "aliases" field from the config.json.
# Otherwise you may encounter nasty errors like:
# $ docker build
# docker: 'buildx' is not a docker command.
# See 'docker --help'
# due to the incompatibility between your host docker config.json and the runner environment.
# That is, your host docker config.json might contain this:
# "aliases": {
# "builder": "buildx"
# }
# And this results in the above error when the runner does not have buildx installed yet.
- name: docker-config
mountPath: /home/runner/.docker/config.json
subPath: config.json
readOnly: true
- name: docker-config-root
mountPath: /home/runner/.docker
volumes:
- name: rootless-dind-work-dir
ephemeral:
@@ -105,6 +123,15 @@ spec:
resources:
requests:
storage: 3Gi
- name: docker-config
# Refer to .dockerconfigjson/.docker/config.json
secret:
secretName: docker-config
items:
- key: .dockerconfigjson
path: config.json
- name: docker-config-root
emptyDir: {}
#
# Non-standard working directory
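For reference, the docker-config secret mounted above can be created with the aliases field stripped, mirroring the command used in the acceptance script earlier in this comparison:
# recreate the secret from the host config.json, dropping the problematic "aliases" key
kubectl delete secret docker-config --ignore-not-found
kubectl create secret generic docker-config \
  --type=kubernetes.io/dockerconfigjson \
  --from-file .dockerconfigjson=<(jq -M 'del(.aliases)' $HOME/.docker/config.json)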

View File

@@ -52,6 +52,9 @@ type AutoscalingListenerSpec struct {
// Required
Image string `json:"image,omitempty"`
// Required
ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
// Required
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`

View File

@@ -77,6 +77,11 @@ type RunnerDeploymentStatus struct {
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=rdeploy
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.group",name=Group,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.updatedReplicas",name=Up-To-Date,type=number

View File

@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.22.0
version: 0.23.3
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.27.0
appVersion: 0.27.4
home: https://github.com/actions/actions-runner-controller

View File

@@ -46,7 +46,7 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `metrics.proxy.image.repository` | The "repository/image" of the kube-proxy container | quay.io/brancz/kube-rbac-proxy |
| `metrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.10.0 |
| `metrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.13.1 |
| `metrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
| `imagePullSecrets` | Specifies the secret to be used when pulling the controller pod containers | |
| `fullnameOverride` | Override the full resource names | |
@@ -102,8 +102,11 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `githubWebhookServer.tolerations` | Set the githubWebhookServer pod tolerations | |
| `githubWebhookServer.affinity` | Set the githubWebhookServer pod affinity rules | |
| `githubWebhookServer.priorityClassName` | Set the githubWebhookServer pod priorityClassName | |
| `githubWebhookServer.terminationGracePeriodSeconds` | Set the githubWebhookServer pod terminationGracePeriodSeconds. Useful when using preStop hooks to drain/sleep. | `10` |
| `githubWebhookServer.lifecycle` | Set the githubWebhookServer pod lifecycle hooks | `{}` |
| `githubWebhookServer.service.type` | Set githubWebhookServer service type | |
| `githubWebhookServer.service.ports` | Set githubWebhookServer service ports | `[{"port":80, "targetPort":"http", "protocol":"TCP", "name":"http"}]` |
| `githubWebhookServer.service.loadBalancerSourceRanges` | Set githubWebhookServer loadBalancerSourceRanges for restricting loadBalancer type services | `[]` |
| `githubWebhookServer.ingress.enabled` | Deploy an ingress kind for the githubWebhookServer | false |
| `githubWebhookServer.ingress.annotations` | Set annotations for the ingress kind | |
| `githubWebhookServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
@@ -115,9 +118,9 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `actionsMetricsServer.logLevel` | Set the log level of the actionsMetricsServer container | |
| `actionsMetricsServer.logFormat` | Set the log format of the actionsMetricsServer controller. Valid options are "text" and "json" | text |
| `actionsMetricsServer.enabled` | Deploy the actions metrics server pod | false |
| `actionsMetricsServer.secret.enabled` | Passes the webhook secret to the github-webhook-server | false |
| `actionsMetricsServer.secret.enabled` | Passes the webhook secret to the actions-metrics-server | false |
| `actionsMetricsServer.secret.create` | Deploy the webhook secret | false |
| `actionsMetricsServer.secret.name` | Set the name of the webhook secret | github-webhook-server |
| `actionsMetricsServer.secret.name` | Set the name of the webhook secret | actions-metrics-server |
| `actionsMetricsServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `actionsMetricsServer.imagePullSecrets` | Specifies the secret to be used when pulling the actionsMetricsServer pod containers | |
| `actionsMetricsServer.nameOverride` | Override the resource name prefix | |
@@ -135,8 +138,11 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `actionsMetricsServer.tolerations` | Set the actionsMetricsServer pod tolerations | |
| `actionsMetricsServer.affinity` | Set the actionsMetricsServer pod affinity rules | |
| `actionsMetricsServer.priorityClassName` | Set the actionsMetricsServer pod priorityClassName | |
| `actionsMetricsServer.terminationGracePeriodSeconds` | Set the actionsMetricsServer pod terminationGracePeriodSeconds. Useful when using preStop hooks to drain/sleep. | `10` |
| `actionsMetricsServer.lifecycle` | Set the actionsMetricsServer pod lifecycle hooks | `{}` |
| `actionsMetricsServer.service.type` | Set actionsMetricsServer service type | |
| `actionsMetricsServer.service.ports` | Set actionsMetricsServer service ports | `[{"port":80, "targetPort":"http", "protocol":"TCP", "name":"http"}]` |
| `actionsMetricsServer.service.loadBalancerSourceRanges` | Set actionsMetricsServer loadBalancerSourceRanges for restricting loadBalancer type services | `[]` |
| `actionsMetricsServer.ingress.enabled` | Deploy an ingress kind for the actionsMetricsServer | false |
| `actionsMetricsServer.ingress.annotations` | Set annotations for the ingress kind | |
| `actionsMetricsServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
@@ -147,5 +153,5 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `actionsMetrics.port` | Set port of actions metrics service | 8443 |
| `actionsMetrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `actionsMetrics.proxy.image.repository` | The "repository/image" of the kube-proxy container | quay.io/brancz/kube-rbac-proxy |
| `actionsMetrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.10.0 |
| `actionsMetrics.proxy.image.tag` | The tag of the kube-proxy image to use when pulling the container | v0.13.1 |
| `actionsMetrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |

View File

@@ -17,6 +17,21 @@ spec:
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.template.spec.enterprise
name: Enterprise
type: string
- jsonPath: .spec.template.spec.organization
name: Organization
type: string
- jsonPath: .spec.template.spec.repository
name: Repository
type: string
- jsonPath: .spec.template.spec.group
name: Group
type: string
- jsonPath: .spec.template.spec.labels
name: Labels
type: string
- jsonPath: .spec.replicas
name: Desired
type: number

View File

@@ -50,6 +50,12 @@ spec:
{{- end }}
command:
- "/actions-metrics-server"
{{- if .Values.actionsMetricsServer.lifecycle }}
{{- with .Values.actionsMetricsServer.lifecycle }}
lifecycle:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
@@ -142,7 +148,7 @@ spec:
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
terminationGracePeriodSeconds: {{ .Values.actionsMetricsServer.terminationGracePeriodSeconds }}
{{- with .Values.actionsMetricsServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

View File

@@ -0,0 +1,90 @@
{{- if .Values.actionsMetricsServer.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller-actions-metrics-server.roleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnersets
verbs:
- get
- list
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/status
verbs:
- get
- patch
- update
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
{{- end }}

View File

@@ -0,0 +1,14 @@
{{- if .Values.actionsMetricsServer.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.roleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller-actions-metrics-server.roleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller-actions-metrics-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -5,7 +5,7 @@ metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 4 }}
{{- if .Values.actionsMetricsServer.service.annotations }}
annotations:
{{ toYaml .Values.actionsMetricsServer.service.annotations | nindent 4 }}
@@ -23,4 +23,10 @@ spec:
{{- end }}
selector:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 4 }}
{{- if .Values.actionsMetricsServer.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- range $ip := .Values.actionsMetricsServer.service.loadBalancerSourceRanges }}
- {{ $ip -}}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -117,10 +117,14 @@ spec:
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- if kindIs "slice" .Values.githubWebhookServer.env }}
{{- toYaml .Values.githubWebhookServer.env | nindent 8 }}
{{- else }}
{{- range $key, $val := .Values.githubWebhookServer.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: github-webhook-server
imagePullPolicy: {{ .Values.image.pullPolicy }}

View File

@@ -250,14 +250,6 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
{{- if .Values.runner.statusUpdateHook.enabled }}
- apiGroups:
- ""
@@ -311,11 +303,4 @@ rules:
- list
- create
- delete
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
{{- end }}

View File

@@ -0,0 +1,21 @@
apiVersion: rbac.authorization.k8s.io/v1
{{- if .Values.scope.singleNamespace }}
kind: RoleBinding
{{- else }}
kind: ClusterRoleBinding
{{- end }}
metadata:
name: {{ include "actions-runner-controller.managerRoleName" . }}-secrets
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
{{- if .Values.scope.singleNamespace }}
kind: Role
{{- else }}
kind: ClusterRole
{{- end }}
name: {{ include "actions-runner-controller.managerRoleName" . }}-secrets
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
{{- if .Values.scope.singleNamespace }}
kind: Role
{{- else }}
kind: ClusterRole
{{- end }}
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.managerRoleName" . }}-secrets
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
{{- if .Values.rbac.allowGrantingKubernetesContainerModePermissions }}
{{/* These permissions are required by ARC to create RBAC resources for the runner pod to use the kubernetes container mode. */}}
{{/* See https://github.com/actions/actions-runner-controller/pull/1268/files#r917331632 */}}
- create
- delete
{{- end }}
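Assuming these values keys are exposed as shown in the template condition above, the extra create/delete verbs could presumably be enabled at install time with a flag like (release and chart references are placeholders):
# hypothetical: grant ARC permission to create/delete secrets for kubernetes container mode
helm upgrade --install <release-name> <chart> \
  --set rbac.allowGrantingKubernetesContainerModePermissions=true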

View File

@@ -44,6 +44,7 @@ webhooks:
resources:
- runners
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -74,6 +75,7 @@ webhooks:
resources:
- runnerdeployments
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -104,6 +106,7 @@ webhooks:
resources:
- runnerreplicasets
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -136,6 +139,7 @@ webhooks:
objectSelector:
matchLabels:
"actions-runner-controller/inject-registration-token": "true"
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
@@ -177,6 +181,7 @@ webhooks:
resources:
- runners
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -207,6 +212,7 @@ webhooks:
resources:
- runnerdeployments
sideEffects: None
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
@@ -238,6 +244,7 @@ webhooks:
- runnerreplicasets
sideEffects: None
{{ if not (or (hasKey .Values.admissionWebHooks "caBundle") .Values.certManagerEnabled) }}
timeoutSeconds: {{ .Values.admissionWebHooks.timeoutSeconds | default 10}}
---
apiVersion: v1
kind: Secret

View File

@@ -47,6 +47,7 @@ authSecret:
#github_basicauth_username: ""
#github_basicauth_password: ""
# http(s) should be specified for dockerRegistryMirror, e.g.: dockerRegistryMirror="https://<your-docker-registry-mirror>"
dockerRegistryMirror: ""
image:
repository: "summerwind/actions-runner-controller"
@@ -279,6 +280,19 @@ githubWebhookServer:
# queueLimit: 100
terminationGracePeriodSeconds: 10
lifecycle: {}
# specify additional environment variables for the webhook server pod.
# It's possible to specify either key value pairs e.g.:
# my_env_var: "some value"
# my_other_env_var: "other value"
# or a list of complete environment variable definitions e.g.:
# - name: GITHUB_WEBHOOK_SECRET_TOKEN
# valueFrom:
# secretKeyRef:
# key: GITHUB_WEBHOOK_SECRET_TOKEN
# name: prod-gha-controller-webhook-token
# optional: true
env: {}
actionsMetrics:
serviceAnnotations: {}
@@ -346,6 +360,7 @@ actionsMetricsServer:
protocol: TCP
name: http
#nodePort: someFixedPortForUseWithTerraformCdkCfnEtc
loadBalancerSourceRanges: []
ingress:
enabled: false
ingressClassName: ""
@@ -375,4 +390,5 @@ actionsMetricsServer:
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
terminationGracePeriodSeconds: 10
lifecycle: {}

View File

@@ -15,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.3.0"
appVersion: "0.4.0"
home: https://github.com/actions/actions-runner-controller

View File

@@ -80,6 +80,9 @@ spec:
image:
description: Required
type: string
imagePullPolicy:
description: Required
type: string
imagePullSecrets:
description: Required
items:

View File

@@ -68,14 +68,11 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY
value: "{{ .Values.image.pullPolicy | default "IfNotPresent" }}"
{{- with .Values.env }}
{{- if kindIs "slice" .Values.env }}
{{- toYaml .Values.env | nindent 8 }}
{{- else }}
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- if kindIs "slice" . }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- with .Values.resources }}

View File

@@ -78,6 +78,13 @@ rules:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
@@ -133,4 +140,5 @@ rules:
verbs:
- list
- watch
- patch
{{- end }}

View File

@@ -52,6 +52,13 @@ rules:
- get
- patch
- update
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:
@@ -114,4 +121,5 @@ rules:
verbs:
- list
- watch
- patch
{{- end }}

View File

@@ -169,7 +169,7 @@ func TestTemplate_CreateManagerClusterRole(t *testing.T) {
assert.Empty(t, managerClusterRole.Namespace, "ClusterRole should not have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-role", managerClusterRole.Name)
assert.Equal(t, 15, len(managerClusterRole.Rules))
assert.Equal(t, 16, len(managerClusterRole.Rules))
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role.yaml"})
assert.ErrorContains(t, err, "could not find template templates/manager_single_namespace_controller_role.yaml in chart", "We should get an error because the template should be skipped")
@@ -349,13 +349,16 @@ func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 3)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "IfNotPresent", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Resources)
assert.Nil(t, deployment.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
@@ -390,6 +393,8 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
"imagePullSecrets[0].name": "dockerhub",
"nameOverride": "gha-runner-scale-set-controller-override",
"fullnameOverride": "gha-runner-scale-set-controller-fullname-override",
"env[0].name": "ENV_VAR_NAME_1",
"env[0].value": "ENV_VAR_VALUE_1",
"serviceAccount.name": "gha-runner-scale-set-controller-sa",
"podAnnotations.foo": "bar",
"podSecurityContext.fsGroup": "1000",
@@ -432,6 +437,9 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
assert.Equal(t, "bar", deployment.Spec.Template.Annotations["foo"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "ENV_VAR_VALUE_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 1)
assert.Equal(t, "dockerhub", deployment.Spec.Template.Spec.ImagePullSecrets[0].Name)
assert.Equal(t, "gha-runner-scale-set-controller-sa", deployment.Spec.Template.Spec.ServiceAccountName)
@@ -467,10 +475,16 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 4)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "Always", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "ENV_VAR_VALUE_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
@@ -690,13 +704,16 @@ func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--watch-single-namespace=demo", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 2)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 3)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "IfNotPresent", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Resources)
assert.Nil(t, deployment.Spec.Template.Spec.Containers[0].SecurityContext)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].VolumeMounts, 1)
@@ -704,6 +721,52 @@ func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
assert.Equal(t, "/tmp", deployment.Spec.Template.Spec.Containers[0].VolumeMounts[0].MountPath)
}
func TestTemplate_ControllerContainerEnvironmentVariables(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"env[0].Name": "ENV_VAR_NAME_1",
"env[0].Value": "ENV_VAR_VALUE_1",
"env[1].Name": "ENV_VAR_NAME_2",
"env[1].ValueFrom.SecretKeyRef.Key": "ENV_VAR_NAME_2",
"env[1].ValueFrom.SecretKeyRef.Name": "secret-name",
"env[1].ValueFrom.SecretKeyRef.Optional": "true",
"env[2].Name": "ENV_VAR_NAME_3",
"env[2].Value": "",
"env[3].Name": "ENV_VAR_NAME_4",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 7)
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "ENV_VAR_VALUE_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Equal(t, "ENV_VAR_NAME_2", deployment.Spec.Template.Spec.Containers[0].Env[4].Name)
assert.Equal(t, "secret-name", deployment.Spec.Template.Spec.Containers[0].Env[4].ValueFrom.SecretKeyRef.Name)
assert.Equal(t, "ENV_VAR_NAME_2", deployment.Spec.Template.Spec.Containers[0].Env[4].ValueFrom.SecretKeyRef.Key)
assert.True(t, *deployment.Spec.Template.Spec.Containers[0].Env[4].ValueFrom.SecretKeyRef.Optional)
assert.Equal(t, "ENV_VAR_NAME_3", deployment.Spec.Template.Spec.Containers[0].Env[5].Name)
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Env[5].Value)
assert.Equal(t, "ENV_VAR_NAME_4", deployment.Spec.Template.Spec.Containers[0].Env[6].Name)
assert.Empty(t, deployment.Spec.Template.Spec.Containers[0].Env[6].ValueFrom)
}
func TestTemplate_WatchSingleNamespace_NotCreateManagerClusterRole(t *testing.T) {
t.Parallel()
@@ -780,7 +843,7 @@ func TestTemplate_CreateManagerSingleNamespaceRole(t *testing.T) {
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceWatchRole.Name)
assert.Equal(t, "demo", managerSingleNamespaceWatchRole.Namespace)
assert.Equal(t, 13, len(managerSingleNamespaceWatchRole.Rules))
assert.Equal(t, 14, len(managerSingleNamespaceWatchRole.Rules))
}
func TestTemplate_ManagerSingleNamespaceRoleBinding(t *testing.T) {

View File

@@ -18,6 +18,17 @@ imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env:
## Define environment variables for the controller pod
# - name: "ENV_VAR_NAME_1"
# value: "ENV_VAR_VALUE_1"
# - name: "ENV_VAR_NAME_2"
# valueFrom:
# secretKeyRef:
# key: ENV_VAR_NAME_2
# name: secret-name
# optional: true
serviceAccount:
# Specifies whether a service account should be created for running the controller pod
create: true
@@ -31,27 +42,27 @@ serviceAccount:
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## We usually recommend not to specify default resources and to leave this as a conscious
## choice for the user. This also increases chances charts run on environments with little
## resources, such as Minikube. If you do want to specify resources, uncomment the following
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
@@ -69,6 +80,6 @@ flags:
# Defaults to "debug".
logLevel: "debug"
# Restricts the controller to only watch resources in the desired namespace.
# Defaults to watch all namespaces when unset.
## Restricts the controller to only watch resources in the desired namespace.
## Defaults to watch all namespaces when unset.
# watchSingleNamespace: ""

View File

@@ -15,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.3.0"
appVersion: "0.4.0"
home: https://github.com/actions/dev-arc

View File

@@ -11,17 +11,9 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gha-runner-scale-set.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
@@ -40,6 +32,9 @@ helm.sh/chart: {{ include "gha-runner-scale-set.chart" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/part-of: gha-runner-scale-set
actions.github.com/scale-set-name: {{ .Release.Name }}
actions.github.com/scale-set-namespace: {{ .Release.Namespace }}
{{- end }}
{{/*
@@ -70,6 +65,10 @@ app.kubernetes.io/instance: {{ .Release.Name }}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-role
{{- end }}
{{- define "gha-runner-scale-set.kubeModeRoleBindingName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-role-binding
{{- end }}
{{- define "gha-runner-scale-set.kubeModeServiceAccountName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-service-account
{{- end }}
@@ -432,7 +431,7 @@ volumeMounts:
{{- include "gha-runner-scale-set.fullname" . }}-manager-role
{{- end }}
{{- define "gha-runner-scale-set.managerRoleBinding" -}}
{{- define "gha-runner-scale-set.managerRoleBindingName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-manager-role-binding
{{- end }}

View File

@@ -10,7 +10,23 @@ metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/component: "autoscaling-runner-set"
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
annotations:
{{- $containerMode := .Values.containerMode }}
{{- if not (kindIs "string" .Values.githubConfigSecret) }}
actions.github.com/cleanup-github-secret-name: {{ include "gha-runner-scale-set.githubsecret" . }}
{{- end }}
actions.github.com/cleanup-manager-role-binding: {{ include "gha-runner-scale-set.managerRoleBindingName" . }}
actions.github.com/cleanup-manager-role-name: {{ include "gha-runner-scale-set.managerRoleName" . }}
{{- if and $containerMode (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
actions.github.com/cleanup-kubernetes-mode-role-binding-name: {{ include "gha-runner-scale-set.kubeModeRoleBindingName" . }}
actions.github.com/cleanup-kubernetes-mode-role-name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
actions.github.com/cleanup-kubernetes-mode-service-account-name: {{ include "gha-runner-scale-set.kubeModeServiceAccountName" . }}
{{- end }}
{{- if and (ne $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
actions.github.com/cleanup-no-permission-service-account-name: {{ include "gha-runner-scale-set.noPermissionServiceAccountName" . }}
{{- end }}
spec:
githubConfigUrl: {{ required ".Values.githubConfigUrl is required" (trimSuffix "/" .Values.githubConfigUrl) }}
githubConfigSecret: {{ include "gha-runner-scale-set.githubsecret" . }}
@@ -90,14 +106,15 @@ spec:
{{ $key }}: {{ $val | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- if eq .Values.containerMode.type "kubernetes" }}
{{- $containerMode := .Values.containerMode }}
{{- if eq $containerMode.type "kubernetes" }}
serviceAccountName: {{ default (include "gha-runner-scale-set.kubeModeServiceAccountName" .) .Values.template.spec.serviceAccountName }}
{{- else }}
serviceAccountName: {{ default (include "gha-runner-scale-set.noPermissionServiceAccountName" .) .Values.template.spec.serviceAccountName }}
{{- end }}
{{- if or .Values.template.spec.initContainers (eq .Values.containerMode.type "dind") }}
{{- if or .Values.template.spec.initContainers (eq $containerMode.type "dind") }}
initContainers:
{{- if eq .Values.containerMode.type "dind" }}
{{- if eq $containerMode.type "dind" }}
- name: init-dind-externals
{{- include "gha-runner-scale-set.dind-init-container" . | nindent 8 }}
{{- end }}
@@ -106,13 +123,13 @@ spec:
{{- end }}
{{- end }}
containers:
{{- if eq .Values.containerMode.type "dind" }}
{{- if eq $containerMode.type "dind" }}
- name: runner
{{- include "gha-runner-scale-set.dind-runner-container" . | nindent 8 }}
- name: dind
{{- include "gha-runner-scale-set.dind-container" . | nindent 8 }}
{{- include "gha-runner-scale-set.non-runner-non-dind-containers" . | nindent 6 }}
{{- else if eq .Values.containerMode.type "kubernetes" }}
{{- else if eq $containerMode.type "kubernetes" }}
- name: runner
{{- include "gha-runner-scale-set.kubernetes-mode-runner-container" . | nindent 8 }}
{{- include "gha-runner-scale-set.non-runner-containers" . | nindent 6 }}
@@ -120,16 +137,16 @@ spec:
{{- include "gha-runner-scale-set.default-mode-runner-containers" . | nindent 6 }}
{{- end }}
{{- $tlsConfig := (default (dict) .Values.githubServerTLS) }}
{{- if or .Values.template.spec.volumes (eq .Values.containerMode.type "dind") (eq .Values.containerMode.type "kubernetes") $tlsConfig.runnerMountPath }}
{{- if or .Values.template.spec.volumes (eq $containerMode.type "dind") (eq $containerMode.type "kubernetes") $tlsConfig.runnerMountPath }}
volumes:
{{- if $tlsConfig.runnerMountPath }}
{{- include "gha-runner-scale-set.tls-volume" $tlsConfig | nindent 6 }}
{{- end }}
{{- if eq .Values.containerMode.type "dind" }}
{{- if eq $containerMode.type "dind" }}
{{- include "gha-runner-scale-set.dind-volume" . | nindent 6 }}
{{- include "gha-runner-scale-set.dind-work-volume" . | nindent 6 }}
{{- include "gha-runner-scale-set.non-work-volumes" . | nindent 6 }}
{{- else if eq .Values.containerMode.type "kubernetes" }}
{{- else if eq $containerMode.type "kubernetes" }}
{{- include "gha-runner-scale-set.kubernetes-mode-work-volume" . | nindent 6 }}
{{- include "gha-runner-scale-set.non-work-volumes" . | nindent 6 }}
{{- else }}

View File

@@ -7,7 +7,7 @@ metadata:
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
finalizers:
- actions.github.com/secret-protection
- actions.github.com/cleanup-protection
data:
{{- $hasToken := false }}
{{- $hasAppId := false }}

View File

@@ -1,10 +1,13 @@
{{- if and (eq .Values.containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
{{- $containerMode := .Values.containerMode }}
{{- if and (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
# default permission for runner pod service account in kubernetes mode (container hook)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
namespace: {{ .Release.Namespace }}
finalizers:
- actions.github.com/cleanup-protection
rules:
- apiGroups: [""]
resources: ["pods"]

View File

@@ -1,9 +1,12 @@
{{- if and (eq .Values.containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
{{- $containerMode := .Values.containerMode }}
{{- if and (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set.kubeModeRoleName" . }}
name: {{ include "gha-runner-scale-set.kubeModeRoleBindingName" . }}
namespace: {{ .Release.Namespace }}
finalizers:
- actions.github.com/cleanup-protection
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role

View File

@@ -1,9 +1,12 @@
{{- if and (eq .Values.containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
{{- $containerMode := .Values.containerMode }}
{{- if and (eq $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "gha-runner-scale-set.kubeModeServiceAccountName" . }}
namespace: {{ .Release.Namespace }}
finalizers:
- actions.github.com/cleanup-protection
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
{{- end }}

View File

@@ -3,6 +3,11 @@ kind: Role
metadata:
name: {{ include "gha-runner-scale-set.managerRoleName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
app.kubernetes.io/component: manager-role
finalizers:
- actions.github.com/cleanup-protection
rules:
- apiGroups:
- ""
@@ -29,6 +34,17 @@ rules:
- list
- patch
- update
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- list
- patch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:

View File

@@ -1,8 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set.managerRoleBinding" . }}
name: {{ include "gha-runner-scale-set.managerRoleBindingName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
app.kubernetes.io/component: manager-role-binding
finalizers:
- actions.github.com/cleanup-protection
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role

View File

@@ -1,4 +1,5 @@
{{- if and (ne .Values.containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
{{- $containerMode := .Values.containerMode }}
{{- if and (ne $containerMode.type "kubernetes") (not .Values.template.spec.serviceAccountName) }}
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -6,4 +7,6 @@ metadata:
namespace: {{ .Release.Namespace }}
labels:
{{- include "gha-runner-scale-set.labels" . | nindent 4 }}
finalizers:
- actions.github.com/cleanup-protection
{{- end }}

View File

@@ -1,11 +1,13 @@
package tests
import (
"fmt"
"path/filepath"
"strings"
"testing"
v1alpha1 "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
actionsgithubcom "github.com/actions/actions-runner-controller/controllers/actions.github.com"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/random"
@@ -43,7 +45,7 @@ func TestTemplateRenderedGitHubSecretWithGitHubToken(t *testing.T) {
assert.Equal(t, namespaceName, githubSecret.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-github-secret", githubSecret.Name)
assert.Equal(t, "gh_token12345", string(githubSecret.Data["github_token"]))
assert.Equal(t, "actions.github.com/secret-protection", githubSecret.Finalizers[0])
assert.Equal(t, "actions.github.com/cleanup-protection", githubSecret.Finalizers[0])
}
func TestTemplateRenderedGitHubSecretWithGitHubApp(t *testing.T) {
@@ -188,6 +190,7 @@ func TestTemplateRenderedSetServiceAccountToNoPermission(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, "test-runners-gha-runner-scale-set-no-permission-service-account", ars.Spec.Template.Spec.ServiceAccountName)
assert.Empty(t, ars.Annotations[actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName]) // no finalizer protections in place
}
func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
@@ -217,6 +220,7 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-service-account", serviceAccount.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", serviceAccount.Finalizers[0])
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/kube_mode_role.yaml"})
var role rbacv1.Role
@@ -224,6 +228,9 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
assert.Equal(t, namespaceName, role.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-role", role.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", role.Finalizers[0])
assert.Len(t, role.Rules, 5, "kube mode role should have 5 rules")
assert.Equal(t, "pods", role.Rules[0].Resources[0])
assert.Equal(t, "pods/exec", role.Rules[1].Resources[0])
@@ -236,18 +243,21 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &roleBinding)
assert.Equal(t, namespaceName, roleBinding.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-role", roleBinding.Name)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-role-binding", roleBinding.Name)
assert.Len(t, roleBinding.Subjects, 1)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-service-account", roleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, roleBinding.Subjects[0].Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-role", roleBinding.RoleRef.Name)
assert.Equal(t, "Role", roleBinding.RoleRef.Kind)
assert.Equal(t, "actions.github.com/cleanup-protection", serviceAccount.Finalizers[0])
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-service-account", ars.Spec.Template.Spec.ServiceAccountName)
expectedServiceAccountName := "test-runners-gha-runner-scale-set-kube-mode-service-account"
assert.Equal(t, expectedServiceAccountName, ars.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, expectedServiceAccountName, ars.Annotations[actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName])
}
func TestTemplateRenderedUserProvideSetServiceAccount(t *testing.T) {
@@ -279,6 +289,7 @@ func TestTemplateRenderedUserProvideSetServiceAccount(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, "test-service-account", ars.Spec.Template.Spec.ServiceAccountName)
assert.Empty(t, ars.Annotations[actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName])
}
func TestTemplateRenderedAutoScalingRunnerSet(t *testing.T) {
@@ -311,6 +322,10 @@ func TestTemplateRenderedAutoScalingRunnerSet(t *testing.T) {
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "autoscaling-runner-set", ars.Labels["app.kubernetes.io/component"])
assert.NotEmpty(t, ars.Labels["app.kubernetes.io/version"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-gha-runner-scale-set-github-secret", ars.Spec.GitHubConfigSecret)
@@ -1454,7 +1469,11 @@ func TestTemplate_CreateManagerRole(t *testing.T) {
assert.Equal(t, namespaceName, managerRole.Namespace, "namespace should match the namespace of the Helm release")
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role", managerRole.Name)
assert.Equal(t, 5, len(managerRole.Rules))
assert.Equal(t, "actions.github.com/cleanup-protection", managerRole.Finalizers[0])
assert.Equal(t, 6, len(managerRole.Rules))
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
}
func TestTemplate_CreateManagerRole_UseConfigMaps(t *testing.T) {
@@ -1485,8 +1504,9 @@ func TestTemplate_CreateManagerRole_UseConfigMaps(t *testing.T) {
assert.Equal(t, namespaceName, managerRole.Namespace, "namespace should match the namespace of the Helm release")
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role", managerRole.Name)
assert.Equal(t, 6, len(managerRole.Rules))
assert.Equal(t, "configmaps", managerRole.Rules[5].Resources[0])
assert.Equal(t, "actions.github.com/cleanup-protection", managerRole.Finalizers[0])
assert.Equal(t, 7, len(managerRole.Rules))
assert.Equal(t, "configmaps", managerRole.Rules[6].Resources[0])
}
func TestTemplate_CreateManagerRoleBinding(t *testing.T) {
@@ -1517,6 +1537,7 @@ func TestTemplate_CreateManagerRoleBinding(t *testing.T) {
assert.Equal(t, namespaceName, managerRoleBinding.Namespace, "namespace should match the namespace of the Helm release")
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role-binding", managerRoleBinding.Name)
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role", managerRoleBinding.RoleRef.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", managerRoleBinding.Finalizers[0])
assert.Equal(t, "arc", managerRoleBinding.Subjects[0].Name)
assert.Equal(t, "arc-system", managerRoleBinding.Subjects[0].Namespace)
}
@@ -1688,3 +1709,103 @@ func TestTemplateRenderedAutoScalingRunnerSet_KubeModeMergePodSpec(t *testing.T)
assert.Equal(t, "others", ars.Spec.Template.Spec.Containers[0].VolumeMounts[1].Name, "VolumeMount name should be others")
assert.Equal(t, "/others", ars.Spec.Template.Spec.Containers[0].VolumeMounts[1].MountPath, "VolumeMount mountPath should be /others")
}
func TestTemplateRenderedAutoscalingRunnerSetAnnotation_GitHubSecret(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
annotationExpectedTests := map[string]*helm.Options{
"GitHub token": {
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
},
"GitHub app": {
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "10",
"githubConfigSecret.github_app_installation_id": "100",
"githubConfigSecret.github_app_private_key": "private_key",
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
},
}
for name, options := range annotationExpectedTests {
t.Run("Annotation set: "+name, func(t *testing.T) {
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var autoscalingRunnerSet v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &autoscalingRunnerSet)
assert.NotEmpty(t, autoscalingRunnerSet.Annotations[actionsgithubcom.AnnotationKeyGitHubSecretName])
})
}
t.Run("Annotation should not be set", func(t *testing.T) {
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secret",
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var autoscalingRunnerSet v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &autoscalingRunnerSet)
assert.Empty(t, autoscalingRunnerSet.Annotations[actionsgithubcom.AnnotationKeyGitHubSecretName])
})
}
func TestTemplateRenderedAutoscalingRunnerSetAnnotation_KubernetesModeCleanup(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
"containerMode.type": "kubernetes",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var autoscalingRunnerSet v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &autoscalingRunnerSet)
annotationValues := map[string]string{
actionsgithubcom.AnnotationKeyGitHubSecretName: "test-runners-gha-runner-scale-set-github-secret",
actionsgithubcom.AnnotationKeyManagerRoleName: "test-runners-gha-runner-scale-set-manager-role",
actionsgithubcom.AnnotationKeyManagerRoleBindingName: "test-runners-gha-runner-scale-set-manager-role-binding",
actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName: "test-runners-gha-runner-scale-set-kube-mode-service-account",
actionsgithubcom.AnnotationKeyKubernetesModeRoleName: "test-runners-gha-runner-scale-set-kube-mode-role",
actionsgithubcom.AnnotationKeyKubernetesModeRoleBindingName: "test-runners-gha-runner-scale-set-kube-mode-role-binding",
}
for annotation, value := range annotationValues {
assert.Equal(t, value, autoscalingRunnerSet.Annotations[annotation], fmt.Sprintf("Annotation %q does not match the expected value", annotation))
}
}
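Both new tests follow the suite's existing render-and-assert pattern: build helm.Options, render a single template with helm.RenderTemplate, unmarshal the output into the typed API object, and assert on its metadata. A condensed, self-contained sketch of that pattern (the test name is illustrative and not part of this changeset; the terratest calls and chart path are the same ones the suite already uses):

package tests

import (
    "path/filepath"
    "strings"
    "testing"

    v1alpha1 "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
    actionsgithubcom "github.com/actions/actions-runner-controller/controllers/actions.github.com"
    "github.com/gruntwork-io/terratest/modules/helm"
    "github.com/gruntwork-io/terratest/modules/k8s"
    "github.com/gruntwork-io/terratest/modules/random"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

// Illustrative test name; condenses the render-and-assert flow used by the new tests.
func TestSketchRenderedAnnotation(t *testing.T) {
    t.Parallel()

    // Locate the chart on disk, exactly as the suite does.
    helmChartPath, err := filepath.Abs("../../gha-runner-scale-set")
    require.NoError(t, err)

    options := &helm.Options{
        SetValues: map[string]string{
            "githubConfigUrl":                    "https://github.com/actions",
            "githubConfigSecret.github_token":    "gh_token12345",
            "controllerServiceAccount.name":      "arc",
            "controllerServiceAccount.namespace": "arc-system",
        },
        KubectlOptions: k8s.NewKubectlOptions("", "", "test-"+strings.ToLower(random.UniqueId())),
    }

    // Render one template, unmarshal it into the typed API object, then assert on metadata.
    output := helm.RenderTemplate(t, options, helmChartPath, "test-runners", []string{"templates/autoscalingrunnerset.yaml"})
    var ars v1alpha1.AutoscalingRunnerSet
    helm.UnmarshalK8SYaml(t, output, &ars)
    assert.NotEmpty(t, ars.Annotations[actionsgithubcom.AnnotationKeyGitHubSecretName])
}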

View File

@@ -65,19 +65,19 @@ githubConfigSecret:
# certificateFrom:
# configMapKeyRef:
# name: config-map-name
# key: ca.pem
# key: ca.crt
# runnerMountPath: /usr/local/share/ca-certificates/
containerMode:
type: "" ## type can be set to dind or kubernetes
## the following is required when containerMode.type=kubernetes
# kubernetesModeWorkVolumeClaim:
# accessModes: ["ReadWriteOnce"]
# # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
# storageClassName: "dynamic-blob-storage"
# resources:
# requests:
# storage: 1Gi
# containerMode:
# type: "dind" ## type can be set to dind or kubernetes
# ## the following is required when containerMode.type=kubernetes
# kubernetesModeWorkVolumeClaim:
# accessModes: ["ReadWriteOnce"]
# # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
# storageClassName: "dynamic-blob-storage"
# resources:
# requests:
# storage: 1Gi
## template is the PodSpec for each runner Pod
template:

View File

@@ -80,6 +80,9 @@ spec:
image:
description: Required
type: string
imagePullPolicy:
description: Required
type: string
imagePullSecrets:
description: Required
items:

View File

@@ -17,6 +17,21 @@ spec:
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.template.spec.enterprise
name: Enterprise
type: string
- jsonPath: .spec.template.spec.organization
name: Organization
type: string
- jsonPath: .spec.template.spec.repository
name: Repository
type: string
- jsonPath: .spec.template.spec.group
name: Group
type: string
- jsonPath: .spec.template.spec.labels
name: Labels
type: string
- jsonPath: .spec.replicas
name: Desired
type: number

View File

@@ -5,7 +5,7 @@ kind: Kustomization
images:
- name: controller
newName: summerwind/actions-runner-controller
newTag: dev
newTag: latest
replacements:
- path: env-replacement.yaml

View File

@@ -56,6 +56,8 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY
value: IfNotPresent
volumeMounts:
- name: controller-manager
mountPath: "/etc/actions-runner-controller"

View File

@@ -102,6 +102,13 @@ rules:
- patch
- update
- watch
- apiGroups:
- actions.github.com
resources:
- ephemeralrunnersets/finalizers
verbs:
- patch
- update
- apiGroups:
- actions.github.com
resources:

View File

@@ -31,6 +31,6 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `autoscaler.enabled` | Enable the HorizontalRunnerAutoscaler; if it's enabled, the replica count will not be used | true |

| `autoscaler.minReplicas` | Minimum no of replicas | 1 |
| `autoscaler.maxReplicas` | Maximum no of replicas | 5 |
| `autoscaler.scaleDownDelaySecondsAfterScaleOut` | [Anti-Flapping Configuration](https://github.com/actions/actions-runner-controller#anti-flapping-configuration) | 120 |
| `autoscaler.metrics` | [Pull driven scaling](https://github.com/actions/actions-runner-controller#pull-driven-scaling) | default |
| `autoscaler.scaleUpTriggers` | [Webhook driven scaling](https://github.com/actions/actions-runner-controller#webhook-driven-scaling) | |
| `autoscaler.scaleDownDelaySecondsAfterScaleOut` | [Anti-Flapping Configuration](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#anti-flapping-configuration) | 120 |
| `autoscaler.metrics` | [Pull driven scaling](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#pull-driven-scaling) | default |
| `autoscaler.scaleUpTriggers` | [Webhook driven scaling](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#webhook-driven-scaling) | |

View File

@@ -17,7 +17,7 @@ runnerLabels:
replicaCount: 1
# The Runner Group that the runner(s) should be associated with.
# See https://docs.github.com/en/github-ae@latest/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups.
# See https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/managing-access-to-self-hosted-runners-using-groups.
group: Default
autoscaler:

View File

@@ -41,7 +41,6 @@ import (
const (
autoscalingListenerContainerName = "autoscaler"
autoscalingListenerOwnerKey = ".metadata.controller"
autoscalingListenerFinalizerName = "autoscalinglistener.actions.github.com/finalizer"
)
@@ -246,65 +245,6 @@ func (r *AutoscalingListenerReconciler) Reconcile(ctx context.Context, req ctrl.
return ctrl.Result{}, nil
}
// SetupWithManager sets up the controller with the Manager.
func (r *AutoscalingListenerReconciler) SetupWithManager(mgr ctrl.Manager) error {
groupVersionIndexer := func(rawObj client.Object) []string {
groupVersion := v1alpha1.GroupVersion.String()
owner := metav1.GetControllerOf(rawObj)
if owner == nil {
return nil
}
// ...make sure it is owned by this controller
if owner.APIVersion != groupVersion || owner.Kind != "AutoscalingListener" {
return nil
}
// ...and if so, return it
return []string{owner.Name}
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &corev1.Pod{}, autoscalingListenerOwnerKey, groupVersionIndexer); err != nil {
return err
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &corev1.ServiceAccount{}, autoscalingListenerOwnerKey, groupVersionIndexer); err != nil {
return err
}
labelBasedWatchFunc := func(obj client.Object) []reconcile.Request {
var requests []reconcile.Request
labels := obj.GetLabels()
namespace, ok := labels["auto-scaling-listener-namespace"]
if !ok {
return nil
}
name, ok := labels["auto-scaling-listener-name"]
if !ok {
return nil
}
requests = append(requests,
reconcile.Request{
NamespacedName: types.NamespacedName{
Name: name,
Namespace: namespace,
},
},
)
return requests
}
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.AutoscalingListener{}).
Owns(&corev1.Pod{}).
Owns(&corev1.ServiceAccount{}).
Watches(&source.Kind{Type: &rbacv1.Role{}}, handler.EnqueueRequestsFromMapFunc(labelBasedWatchFunc)).
Watches(&source.Kind{Type: &rbacv1.RoleBinding{}}, handler.EnqueueRequestsFromMapFunc(labelBasedWatchFunc)).
WithEventFilter(predicate.ResourceVersionChangedPredicate{}).
Complete(r)
}
func (r *AutoscalingListenerReconciler) cleanupResources(ctx context.Context, autoscalingListener *v1alpha1.AutoscalingListener, logger logr.Logger) (done bool, err error) {
logger.Info("Cleaning up the listener pod")
listenerPod := new(corev1.Pod)
@@ -523,8 +463,8 @@ func (r *AutoscalingListenerReconciler) createProxySecret(ctx context.Context, a
Name: proxyListenerSecretName(autoscalingListener),
Namespace: autoscalingListener.Namespace,
Labels: map[string]string{
"auto-scaling-runner-set-namespace": autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
"auto-scaling-runner-set-name": autoscalingListener.Spec.AutoscalingRunnerSetName,
LabelKeyGitHubScaleSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyGitHubScaleSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
},
},
Data: data,
@@ -615,3 +555,62 @@ func (r *AutoscalingListenerReconciler) createRoleBindingForListener(ctx context
"serviceAccount", serviceAccount.Name)
return ctrl.Result{Requeue: true}, nil
}
// SetupWithManager sets up the controller with the Manager.
func (r *AutoscalingListenerReconciler) SetupWithManager(mgr ctrl.Manager) error {
groupVersionIndexer := func(rawObj client.Object) []string {
groupVersion := v1alpha1.GroupVersion.String()
owner := metav1.GetControllerOf(rawObj)
if owner == nil {
return nil
}
// ...make sure it is owned by this controller
if owner.APIVersion != groupVersion || owner.Kind != "AutoscalingListener" {
return nil
}
// ...and if so, return it
return []string{owner.Name}
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &corev1.Pod{}, resourceOwnerKey, groupVersionIndexer); err != nil {
return err
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &corev1.ServiceAccount{}, resourceOwnerKey, groupVersionIndexer); err != nil {
return err
}
labelBasedWatchFunc := func(obj client.Object) []reconcile.Request {
var requests []reconcile.Request
labels := obj.GetLabels()
namespace, ok := labels["auto-scaling-listener-namespace"]
if !ok {
return nil
}
name, ok := labels["auto-scaling-listener-name"]
if !ok {
return nil
}
requests = append(requests,
reconcile.Request{
NamespacedName: types.NamespacedName{
Name: name,
Namespace: namespace,
},
},
)
return requests
}
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.AutoscalingListener{}).
Owns(&corev1.Pod{}).
Owns(&corev1.ServiceAccount{}).
Watches(&source.Kind{Type: &rbacv1.Role{}}, handler.EnqueueRequestsFromMapFunc(labelBasedWatchFunc)).
Watches(&source.Kind{Type: &rbacv1.RoleBinding{}}, handler.EnqueueRequestsFromMapFunc(labelBasedWatchFunc)).
WithEventFilter(predicate.ResourceVersionChangedPredicate{}).
Complete(r)
}

View File

@@ -213,7 +213,7 @@ var _ = Describe("Test AutoScalingListener controller", func() {
Eventually(
func() error {
podList := new(corev1.PodList)
err := k8sClient.List(ctx, podList, client.InNamespace(autoscalingListener.Namespace), client.MatchingFields{autoscalingRunnerSetOwnerKey: autoscalingListener.Name})
err := k8sClient.List(ctx, podList, client.InNamespace(autoscalingListener.Namespace), client.MatchingFields{resourceOwnerKey: autoscalingListener.Name})
if err != nil {
return err
}
@@ -231,7 +231,7 @@ var _ = Describe("Test AutoScalingListener controller", func() {
Eventually(
func() error {
serviceAccountList := new(corev1.ServiceAccountList)
err := k8sClient.List(ctx, serviceAccountList, client.InNamespace(autoscalingListener.Namespace), client.MatchingFields{autoscalingRunnerSetOwnerKey: autoscalingListener.Name})
err := k8sClient.List(ctx, serviceAccountList, client.InNamespace(autoscalingListener.Namespace), client.MatchingFields{resourceOwnerKey: autoscalingListener.Name})
if err != nil {
return err
}

View File

@@ -24,9 +24,11 @@ import (
"strings"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/build"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -41,13 +43,10 @@ import (
)
const (
// TODO: Replace with shared image.
autoscalingRunnerSetOwnerKey = ".metadata.controller"
LabelKeyRunnerSpecHash = "runner-spec-hash"
labelKeyRunnerSpecHash = "runner-spec-hash"
autoscalingRunnerSetFinalizerName = "autoscalingrunnerset.actions.github.com/finalizer"
runnerScaleSetIdKey = "runner-scale-set-id"
runnerScaleSetNameKey = "runner-scale-set-name"
runnerScaleSetRunnerGroupNameKey = "runner-scale-set-runner-group-name"
runnerScaleSetIdAnnotationKey = "runner-scale-set-id"
runnerScaleSetNameAnnotationKey = "runner-scale-set-name"
)
// AutoscalingRunnerSetReconciler reconciles a AutoscalingRunnerSet object
@@ -114,6 +113,17 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, err
}
requeue, err := r.removeFinalizersFromDependentResources(ctx, autoscalingRunnerSet, log)
if err != nil {
log.Error(err, "Failed to remove finalizers on dependent resources")
return ctrl.Result{}, err
}
if requeue {
log.Info("Waiting for dependent resources to be deleted")
return ctrl.Result{Requeue: true}, nil
}
log.Info("Removing finalizer")
err = patch(ctx, r.Client, autoscalingRunnerSet, func(obj *v1alpha1.AutoscalingRunnerSet) {
controllerutil.RemoveFinalizer(obj, autoscalingRunnerSetFinalizerName)
@@ -127,6 +137,22 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, nil
}
if autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion] != build.Version {
if err := r.Delete(ctx, autoscalingRunnerSet); err != nil {
log.Error(err, "Failed to delete autoscaling runner set on version mismatch",
"targetVersion", build.Version,
"actualVersion", autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion],
)
return ctrl.Result{}, nil
}
log.Info("Autoscaling runner set version doesn't match the build version. Deleting the resource.",
"targetVersion", build.Version,
"actualVersion", autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion],
)
return ctrl.Result{}, nil
}
if !controllerutil.ContainsFinalizer(autoscalingRunnerSet, autoscalingRunnerSetFinalizerName) {
log.Info("Adding finalizer")
if err := patch(ctx, r.Client, autoscalingRunnerSet, func(obj *v1alpha1.AutoscalingRunnerSet) {
@@ -140,7 +166,7 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, nil
}
scaleSetIdRaw, ok := autoscalingRunnerSet.Annotations[runnerScaleSetIdKey]
scaleSetIdRaw, ok := autoscalingRunnerSet.Annotations[runnerScaleSetIdAnnotationKey]
if !ok {
// Need to create a new runner scale set on Actions service
log.Info("Runner scale set id annotation does not exist. Creating a new runner scale set.")
@@ -154,14 +180,14 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
}
// Make sure the runner group of the scale set is up to date
currentRunnerGroupName, ok := autoscalingRunnerSet.Annotations[runnerScaleSetRunnerGroupNameKey]
currentRunnerGroupName, ok := autoscalingRunnerSet.Annotations[AnnotationKeyGitHubRunnerGroupName]
if !ok || (len(autoscalingRunnerSet.Spec.RunnerGroup) > 0 && !strings.EqualFold(currentRunnerGroupName, autoscalingRunnerSet.Spec.RunnerGroup)) {
log.Info("AutoScalingRunnerSet runner group changed. Updating the runner scale set.")
return r.updateRunnerScaleSetRunnerGroup(ctx, autoscalingRunnerSet, log)
}
// Make sure the runner scale set name is up to date
currentRunnerScaleSetName, ok := autoscalingRunnerSet.Annotations[runnerScaleSetNameKey]
currentRunnerScaleSetName, ok := autoscalingRunnerSet.Annotations[runnerScaleSetNameAnnotationKey]
if !ok || (len(autoscalingRunnerSet.Spec.RunnerScaleSetName) > 0 && !strings.EqualFold(currentRunnerScaleSetName, autoscalingRunnerSet.Spec.RunnerScaleSetName)) {
log.Info("AutoScalingRunnerSet runner scale set name changed. Updating the runner scale set.")
return r.updateRunnerScaleSetName(ctx, autoscalingRunnerSet, log)
@@ -189,10 +215,10 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
desiredSpecHash := autoscalingRunnerSet.RunnerSetSpecHash()
for _, runnerSet := range existingRunnerSets.all() {
log.Info("Find existing ephemeral runner set", "name", runnerSet.Name, "specHash", runnerSet.Labels[LabelKeyRunnerSpecHash])
log.Info("Find existing ephemeral runner set", "name", runnerSet.Name, "specHash", runnerSet.Labels[labelKeyRunnerSpecHash])
}
if desiredSpecHash != latestRunnerSet.Labels[LabelKeyRunnerSpecHash] {
if desiredSpecHash != latestRunnerSet.Labels[labelKeyRunnerSpecHash] {
log.Info("Latest runner set spec hash does not match the current autoscaling runner set. Creating a new runner set")
return r.createEphemeralRunnerSet(ctx, autoscalingRunnerSet, log)
}
@@ -220,7 +246,7 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
}
// Our listener pod is out of date, so we need to delete it to get a new recreate.
if listener.Labels[LabelKeyRunnerSpecHash] != autoscalingRunnerSet.ListenerSpecHash() {
if listener.Labels[labelKeyRunnerSpecHash] != autoscalingRunnerSet.ListenerSpecHash() {
log.Info("RunnerScaleSetListener is out of date. Deleting it so that it is recreated", "name", listener.Name)
if err := r.Delete(ctx, listener); err != nil {
if kerrors.IsNotFound(err) {
@@ -306,6 +332,29 @@ func (r *AutoscalingRunnerSetReconciler) deleteEphemeralRunnerSets(ctx context.C
return nil
}
func (r *AutoscalingRunnerSetReconciler) removeFinalizersFromDependentResources(ctx context.Context, autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, logger logr.Logger) (requeue bool, err error) {
c := autoscalingRunnerSetFinalizerDependencyCleaner{
client: r.Client,
autoscalingRunnerSet: autoscalingRunnerSet,
logger: logger,
}
c.removeKubernetesModeRoleBindingFinalizer(ctx)
c.removeKubernetesModeRoleFinalizer(ctx)
c.removeKubernetesModeServiceAccountFinalizer(ctx)
c.removeNoPermissionServiceAccountFinalizer(ctx)
c.removeGitHubSecretFinalizer(ctx)
c.removeManagerRoleBindingFinalizer(ctx)
c.removeManagerRoleFinalizer(ctx)
requeue, err = c.result()
if err != nil {
logger.Error(err, "Failed to cleanup finalizer from dependent resource")
return true, err
}
return requeue, nil
}
func (r *AutoscalingRunnerSetReconciler) createRunnerScaleSet(ctx context.Context, autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, logger logr.Logger) (ctrl.Result, error) {
logger.Info("Creating a new runner scale set")
actionsClient, err := r.actionsClientFor(ctx, autoscalingRunnerSet)
@@ -365,12 +414,18 @@ func (r *AutoscalingRunnerSetReconciler) createRunnerScaleSet(ctx context.Contex
if autoscalingRunnerSet.Annotations == nil {
autoscalingRunnerSet.Annotations = map[string]string{}
}
if autoscalingRunnerSet.Labels == nil {
autoscalingRunnerSet.Labels = map[string]string{}
}
logger.Info("Adding runner scale set ID, name and runner group name as an annotation")
logger.Info("Adding runner scale set ID, name and runner group name as an annotation and url labels")
if err = patch(ctx, r.Client, autoscalingRunnerSet, func(obj *v1alpha1.AutoscalingRunnerSet) {
obj.Annotations[runnerScaleSetNameKey] = runnerScaleSet.Name
obj.Annotations[runnerScaleSetIdKey] = strconv.Itoa(runnerScaleSet.Id)
obj.Annotations[runnerScaleSetRunnerGroupNameKey] = runnerScaleSet.RunnerGroupName
obj.Annotations[runnerScaleSetNameAnnotationKey] = runnerScaleSet.Name
obj.Annotations[runnerScaleSetIdAnnotationKey] = strconv.Itoa(runnerScaleSet.Id)
obj.Annotations[AnnotationKeyGitHubRunnerGroupName] = runnerScaleSet.RunnerGroupName
if err := applyGitHubURLLabels(obj.Spec.GitHubConfigUrl, obj.Labels); err != nil { // should never happen
logger.Error(err, "Failed to apply GitHub URL labels")
}
}); err != nil {
logger.Error(err, "Failed to add runner scale set ID, name and runner group name as an annotation")
return ctrl.Result{}, err
@@ -384,7 +439,7 @@ func (r *AutoscalingRunnerSetReconciler) createRunnerScaleSet(ctx context.Contex
}
func (r *AutoscalingRunnerSetReconciler) updateRunnerScaleSetRunnerGroup(ctx context.Context, autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, logger logr.Logger) (ctrl.Result, error) {
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdKey])
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdAnnotationKey])
if err != nil {
logger.Error(err, "Failed to parse runner scale set ID")
return ctrl.Result{}, err
@@ -415,7 +470,7 @@ func (r *AutoscalingRunnerSetReconciler) updateRunnerScaleSetRunnerGroup(ctx con
logger.Info("Updating runner scale set runner group name as an annotation")
if err := patch(ctx, r.Client, autoscalingRunnerSet, func(obj *v1alpha1.AutoscalingRunnerSet) {
obj.Annotations[runnerScaleSetRunnerGroupNameKey] = updatedRunnerScaleSet.RunnerGroupName
obj.Annotations[AnnotationKeyGitHubRunnerGroupName] = updatedRunnerScaleSet.RunnerGroupName
}); err != nil {
logger.Error(err, "Failed to update runner group name annotation")
return ctrl.Result{}, err
@@ -426,7 +481,7 @@ func (r *AutoscalingRunnerSetReconciler) updateRunnerScaleSetRunnerGroup(ctx con
}
func (r *AutoscalingRunnerSetReconciler) updateRunnerScaleSetName(ctx context.Context, autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, logger logr.Logger) (ctrl.Result, error) {
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdKey])
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdAnnotationKey])
if err != nil {
logger.Error(err, "Failed to parse runner scale set ID")
return ctrl.Result{}, err
@@ -451,7 +506,7 @@ func (r *AutoscalingRunnerSetReconciler) updateRunnerScaleSetName(ctx context.Co
logger.Info("Updating runner scale set name as an annotation")
if err := patch(ctx, r.Client, autoscalingRunnerSet, func(obj *v1alpha1.AutoscalingRunnerSet) {
obj.Annotations[runnerScaleSetNameKey] = updatedRunnerScaleSet.Name
obj.Annotations[runnerScaleSetNameAnnotationKey] = updatedRunnerScaleSet.Name
}); err != nil {
logger.Error(err, "Failed to update runner scale set name annotation")
return ctrl.Result{}, err
@@ -462,12 +517,28 @@ func (r *AutoscalingRunnerSetReconciler) updateRunnerScaleSetName(ctx context.Co
}
func (r *AutoscalingRunnerSetReconciler) deleteRunnerScaleSet(ctx context.Context, autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, logger logr.Logger) error {
scaleSetId, ok := autoscalingRunnerSet.Annotations[runnerScaleSetIdAnnotationKey]
if !ok {
// A missing annotation can occur in three scenarios:
// 1. The scale set was never created.
// In this case there is no need to build an actions client to delete a scale set that does not exist.
//
// 2. The scale set has already been deleted by the controller.
// The controller removes the annotation once the scale set no longer exists. Dropping the scale set id
// also matters because finalizer cleanup eventually revokes the permissions granted on the GitHub secret,
// so building an actions client from that secret would fail with permission denied.
//
// 3. The annotation was removed manually.
// In this case the controller treats the scale set as already removed from the actions service,
// and the scale set must then be deleted manually.
return nil
}
logger.Info("Deleting the runner scale set from Actions service")
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdKey])
runnerScaleSetId, err := strconv.Atoi(scaleSetId)
if err != nil {
// If the annotation is not set correctly, or if it does not exist, we are going to get stuck in a loop trying to parse the scale set id.
// If the configuration is invalid (secret does not exist for example), we never get to the point to create runner set. But then, manual cleanup
// would get stuck finalizing the resource trying to parse annotation indefinitely
// If the annotation is not set correctly, we are going to get stuck in a loop trying to parse the scale set id.
// If the configuration is invalid (for example, the secret does not exist), we never got as far as creating the runner scale set.
// But then manual cleanup would get stuck finalizing the resource, trying to parse the annotation indefinitely.
logger.Info("autoscaling runner set does not have annotation describing scale set id. Skip deletion", "err", err.Error())
return nil
}
@@ -484,6 +555,14 @@ func (r *AutoscalingRunnerSetReconciler) deleteRunnerScaleSet(ctx context.Contex
return err
}
err = patch(ctx, r.Client, autoscalingRunnerSet, func(obj *v1alpha1.AutoscalingRunnerSet) {
delete(obj.Annotations, runnerScaleSetIdAnnotationKey)
})
if err != nil {
logger.Error(err, "Failed to patch autoscaling runner set with annotation removed", "annotation", runnerScaleSetIdAnnotationKey)
return err
}
logger.Info("Deleted the runner scale set from Actions service")
return nil
}
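deleteRunnerScaleSet and the other hunks in this file rely on a small patch helper that is not included in this diff. Its actual implementation is not shown here, but the call sites are consistent with a merge-patch wrapper along these lines (an assumed shape using controller-runtime's client.MergeFrom, not the repository's real code):

package controllers

import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/client"
)

// patchObject is an illustrative stand-in for the patch helper used above: snapshot the
// object, apply the mutation, then send a merge patch computed against the snapshot.
func patchObject[T client.Object](ctx context.Context, c client.Client, obj T, update func(obj T)) error {
    original := obj.DeepCopyObject().(T) // concrete API types return themselves from DeepCopyObject
    update(obj)
    return c.Patch(ctx, obj, client.MergeFrom(original))
}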
@@ -536,7 +615,7 @@ func (r *AutoscalingRunnerSetReconciler) createAutoScalingListenerForRunnerSet(c
func (r *AutoscalingRunnerSetReconciler) listEphemeralRunnerSets(ctx context.Context, autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet) (*EphemeralRunnerSets, error) {
list := new(v1alpha1.EphemeralRunnerSetList)
if err := r.List(ctx, list, client.InNamespace(autoscalingRunnerSet.Namespace), client.MatchingFields{autoscalingRunnerSetOwnerKey: autoscalingRunnerSet.Name}); err != nil {
if err := r.List(ctx, list, client.InNamespace(autoscalingRunnerSet.Namespace), client.MatchingFields{resourceOwnerKey: autoscalingRunnerSet.Name}); err != nil {
return nil, fmt.Errorf("failed to list ephemeral runner sets: %v", err)
}
@@ -629,7 +708,7 @@ func (r *AutoscalingRunnerSetReconciler) SetupWithManager(mgr ctrl.Manager) erro
return []string{owner.Name}
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &v1alpha1.EphemeralRunnerSet{}, autoscalingRunnerSetOwnerKey, groupVersionIndexer); err != nil {
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &v1alpha1.EphemeralRunnerSet{}, resourceOwnerKey, groupVersionIndexer); err != nil {
return err
}
@@ -653,6 +732,328 @@ func (r *AutoscalingRunnerSetReconciler) SetupWithManager(mgr ctrl.Manager) erro
Complete(r)
}
type autoscalingRunnerSetFinalizerDependencyCleaner struct {
// configuration fields
client client.Client
autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet
logger logr.Logger
// fields to operate on
requeue bool
err error
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) result() (requeue bool, err error) {
return c.requeue, c.err
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) removeKubernetesModeRoleBindingFinalizer(ctx context.Context) {
if c.requeue || c.err != nil {
return
}
roleBindingName, ok := c.autoscalingRunnerSet.Annotations[AnnotationKeyKubernetesModeRoleBindingName]
if !ok {
c.logger.Info(
"Skipping cleaning up kubernetes mode service account",
"reason",
fmt.Sprintf("annotation key %q not present", AnnotationKeyKubernetesModeRoleBindingName),
)
return
}
c.logger.Info("Removing finalizer from container mode kubernetes role binding", "name", roleBindingName)
roleBinding := new(rbacv1.RoleBinding)
err := c.client.Get(ctx, types.NamespacedName{Name: roleBindingName, Namespace: c.autoscalingRunnerSet.Namespace}, roleBinding)
switch {
case err == nil:
if !controllerutil.ContainsFinalizer(roleBinding, AutoscalingRunnerSetCleanupFinalizerName) {
c.logger.Info("Kubernetes mode role binding finalizer has already been removed", "name", roleBindingName)
return
}
err = patch(ctx, c.client, roleBinding, func(obj *rbacv1.RoleBinding) {
controllerutil.RemoveFinalizer(obj, AutoscalingRunnerSetCleanupFinalizerName)
})
if err != nil {
c.err = fmt.Errorf("failed to patch kubernetes mode role binding without finalizer: %w", err)
return
}
c.requeue = true
c.logger.Info("Removed finalizer from container mode kubernetes role binding", "name", roleBindingName)
return
case err != nil && !kerrors.IsNotFound(err):
c.err = fmt.Errorf("failed to fetch kubernetes mode role binding: %w", err)
return
default:
c.logger.Info("Container mode kubernetes role binding has already been deleted", "name", roleBindingName)
return
}
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) removeKubernetesModeRoleFinalizer(ctx context.Context) {
if c.requeue || c.err != nil {
return
}
roleName, ok := c.autoscalingRunnerSet.Annotations[AnnotationKeyKubernetesModeRoleName]
if !ok {
c.logger.Info(
"Skipping cleaning up kubernetes mode role",
"reason",
fmt.Sprintf("annotation key %q not present", AnnotationKeyKubernetesModeRoleName),
)
return
}
c.logger.Info("Removing finalizer from container mode kubernetes role", "name", roleName)
role := new(rbacv1.Role)
err := c.client.Get(ctx, types.NamespacedName{Name: roleName, Namespace: c.autoscalingRunnerSet.Namespace}, role)
switch {
case err == nil:
if !controllerutil.ContainsFinalizer(role, AutoscalingRunnerSetCleanupFinalizerName) {
c.logger.Info("Kubernetes mode role finalizer has already been removed", "name", roleName)
return
}
err = patch(ctx, c.client, role, func(obj *rbacv1.Role) {
controllerutil.RemoveFinalizer(obj, AutoscalingRunnerSetCleanupFinalizerName)
})
if err != nil {
c.err = fmt.Errorf("failed to patch kubernetes mode role without finalizer: %w", err)
return
}
c.requeue = true
c.logger.Info("Removed finalizer from container mode kubernetes role")
return
case err != nil && !kerrors.IsNotFound(err):
c.err = fmt.Errorf("failed to fetch kubernetes mode role: %w", err)
return
default:
c.logger.Info("Container mode kubernetes role has already been deleted", "name", roleName)
return
}
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) removeKubernetesModeServiceAccountFinalizer(ctx context.Context) {
if c.requeue || c.err != nil {
return
}
serviceAccountName, ok := c.autoscalingRunnerSet.Annotations[AnnotationKeyKubernetesModeServiceAccountName]
if !ok {
c.logger.Info(
"Skipping cleaning up kubernetes mode role binding",
"reason",
fmt.Sprintf("annotation key %q not present", AnnotationKeyKubernetesModeServiceAccountName),
)
return
}
c.logger.Info("Removing finalizer from container mode kubernetes service account", "name", serviceAccountName)
serviceAccount := new(corev1.ServiceAccount)
err := c.client.Get(ctx, types.NamespacedName{Name: serviceAccountName, Namespace: c.autoscalingRunnerSet.Namespace}, serviceAccount)
switch {
case err == nil:
if !controllerutil.ContainsFinalizer(serviceAccount, AutoscalingRunnerSetCleanupFinalizerName) {
c.logger.Info("Kubernetes mode service account finalizer has already been removed", "name", serviceAccountName)
return
}
err = patch(ctx, c.client, serviceAccount, func(obj *corev1.ServiceAccount) {
controllerutil.RemoveFinalizer(obj, AutoscalingRunnerSetCleanupFinalizerName)
})
if err != nil {
c.err = fmt.Errorf("failed to patch kubernetes mode service account without finalizer: %w", err)
return
}
c.requeue = true
c.logger.Info("Removed finalizer from container mode kubernetes service account")
return
case err != nil && !kerrors.IsNotFound(err):
c.err = fmt.Errorf("failed to fetch kubernetes mode service account: %w", err)
return
default:
c.logger.Info("Container mode kubernetes service account has already been deleted", "name", serviceAccountName)
return
}
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) removeNoPermissionServiceAccountFinalizer(ctx context.Context) {
if c.requeue || c.err != nil {
return
}
serviceAccountName, ok := c.autoscalingRunnerSet.Annotations[AnnotationKeyNoPermissionServiceAccountName]
if !ok {
c.logger.Info(
"Skipping cleaning up no permission service account",
"reason",
fmt.Sprintf("annotation key %q not present", AnnotationKeyNoPermissionServiceAccountName),
)
return
}
c.logger.Info("Removing finalizer from no permission service account", "name", serviceAccountName)
serviceAccount := new(corev1.ServiceAccount)
err := c.client.Get(ctx, types.NamespacedName{Name: serviceAccountName, Namespace: c.autoscalingRunnerSet.Namespace}, serviceAccount)
switch {
case err == nil:
if !controllerutil.ContainsFinalizer(serviceAccount, AutoscalingRunnerSetCleanupFinalizerName) {
c.logger.Info("No permission service account finalizer has already been removed", "name", serviceAccountName)
return
}
err = patch(ctx, c.client, serviceAccount, func(obj *corev1.ServiceAccount) {
controllerutil.RemoveFinalizer(obj, AutoscalingRunnerSetCleanupFinalizerName)
})
if err != nil {
c.err = fmt.Errorf("failed to patch service account without finalizer: %w", err)
return
}
c.requeue = true
c.logger.Info("Removed finalizer from no permission service account", "name", serviceAccountName)
return
case err != nil && !kerrors.IsNotFound(err):
c.err = fmt.Errorf("failed to fetch service account: %w", err)
return
default:
c.logger.Info("No permission service account has already been deleted", "name", serviceAccountName)
return
}
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) removeGitHubSecretFinalizer(ctx context.Context) {
if c.requeue || c.err != nil {
return
}
githubSecretName, ok := c.autoscalingRunnerSet.Annotations[AnnotationKeyGitHubSecretName]
if !ok {
c.logger.Info(
"Skipping cleaning up no permission service account",
"reason",
fmt.Sprintf("annotation key %q not present", AnnotationKeyGitHubSecretName),
)
return
}
c.logger.Info("Removing finalizer from GitHub secret", "name", githubSecretName)
githubSecret := new(corev1.Secret)
err := c.client.Get(ctx, types.NamespacedName{Name: githubSecretName, Namespace: c.autoscalingRunnerSet.Namespace}, githubSecret)
switch {
case err == nil:
if !controllerutil.ContainsFinalizer(githubSecret, AutoscalingRunnerSetCleanupFinalizerName) {
c.logger.Info("GitHub secret finalizer has already been removed", "name", githubSecretName)
return
}
err = patch(ctx, c.client, githubSecret, func(obj *corev1.Secret) {
controllerutil.RemoveFinalizer(obj, AutoscalingRunnerSetCleanupFinalizerName)
})
if err != nil {
c.err = fmt.Errorf("failed to patch GitHub secret without finalizer: %w", err)
return
}
c.requeue = true
c.logger.Info("Removed finalizer from GitHub secret", "name", githubSecretName)
return
case err != nil && !kerrors.IsNotFound(err) && !kerrors.IsForbidden(err):
c.err = fmt.Errorf("failed to fetch GitHub secret: %w", err)
return
default:
c.logger.Info("GitHub secret has already been deleted", "name", githubSecretName)
return
}
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) removeManagerRoleBindingFinalizer(ctx context.Context) {
if c.requeue || c.err != nil {
return
}
managerRoleBindingName, ok := c.autoscalingRunnerSet.Annotations[AnnotationKeyManagerRoleBindingName]
if !ok {
c.logger.Info(
"Skipping cleaning up manager role binding",
"reason",
fmt.Sprintf("annotation key %q not present", AnnotationKeyManagerRoleBindingName),
)
return
}
c.logger.Info("Removing finalizer from manager role binding", "name", managerRoleBindingName)
roleBinding := new(rbacv1.RoleBinding)
err := c.client.Get(ctx, types.NamespacedName{Name: managerRoleBindingName, Namespace: c.autoscalingRunnerSet.Namespace}, roleBinding)
switch {
case err == nil:
if !controllerutil.ContainsFinalizer(roleBinding, AutoscalingRunnerSetCleanupFinalizerName) {
c.logger.Info("Manager role binding finalizer has already been removed", "name", managerRoleBindingName)
return
}
err = patch(ctx, c.client, roleBinding, func(obj *rbacv1.RoleBinding) {
controllerutil.RemoveFinalizer(obj, AutoscalingRunnerSetCleanupFinalizerName)
})
if err != nil {
c.err = fmt.Errorf("failed to patch manager role binding without finalizer: %w", err)
return
}
c.requeue = true
c.logger.Info("Removed finalizer from manager role binding", "name", managerRoleBindingName)
return
case err != nil && !kerrors.IsNotFound(err):
c.err = fmt.Errorf("failed to fetch manager role binding: %w", err)
return
default:
c.logger.Info("Manager role binding has already been deleted", "name", managerRoleBindingName)
return
}
}
func (c *autoscalingRunnerSetFinalizerDependencyCleaner) removeManagerRoleFinalizer(ctx context.Context) {
if c.requeue || c.err != nil {
return
}
managerRoleName, ok := c.autoscalingRunnerSet.Annotations[AnnotationKeyManagerRoleName]
if !ok {
c.logger.Info(
"Skipping cleaning up manager role",
"reason",
fmt.Sprintf("annotation key %q not present", AnnotationKeyManagerRoleName),
)
return
}
c.logger.Info("Removing finalizer from manager role", "name", managerRoleName)
role := new(rbacv1.Role)
err := c.client.Get(ctx, types.NamespacedName{Name: managerRoleName, Namespace: c.autoscalingRunnerSet.Namespace}, role)
switch {
case err == nil:
if !controllerutil.ContainsFinalizer(role, AutoscalingRunnerSetCleanupFinalizerName) {
c.logger.Info("Manager role finalizer has already been removed", "name", managerRoleName)
return
}
err = patch(ctx, c.client, role, func(obj *rbacv1.Role) {
controllerutil.RemoveFinalizer(obj, AutoscalingRunnerSetCleanupFinalizerName)
})
if err != nil {
c.err = fmt.Errorf("failed to patch manager role without finalizer: %w", err)
return
}
c.requeue = true
c.logger.Info("Removed finalizer from manager role", "name", managerRoleName)
return
case err != nil && !kerrors.IsNotFound(err):
c.err = fmt.Errorf("failed to fetch manager role: %w", err)
return
default:
c.logger.Info("Manager role has already been deleted", "name", managerRoleName)
return
}
}
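Every remove*Finalizer method above starts with the same guard: it returns immediately once c.requeue or c.err is set, so a reconcile pass patches at most one dependent resource and then asks to be requeued. A minimal runnable sketch of that accumulator pattern (illustrative names only, not the controller's actual type):

package main

import "fmt"

// cleaner mirrors, as a sketch, the short-circuit accumulator used by
// autoscalingRunnerSetFinalizerDependencyCleaner: once an earlier step removed a
// finalizer (requeue) or failed (err), every later step becomes a no-op.
type cleaner struct {
    requeue bool
    err     error
}

// step runs remove only if no earlier step has requeued or failed; remove reports
// whether it actually patched a finalizer away.
func (c *cleaner) step(name string, remove func() (bool, error)) {
    if c.requeue || c.err != nil {
        return // an earlier step already asked for a requeue or failed
    }
    removed, err := remove()
    if err != nil {
        c.err = fmt.Errorf("failed to clean up %s: %w", name, err)
        return
    }
    if removed {
        c.requeue = true // one finalizer patched away; reconcile again before the next resource
    }
}

func (c *cleaner) result() (bool, error) { return c.requeue, c.err }

func main() {
    c := &cleaner{}
    c.step("kube-mode role binding", func() (bool, error) { return false, nil }) // nothing to clean up
    c.step("kube-mode role", func() (bool, error) { return true, nil })          // finalizer removed
    c.step("github secret", func() (bool, error) { return true, nil })           // skipped: requeue already set
    requeue, err := c.result()
    fmt.Println(requeue, err) // true <nil>
}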
// NOTE: if this logic should be used for other resources,
// consider using generics
type EphemeralRunnerSets struct {

View File

@@ -13,6 +13,7 @@ import (
"time"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
@@ -23,8 +24,10 @@ import (
. "github.com/onsi/gomega"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/build"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/actions/actions-runner-controller/github/actions/fake"
"github.com/actions/actions-runner-controller/github/actions/testserver"
@@ -36,13 +39,25 @@ const (
autoscalingRunnerSetTestGitHubToken = "gh_token"
)
var _ = Describe("Test AutoScalingRunnerSet controller", func() {
var _ = Describe("Test AutoScalingRunnerSet controller", Ordered, func() {
var ctx context.Context
var mgr ctrl.Manager
var autoscalingNS *corev1.Namespace
var autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet
var configSecret *corev1.Secret
var originalBuildVersion string
buildVersion := "0.1.0"
BeforeAll(func() {
originalBuildVersion = build.Version
build.Version = buildVersion
})
AfterAll(func() {
build.Version = originalBuildVersion
})
BeforeEach(func() {
ctx = context.Background()
autoscalingNS, mgr = createNamespace(GinkgoT(), k8sClient)
@@ -65,6 +80,9 @@ var _ = Describe("Test AutoScalingRunnerSet controller", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
@@ -117,19 +135,39 @@ var _ = Describe("Test AutoScalingRunnerSet controller", func() {
return "", err
}
if _, ok := created.Annotations[runnerScaleSetIdKey]; !ok {
if _, ok := created.Annotations[runnerScaleSetIdAnnotationKey]; !ok {
return "", nil
}
if _, ok := created.Annotations[runnerScaleSetRunnerGroupNameKey]; !ok {
if _, ok := created.Annotations[AnnotationKeyGitHubRunnerGroupName]; !ok {
return "", nil
}
return fmt.Sprintf("%s_%s", created.Annotations[runnerScaleSetIdKey], created.Annotations[runnerScaleSetRunnerGroupNameKey]), nil
return fmt.Sprintf("%s_%s", created.Annotations[runnerScaleSetIdAnnotationKey], created.Annotations[AnnotationKeyGitHubRunnerGroupName]), nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval).Should(BeEquivalentTo("1_testgroup"), "RunnerScaleSet should be created/fetched and update the AutoScalingRunnerSet's annotation")
Eventually(
func() (string, error) {
err := k8sClient.Get(ctx, client.ObjectKey{Name: autoscalingRunnerSet.Name, Namespace: autoscalingRunnerSet.Namespace}, created)
if err != nil {
return "", err
}
if _, ok := created.Labels[LabelKeyGitHubOrganization]; !ok {
return "", nil
}
if _, ok := created.Labels[LabelKeyGitHubRepository]; !ok {
return "", nil
}
return fmt.Sprintf("%s/%s", created.Labels[LabelKeyGitHubOrganization], created.Labels[LabelKeyGitHubRepository]), nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval).Should(BeEquivalentTo("owner/repo"), "RunnerScaleSet should be created/fetched and update the AutoScalingRunnerSet's label")
// Check if ephemeral runner set is created
Eventually(
func() (int, error) {
@@ -258,10 +296,10 @@ var _ = Describe("Test AutoScalingRunnerSet controller", func() {
return "", fmt.Errorf("We should have only 1 EphemeralRunnerSet, but got %v", len(runnerSetList.Items))
}
return runnerSetList.Items[0].Labels[LabelKeyRunnerSpecHash], nil
return runnerSetList.Items[0].Labels[labelKeyRunnerSpecHash], nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval).ShouldNot(BeEquivalentTo(runnerSet.Labels[LabelKeyRunnerSpecHash]), "New EphemeralRunnerSet should be created")
autoscalingRunnerSetTestInterval).ShouldNot(BeEquivalentTo(runnerSet.Labels[labelKeyRunnerSpecHash]), "New EphemeralRunnerSet should be created")
// We should create a new listener
Eventually(
@@ -351,18 +389,18 @@ var _ = Describe("Test AutoScalingRunnerSet controller", func() {
return "", err
}
if _, ok := updated.Annotations[runnerScaleSetRunnerGroupNameKey]; !ok {
if _, ok := updated.Annotations[AnnotationKeyGitHubRunnerGroupName]; !ok {
return "", nil
}
return updated.Annotations[runnerScaleSetRunnerGroupNameKey], nil
return updated.Annotations[AnnotationKeyGitHubRunnerGroupName], nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval).Should(BeEquivalentTo("testgroup2"), "AutoScalingRunnerSet should have the new runner group in its annotation")
// delete the annotation and it should be re-added
patched = autoscalingRunnerSet.DeepCopy()
delete(patched.Annotations, runnerScaleSetRunnerGroupNameKey)
delete(patched.Annotations, AnnotationKeyGitHubRunnerGroupName)
err = k8sClient.Patch(ctx, patched, client.MergeFrom(autoscalingRunnerSet))
Expect(err).NotTo(HaveOccurred(), "failed to patch AutoScalingRunnerSet")
@@ -374,11 +412,11 @@ var _ = Describe("Test AutoScalingRunnerSet controller", func() {
return "", err
}
if _, ok := updated.Annotations[runnerScaleSetRunnerGroupNameKey]; !ok {
if _, ok := updated.Annotations[AnnotationKeyGitHubRunnerGroupName]; !ok {
return "", nil
}
return updated.Annotations[runnerScaleSetRunnerGroupNameKey], nil
return updated.Annotations[AnnotationKeyGitHubRunnerGroupName], nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
@@ -452,7 +490,19 @@ var _ = Describe("Test AutoScalingRunnerSet controller", func() {
})
})
var _ = Describe("Test AutoScalingController updates", func() {
var _ = Describe("Test AutoScalingController updates", Ordered, func() {
var originalBuildVersion string
buildVersion := "0.1.0"
BeforeAll(func() {
originalBuildVersion = build.Version
build.Version = buildVersion
})
AfterAll(func() {
build.Version = originalBuildVersion
})
Context("Creating autoscaling runner set with RunnerScaleSetName set", func() {
var ctx context.Context
var mgr ctrl.Manager
@@ -461,6 +511,7 @@ var _ = Describe("Test AutoScalingController updates", func() {
var configSecret *corev1.Secret
BeforeEach(func() {
originalBuildVersion = build.Version
ctx = context.Background()
autoscalingNS, mgr = createNamespace(GinkgoT(), k8sClient)
configSecret = createDefaultSecret(GinkgoT(), k8sClient, autoscalingNS.Name)
@@ -506,6 +557,9 @@ var _ = Describe("Test AutoScalingController updates", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
@@ -539,7 +593,7 @@ var _ = Describe("Test AutoScalingController updates", func() {
return "", err
}
if val, ok := ars.Annotations[runnerScaleSetNameKey]; ok {
if val, ok := ars.Annotations[runnerScaleSetNameAnnotationKey]; ok {
return val, nil
}
@@ -551,6 +605,7 @@ var _ = Describe("Test AutoScalingController updates", func() {
update := autoscalingRunnerSet.DeepCopy()
update.Spec.RunnerScaleSetName = "testset_update"
err = k8sClient.Patch(ctx, update, client.MergeFrom(autoscalingRunnerSet))
Expect(err).NotTo(HaveOccurred(), "failed to update AutoScalingRunnerSet")
@@ -562,7 +617,7 @@ var _ = Describe("Test AutoScalingController updates", func() {
return "", err
}
if val, ok := ars.Annotations[runnerScaleSetNameKey]; ok {
if val, ok := ars.Annotations[runnerScaleSetNameAnnotationKey]; ok {
return val, nil
}
@@ -575,7 +630,18 @@ var _ = Describe("Test AutoScalingController updates", func() {
})
})
var _ = Describe("Test AutoscalingController creation failures", func() {
var _ = Describe("Test AutoscalingController creation failures", Ordered, func() {
var originalBuildVersion string
buildVersion := "0.1.0"
BeforeAll(func() {
originalBuildVersion = build.Version
build.Version = buildVersion
})
AfterAll(func() {
build.Version = originalBuildVersion
})
Context("When autoscaling runner set creation fails on the client", func() {
var ctx context.Context
var mgr ctrl.Manager
@@ -606,6 +672,9 @@ var _ = Describe("Test AutoscalingController creation failures", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
@@ -684,7 +753,18 @@ var _ = Describe("Test AutoscalingController creation failures", func() {
})
})
var _ = Describe("Test Client optional configuration", func() {
var _ = Describe("Test client optional configuration", Ordered, func() {
var originalBuildVersion string
buildVersion := "0.1.0"
BeforeAll(func() {
originalBuildVersion = build.Version
build.Version = buildVersion
})
AfterAll(func() {
build.Version = originalBuildVersion
})
Context("When specifying a proxy", func() {
var ctx context.Context
var mgr ctrl.Manager
@@ -724,6 +804,9 @@ var _ = Describe("Test Client optional configuration", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "http://example.com/org/repo",
@@ -800,6 +883,9 @@ var _ = Describe("Test Client optional configuration", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "http://example.com/org/repo",
@@ -916,6 +1002,9 @@ var _ = Describe("Test Client optional configuration", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: server.ConfigURLForOrg("my-org"),
@@ -966,6 +1055,9 @@ var _ = Describe("Test Client optional configuration", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
@@ -1016,7 +1108,7 @@ var _ = Describe("Test Client optional configuration", func() {
g.Expect(listener.Spec.GitHubServerTLS).To(BeEquivalentTo(autoscalingRunnerSet.Spec.GitHubServerTLS), "listener does not have TLS config")
},
autoscalingRunnerSetTestTimeout,
autoscalingListenerTestInterval,
autoscalingRunnerSetTestInterval,
).Should(Succeed(), "tls config is incorrect")
})
@@ -1027,6 +1119,9 @@ var _ = Describe("Test Client optional configuration", func() {
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
LabelKeyKubernetesVersion: buildVersion,
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
@@ -1073,8 +1168,459 @@ var _ = Describe("Test Client optional configuration", func() {
g.Expect(runnerSet.Spec.EphemeralRunnerSpec.GitHubServerTLS).To(BeEquivalentTo(autoscalingRunnerSet.Spec.GitHubServerTLS), "EphemeralRunnerSpec does not have TLS config")
},
autoscalingRunnerSetTestTimeout,
autoscalingListenerTestInterval,
autoscalingRunnerSetTestInterval,
).Should(Succeed())
})
})
})
var _ = Describe("Test external permissions cleanup", Ordered, func() {
var originalBuildVersion string
buildVersion := "0.1.0"
BeforeAll(func() {
originalBuildVersion = build.Version
build.Version = buildVersion
})
AfterAll(func() {
build.Version = originalBuildVersion
})
It("Should clean up kubernetes mode permissions", func() {
ctx := context.Background()
autoscalingNS, mgr := createNamespace(GinkgoT(), k8sClient)
configSecret := createDefaultSecret(GinkgoT(), k8sClient, autoscalingNS.Name)
controller := &AutoscalingRunnerSetReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Log: logf.Log,
ControllerNamespace: autoscalingNS.Name,
DefaultRunnerScaleSetListenerImage: "ghcr.io/actions/arc",
ActionsClient: fake.NewMultiClient(),
}
err := controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
startManagers(GinkgoT(), mgr)
min := 1
max := 10
autoscalingRunnerSet := &v1alpha1.AutoscalingRunnerSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
"app.kubernetes.io/name": "gha-runner-scale-set",
LabelKeyKubernetesVersion: buildVersion,
},
Annotations: map[string]string{
AnnotationKeyKubernetesModeRoleBindingName: "kube-mode-role-binding",
AnnotationKeyKubernetesModeRoleName: "kube-mode-role",
AnnotationKeyKubernetesModeServiceAccountName: "kube-mode-service-account",
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
GitHubConfigSecret: configSecret.Name,
MaxRunners: &max,
MinRunners: &min,
RunnerGroup: "testgroup",
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "runner",
Image: "ghcr.io/actions/runner",
},
},
},
},
},
}
role := &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingRunnerSet.Annotations[AnnotationKeyKubernetesModeRoleName],
Namespace: autoscalingRunnerSet.Namespace,
Finalizers: []string{AutoscalingRunnerSetCleanupFinalizerName},
},
}
err = k8sClient.Create(ctx, role)
Expect(err).NotTo(HaveOccurred(), "failed to create kubernetes mode role")
serviceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingRunnerSet.Annotations[AnnotationKeyKubernetesModeServiceAccountName],
Namespace: autoscalingRunnerSet.Namespace,
Finalizers: []string{AutoscalingRunnerSetCleanupFinalizerName},
},
}
err = k8sClient.Create(ctx, serviceAccount)
Expect(err).NotTo(HaveOccurred(), "failed to create kubernetes mode service account")
roleBinding := &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingRunnerSet.Annotations[AnnotationKeyKubernetesModeRoleBindingName],
Namespace: autoscalingRunnerSet.Namespace,
Finalizers: []string{AutoscalingRunnerSetCleanupFinalizerName},
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: serviceAccount.Name,
Namespace: serviceAccount.Namespace,
},
},
RoleRef: rbacv1.RoleRef{
APIGroup: rbacv1.GroupName,
// Kind is the type of resource being referenced
Kind: "Role",
Name: role.Name,
},
}
err = k8sClient.Create(ctx, roleBinding)
Expect(err).NotTo(HaveOccurred(), "failed to create kubernetes mode role binding")
err = k8sClient.Create(ctx, autoscalingRunnerSet)
Expect(err).NotTo(HaveOccurred(), "failed to create AutoScalingRunnerSet")
Eventually(
func() (string, error) {
created := new(v1alpha1.AutoscalingRunnerSet)
err := k8sClient.Get(ctx, client.ObjectKey{Name: autoscalingRunnerSet.Name, Namespace: autoscalingRunnerSet.Namespace}, created)
if err != nil {
return "", err
}
if len(created.Finalizers) == 0 {
return "", nil
}
return created.Finalizers[0], nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeEquivalentTo(autoscalingRunnerSetFinalizerName), "AutoScalingRunnerSet should have a finalizer")
err = k8sClient.Delete(ctx, autoscalingRunnerSet)
Expect(err).NotTo(HaveOccurred(), "failed to delete autoscaling runner set")
err = k8sClient.Delete(ctx, roleBinding)
Expect(err).NotTo(HaveOccurred(), "failed to delete kubernetes mode role binding")
err = k8sClient.Delete(ctx, role)
Expect(err).NotTo(HaveOccurred(), "failed to delete kubernetes mode role")
err = k8sClient.Delete(ctx, serviceAccount)
Expect(err).NotTo(HaveOccurred(), "failed to delete kubernetes mode service account")
Eventually(
func() bool {
r := new(rbacv1.RoleBinding)
err := k8sClient.Get(ctx, types.NamespacedName{
Name: roleBinding.Name,
Namespace: roleBinding.Namespace,
}, r)
return errors.IsNotFound(err)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeTrue(), "Expected role binding to be cleaned up")
Eventually(
func() bool {
r := new(rbacv1.Role)
err := k8sClient.Get(ctx, types.NamespacedName{
Name: role.Name,
Namespace: role.Namespace,
}, r)
return errors.IsNotFound(err)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeTrue(), "Expected role to be cleaned up")
})
It("Should clean up manager permissions and no-permission service account", func() {
ctx := context.Background()
autoscalingNS, mgr := createNamespace(GinkgoT(), k8sClient)
controller := &AutoscalingRunnerSetReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Log: logf.Log,
ControllerNamespace: autoscalingNS.Name,
DefaultRunnerScaleSetListenerImage: "ghcr.io/actions/arc",
ActionsClient: fake.NewMultiClient(),
}
err := controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
startManagers(GinkgoT(), mgr)
min := 1
max := 10
autoscalingRunnerSet := &v1alpha1.AutoscalingRunnerSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
"app.kubernetes.io/name": "gha-runner-scale-set",
LabelKeyKubernetesVersion: buildVersion,
},
Annotations: map[string]string{
AnnotationKeyManagerRoleName: "manager-role",
AnnotationKeyManagerRoleBindingName: "manager-role-binding",
AnnotationKeyGitHubSecretName: "gh-secret-name",
AnnotationKeyNoPermissionServiceAccountName: "no-permission-sa",
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
MaxRunners: &max,
MinRunners: &min,
RunnerGroup: "testgroup",
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "runner",
Image: "ghcr.io/actions/runner",
},
},
},
},
},
}
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingRunnerSet.Annotations[AnnotationKeyGitHubSecretName],
Namespace: autoscalingRunnerSet.Namespace,
Finalizers: []string{AutoscalingRunnerSetCleanupFinalizerName},
},
Data: map[string][]byte{
"github_token": []byte(defaultGitHubToken),
},
}
err = k8sClient.Create(context.Background(), secret)
Expect(err).NotTo(HaveOccurred(), "failed to create github secret")
autoscalingRunnerSet.Spec.GitHubConfigSecret = secret.Name
role := &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingRunnerSet.Annotations[AnnotationKeyManagerRoleName],
Namespace: autoscalingRunnerSet.Namespace,
Finalizers: []string{AutoscalingRunnerSetCleanupFinalizerName},
},
}
err = k8sClient.Create(ctx, role)
Expect(err).NotTo(HaveOccurred(), "failed to create manager role")
roleBinding := &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingRunnerSet.Annotations[AnnotationKeyManagerRoleBindingName],
Namespace: autoscalingRunnerSet.Namespace,
Finalizers: []string{AutoscalingRunnerSetCleanupFinalizerName},
},
RoleRef: rbacv1.RoleRef{
APIGroup: rbacv1.GroupName,
Kind: "Role",
Name: role.Name,
},
}
err = k8sClient.Create(ctx, roleBinding)
Expect(err).NotTo(HaveOccurred(), "failed to create manager role binding")
noPermissionServiceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingRunnerSet.Annotations[AnnotationKeyNoPermissionServiceAccountName],
Namespace: autoscalingRunnerSet.Namespace,
Finalizers: []string{AutoscalingRunnerSetCleanupFinalizerName},
},
}
err = k8sClient.Create(ctx, noPermissionServiceAccount)
Expect(err).NotTo(HaveOccurred(), "failed to create no permission service account")
err = k8sClient.Create(ctx, autoscalingRunnerSet)
Expect(err).NotTo(HaveOccurred(), "failed to create AutoScalingRunnerSet")
Eventually(
func() (string, error) {
created := new(v1alpha1.AutoscalingRunnerSet)
err := k8sClient.Get(ctx, client.ObjectKey{Name: autoscalingRunnerSet.Name, Namespace: autoscalingRunnerSet.Namespace}, created)
if err != nil {
return "", err
}
if len(created.Finalizers) == 0 {
return "", nil
}
return created.Finalizers[0], nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeEquivalentTo(autoscalingRunnerSetFinalizerName), "AutoScalingRunnerSet should have a finalizer")
err = k8sClient.Delete(ctx, autoscalingRunnerSet)
Expect(err).NotTo(HaveOccurred(), "failed to delete autoscaling runner set")
err = k8sClient.Delete(ctx, noPermissionServiceAccount)
Expect(err).NotTo(HaveOccurred(), "failed to delete no permission service account")
err = k8sClient.Delete(ctx, secret)
Expect(err).NotTo(HaveOccurred(), "failed to delete GitHub secret")
err = k8sClient.Delete(ctx, roleBinding)
Expect(err).NotTo(HaveOccurred(), "failed to delete manager role binding")
err = k8sClient.Delete(ctx, role)
Expect(err).NotTo(HaveOccurred(), "failed to delete manager role")
Eventually(
func() bool {
r := new(corev1.ServiceAccount)
err := k8sClient.Get(
ctx,
types.NamespacedName{
Name: noPermissionServiceAccount.Name,
Namespace: noPermissionServiceAccount.Namespace,
},
r,
)
return errors.IsNotFound(err)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeTrue(), "Expected no permission service account to be cleaned up")
Eventually(
func() bool {
r := new(corev1.Secret)
err := k8sClient.Get(ctx, types.NamespacedName{
Name: secret.Name,
Namespace: secret.Namespace,
}, r)
return errors.IsNotFound(err)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeTrue(), "Expected role binding to be cleaned up")
Eventually(
func() bool {
r := new(rbacv1.RoleBinding)
err := k8sClient.Get(ctx, types.NamespacedName{
Name: roleBinding.Name,
Namespace: roleBinding.Namespace,
}, r)
return errors.IsNotFound(err)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeTrue(), "Expected role binding to be cleaned up")
Eventually(
func() bool {
r := new(rbacv1.Role)
err := k8sClient.Get(
ctx,
types.NamespacedName{
Name: role.Name,
Namespace: role.Namespace,
},
r,
)
return errors.IsNotFound(err)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeTrue(), "Expected role to be cleaned up")
})
})
var _ = Describe("Test resource version and build version mismatch", func() {
It("Should delete and recreate the autoscaling runner set to match the build version", func() {
ctx := context.Background()
autoscalingNS, mgr := createNamespace(GinkgoT(), k8sClient)
configSecret := createDefaultSecret(GinkgoT(), k8sClient, autoscalingNS.Name)
controller := &AutoscalingRunnerSetReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Log: logf.Log,
ControllerNamespace: autoscalingNS.Name,
DefaultRunnerScaleSetListenerImage: "ghcr.io/actions/arc",
ActionsClient: fake.NewMultiClient(),
}
err := controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
originalVersion := build.Version
defer func() {
build.Version = originalVersion
}()
build.Version = "0.2.0"
min := 1
max := 10
autoscalingRunnerSet := &v1alpha1.AutoscalingRunnerSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
Labels: map[string]string{
"app.kubernetes.io/name": "gha-runner-scale-set",
"app.kubernetes.io/version": "0.1.0",
},
Annotations: map[string]string{
AnnotationKeyKubernetesModeRoleBindingName: "kube-mode-role-binding",
AnnotationKeyKubernetesModeRoleName: "kube-mode-role",
AnnotationKeyKubernetesModeServiceAccountName: "kube-mode-service-account",
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
GitHubConfigSecret: configSecret.Name,
MaxRunners: &max,
MinRunners: &min,
RunnerGroup: "testgroup",
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "runner",
Image: "ghcr.io/actions/runner",
},
},
},
},
},
}
// create autoscaling runner set before starting a manager
err = k8sClient.Create(ctx, autoscalingRunnerSet)
Expect(err).NotTo(HaveOccurred())
startManagers(GinkgoT(), mgr)
Eventually(func() bool {
ars := new(v1alpha1.AutoscalingRunnerSet)
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: autoscalingRunnerSet.Namespace, Name: autoscalingRunnerSet.Name}, ars)
return errors.IsNotFound(err)
}).Should(BeTrue())
})
})

View File

@@ -1,5 +1,7 @@
package actionsgithubcom
import corev1 "k8s.io/api/core/v1"
const (
LabelKeyRunnerTemplateHash = "runner-template-hash"
LabelKeyPodTemplateHash = "pod-template-hash"
@@ -16,3 +18,47 @@ const (
EnvVarHTTPSProxy = "https_proxy"
EnvVarNoProxy = "no_proxy"
)
// Labels applied to resources
const (
// Kubernetes labels
LabelKeyKubernetesPartOf = "app.kubernetes.io/part-of"
LabelKeyKubernetesComponent = "app.kubernetes.io/component"
LabelKeyKubernetesVersion = "app.kubernetes.io/version"
// GitHub labels
LabelKeyGitHubScaleSetName = "actions.github.com/scale-set-name"
LabelKeyGitHubScaleSetNamespace = "actions.github.com/scale-set-namespace"
LabelKeyGitHubEnterprise = "actions.github.com/enterprise"
LabelKeyGitHubOrganization = "actions.github.com/organization"
LabelKeyGitHubRepository = "actions.github.com/repository"
)
// Finalizer used to protect resources from deletion while AutoscalingRunnerSet is running
const AutoscalingRunnerSetCleanupFinalizerName = "actions.github.com/cleanup-protection"
const AnnotationKeyGitHubRunnerGroupName = "actions.github.com/runner-group-name"
// Labels applied to listener roles
const (
labelKeyListenerName = "auto-scaling-listener-name"
labelKeyListenerNamespace = "auto-scaling-listener-namespace"
)
// Annotations applied for later cleanup of resources
const (
AnnotationKeyManagerRoleBindingName = "actions.github.com/cleanup-manager-role-binding"
AnnotationKeyManagerRoleName = "actions.github.com/cleanup-manager-role-name"
AnnotationKeyKubernetesModeRoleName = "actions.github.com/cleanup-kubernetes-mode-role-name"
AnnotationKeyKubernetesModeRoleBindingName = "actions.github.com/cleanup-kubernetes-mode-role-binding-name"
AnnotationKeyKubernetesModeServiceAccountName = "actions.github.com/cleanup-kubernetes-mode-service-account-name"
AnnotationKeyGitHubSecretName = "actions.github.com/cleanup-github-secret-name"
AnnotationKeyNoPermissionServiceAccountName = "actions.github.com/cleanup-no-permission-service-account-name"
)
// DefaultScaleSetListenerImagePullPolicy is the default pull policy applied
// to the listener when ImagePullPolicy is not specified
const DefaultScaleSetListenerImagePullPolicy = corev1.PullIfNotPresent
// resourceOwnerKey is a field selector matching the owner name of a particular resource
const resourceOwnerKey = ".metadata.controller"
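// Illustrative usage (it mirrors the EphemeralRunnerSet controller further below): the
// field index registered under resourceOwnerKey lets a controller list only the children
// owned by a given parent, e.g.
//
//	err := r.List(ctx, ephemeralRunnerList,
//		client.InNamespace(req.Namespace),
//		client.MatchingFields{resourceOwnerKey: req.Name},
//	)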

View File

@@ -40,7 +40,6 @@ import (
)
const (
ephemeralRunnerSetReconcilerOwnerKey = ".metadata.controller"
ephemeralRunnerSetFinalizerName = "ephemeralrunner.actions.github.com/finalizer"
)
@@ -56,6 +55,7 @@ type EphemeralRunnerSetReconciler struct {
//+kubebuilder:rbac:groups=actions.github.com,resources=ephemeralrunnersets,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=actions.github.com,resources=ephemeralrunnersets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=actions.github.com,resources=ephemeralrunnersets/finalizers,verbs=update;patch
//+kubebuilder:rbac:groups=actions.github.com,resources=ephemeralrunners,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=actions.github.com,resources=ephemeralrunners/status,verbs=get
@@ -146,7 +146,7 @@ func (r *EphemeralRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl.R
ctx,
ephemeralRunnerList,
client.InNamespace(req.Namespace),
client.MatchingFields{ephemeralRunnerSetReconcilerOwnerKey: req.Name},
client.MatchingFields{resourceOwnerKey: req.Name},
)
if err != nil {
log.Error(err, "Unable to list child ephemeral runners")
@@ -242,7 +242,7 @@ func (r *EphemeralRunnerSetReconciler) cleanUpProxySecret(ctx context.Context, e
func (r *EphemeralRunnerSetReconciler) cleanUpEphemeralRunners(ctx context.Context, ephemeralRunnerSet *v1alpha1.EphemeralRunnerSet, log logr.Logger) (bool, error) {
ephemeralRunnerList := new(v1alpha1.EphemeralRunnerList)
err := r.List(ctx, ephemeralRunnerList, client.InNamespace(ephemeralRunnerSet.Namespace), client.MatchingFields{ephemeralRunnerSetReconcilerOwnerKey: ephemeralRunnerSet.Name})
err := r.List(ctx, ephemeralRunnerList, client.InNamespace(ephemeralRunnerSet.Namespace), client.MatchingFields{resourceOwnerKey: ephemeralRunnerSet.Name})
if err != nil {
return false, fmt.Errorf("failed to list child ephemeral runners: %v", err)
}
@@ -357,9 +357,8 @@ func (r *EphemeralRunnerSetReconciler) createProxySecret(ctx context.Context, ep
Name: proxyEphemeralRunnerSetSecretName(ephemeralRunnerSet),
Namespace: ephemeralRunnerSet.Namespace,
Labels: map[string]string{
// TODO: figure out autoScalingRunnerSet name and set it as a label for this secret
// "auto-scaling-runner-set-namespace": ephemeralRunnerSet.Namespace,
// "auto-scaling-runner-set-name": ephemeralRunnerSet.Name,
LabelKeyGitHubScaleSetName: ephemeralRunnerSet.Labels[LabelKeyGitHubScaleSetName],
LabelKeyGitHubScaleSetNamespace: ephemeralRunnerSet.Labels[LabelKeyGitHubScaleSetNamespace],
},
},
Data: proxySecretData,
@@ -522,7 +521,7 @@ func (r *EphemeralRunnerSetReconciler) actionsClientOptionsFor(ctx context.Conte
// SetupWithManager sets up the controller with the Manager.
func (r *EphemeralRunnerSetReconciler) SetupWithManager(mgr ctrl.Manager) error {
// Index EphemeralRunner owned by EphemeralRunnerSet so we can perform faster look ups.
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &v1alpha1.EphemeralRunner{}, ephemeralRunnerSetReconcilerOwnerKey, func(rawObj client.Object) []string {
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &v1alpha1.EphemeralRunner{}, resourceOwnerKey, func(rawObj client.Object) []string {
groupVersion := v1alpha1.GroupVersion.String()
// grab the job object, extract the owner...

View File

@@ -8,6 +8,7 @@ import (
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/build"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/actions/actions-runner-controller/hash"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -19,14 +20,90 @@ const (
jitTokenKey = "jitToken"
)
// labels applied to resources
const (
LabelKeyAutoScaleRunnerSetName = "auto-scaling-runner-set-name"
LabelKeyAutoScaleRunnerSetNamespace = "auto-scaling-runner-set-namespace"
)
var commonLabelKeys = [...]string{
LabelKeyKubernetesPartOf,
LabelKeyKubernetesComponent,
LabelKeyKubernetesVersion,
LabelKeyGitHubScaleSetName,
LabelKeyGitHubScaleSetNamespace,
LabelKeyGitHubEnterprise,
LabelKeyGitHubOrganization,
LabelKeyGitHubRepository,
}
const labelValueKubernetesPartOf = "gha-runner-scale-set"
// scaleSetListenerImagePullPolicy is applied to all listeners
var scaleSetListenerImagePullPolicy = DefaultScaleSetListenerImagePullPolicy
func SetListenerImagePullPolicy(pullPolicy string) bool {
switch p := corev1.PullPolicy(pullPolicy); p {
case corev1.PullAlways, corev1.PullNever, corev1.PullIfNotPresent:
scaleSetListenerImagePullPolicy = p
return true
default:
return false
}
}
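// Illustrative behavior of SetListenerImagePullPolicy, per the switch above:
//
//	SetListenerImagePullPolicy("Always")       // true; subsequent listeners use PullAlways
//	SetListenerImagePullPolicy("IfNotPresent") // true; subsequent listeners use PullIfNotPresent
//	SetListenerImagePullPolicy("sometimes")    // false; the previously set policy is kept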
type resourceBuilder struct{}
func (b *resourceBuilder) newAutoScalingListener(autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, ephemeralRunnerSet *v1alpha1.EphemeralRunnerSet, namespace, image string, imagePullSecrets []corev1.LocalObjectReference) (*v1alpha1.AutoscalingListener, error) {
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdAnnotationKey])
if err != nil {
return nil, err
}
effectiveMinRunners := 0
effectiveMaxRunners := math.MaxInt32
if autoscalingRunnerSet.Spec.MaxRunners != nil {
effectiveMaxRunners = *autoscalingRunnerSet.Spec.MaxRunners
}
if autoscalingRunnerSet.Spec.MinRunners != nil {
effectiveMinRunners = *autoscalingRunnerSet.Spec.MinRunners
}
githubConfig, err := actions.ParseGitHubConfigFromURL(autoscalingRunnerSet.Spec.GitHubConfigUrl)
if err != nil {
return nil, fmt.Errorf("failed to parse github config from url: %v", err)
}
autoscalingListener := &v1alpha1.AutoscalingListener{
ObjectMeta: metav1.ObjectMeta{
Name: scaleSetListenerName(autoscalingRunnerSet),
Namespace: namespace,
Labels: map[string]string{
LabelKeyGitHubScaleSetNamespace: autoscalingRunnerSet.Namespace,
LabelKeyGitHubScaleSetName: autoscalingRunnerSet.Name,
LabelKeyKubernetesPartOf: labelValueKubernetesPartOf,
LabelKeyKubernetesComponent: "runner-scale-set-listener",
LabelKeyKubernetesVersion: autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion],
LabelKeyGitHubEnterprise: githubConfig.Enterprise,
LabelKeyGitHubOrganization: githubConfig.Organization,
LabelKeyGitHubRepository: githubConfig.Repository,
labelKeyRunnerSpecHash: autoscalingRunnerSet.ListenerSpecHash(),
},
},
Spec: v1alpha1.AutoscalingListenerSpec{
GitHubConfigUrl: autoscalingRunnerSet.Spec.GitHubConfigUrl,
GitHubConfigSecret: autoscalingRunnerSet.Spec.GitHubConfigSecret,
RunnerScaleSetId: runnerScaleSetId,
AutoscalingRunnerSetNamespace: autoscalingRunnerSet.Namespace,
AutoscalingRunnerSetName: autoscalingRunnerSet.Name,
EphemeralRunnerSetName: ephemeralRunnerSet.Name,
MinRunners: effectiveMinRunners,
MaxRunners: effectiveMaxRunners,
Image: image,
ImagePullPolicy: scaleSetListenerImagePullPolicy,
ImagePullSecrets: imagePullSecrets,
Proxy: autoscalingRunnerSet.Spec.Proxy,
GitHubServerTLS: autoscalingRunnerSet.Spec.GitHubServerTLS,
},
}
return autoscalingListener, nil
}
func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.AutoscalingListener, serviceAccount *corev1.ServiceAccount, secret *corev1.Secret, envs ...corev1.EnvVar) *corev1.Pod {
listenerEnv := []corev1.EnvVar{
{
@@ -119,7 +196,7 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
Name: autoscalingListenerContainerName,
Image: autoscalingListener.Spec.Image,
Env: listenerEnv,
ImagePullPolicy: corev1.PullIfNotPresent,
ImagePullPolicy: autoscalingListener.Spec.ImagePullPolicy,
Command: []string{
"/github-runnerscaleset-listener",
},
@@ -129,6 +206,11 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
RestartPolicy: corev1.RestartPolicyNever,
}
labels := make(map[string]string, len(autoscalingListener.Labels))
for key, val := range autoscalingListener.Labels {
labels[key] = val
}
newRunnerScaleSetListenerPod := &corev1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "Pod",
@@ -137,10 +219,7 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
ObjectMeta: metav1.ObjectMeta{
Name: autoscalingListener.Name,
Namespace: autoscalingListener.Namespace,
Labels: map[string]string{
LabelKeyAutoScaleRunnerSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyAutoScaleRunnerSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
},
Labels: labels,
},
Spec: podSpec,
}
@@ -148,47 +227,14 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
return newRunnerScaleSetListenerPod
}
func (b *resourceBuilder) newEphemeralRunnerSet(autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet) (*v1alpha1.EphemeralRunnerSet, error) {
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdKey])
if err != nil {
return nil, err
}
runnerSpecHash := autoscalingRunnerSet.RunnerSetSpecHash()
newLabels := map[string]string{}
newLabels[LabelKeyRunnerSpecHash] = runnerSpecHash
newEphemeralRunnerSet := &v1alpha1.EphemeralRunnerSet{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
GenerateName: autoscalingRunnerSet.ObjectMeta.Name + "-",
Namespace: autoscalingRunnerSet.ObjectMeta.Namespace,
Labels: newLabels,
},
Spec: v1alpha1.EphemeralRunnerSetSpec{
Replicas: 0,
EphemeralRunnerSpec: v1alpha1.EphemeralRunnerSpec{
RunnerScaleSetId: runnerScaleSetId,
GitHubConfigUrl: autoscalingRunnerSet.Spec.GitHubConfigUrl,
GitHubConfigSecret: autoscalingRunnerSet.Spec.GitHubConfigSecret,
Proxy: autoscalingRunnerSet.Spec.Proxy,
GitHubServerTLS: autoscalingRunnerSet.Spec.GitHubServerTLS,
PodTemplateSpec: autoscalingRunnerSet.Spec.Template,
},
},
}
return newEphemeralRunnerSet, nil
}
func (b *resourceBuilder) newScaleSetListenerServiceAccount(autoscalingListener *v1alpha1.AutoscalingListener) *corev1.ServiceAccount {
return &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: scaleSetListenerServiceAccountName(autoscalingListener),
Namespace: autoscalingListener.Namespace,
Labels: map[string]string{
LabelKeyAutoScaleRunnerSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyAutoScaleRunnerSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
LabelKeyGitHubScaleSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyGitHubScaleSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
},
},
}
@@ -202,10 +248,10 @@ func (b *resourceBuilder) newScaleSetListenerRole(autoscalingListener *v1alpha1.
Name: scaleSetListenerRoleName(autoscalingListener),
Namespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
Labels: map[string]string{
LabelKeyAutoScaleRunnerSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyAutoScaleRunnerSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
"auto-scaling-listener-namespace": autoscalingListener.Namespace,
"auto-scaling-listener-name": autoscalingListener.Name,
LabelKeyGitHubScaleSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyGitHubScaleSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
labelKeyListenerNamespace: autoscalingListener.Namespace,
labelKeyListenerName: autoscalingListener.Name,
"role-policy-rules-hash": rulesHash,
},
},
@@ -236,10 +282,10 @@ func (b *resourceBuilder) newScaleSetListenerRoleBinding(autoscalingListener *v1
Name: scaleSetListenerRoleName(autoscalingListener),
Namespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
Labels: map[string]string{
LabelKeyAutoScaleRunnerSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyAutoScaleRunnerSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
"auto-scaling-listener-namespace": autoscalingListener.Namespace,
"auto-scaling-listener-name": autoscalingListener.Name,
LabelKeyGitHubScaleSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyGitHubScaleSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
labelKeyListenerNamespace: autoscalingListener.Namespace,
labelKeyListenerName: autoscalingListener.Name,
"role-binding-role-ref-hash": roleRefHash,
"role-binding-subject-hash": subjectHash,
},
@@ -259,8 +305,8 @@ func (b *resourceBuilder) newScaleSetListenerSecretMirror(autoscalingListener *v
Name: scaleSetListenerSecretMirrorName(autoscalingListener),
Namespace: autoscalingListener.Namespace,
Labels: map[string]string{
LabelKeyAutoScaleRunnerSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyAutoScaleRunnerSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
LabelKeyGitHubScaleSetNamespace: autoscalingListener.Spec.AutoscalingRunnerSetNamespace,
LabelKeyGitHubScaleSetName: autoscalingListener.Spec.AutoscalingRunnerSetName,
"secret-data-hash": dataHash,
},
},
@@ -270,56 +316,79 @@ func (b *resourceBuilder) newScaleSetListenerSecretMirror(autoscalingListener *v
return newListenerSecret
}
func (b *resourceBuilder) newAutoScalingListener(autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, ephemeralRunnerSet *v1alpha1.EphemeralRunnerSet, namespace, image string, imagePullSecrets []corev1.LocalObjectReference) (*v1alpha1.AutoscalingListener, error) {
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdKey])
func (b *resourceBuilder) newEphemeralRunnerSet(autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet) (*v1alpha1.EphemeralRunnerSet, error) {
runnerScaleSetId, err := strconv.Atoi(autoscalingRunnerSet.Annotations[runnerScaleSetIdAnnotationKey])
if err != nil {
return nil, err
}
runnerSpecHash := autoscalingRunnerSet.RunnerSetSpecHash()
effectiveMinRunners := 0
effectiveMaxRunners := math.MaxInt32
if autoscalingRunnerSet.Spec.MaxRunners != nil {
effectiveMaxRunners = *autoscalingRunnerSet.Spec.MaxRunners
}
if autoscalingRunnerSet.Spec.MinRunners != nil {
effectiveMinRunners = *autoscalingRunnerSet.Spec.MinRunners
newLabels := map[string]string{
labelKeyRunnerSpecHash: runnerSpecHash,
LabelKeyKubernetesPartOf: labelValueKubernetesPartOf,
LabelKeyKubernetesComponent: "runner-set",
LabelKeyKubernetesVersion: autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion],
LabelKeyGitHubScaleSetName: autoscalingRunnerSet.Name,
LabelKeyGitHubScaleSetNamespace: autoscalingRunnerSet.Namespace,
}
autoscalingListener := &v1alpha1.AutoscalingListener{
if err := applyGitHubURLLabels(autoscalingRunnerSet.Spec.GitHubConfigUrl, newLabels); err != nil {
return nil, fmt.Errorf("failed to apply GitHub URL labels: %v", err)
}
newAnnotations := map[string]string{
AnnotationKeyGitHubRunnerGroupName: autoscalingRunnerSet.Annotations[AnnotationKeyGitHubRunnerGroupName],
}
newEphemeralRunnerSet := &v1alpha1.EphemeralRunnerSet{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: scaleSetListenerName(autoscalingRunnerSet),
Namespace: namespace,
Labels: map[string]string{
LabelKeyAutoScaleRunnerSetNamespace: autoscalingRunnerSet.Namespace,
LabelKeyAutoScaleRunnerSetName: autoscalingRunnerSet.Name,
LabelKeyRunnerSpecHash: autoscalingRunnerSet.ListenerSpecHash(),
GenerateName: autoscalingRunnerSet.ObjectMeta.Name + "-",
Namespace: autoscalingRunnerSet.ObjectMeta.Namespace,
Labels: newLabels,
Annotations: newAnnotations,
},
},
Spec: v1alpha1.AutoscalingListenerSpec{
Spec: v1alpha1.EphemeralRunnerSetSpec{
Replicas: 0,
EphemeralRunnerSpec: v1alpha1.EphemeralRunnerSpec{
RunnerScaleSetId: runnerScaleSetId,
GitHubConfigUrl: autoscalingRunnerSet.Spec.GitHubConfigUrl,
GitHubConfigSecret: autoscalingRunnerSet.Spec.GitHubConfigSecret,
RunnerScaleSetId: runnerScaleSetId,
AutoscalingRunnerSetNamespace: autoscalingRunnerSet.Namespace,
AutoscalingRunnerSetName: autoscalingRunnerSet.Name,
EphemeralRunnerSetName: ephemeralRunnerSet.Name,
MinRunners: effectiveMinRunners,
MaxRunners: effectiveMaxRunners,
Image: image,
ImagePullSecrets: imagePullSecrets,
Proxy: autoscalingRunnerSet.Spec.Proxy,
GitHubServerTLS: autoscalingRunnerSet.Spec.GitHubServerTLS,
PodTemplateSpec: autoscalingRunnerSet.Spec.Template,
},
},
}
return autoscalingListener, nil
return newEphemeralRunnerSet, nil
}
func (b *resourceBuilder) newEphemeralRunner(ephemeralRunnerSet *v1alpha1.EphemeralRunnerSet) *v1alpha1.EphemeralRunner {
labels := make(map[string]string)
for _, key := range commonLabelKeys {
switch key {
case LabelKeyKubernetesComponent:
labels[key] = "runner"
default:
v, ok := ephemeralRunnerSet.Labels[key]
if !ok {
continue
}
labels[key] = v
}
}
annotations := make(map[string]string)
for key, val := range ephemeralRunnerSet.Annotations {
annotations[key] = val
}
return &v1alpha1.EphemeralRunner{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
GenerateName: ephemeralRunnerSet.Name + "-runner-",
Namespace: ephemeralRunnerSet.Namespace,
Labels: labels,
Annotations: annotations,
},
Spec: ephemeralRunnerSet.Spec.EphemeralRunnerSpec,
}
@@ -337,6 +406,7 @@ func (b *resourceBuilder) newEphemeralRunnerPod(ctx context.Context, runner *v1a
for k, v := range runner.Spec.PodTemplateSpec.Labels {
labels[k] = v
}
labels["actions-ephemeral-runner"] = string(corev1.ConditionTrue)
for k, v := range runner.ObjectMeta.Annotations {
annotations[k] = v
@@ -352,8 +422,6 @@ func (b *resourceBuilder) newEphemeralRunnerPod(ctx context.Context, runner *v1a
runner.Status.RunnerJITConfig,
)
labels["actions-ephemeral-runner"] = string(corev1.ConditionTrue)
objectMeta := metav1.ObjectMeta{
Name: runner.ObjectMeta.Name,
Namespace: runner.ObjectMeta.Namespace,
@@ -469,3 +537,22 @@ func rulesForListenerRole(resourceNames []string) []rbacv1.PolicyRule {
},
}
}
func applyGitHubURLLabels(url string, labels map[string]string) error {
githubConfig, err := actions.ParseGitHubConfigFromURL(url)
if err != nil {
return fmt.Errorf("failed to parse github config from url: %v", err)
}
if len(githubConfig.Enterprise) > 0 {
labels[LabelKeyGitHubEnterprise] = githubConfig.Enterprise
}
if len(githubConfig.Organization) > 0 {
labels[LabelKeyGitHubOrganization] = githubConfig.Organization
}
if len(githubConfig.Repository) > 0 {
labels[LabelKeyGitHubRepository] = githubConfig.Repository
}
return nil
}
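// Illustrative result of applyGitHubURLLabels, consistent with TestLabelPropagation below:
// for "https://github.com/org/repo" the label map gains
//
//	actions.github.com/organization = "org"
//	actions.github.com/repository   = "repo"
//
// and no actions.github.com/enterprise entry, since the URL carries no enterprise part.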

View File

@@ -0,0 +1,93 @@
package actionsgithubcom
import (
"context"
"testing"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func TestLabelPropagation(t *testing.T) {
autoscalingRunnerSet := v1alpha1.AutoscalingRunnerSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-scale-set",
Namespace: "test-ns",
Labels: map[string]string{
LabelKeyKubernetesPartOf: labelValueKubernetesPartOf,
LabelKeyKubernetesVersion: "0.2.0",
},
Annotations: map[string]string{
runnerScaleSetIdAnnotationKey: "1",
AnnotationKeyGitHubRunnerGroupName: "test-group",
},
},
Spec: v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/org/repo",
},
}
var b resourceBuilder
ephemeralRunnerSet, err := b.newEphemeralRunnerSet(&autoscalingRunnerSet)
require.NoError(t, err)
assert.Equal(t, labelValueKubernetesPartOf, ephemeralRunnerSet.Labels[LabelKeyKubernetesPartOf])
assert.Equal(t, "runner-set", ephemeralRunnerSet.Labels[LabelKeyKubernetesComponent])
assert.Equal(t, autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion], ephemeralRunnerSet.Labels[LabelKeyKubernetesVersion])
assert.NotEmpty(t, ephemeralRunnerSet.Labels[labelKeyRunnerSpecHash])
assert.Equal(t, autoscalingRunnerSet.Name, ephemeralRunnerSet.Labels[LabelKeyGitHubScaleSetName])
assert.Equal(t, autoscalingRunnerSet.Namespace, ephemeralRunnerSet.Labels[LabelKeyGitHubScaleSetNamespace])
assert.Equal(t, "", ephemeralRunnerSet.Labels[LabelKeyGitHubEnterprise])
assert.Equal(t, "org", ephemeralRunnerSet.Labels[LabelKeyGitHubOrganization])
assert.Equal(t, "repo", ephemeralRunnerSet.Labels[LabelKeyGitHubRepository])
assert.Equal(t, autoscalingRunnerSet.Annotations[AnnotationKeyGitHubRunnerGroupName], ephemeralRunnerSet.Annotations[AnnotationKeyGitHubRunnerGroupName])
listener, err := b.newAutoScalingListener(&autoscalingRunnerSet, ephemeralRunnerSet, autoscalingRunnerSet.Namespace, "test:latest", nil)
require.NoError(t, err)
assert.Equal(t, labelValueKubernetesPartOf, listener.Labels[LabelKeyKubernetesPartOf])
assert.Equal(t, "runner-scale-set-listener", listener.Labels[LabelKeyKubernetesComponent])
assert.Equal(t, autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion], listener.Labels[LabelKeyKubernetesVersion])
assert.NotEmpty(t, ephemeralRunnerSet.Labels[labelKeyRunnerSpecHash])
assert.Equal(t, autoscalingRunnerSet.Name, listener.Labels[LabelKeyGitHubScaleSetName])
assert.Equal(t, autoscalingRunnerSet.Namespace, listener.Labels[LabelKeyGitHubScaleSetNamespace])
assert.Equal(t, "", listener.Labels[LabelKeyGitHubEnterprise])
assert.Equal(t, "org", listener.Labels[LabelKeyGitHubOrganization])
assert.Equal(t, "repo", listener.Labels[LabelKeyGitHubRepository])
listenerServiceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
}
listenerSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
}
listenerPod := b.newScaleSetListenerPod(listener, listenerServiceAccount, listenerSecret)
assert.Equal(t, listenerPod.Labels, listener.Labels)
ephemeralRunner := b.newEphemeralRunner(ephemeralRunnerSet)
require.NoError(t, err)
for _, key := range commonLabelKeys {
if key == LabelKeyKubernetesComponent {
continue
}
assert.Equal(t, ephemeralRunnerSet.Labels[key], ephemeralRunner.Labels[key])
}
assert.Equal(t, "runner", ephemeralRunner.Labels[LabelKeyKubernetesComponent])
assert.Equal(t, autoscalingRunnerSet.Annotations[AnnotationKeyGitHubRunnerGroupName], ephemeralRunner.Annotations[AnnotationKeyGitHubRunnerGroupName])
runnerSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
}
pod := b.newEphemeralRunnerPod(context.TODO(), ephemeralRunner, runnerSecret)
for key := range ephemeralRunner.Labels {
assert.Equal(t, ephemeralRunner.Labels[key], pod.Labels[key])
}
}

View File

@@ -115,7 +115,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
}()
// respond ok to GET / e.g. for health check
if r.Method == http.MethodGet {
if strings.ToUpper(r.Method) == http.MethodGet {
ok = true
fmt.Fprintln(w, "webhook server is running")
return
@@ -210,7 +210,16 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
if e.GetAction() == "queued" {
target.Amount = 1
break
} else if e.GetAction() == "completed" && e.GetWorkflowJob().GetConclusion() != "skipped" && e.GetWorkflowJob().GetRunnerID() > 0 {
} else if e.GetAction() == "completed" && e.GetWorkflowJob().GetConclusion() != "skipped" {
// We want to filter out "completed" events sent by check runs.
// See https://github.com/actions/actions-runner-controller/issues/2118
// and https://github.com/actions/actions-runner-controller/pull/2119
// But canceled events have runner_id == 0 and GetRunnerID() returns 0 when RunnerID == nil,
// so we need to be more specific in filtering out the check runs.
// See example check run completion at https://gist.github.com/nathanklick/268fea6496a4d7b14cecb2999747ef84
if e.GetWorkflowJob().GetConclusion() == "success" && e.GetWorkflowJob().RunnerID == nil {
log.V(1).Info("Ignoring workflow_job event because it does not relate to a self-hosted runner")
} else {
// A negative amount is processed in the tryScale func as a scale-down request
// that erases the oldest CapacityReservation with the same amount.
// If the first CapacityReservation was created with Replicas=1, this negative scale target erases that,
@@ -218,6 +227,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
target.Amount = -1
break
}
}
// If the conclusion is "skipped", we will ignore it and fallthrough to the default case.
fallthrough
default:

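// In short, the new branch above ignores a "completed" event only when it looks like a
// check-run completion, i.e. roughly:
//
//	isCheckRun := e.GetWorkflowJob().GetConclusion() == "success" &&
//		e.GetWorkflowJob().RunnerID == nil
//
// Canceled jobs can also have RunnerID == nil, but their conclusion is not "success",
// so they still fall into the else branch and produce the -1 scale target.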
View File

@@ -105,12 +105,14 @@ func SetupIntegrationTest(ctx2 context.Context) *testEnvironment {
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: multiClient,
RunnerImage: "example/runner:test",
DockerImage: "example/docker:test",
Name: controllerName("runner"),
RegistrationRecheckInterval: time.Millisecond * 100,
RegistrationRecheckJitter: time.Millisecond * 10,
UnregistrationRetryDelay: 1 * time.Second,
RunnerPodDefaults: RunnerPodDefaults{
RunnerImage: "example/runner:test",
DockerImage: "example/docker:test",
},
}
err = runnerController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup runner controller")

View File

@@ -285,17 +285,21 @@ func secretDataToGitHubClientConfig(data map[string][]byte) (*github.Config, err
appID := string(data["github_app_id"])
if appID != "" {
conf.AppID, err = strconv.ParseInt(appID, 10, 64)
if err != nil {
return nil, err
}
}
instID := string(data["github_app_installation_id"])
if instID != "" {
conf.AppInstallationID, err = strconv.ParseInt(instID, 10, 64)
if err != nil {
return nil, err
}
}
conf.AppPrivateKey = string(data["github_app_private_key"])
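// Illustrative shape of the secret data parsed above (values are placeholders only):
//
//	data := map[string][]byte{
//		"github_app_id":              []byte("12345"),
//		"github_app_installation_id": []byte("67890"),
//		"github_app_private_key":     []byte("-----BEGIN RSA PRIVATE KEY-----\n..."),
//	}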

View File

@@ -15,6 +15,21 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
)
func newRunnerPod(template corev1.Pod, runnerSpec arcv1alpha1.RunnerConfig, githubBaseURL string, d RunnerPodDefaults) (corev1.Pod, error) {
return newRunnerPodWithContainerMode("", template, runnerSpec, githubBaseURL, d)
}
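// setEnv overrides the value of the named environment variable on the container,
// if the variable is already present; otherwise it is a no-op.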
func setEnv(c *corev1.Container, name, value string) {
for j := range c.Env {
e := &c.Env[j]
if e.Name == name {
e.Value = value
return
}
}
}
func newWorkGenericEphemeralVolume(t *testing.T, storageReq string) corev1.Volume {
GBs, err := resource.ParseQuantity(storageReq)
if err != nil {
@@ -76,9 +91,12 @@ func TestNewRunnerPod(t *testing.T) {
},
},
{
Name: "certs-client",
Name: "var-run",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
SizeLimit: resource.NewScaledQuantity(1, resource.Mega),
},
},
},
},
@@ -137,15 +155,7 @@ func TestNewRunnerPod(t *testing.T) {
},
{
Name: "DOCKER_HOST",
Value: "tcp://localhost:2376",
},
{
Name: "DOCKER_TLS_VERIFY",
Value: "1",
},
{
Name: "DOCKER_CERT_PATH",
Value: "/certs/client",
Value: "unix:///run/docker.sock",
},
},
VolumeMounts: []corev1.VolumeMount{
@@ -158,9 +168,8 @@ func TestNewRunnerPod(t *testing.T) {
MountPath: "/runner/_work",
},
{
Name: "certs-client",
MountPath: "/certs/client",
ReadOnly: true,
Name: "var-run",
MountPath: "/run",
},
},
ImagePullPolicy: corev1.PullAlways,
@@ -169,10 +178,15 @@ func TestNewRunnerPod(t *testing.T) {
{
Name: "docker",
Image: "default-docker-image",
Args: []string{
"dockerd",
"--host=unix:///run/docker.sock",
"--group=$(DOCKER_GROUP_GID)",
},
Env: []corev1.EnvVar{
{
Name: "DOCKER_TLS_CERTDIR",
Value: "/certs",
Name: "DOCKER_GROUP_GID",
Value: "1234",
},
},
VolumeMounts: []corev1.VolumeMount{
@@ -181,8 +195,8 @@ func TestNewRunnerPod(t *testing.T) {
MountPath: "/runner",
},
{
Name: "certs-client",
MountPath: "/certs/client",
Name: "var-run",
MountPath: "/run",
},
{
Name: "work",
@@ -398,6 +412,50 @@ func TestNewRunnerPod(t *testing.T) {
config: arcv1alpha1.RunnerConfig{},
want: newTestPod(base, nil),
},
{
description: "it should respect DOCKER_GROUP_GID of the dockerd sidecar container",
template: corev1.Pod{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "docker",
Env: []corev1.EnvVar{
{
Name: "DOCKER_GROUP_GID",
Value: "2345",
},
},
},
},
},
},
config: arcv1alpha1.RunnerConfig{},
want: newTestPod(base, func(p *corev1.Pod) {
setEnv(&p.Spec.Containers[1], "DOCKER_GROUP_GID", "2345")
}),
},
{
description: "it should add DOCKER_GROUP_GID=1001 to the dockerd sidecar container for Ubuntu 20.04 runners",
template: corev1.Pod{},
config: arcv1alpha1.RunnerConfig{
Image: "ghcr.io/summerwind/actions-runner:ubuntu-20.04-20210726-1",
},
want: newTestPod(base, func(p *corev1.Pod) {
setEnv(&p.Spec.Containers[1], "DOCKER_GROUP_GID", "1001")
p.Spec.Containers[0].Image = "ghcr.io/summerwind/actions-runner:ubuntu-20.04-20210726-1"
}),
},
{
description: "it should add DOCKER_GROUP_GID=121 to the dockerd sidecar container for Ubuntu 22.04 runners",
template: corev1.Pod{},
config: arcv1alpha1.RunnerConfig{
Image: "ghcr.io/summerwind/actions-runner:ubuntu-22.04-20210726-1",
},
want: newTestPod(base, func(p *corev1.Pod) {
setEnv(&p.Spec.Containers[1], "DOCKER_GROUP_GID", "121")
p.Spec.Containers[0].Image = "ghcr.io/summerwind/actions-runner:ubuntu-22.04-20210726-1"
}),
},
{
description: "dockerdWithinRunnerContainer=true should set privileged=true and omit the dind sidecar container",
template: corev1.Pod{},
@@ -485,9 +543,12 @@ func TestNewRunnerPod(t *testing.T) {
},
},
{
Name: "certs-client",
Name: "var-run",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
SizeLimit: resource.NewScaledQuantity(1, resource.Mega),
},
},
},
}
@@ -501,9 +562,8 @@ func TestNewRunnerPod(t *testing.T) {
MountPath: "/runner",
},
{
Name: "certs-client",
MountPath: "/certs/client",
ReadOnly: true,
Name: "var-run",
MountPath: "/run",
},
}
}),
@@ -527,9 +587,12 @@ func TestNewRunnerPod(t *testing.T) {
},
},
{
Name: "certs-client",
Name: "var-run",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
SizeLimit: resource.NewScaledQuantity(1, resource.Mega),
},
},
},
}
@@ -548,7 +611,14 @@ func TestNewRunnerPod(t *testing.T) {
for i := range testcases {
tc := testcases[i]
t.Run(tc.description, func(t *testing.T) {
got, err := newRunnerPod(tc.template, tc.config, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL, false)
got, err := newRunnerPod(tc.template, tc.config, githubBaseURL, RunnerPodDefaults{
RunnerImage: defaultRunnerImage,
RunnerImagePullSecrets: defaultRunnerImagePullSecrets,
DockerImage: defaultDockerImage,
DockerRegistryMirror: defaultDockerRegistryMirror,
DockerGID: "1234",
UseRunnerStatusUpdateHook: false,
})
require.NoError(t, err)
require.Equal(t, tc.want, got)
})
@@ -606,9 +676,12 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
},
},
{
Name: "certs-client",
Name: "var-run",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
SizeLimit: resource.NewScaledQuantity(1, resource.Mega),
},
},
},
},
@@ -667,15 +740,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
},
{
Name: "DOCKER_HOST",
Value: "tcp://localhost:2376",
},
{
Name: "DOCKER_TLS_VERIFY",
Value: "1",
},
{
Name: "DOCKER_CERT_PATH",
Value: "/certs/client",
Value: "unix:///run/docker.sock",
},
{
Name: "RUNNER_NAME",
@@ -696,9 +761,8 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
MountPath: "/runner/_work",
},
{
Name: "certs-client",
MountPath: "/certs/client",
ReadOnly: true,
Name: "var-run",
MountPath: "/run",
},
},
ImagePullPolicy: corev1.PullAlways,
@@ -707,10 +771,15 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
{
Name: "docker",
Image: "default-docker-image",
Args: []string{
"dockerd",
"--host=unix:///run/docker.sock",
"--group=$(DOCKER_GROUP_GID)",
},
Env: []corev1.EnvVar{
{
Name: "DOCKER_TLS_CERTDIR",
Value: "/certs",
Name: "DOCKER_GROUP_GID",
Value: "1234",
},
},
VolumeMounts: []corev1.VolumeMount{
@@ -719,8 +788,8 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
MountPath: "/runner",
},
{
Name: "certs-client",
MountPath: "/certs/client",
Name: "var-run",
MountPath: "/run",
},
{
Name: "work",
@@ -1079,6 +1148,10 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "var-run",
MountPath: "/run",
},
},
},
},
@@ -1097,9 +1170,12 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
},
},
{
Name: "certs-client",
Name: "var-run",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
SizeLimit: resource.NewScaledQuantity(1, resource.Mega),
},
},
},
workGenericEphemeralVolume,
@@ -1110,13 +1186,12 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
MountPath: "/runner/_work",
},
{
Name: "runner",
MountPath: "/runner",
Name: "var-run",
MountPath: "/run",
},
{
Name: "certs-client",
MountPath: "/certs/client",
ReadOnly: true,
Name: "runner",
MountPath: "/runner",
},
}
}),
@@ -1144,9 +1219,12 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
},
},
{
Name: "certs-client",
Name: "var-run",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
SizeLimit: resource.NewScaledQuantity(1, resource.Mega),
},
},
},
workGenericEphemeralVolume,
@@ -1159,6 +1237,7 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
defaultRunnerImage = "default-runner-image"
defaultRunnerImagePullSecrets = []string{}
defaultDockerImage = "default-docker-image"
defaultDockerGID = "1234"
defaultDockerRegistryMirror = ""
githubBaseURL = "api.github.com"
)
@@ -1178,12 +1257,15 @@ func TestNewRunnerPodFromRunnerController(t *testing.T) {
t.Run(tc.description, func(t *testing.T) {
r := &RunnerReconciler{
GitHubClient: multiClient,
Scheme: scheme,
RunnerPodDefaults: RunnerPodDefaults{
RunnerImage: defaultRunnerImage,
RunnerImagePullSecrets: defaultRunnerImagePullSecrets,
DockerImage: defaultDockerImage,
DockerRegistryMirror: defaultDockerRegistryMirror,
GitHubClient: multiClient,
Scheme: scheme,
DockerGID: defaultDockerGID,
},
}
got, err := r.newPod(tc.runner)
require.NoError(t, err)

View File

@@ -30,6 +30,7 @@ import (
"github.com/go-logr/logr"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
@@ -67,15 +68,24 @@ type RunnerReconciler struct {
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *MultiGitHubClient
Name string
RegistrationRecheckInterval time.Duration
RegistrationRecheckJitter time.Duration
UnregistrationRetryDelay time.Duration
RunnerPodDefaults RunnerPodDefaults
}
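// RunnerPodDefaults groups the controller-wide defaults applied when building runner
// pods: the runner and Docker images, image pull secrets, the Docker registry mirror,
// the dockerd group GID, and the runner status update hook toggle.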
type RunnerPodDefaults struct {
RunnerImage string
RunnerImagePullSecrets []string
DockerImage string
DockerRegistryMirror string
Name string
RegistrationRecheckInterval time.Duration
RegistrationRecheckJitter time.Duration
// The default Docker group ID to use for the dockerd sidecar container.
// Ubuntu 20.04 runner images assume 1001 and the 22.04 variant assumes 121 by default.
DockerGID string
UseRunnerStatusUpdateHook bool
UnregistrationRetryDelay time.Duration
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners,verbs=get;list;watch;create;update;patch;delete
@@ -144,7 +154,7 @@ func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
ready := runnerPodReady(&pod)
if (runner.Status.Phase != phase || runner.Status.Ready != ready) && !r.UseRunnerStatusUpdateHook || runner.Status.Phase == "" && r.UseRunnerStatusUpdateHook {
if (runner.Status.Phase != phase || runner.Status.Ready != ready) && !r.RunnerPodDefaults.UseRunnerStatusUpdateHook || runner.Status.Phase == "" && r.RunnerPodDefaults.UseRunnerStatusUpdateHook {
if pod.Status.Phase == corev1.PodRunning {
// Seeing this message, you can expect the runner to become `Running` soon.
log.V(1).Info(
@@ -291,7 +301,7 @@ func (r *RunnerReconciler) processRunnerCreation(ctx context.Context, runner v1a
return ctrl.Result{}, err
}
needsServiceAccount := runner.Spec.ServiceAccountName == "" && (r.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes")
needsServiceAccount := runner.Spec.ServiceAccountName == "" && (r.RunnerPodDefaults.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes")
if needsServiceAccount {
serviceAccount := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
@@ -305,7 +315,7 @@ func (r *RunnerReconciler) processRunnerCreation(ctx context.Context, runner v1a
rules := []rbacv1.PolicyRule{}
if r.UseRunnerStatusUpdateHook {
if r.RunnerPodDefaults.UseRunnerStatusUpdateHook {
rules = append(rules, []rbacv1.PolicyRule{
{
APIGroups: []string{"actions.summerwind.dev"},
@@ -582,7 +592,7 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
}
}
pod, err := newRunnerPodWithContainerMode(runner.Spec.ContainerMode, template, runner.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, ghc.GithubBaseURL, r.UseRunnerStatusUpdateHook)
pod, err := newRunnerPodWithContainerMode(runner.Spec.ContainerMode, template, runner.Spec.RunnerConfig, ghc.GithubBaseURL, r.RunnerPodDefaults)
if err != nil {
return pod, err
}
@@ -633,7 +643,7 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
if runnerSpec.ServiceAccountName != "" {
pod.Spec.ServiceAccountName = runnerSpec.ServiceAccountName
} else if r.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes" {
} else if r.RunnerPodDefaults.UseRunnerStatusUpdateHook || runner.Spec.ContainerMode == "kubernetes" {
pod.Spec.ServiceAccountName = runner.ObjectMeta.Name
}
@@ -753,13 +763,24 @@ func runnerHookEnvs(pod *corev1.Pod) ([]corev1.EnvVar, error) {
}, nil
}
func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string, useRunnerStatusUpdateHook bool) (corev1.Pod, error) {
func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, githubBaseURL string, d RunnerPodDefaults) (corev1.Pod, error) {
var (
privileged bool = true
dockerdInRunner bool = runnerSpec.DockerdWithinRunnerContainer != nil && *runnerSpec.DockerdWithinRunnerContainer
dockerEnabled bool = runnerSpec.DockerEnabled == nil || *runnerSpec.DockerEnabled
ephemeral bool = runnerSpec.Ephemeral == nil || *runnerSpec.Ephemeral
dockerdInRunnerPrivileged bool = dockerdInRunner
defaultRunnerImage = d.RunnerImage
defaultRunnerImagePullSecrets = d.RunnerImagePullSecrets
defaultDockerImage = d.DockerImage
defaultDockerRegistryMirror = d.DockerRegistryMirror
useRunnerStatusUpdateHook = d.UseRunnerStatusUpdateHook
)
const (
varRunVolumeName = "var-run"
varRunVolumeMountPath = "/run"
)
if containerMode == "kubernetes" {
@@ -1001,6 +1022,47 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
)
}
// explicitly invoke `dockerd` to avoid automatic TLS / TCP binding
dockerdContainer.Args = append([]string{
"dockerd",
"--host=unix:///run/docker.sock",
}, dockerdContainer.Args...)
// this must match a GID for the user in the runner image
// default matches GitHub Actions infra (and default runner images
// for actions-runner-controller) so typically should not need to be
// overridden
if ok, _ := envVarPresent("DOCKER_GROUP_GID", dockerdContainer.Env); !ok {
gid := d.DockerGID
// We default to gid 121 for Ubuntu 22.04 images
// See below for more details
// - https://github.com/actions/actions-runner-controller/issues/2490#issuecomment-1501561923
// - https://github.com/actions/actions-runner-controller/blob/8869ad28bb5a1daaedefe0e988571fe1fb36addd/runner/actions-runner.ubuntu-20.04.dockerfile#L14
// - https://github.com/actions/actions-runner-controller/blob/8869ad28bb5a1daaedefe0e988571fe1fb36addd/runner/actions-runner.ubuntu-22.04.dockerfile#L12
if strings.Contains(runnerContainer.Image, "22.04") {
gid = "121"
} else if strings.Contains(runnerContainer.Image, "20.04") {
gid = "1001"
}
dockerdContainer.Env = append(dockerdContainer.Env,
corev1.EnvVar{
Name: "DOCKER_GROUP_GID",
Value: gid,
})
}
dockerdContainer.Args = append(dockerdContainer.Args, "--group=$(DOCKER_GROUP_GID)")
// ideally, we could mount the socket directly at `/var/run/docker.sock`
// to use the default, but that's not practical since it won't exist
// when the container starts, so can't use subPath on the volume mount
runnerContainer.Env = append(runnerContainer.Env,
corev1.EnvVar{
Name: "DOCKER_HOST",
Value: "unix:///run/docker.sock",
},
)
if ok, _ := workVolumePresent(pod.Spec.Volumes); !ok {
pod.Spec.Volumes = append(pod.Spec.Volumes,
corev1.Volume{
@@ -1014,9 +1076,12 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
pod.Spec.Volumes = append(pod.Spec.Volumes,
corev1.Volume{
Name: "certs-client",
Name: varRunVolumeName,
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
SizeLimit: resource.NewScaledQuantity(1, resource.Mega),
},
},
},
)
@@ -1030,28 +1095,14 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
)
}
if ok, _ := volumeMountPresent(varRunVolumeName, runnerContainer.VolumeMounts); !ok {
runnerContainer.VolumeMounts = append(runnerContainer.VolumeMounts,
corev1.VolumeMount{
Name: "certs-client",
MountPath: "/certs/client",
ReadOnly: true,
Name: varRunVolumeName,
MountPath: varRunVolumeMountPath,
},
)
runnerContainer.Env = append(runnerContainer.Env, []corev1.EnvVar{
{
Name: "DOCKER_HOST",
Value: "tcp://localhost:2376",
},
{
Name: "DOCKER_TLS_VERIFY",
Value: "1",
},
{
Name: "DOCKER_CERT_PATH",
Value: "/certs/client",
},
}...)
}
// Determine the volume mounts assigned to the docker sidecar. In case extra mounts are included in the RunnerSpec, append them to the standard
// set of mounts. See https://github.com/actions/actions-runner-controller/issues/435 for context.
@@ -1060,14 +1111,16 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
Name: runnerVolumeName,
MountPath: runnerVolumeMountPath,
},
{
Name: "certs-client",
MountPath: "/certs/client",
},
}
mountPresent, _ := workVolumeMountPresent(dockerdContainer.VolumeMounts)
if !mountPresent {
if p, _ := volumeMountPresent(varRunVolumeName, dockerdContainer.VolumeMounts); !p {
dockerVolumeMounts = append(dockerVolumeMounts, corev1.VolumeMount{
Name: varRunVolumeName,
MountPath: varRunVolumeMountPath,
})
}
if p, _ := workVolumeMountPresent(dockerdContainer.VolumeMounts); !p {
dockerVolumeMounts = append(dockerVolumeMounts, corev1.VolumeMount{
Name: "work",
MountPath: workDir,
@@ -1078,11 +1131,6 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
dockerdContainer.Image = defaultDockerImage
}
dockerdContainer.Env = append(dockerdContainer.Env, corev1.EnvVar{
Name: "DOCKER_TLS_CERTDIR",
Value: "/certs",
})
if dockerdContainer.SecurityContext == nil {
dockerdContainer.SecurityContext = &corev1.SecurityContext{
Privileged: &privileged,
@@ -1224,10 +1272,6 @@ func newRunnerPodWithContainerMode(containerMode string, template corev1.Pod, ru
return *pod, nil
}
func newRunnerPod(template corev1.Pod, runnerSpec v1alpha1.RunnerConfig, defaultRunnerImage string, defaultRunnerImagePullSecrets []string, defaultDockerImage, defaultDockerRegistryMirror string, githubBaseURL string, useRunnerStatusUpdateHookEphemeralRole bool) (corev1.Pod, error) {
return newRunnerPodWithContainerMode("", template, runnerSpec, defaultRunnerImage, defaultRunnerImagePullSecrets, defaultDockerImage, defaultDockerRegistryMirror, githubBaseURL, useRunnerStatusUpdateHookEphemeralRole)
}
func (r *RunnerReconciler) SetupWithManager(mgr ctrl.Manager) error {
name := "runner-controller"
if r.Name != "" {
@@ -1273,6 +1317,15 @@ func removeFinalizer(finalizers []string, finalizerName string) ([]string, bool)
return result, removed
}
func envVarPresent(name string, items []corev1.EnvVar) (bool, int) {
for index, item := range items {
if item.Name == name {
return true, index
}
}
return false, -1
}
func workVolumePresent(items []corev1.Volume) (bool, int) {
for index, item := range items {
if item.Name == "work" {
@@ -1283,12 +1336,16 @@ func workVolumePresent(items []corev1.Volume) (bool, int) {
}
func workVolumeMountPresent(items []corev1.VolumeMount) (bool, int) {
return volumeMountPresent("work", items)
}
func volumeMountPresent(name string, items []corev1.VolumeMount) (bool, int) {
for index, item := range items {
if item.Name == "work" {
if item.Name == name {
return true, index
}
}
return false, 0
return false, -1
}
func applyWorkVolumeClaimTemplateToPod(pod *corev1.Pod, workVolumeClaimTemplate *v1alpha1.WorkVolumeClaimTemplate, workDir string) error {

View File

@@ -47,11 +47,8 @@ type RunnerSetReconciler struct {
CommonRunnerLabels []string
GitHubClient *MultiGitHubClient
RunnerImage string
RunnerImagePullSecrets []string
DockerImage string
DockerRegistryMirror string
UseRunnerStatusUpdateHook bool
RunnerPodDefaults RunnerPodDefaults
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnersets,verbs=get;list;watch;create;update;patch;delete
@@ -231,7 +228,7 @@ func (r *RunnerSetReconciler) newStatefulSet(ctx context.Context, runnerSet *v1a
githubBaseURL := ghc.GithubBaseURL
pod, err := newRunnerPodWithContainerMode(runnerSet.Spec.RunnerConfig.ContainerMode, template, runnerSet.Spec.RunnerConfig, r.RunnerImage, r.RunnerImagePullSecrets, r.DockerImage, r.DockerRegistryMirror, githubBaseURL, r.UseRunnerStatusUpdateHook)
pod, err := newRunnerPodWithContainerMode(runnerSet.Spec.RunnerConfig.ContainerMode, template, runnerSet.Spec.RunnerConfig, githubBaseURL, r.RunnerPodDefaults)
if err != nil {
return nil, err
}

View File

@@ -14,7 +14,7 @@ You can create workflows that build and test every pull request to your reposito
Runners execute the jobs assigned to them by a GitHub Actions workflow. There are two types of runners:
- [GitHub-hosted runners](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners) - GitHub provides Linux, Windows, and macOS virtual machines to run your workflows. These virtual machines are hosted in the cloud by GitHub.
- [Self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners) - you can host your own self-hosted runners in your own data center or cloud infrastructure. ARC deploys self-hosted runners.
- [Self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners) - you can host your own self-hosted runners in your own data center or cloud infrastructure. ARC deploys self-hosted runners.
## Self hosted runners
Self-hosted runners offer more control of hardware, operating system, and software tools than GitHub-hosted runners. With self-hosted runners, you can create custom hardware configurations that meet your needs with processing power or memory to run larger jobs, install software available on your local network, and choose an operating system not offered by GitHub-hosted runners.
@@ -83,7 +83,7 @@ The GitHub hosted runners include a large amount of pre-installed software packa
ARC maintains a few runner images with `latest` aligning with GitHub's Ubuntu version. These images do not contain all of the software installed on the GitHub runners; they contain a subset of the packages from the GitHub runners: basic CLI packages, git, docker, and build-essentials. To install additional software, it is recommended to use the corresponding setup actions, for instance `actions/setup-java` for Java or `actions/setup-node` for Node.
## Executing workflows
With all the setup and configuration done, a workflow can be created in the same repository to target the self-hosted runner created by ARC. The workflow needs `runs-on: self-hosted` so it targets the self-hosted runner pool. For more information on targeting workflows to run on self-hosted runners, see "[Using Self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/using-self-hosted-runners-in-a-workflow)."
With all the setup and configuration done, a workflow can be created in the same repository to target the self-hosted runner created by ARC. The workflow needs `runs-on: self-hosted` so it targets the self-hosted runner pool. For more information on targeting workflows to run on self-hosted runners, see "[Using Self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/using-self-hosted-runners-in-a-workflow)."
## Scaling runners - statically with replicas count
With a small tweak to the replicas count (e.g. `replicas: 2`) in the `runnerdeployment.yaml` file, more runners can be created. Depending on the replicas count, that many sets of pods will be created. As before, each pod contains the two containers.

View File

@@ -86,4 +86,4 @@ Or for example if they're having problems specifically with runners:
This way users don't have to understand ARC moving parts but we still have a
way to target them specifically if we need to.
[^1]: Superseded by [ADR 2023-04-14](2023-04-14-adding-labels-k8s-resources.md)
[^1]: Superseded by [ADR 2023-03-14](2023-03-14-adding-labels-k8s-resources.md)

View File

@@ -2,7 +2,7 @@
**Date**: 2023-02-02
**Status**: Proposed
**Status**: Done
## Context

View File

@@ -2,7 +2,7 @@
**Date**: 2023-02-10
**Status**: Pending
**Status**: Superseded [^1]
## Context
@@ -136,3 +136,5 @@ The downside of this mode:
- When you have multiple controllers deployed, they will still use the same version of the CRD, so you need to make sure every controller you deploy is the same version as the others.
- You can't mix an installation of `actions-runner-controller` in this mode (watchSingleNamespace) with the regular installation mode (watchAllClusterNamespaces) in the same cluster.
[^1]: Superseded by [ADR 2023-04-11](2023-04-11-limit-manager-role-permission.md)

View File

@@ -86,4 +86,4 @@ Or for example if they're having problems specifically with runners:
This way users don't have to understand ARC moving parts but we still have a
way to target them specifically if we need to.
[^1]: [ADR 2022-12-05](2022-12-05-adding-labels-k8s-resources.md)
[^1]: Supersedes [ADR 2022-12-05](2022-12-05-adding-labels-k8s-resources.md)

View File

@@ -0,0 +1,84 @@
# Improve ARC workflows for autoscaling runner sets
**Date**: 2023-03-17
**Status**: Done
## Context
In the [actions-runner-controller](https://github.com/actions/actions-runner-controller)
repository we essentially have two projects living side by side: the "legacy"
actions-runner-controller and the new one GitHub is supporting
(gha-runner-scale-set). To hasten progress we relied on existing workflows and
added some of our own (e.g. end-to-end tests). We have now reached a point where it's
somewhat confusing what does what and why, not to mention the increased running
times of some of those workflows and GHA-related flaky tests getting in the
way of legacy ARC and vice versa. The three main areas we want to cover are: Go
code, Kubernetes manifests / Helm charts, and E2E tests.
## Go code
At the moment we have three workflows that validate Go code:
- [golangci-lint](https://github.com/actions/actions-runner-controller/blob/34f3878/.github/workflows/golangci-lint.yaml):
this is a collection of linters that currently runs on all PRs and pushes to
master
- [Validate ARC](https://github.com/actions/actions-runner-controller/blob/01e9dd3/.github/workflows/validate-arc.yaml):
this is a bit of a catch-all workflow; besides Go tests, it also validates
Kubernetes manifests and runs `go generate`, `go fmt` and `go vet`
- [Run CodeQL](https://github.com/actions/actions-runner-controller/blob/a095f0b66aad5fbc8aa8d7032f3299233e4c84d2/.github/workflows/run-codeql.yaml)
### Proposal
I think having one `Go` workflow that collects everything Go-related would help a ton with
the reliability and understandability of what's going on. This shouldn't be limited
to the GHA-supported mode, as there are changes that, even if made outside the GHA
code base, could affect us (such as a dependency update).
This workflow should only run on changes to `*.go` files, `go.mod` and `go.sum`.
It should have these jobs, aiming to cover all existing functionality and
eliminate some duplication:
- `test`: run all Go tests in the project. We currently use the `-short` and
`-coverprofile` flags: while `-short` is used to skip [old ARC E2E
tests](https://github.com/actions/actions-runner-controller/blob/master/test/e2e/e2e_test.go#L85-L87),
`-coverprofile` adds to the test time without really giving us any value
in return. We should also start using `actions/setup-go@v4` to take advantage
of caching (it would speed up our tests by a lot), or enable it on `v3` if we
have a strong reason not to upgrade. We should keep ignoring our E2E tests too,
as those will be run elsewhere (either use `Short` there too or ignore the
package like we currently do). As a dependency for the tests, this job needs to run
`make manifests` first: we should fail there and then if there is a diff.
- `fmt`: we currently run `go fmt ./...` as part of `Validate ARC` but do
nothing with the results. We should fail in case of a diff. We don't need
caching for this job.
- `lint`: this corresponds to what's currently the `golangci-lint` workflow (this
also covers `go vet`, which currently happens as part of `Validate ARC` too)
- `generate`: the current behaviour for this is actually quite risky: we
generate our code in the `Validate ARC` workflow and use the results to run the
tests, but we don't validate that up-to-date generated code is checked in. This
job should run `go generate` and fail on a diff.
- `vulncheck`: **EDIT: this is covered by CodeQL** the Go team maintains [`govulncheck`](https://go.dev/blog/vuln), a tool that recursively
analyzes all function calls in Go code and spots vulnerabilities on the call
stack.
## Kubernetes manifests / Helm charts
We have [recently separated](https://github.com/actions/actions-runner-controller/commit/bd9f32e3540663360cf47f04acad26e6010f772e)
Helm chart validation, and we validate that the manifests are up to date as part of `Go
/ test`.
## End to end tests
These tests are giving us really good coverage and should be one of the main
actors when it comes to trusting our releases. Two improvements that could be
done here are:
- renaming the workflow to `GHA E2E`: since renaming our resources the `gha`
prefix has been used to identify things related to the mode GitHub supports
and these jobs strictly validate the GitHub mode _only_. Having a shorter name
allows for more readability of the various scenarios (e.g. `GHA E2E /
single-namespace-setup`).
- the test currently monitors and validates the number of pods spawning during
the workflow but not the outcome of the workflow. While it isn't necessary to look
at pod specifics, we should at least guarantee that the workflow can
conclude successfully (see the sketch below).
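For illustration only, a minimal sketch (not the actual E2E code) of how the test could assert a workflow's outcome by polling the GitHub REST API for the run's conclusion; the owner/repo/run ID plumbing, the helper name, and the raw `net/http` client are assumptions:

```go
// Hedged sketch: poll a workflow run until it completes and assert it succeeded.
// The endpoint is the standard GitHub REST API "get a workflow run" route; token
// handling and naming here are illustrative, not taken from the ARC tests.
package e2e

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"testing"
	"time"
)

func waitForWorkflowSuccess(t *testing.T, owner, repo string, runID int64) {
	t.Helper()
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/actions/runs/%d", owner, repo, runID)

	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			t.Fatal(err)
		}
		req.Header.Set("Authorization", "Bearer "+os.Getenv("GITHUB_TOKEN"))

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			t.Fatal(err)
		}
		var run struct {
			Status     string `json:"status"`
			Conclusion string `json:"conclusion"`
		}
		err = json.NewDecoder(resp.Body).Decode(&run)
		resp.Body.Close()
		if err != nil {
			t.Fatal(err)
		}

		if run.Status == "completed" {
			if run.Conclusion != "success" {
				t.Fatalf("workflow run concluded with %q", run.Conclusion)
			}
			return // the workflow finished successfully
		}
		time.Sleep(15 * time.Second)
	}
	t.Fatal("timed out waiting for the workflow run to complete")
}
```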

View File

@@ -0,0 +1,167 @@
# ADR 2023-04-11: Limit Permissions for Service Accounts in Actions-Runner-Controller
**Date**: 2023-04-11
**Status**: Done [^1]
## Context
- `actions-runner-controller` is a Kubernetes CRD (with controller) built using https://github.com/kubernetes-sigs/controller-runtime
- [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) has a default cache-based k8s API `client.Reader` that makes querying the k8s API server more efficient.
- The cache-based API client requires cluster scope `list` and `watch` permission for any resource the controller may query.
- This document is scoped to the AutoscalingRunnerSet CRD and its controller only.
## Service accounts and their role binding in actions-runner-controller
There are 3 service accounts involved in a working `AutoscalingRunnerSet`-based `actions-runner-controller`:
1. Service account for each Ephemeral runner Pod
This should have the lowest privilege (no `RoleBinding` nor `ClusterRoleBinding`) by default. In the case of `containerMode=kubernetes`, it gets certain write permissions via a `RoleBinding` that limits them to a single namespace.
> References:
>
> - ./charts/gha-runner-scale-set/templates/no_permission_serviceaccount.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_role.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_role_binding.yaml
> - ./charts/gha-runner-scale-set/templates/kube_mode_serviceaccount.yaml
2. Service account for AutoScalingListener Pod
This has a `RoleBinding` to a single namespace with a `Role` that has permission to `PATCH` `EphemeralRunnerSet` and `EphemeralRunner`.
3. Service account for the controller manager
Since the CRD controller is a singleton installed in the cluster that manages the CRD across multiple namespaces by default, the service account of the controller manager pod has a `ClusterRoleBinding` to a `ClusterRole` with broader permissions.
The current `ClusterRole` has the following permissions:
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingListeners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `Pods` (with `Status` sub-resource)
- **Get/List/Create/Delete/Update/Patch/Watch on `Secrets`**
- Get/List/Create/Delete/Update/Patch/Watch on `Roles`
- Get/List/Create/Delete/Update/Patch/Watch on `RoleBindings`
- Get/List/Create/Delete/Update/Patch/Watch on `ServiceAccounts`
> Full list can be found at: https://github.com/actions/actions-runner-controller/blob/facae69e0b189d3b5dd659f36df8a829516d2896/charts/actions-runner-controller-2/templates/manager_role.yaml
## Limit cluster role permission on Secrets
The cluster-scope `List Secrets` permission might be a blocker for adopting `actions-runner-controller` for certain customers, as they may have restrictions in their cluster that simply don't allow any service account to have cluster-scope `List Secrets` permission.
To help these customers and improve security for `actions-runner-controller` in general, we will try to limit the `ClusterRole` permissions of the controller manager's service account down to the following:
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `AutoScalingListeners` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunnerSets` (with `Status` and `Finalizer` sub-resource)
- Get/List/Create/Delete/Update/Patch/Watch on `EphemeralRunners` (with `Status` and `Finalizer` sub-resource)
- List/Watch on `Pods`
- List/Watch/Patch on `Roles`
- List/Watch on `RoleBindings`
- List/Watch on `ServiceAccounts`
> We will change the default cache-based client to bypass the cache when reading `Secrets` and `ConfigMaps` (a ConfigMap is used when you configure `githubServerTLS`), so we can eliminate the need for cluster-scope `List` and `Watch` `Secrets` permissions.
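As a rough illustration of what that cache bypass can look like with controller-runtime (v0.14-era API; the exact ARC wiring may differ), the manager can be configured to read those objects directly from the API server:

```go
// Hedged sketch, not the actual ARC setup: disable the informer cache for Secrets
// and ConfigMaps so the manager reads them live and no longer needs cluster-scope
// list/watch permissions on them.
package main

import (
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func newManager() (ctrl.Manager, error) {
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		// Reads of these types bypass the shared cache and go straight to the API server.
		ClientDisableCacheFor: []client.Object{
			&corev1.Secret{},
			&corev1.ConfigMap{},
		},
	})
}
```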
Introduce a new `Role` for the controller and a `RoleBinding` that binds the `Role` to the controller's `ServiceAccount` in the namespace where the controller is deployed. This role grants the controller's service account the permissions required to work with `AutoScalingListeners` in the controller namespace.
- Get/Create/Delete on `Pods`
- Get on `Pods/status`
- Get/Create/Delete/Update/Patch on `Secrets`
- Get/Create/Delete/Update/Patch on `ServiceAccounts`
The `Role` and `RoleBinding` creation will happen during the `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller`
During `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller`, we will store the controller's service account info as labels on the controller `Deployment`.
Ex:
```yaml
actions.github.com/controller-service-account-namespace: {{ .Release.Namespace }}
actions.github.com/controller-service-account-name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
```
Introduce a new `Role` per `AutoScalingRunnerSet` installation and a `RoleBinding` that binds the `Role` to the controller's `ServiceAccount` in the namespace where each `AutoScalingRunnerSet` is deployed, with the following permissions:
- Get/Create/Delete/Update/Patch/List on `Secrets`
- Create/Delete on `Pods`
- Get on `Pods/status`
- Get/Create/Delete/Update/Patch on `Roles`
- Get/Create/Delete/Update/Patch on `RoleBindings`
- Get on `ConfigMaps`
The `Role` and `RoleBinding` creation will happen during `helm install demo oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set` to grant the controller's service account the permissions required to operate in the namespace where the `AutoScalingRunnerSet` is deployed.
The `gha-runner-scale-set` helm chart will try to find the `Deployment` of the controller using `helm lookup`, and get the service account info from the labels of the controller `Deployment` (`actions.github.com/controller-service-account-namespace` and `actions.github.com/controller-service-account-name`).
The `gha-runner-scale-set` helm chart will use this service account to properly render the `RoleBinding` template.
The `gha-runner-scale-set` helm chart will also allow customers to explicitly provide the controller service account info, in case the `helm lookup` couldn't locate the right controller `Deployment`.
New sections in `values.yaml` of `gha-runner-scale-set`:
```yaml
## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
controllerServiceAccount:
  namespace: arc-system
  name: test-arc-gha-runner-scale-set-controller
```
## Install ARC to only watch/react resources in a single namespace
In case the user doesn't want to have any `ClusterRole`, they can choose to install the `actions-runner-controller` in a mode that only requires a `Role` with `RoleBinding` in a particular namespace.
In this mode, the `actions-runner-controller` will only be able to watch the `AutoScalingRunnerSet` resource in a single namespace.
If you want to deploy multiple `AutoScalingRunnerSet`s into different namespaces, you will need to install `actions-runner-controller` in this mode multiple times as well, and have each installation watch the namespace you want to deploy an `AutoScalingRunnerSet` into.
You will install `actions-runner-controller` with something like `helm install arc --namespace arc-system --set watchSingleNamespace=test-namespace oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller` (the `test-namespace` namespace needs to be created first).
You will deploy the `AutoScalingRunnerSet` with something like `helm install demo --namespace test-namespace oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set`
In this mode, you will end up with a manager `Role` that has all Get/List/Create/Delete/Update/Patch/Watch permissions on resources we need, and a `RoleBinding` to bind the `Role` with the controller `ServiceAccount` in the watched single namespace and the controller namespace, ex: `test-namespace` and `arc-system` in the above example.
The downside of this mode:
- When you have multiple controllers deployed, they will still use the same version of the CRD, so you need to make sure every controller you deploy is the same version as the others.
- You can't mix an installation of `actions-runner-controller` in this mode (watchSingleNamespace) with the regular installation mode (watchAllClusterNamespaces) in the same cluster.
## Cleanup process
We will apply the following annotations during installation; they are used in the cleanup process (`helm uninstall`). If an annotation is not present, cleanup of that resource is skipped.
The cleanup only patches the resource to remove the `actions.github.com/cleanup-protection` finalizer; the client that created a resource is responsible for deleting it. Keep in mind that `helm uninstall` automatically deletes the resources it created, which allows the cleanup procedure to complete. A sketch of such a finalizer-removing patch is shown after the ordering list below.
Annotations applied to the `AutoscalingRunnerSet` used in the cleanup procedure
are:
- `actions.github.com/cleanup-github-secret-name`
- `actions.github.com/cleanup-manager-role-binding`
- `actions.github.com/cleanup-manager-role-name`
- `actions.github.com/cleanup-kubernetes-mode-role-binding-name`
- `actions.github.com/cleanup-kubernetes-mode-role-name`
- `actions.github.com/cleanup-kubernetes-mode-service-account-name`
- `actions.github.com/cleanup-no-permission-service-account-name`
The order in which resources are being patched to remove finalizers:
1. Kubernetes mode `RoleBinding`
1. Kubernetes mode `Role`
1. Kubernetes mode `ServiceAccount`
1. No permission `ServiceAccount`
1. GitHub `Secret`
1. Manager `RoleBinding`
1. Manager `Role`
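For illustration, a minimal sketch (assuming a controller-runtime client; not the exact ARC cleanup code) of the kind of patch described above, which only removes the finalizer and leaves deletion to whoever created the resource:

```go
// Hedged sketch: drop the cleanup-protection finalizer via a merge patch.
package cleanup

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

const cleanupProtectionFinalizer = "actions.github.com/cleanup-protection"

func removeCleanupFinalizer(ctx context.Context, c client.Client, obj client.Object) error {
	// Snapshot the object before mutating it so MergeFrom can compute a minimal patch.
	original := obj.DeepCopyObject().(client.Object)

	kept := make([]string, 0, len(obj.GetFinalizers()))
	for _, f := range obj.GetFinalizers() {
		if f != cleanupProtectionFinalizer {
			kept = append(kept, f)
		}
	}
	obj.SetFinalizers(kept)

	// Only metadata.finalizers changes; deleting the object itself stays with its creator.
	return c.Patch(ctx, obj, client.MergeFrom(original))
}
```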
[^1]: Supersedes [ADR 2023-02-10](2023-02-10-limit-manager-role-permission.md)

View File

@@ -17,7 +17,7 @@ This anti-flap configuration also has the final say on if a runner can be scaled
This delay is configurable via 2 methods:
1. By setting a new default via the controller's `--default-scale-down-delay` flag
2. By setting by setting the attribute `scaleDownDelaySecondsAfterScaleOut:` in a `HorizontalRunnerAutoscaler` kind's `spec:`.
2. By setting the attribute `scaleDownDelaySecondsAfterScaleOut:` in a `HorizontalRunnerAutoscaler` kind's `spec:`.
Below is a complete basic example of one of the pull driven scaling metrics.

View File

@@ -2,7 +2,7 @@
## Usage
[GitHub self-hosted runners can be deployed at various levels in a management hierarchy](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners#about-self-hosted-runners):
[GitHub self-hosted runners can be deployed at various levels in a management hierarchy](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#about-self-hosted-runners):
- The repository level
- The organization level
- The enterprise level

View File

@@ -2,7 +2,7 @@
## Runner Groups
Runner groups can be used to limit which repositories are able to use the GitHub Runner at an organization level. Runner groups have to be [created in GitHub first](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups) before they can be referenced.
Runner groups can be used to limit which repositories are able to use the GitHub Runner at an organization level. Runner groups have to be [created in GitHub first](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/managing-access-to-self-hosted-runners-using-groups) before they can be referenced.
To add the runner to the group `NewGroup`, specify the group in your `Runner` or `RunnerDeployment` spec.

View File

@@ -1,7 +1,5 @@
# Autoscaling Runner Scale Sets mode
**⚠️ This mode is currently only available for a limited number of organizations.**
This new autoscaling mode brings numerous enhancements (described in the following sections) that will make your experience more reliable and secure.
## How it works
@@ -18,7 +16,7 @@ In addition to the increased reliability of the automatic scaling, we have worke
### Demo
https://user-images.githubusercontent.com/568794/212668313-8946ddc5-60c1-461f-a73e-27f5e8c75720.mp4
[![Watch the walkthrough](https://img.youtube.com/vi/wQ0k5k6KW5Y/hqdefault.jpg)](https://youtu.be/wQ0k5k6KW5Y)
## Setup
@@ -38,7 +36,7 @@ https://user-images.githubusercontent.com/568794/212668313-8946ddc5-60c1-461f-a7
--namespace "${NAMESPACE}" \
--create-namespace \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
--version 0.3.0
--version 0.4.0
```
1. Generate a Personal Access Token (PAT) or create and install a GitHub App. See [Creating a personal access token](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token) and [Creating a GitHub App](https://docs.github.com/en/developers/apps/creating-a-github-app).
@@ -59,7 +57,7 @@ https://user-images.githubusercontent.com/568794/212668313-8946ddc5-60c1-461f-a7
--create-namespace \
--set githubConfigUrl="${GITHUB_CONFIG_URL}" \
--set githubConfigSecret.github_token="${GITHUB_PAT}" \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set --version 0.3.0
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set --version 0.4.0
```
```bash
@@ -70,14 +68,14 @@ https://user-images.githubusercontent.com/568794/212668313-8946ddc5-60c1-461f-a7
GITHUB_APP_ID="<GITHUB_APP_ID>"
GITHUB_APP_INSTALLATION_ID="<GITHUB_APP_INSTALLATION_ID>"
GITHUB_APP_PRIVATE_KEY="<GITHUB_APP_PRIVATE_KEY>"
helm install arc-runner-set \
helm install "${INSTALLATION_NAME}" \
--namespace "${NAMESPACE}" \
--create-namespace \
--set githubConfigUrl="${GITHUB_CONFIG_URL}" \
--set githubConfigSecret.github_app_id="${GITHUB_APP_ID}" \
--set githubConfigSecret.github_app_installation_id="${GITHUB_APP_INSTALLATION_ID}" \
--set githubConfigSecret.github_app_private_key="${GITHUB_APP_PRIVATE_KEY}" \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set --version 0.3.0
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set --version 0.4.0
```
1. Check your installation. If everything went well, you should see the following:
@@ -86,8 +84,8 @@ https://user-images.githubusercontent.com/568794/212668313-8946ddc5-60c1-461f-a7
$ helm list -n "${NAMESPACE}"
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
arc arc-systems 1 2023-01-18 10:03:36.610534934 +0000 UTC deployed gha-runner-scale-set-controller-0.3.0 preview
arc-runner-set arc-systems 1 2023-01-18 10:20:14.795285645 +0000 UTC deployed gha-runner-scale-set-0.3.0 0.3.0
arc arc-systems 1 2023-01-18 10:03:36.610534934 +0000 UTC deployed gha-runner-scale-set-controller-0.4.0 preview
arc-runner-set arc-systems 1 2023-01-18 10:20:14.795285645 +0000 UTC deployed gha-runner-scale-set-0.4.0 0.4.0
```
```bash
@@ -104,7 +102,6 @@ https://user-images.githubusercontent.com/568794/212668313-8946ddc5-60c1-461f-a7
name: Test workflow
on:
workflow_dispatch:
jobs:
test:
runs-on: arc-runner-set
@@ -142,7 +139,7 @@ Upgrading actions-runner-controller requires a few extra steps because CRDs will
```bash
helm pull oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
--version 0.3.0 \
--version 0.4.0 \
--untar && \
kubectl replace -f <PATH>/gha-runner-scale-set-controller/crds/
```
@@ -151,16 +148,36 @@ Upgrading actions-runner-controller requires a few extra steps because CRDs will
## Troubleshooting
### I'm using the charts from the `master` branch and the controller is not working
The `master` branch is highly unstable! We offer no guarantees that the charts in the `master` branch will work at any given time. If you're using the charts from the `master` branch, you should expect to encounter issues. Please use the latest release instead.
### Controller pod is running but the runner set listener pod is not
You need to inspect the logs of the controller first and see if there are any errors. If there are no errors, and the runner set listener pod is still not running, you need to make sure that the **controller pod has access to the Kubernetes API server in your cluster!**
You'll see something similar to the following in the logs of the controller pod:
```log
kubectl logs <controller_pod_name> -c manager
17:35:28.661069 1 request.go:690] Waited for 1.032376652s due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.1:443/apis/monitoring.coreos.com/v1alpha1?timeout=32s
2023-03-15T17:35:29Z INFO starting manager
```
If you have a proxy configured or you're using a sidecar proxy that's automatically injected (think [Istio](https://istio.io/)), you need to make sure it's configured appropriately to allow traffic from the controller container (manager) to the Kubernetes API server.
### Check the logs
You can check the logs of the controller pod using the following command:
```bash
# Controller logs
$ kubectl logs -n "${NAMESPACE}" -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl logs -n "${NAMESPACE}" -l app.kubernetes.io/name=gha-runner-scale-set-controller
```
```bash
# Runner set listener logs
kubectl logs -n "${NAMESPACE}" -l auto-scaling-runner-set-namespace=arc-systems -l auto-scaling-runner-set-name=arc-runner-set
kubectl logs -n "${NAMESPACE}" -l actions.github.com/scale-set-namespace=arc-systems -l actions.github.com/scale-set-name=arc-runner-set
```
### Naming error: `Name must have up to characters`
@@ -181,8 +198,73 @@ Error: INSTALLATION FAILED: execution error at (gha-runner-scale-set/templates/a
Verify that the secret you provided is correct and that the `githubConfigUrl` you provided is accurate.
### Access to the path `/home/runner/_work/_tool` is denied error
You might see this error if you're using kubernetes mode with persistent volumes. This is because the runner container runs as a non-root user, which causes a permissions mismatch with the mounted volume.
To fix this, you can either:
1. Use a volume type that supports `securityContext.fsGroup` (`hostPath` volumes don't support it; `local` volumes do, as do other types). Update the `fsGroup` of your runner pod to match the GID of the runner. You can do that by updating the `gha-runner-scale-set` helm chart values to include the following:
```yaml
spec:
  securityContext:
    fsGroup: 123
  containers:
    - name: runner
      image: ghcr.io/actions/actions-runner:<VERSION> # Replace <VERSION> with the version you want to use
      command: ["/home/runner/run.sh"]
```
1. If updating the `securityContext` of your runner pod is not a viable solution, you can work around the issue by using `initContainers` to change the mounted volume's ownership, as follows:
```yaml
template:
  spec:
    initContainers:
      - name: kube-init
        image: ghcr.io/actions/actions-runner:latest
        command: ["sudo", "chown", "-R", "1001:123", "/home/runner/_work"]
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
```
## Changelog
### v0.4.0
#### ⚠️ Warning
This release contains a major change related to the way permissions are
applied to the manager ([#2276](https://github.com/actions/actions-runner-controller/pull/2276) and [#2363](https://github.com/actions/actions-runner-controller/pull/2363)).
Please evaluate these changes carefully before upgrading.
#### Major changes
1. Surface EphemeralRunnerSet stats to AutoscalingRunnerSet [#2382](https://github.com/actions/actions-runner-controller/pull/2382)
1. Improved security posture by removing list/watch secrets permission from manager cluster role
[#2276](https://github.com/actions/actions-runner-controller/pull/2276)
1. Improved security posture by delaying role/rolebinding creation to gha-runner-scale-set during installation
[#2363](https://github.com/actions/actions-runner-controller/pull/2363)
1. Improved security posture by supporting watching a single namespace from the controller
[#2374](https://github.com/actions/actions-runner-controller/pull/2374)
1. Added labels to AutoscalingRunnerSet subresources to allow easier inspection [#2391](https://github.com/actions/actions-runner-controller/pull/2391)
1. Fixed bug preventing env variables from being specified
[#2450](https://github.com/actions/actions-runner-controller/pull/2450)
1. Enhance quickstart troubleshooting guides
[#2435](https://github.com/actions/actions-runner-controller/pull/2435)
1. Fixed: ignore the extra dind container when container mode type is "dind"
[#2418](https://github.com/actions/actions-runner-controller/pull/2418)
1. Added additional cleanup finalizers [#2433](https://github.com/actions/actions-runner-controller/pull/2433)
1. gha-runner-scale-set listener pod inherits the ImagePullPolicy from the manager pod [#2477](https://github.com/actions/actions-runner-controller/pull/2477)
1. Treat `.ghe.com` domain as hosted environment [#2480](https://github.com/actions/actions-runner-controller/pull/2480)
### v0.3.0
#### Major changes
@@ -207,56 +289,3 @@ Verify that the secret you provided is correct and that the `githubConfigUrl` yo
1. Fixed a bug that was preventing runner scale sets from being removed from the backend when they were deleted from the cluster [#2255](https://github.com/actions/actions-runner-controller/pull/2255) [#2223](https://github.com/actions/actions-runner-controller/pull/2223)
1. Fixed bugs with the helm chart definitions preventing certain values from being set [#2222](https://github.com/actions/actions-runner-controller/pull/2222)
1. Fixed a bug that prevented the configuration of a runner group for a runner scale set [#2216](https://github.com/actions/actions-runner-controller/pull/2216)
#### Log
- [1c7b7f4](https://github.com/actions/actions-runner-controller/commit/1c7b7f4) Bump arc-2 chart version and prepare 0.2.0 release [#2313](https://github.com/actions/actions-runner-controller/pull/2313)
- [73e22a1](https://github.com/actions/actions-runner-controller/commit/73e22a1) Disable metrics serving in proxy tests [#2307](https://github.com/actions/actions-runner-controller/pull/2307)
- [9b44f00](https://github.com/actions/actions-runner-controller/commit/9b44f00) Documentation corrections [#2116](https://github.com/actions/actions-runner-controller/pull/2116)
- [6b4250c](https://github.com/actions/actions-runner-controller/commit/6b4250c) Add support for proxy [#2286](https://github.com/actions/actions-runner-controller/pull/2286)
- [ced8822](https://github.com/actions/actions-runner-controller/commit/ced8822) Resolves the erroneous webhook scale down due to check runs [#2119](https://github.com/actions/actions-runner-controller/pull/2119)
- [44c06c2](https://github.com/actions/actions-runner-controller/commit/44c06c2) fix: case-insensitive webhook label matching [#2302](https://github.com/actions/actions-runner-controller/pull/2302)
- [4103fe3](https://github.com/actions/actions-runner-controller/commit/4103fe3) Use DOCKER_IMAGE_NAME instead of NAME to avoid conflict. [#2303](https://github.com/actions/actions-runner-controller/pull/2303)
- [a44fe04](https://github.com/actions/actions-runner-controller/commit/a44fe04) Fix manager crashloopback for ARC deployments without scaleset-related controllers [#2293](https://github.com/actions/actions-runner-controller/pull/2293)
- [274d0c8](https://github.com/actions/actions-runner-controller/commit/274d0c8) Added ability to configure log level from chart values [#2252](https://github.com/actions/actions-runner-controller/pull/2252)
- [256e08e](https://github.com/actions/actions-runner-controller/commit/256e08e) Ask runner to wait for docker daemon from DinD. [#2292](https://github.com/actions/actions-runner-controller/pull/2292)
- [f677fd5](https://github.com/actions/actions-runner-controller/commit/f677fd5) doc: Fix chart name for helm commands in docs [#2287](https://github.com/actions/actions-runner-controller/pull/2287)
- [d962714](https://github.com/actions/actions-runner-controller/commit/d962714) Fix helm chart when containerMode.type=dind. [#2291](https://github.com/actions/actions-runner-controller/pull/2291)
- [3886f28](https://github.com/actions/actions-runner-controller/commit/3886f28) Add EKS test environment Terraform templates [#2290](https://github.com/actions/actions-runner-controller/pull/2290)
- [dab9004](https://github.com/actions/actions-runner-controller/commit/dab9004) Added workflow to be triggered via rest api dispatch in e2e test [#2283](https://github.com/actions/actions-runner-controller/pull/2283)
- [dd8ec1a](https://github.com/actions/actions-runner-controller/commit/dd8ec1a) Add testserver package [#2281](https://github.com/actions/actions-runner-controller/pull/2281)
- [8e52a6d](https://github.com/actions/actions-runner-controller/commit/8e52a6d) EphemeralRunner: On cleanup, if pod is pending, delete from service [#2255](https://github.com/actions/actions-runner-controller/pull/2255)
- [9990243](https://github.com/actions/actions-runner-controller/commit/9990243) Early return if finalizer does not exist to make it more readable [#2262](https://github.com/actions/actions-runner-controller/pull/2262)
- [0891981](https://github.com/actions/actions-runner-controller/commit/0891981) Port ADRs from internal repo [#2267](https://github.com/actions/actions-runner-controller/pull/2267)
- [facae69](https://github.com/actions/actions-runner-controller/commit/facae69) Remove un-required permissions for the manager-role of the new `AutoScalingRunnerSet` [#2260](https://github.com/actions/actions-runner-controller/pull/2260)
- [8f62e35](https://github.com/actions/actions-runner-controller/commit/8f62e35) Add options to multi client [#2257](https://github.com/actions/actions-runner-controller/pull/2257)
- [55951c2](https://github.com/actions/actions-runner-controller/commit/55951c2) Add new workflow to automate runner updates [#2247](https://github.com/actions/actions-runner-controller/pull/2247)
- [c4297d2](https://github.com/actions/actions-runner-controller/commit/c4297d2) Avoid deleting scale set if annotation is not parsable or if it does not exist [#2239](https://github.com/actions/actions-runner-controller/pull/2239)
- [0774f06](https://github.com/actions/actions-runner-controller/commit/0774f06) ADR: automate runner updates [#2244](https://github.com/actions/actions-runner-controller/pull/2244)
- [92ab11b](https://github.com/actions/actions-runner-controller/commit/92ab11b) Use UUID v5 for client identifiers [#2241](https://github.com/actions/actions-runner-controller/pull/2241)
- [7414dc6](https://github.com/actions/actions-runner-controller/commit/7414dc6) Add Identifier to actions.Client [#2237](https://github.com/actions/actions-runner-controller/pull/2237)
- [34efb9d](https://github.com/actions/actions-runner-controller/commit/34efb9d) Add documentation to update ARC with prometheus CRDs needed by actions metrics server [#2209](https://github.com/actions/actions-runner-controller/pull/2209)
- [fbad561](https://github.com/actions/actions-runner-controller/commit/fbad561) Allow provide pre-defined kubernetes secret when helm-install AutoScalingRunnerSet [#2234](https://github.com/actions/actions-runner-controller/pull/2234)
- [a5cef7e](https://github.com/actions/actions-runner-controller/commit/a5cef7e) Resolve CI break due to bad merge. [#2236](https://github.com/actions/actions-runner-controller/pull/2236)
- [1f4fe46](https://github.com/actions/actions-runner-controller/commit/1f4fe46) Delete RunnerScaleSet on service when AutoScalingRunnerSet is deleted. [#2223](https://github.com/actions/actions-runner-controller/pull/2223)
- [067686c](https://github.com/actions/actions-runner-controller/commit/067686c) Fix typos and markdown structure in troubleshooting guide [#2148](https://github.com/actions/actions-runner-controller/pull/2148)
- [df12e00](https://github.com/actions/actions-runner-controller/commit/df12e00) Remove network requests from actions.NewClient [#2219](https://github.com/actions/actions-runner-controller/pull/2219)
- [cc26593](https://github.com/actions/actions-runner-controller/commit/cc26593) Skip CT when list-changed=false. [#2228](https://github.com/actions/actions-runner-controller/pull/2228)
- [835eac7](https://github.com/actions/actions-runner-controller/commit/835eac7) Fix helm charts when pass values file. [#2222](https://github.com/actions/actions-runner-controller/pull/2222)
- [01e9dd3](https://github.com/actions/actions-runner-controller/commit/01e9dd3) Update Validate ARC workflow to go 1.19 [#2220](https://github.com/actions/actions-runner-controller/pull/2220)
- [8038181](https://github.com/actions/actions-runner-controller/commit/8038181) Allow update runner group for AutoScalingRunnerSet [#2216](https://github.com/actions/actions-runner-controller/pull/2216)
- [219ba5b](https://github.com/actions/actions-runner-controller/commit/219ba5b) chore(deps): bump sigs.k8s.io/controller-runtime from 0.13.1 to 0.14.1 [#2132](https://github.com/actions/actions-runner-controller/pull/2132)
- [b09e3a2](https://github.com/actions/actions-runner-controller/commit/b09e3a2) Return error for non-existing runner group. [#2215](https://github.com/actions/actions-runner-controller/pull/2215)
- [7ea60e4](https://github.com/actions/actions-runner-controller/commit/7ea60e4) Fix intermittent image push failures to GHCR [#2214](https://github.com/actions/actions-runner-controller/pull/2214)
- [c8918f5](https://github.com/actions/actions-runner-controller/commit/c8918f5) Fix URL for authenticating using a GitHub app [#2206](https://github.com/actions/actions-runner-controller/pull/2206)
- [d57d17f](https://github.com/actions/actions-runner-controller/commit/d57d17f) Add support for custom CA in actions.Client [#2199](https://github.com/actions/actions-runner-controller/pull/2199)
- [6e69c75](https://github.com/actions/actions-runner-controller/commit/6e69c75) chore(deps): bump github.com/hashicorp/go-retryablehttp from 0.7.1 to 0.7.2 [#2203](https://github.com/actions/actions-runner-controller/pull/2203)
- [882bfab](https://github.com/actions/actions-runner-controller/commit/882bfab) Renaming autoScaling to autoscaling in tests matching the convention [#2201](https://github.com/actions/actions-runner-controller/pull/2201)
- [3327f62](https://github.com/actions/actions-runner-controller/commit/3327f62) Refactor actions.Client with options to help extensibility [#2193](https://github.com/actions/actions-runner-controller/pull/2193)
- [282f2dd](https://github.com/actions/actions-runner-controller/commit/282f2dd) chore(deps): bump github.com/onsi/gomega from 1.20.2 to 1.25.0 [#2169](https://github.com/actions/actions-runner-controller/pull/2169)
- [d67f808](https://github.com/actions/actions-runner-controller/commit/d67f808) Include nikola-jokic in CODEOWNERS file [#2184](https://github.com/actions/actions-runner-controller/pull/2184)
- [4932412](https://github.com/actions/actions-runner-controller/commit/4932412) Fix L0 test to make it more reliable. [#2178](https://github.com/actions/actions-runner-controller/pull/2178)
- [6da1cde](https://github.com/actions/actions-runner-controller/commit/6da1cde) Update runner version to 2.301.1 [#2182](https://github.com/actions/actions-runner-controller/pull/2182)
- [f9bae70](https://github.com/actions/actions-runner-controller/commit/f9bae70) Add distinct namespace best practice note [#2181](https://github.com/actions/actions-runner-controller/pull/2181)
- [05a3908](https://github.com/actions/actions-runner-controller/commit/05a3908) Add arc-2 quickstart guide [#2180](https://github.com/actions/actions-runner-controller/pull/2180)
- [606ed1b](https://github.com/actions/actions-runner-controller/commit/606ed1b) Add Repository information to Runner Status [#2093](https://github.com/actions/actions-runner-controller/pull/2093)

View File

@@ -132,9 +132,9 @@ NAME READY STATUS RESTARTS AGE
example-runnerdeploy2475ht2qbr 2/2 Running 0 1m
````
Also, this runner has been registered directly to the specified repository, you can see it in repository settings. For more information, see "[Checking the status of a self-hosted runner - GitHub Docs](https://docs.github.com/en/actions/hosting-your-own-runners/monitoring-and-troubleshooting-self-hosted-runners#checking-the-status-of-a-self-hosted-runner)."
Also, this runner has been registered directly to the specified repository, you can see it in repository settings. For more information, see "[Checking the status of a self-hosted runner - GitHub Docs](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/monitoring-and-troubleshooting-self-hosted-runners#checking-the-status-of-a-self-hosted-runner)."
:two: You are ready to execute workflows against this self-hosted runner. For more information, see "[Using self-hosted runners in a workflow - GitHub Docs](https://docs.github.com/en/actions/hosting-your-own-runners/using-self-hosted-runners-in-a-workflow#using-self-hosted-runners-in-a-workflow)."
:two: You are ready to execute workflows against this self-hosted runner. For more information, see "[Using self-hosted runners in a workflow - GitHub Docs](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/using-self-hosted-runners-in-a-workflow#using-self-hosted-runners-in-a-workflow)."
There is also a quick start guide for getting started with Actions. For more information, please refer to "[Quick start Guide to GitHub Actions](https://docs.github.com/en/actions/quickstart)."

View File

@@ -37,4 +37,4 @@ jobs:
When using labels there are a few things to be aware of:
1. `self-hosted` is implicit with every runner, as this is an automatic label GitHub applies to any self-hosted runner. As a result, ARC can treat all runners as having this label without it being explicitly defined in a runner's manifest. You do not need to explicitly define this label in your runner manifests (you can if you want though).
2. In addition to the `self-hosted` label, GitHub also applies a few other [default](https://docs.github.com/en/actions/hosting-your-own-runners/using-self-hosted-runners-in-a-workflow#using-default-labels-to-route-jobs) labels to any self-hosted runner. The other default labels relate to the architecture of the runner and so can't be implicitly applied by ARC as ARC doesn't know if the runner is `linux` or `windows`, `x64` or `ARM64` etc. If you wish to use these labels in your workflows and have ARC scale runners accurately you must also add them to your runner manifests.
2. In addition to the `self-hosted` label, GitHub also applies a few other [default](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/using-self-hosted-runners-in-a-workflow#using-default-labels-to-route-jobs) labels to any self-hosted runner. The other default labels relate to the architecture of the runner and so can't be implicitly applied by ARC as ARC doesn't know if the runner is `linux` or `windows`, `x64` or `ARM64` etc. If you wish to use these labels in your workflows and have ARC scale runners accurately you must also add them to your runner manifests.

View File

@@ -160,7 +160,7 @@ spec:
### PV-backed runner work directory
ARC works by automatically creating runner pods for running [`actions/runner`](https://github.com/actions/runner) and [running `config.sh`](https://docs.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners#adding-a-self-hosted-runner-to-a-repository), which you had to run manually without ARC.
ARC works by automatically creating runner pods for running [`actions/runner`](https://github.com/actions/runner) and [running `config.sh`](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/adding-self-hosted-runners#adding-a-self-hosted-runner-to-a-repository), which you had to run manually without ARC.
`config.sh` is the script provided by `actions/runner` to pre-configure the runner process before it is started. One of the options provided by `config.sh` is `--work`,
which specifies the working directory in which the runner runs your workflow jobs.

View File

@@ -3,6 +3,7 @@ package actions
import (
"fmt"
"net/url"
"os"
"strings"
)
@@ -34,9 +35,7 @@ func ParseGitHubConfigFromURL(in string) (*GitHubConfig, error) {
return nil, err
}
isHosted := u.Host == "github.com" ||
u.Host == "www.github.com" ||
u.Host == "github.localhost"
isHosted := isHostedGitHubURL(u)
configURL := &GitHubConfig{
ConfigURL: u,
@@ -76,23 +75,35 @@ func ParseGitHubConfigFromURL(in string) (*GitHubConfig, error) {
func (c *GitHubConfig) GitHubAPIURL(path string) *url.URL {
result := &url.URL{
Scheme: c.ConfigURL.Scheme,
Host: c.ConfigURL.Host, // default for Enterprise mode
Path: "/api/v3", // default for Enterprise mode
}
switch c.ConfigURL.Host {
// Hosted
case "github.com", "github.localhost":
result.Host = fmt.Sprintf("api.%s", c.ConfigURL.Host)
// re-routing www.github.com to api.github.com
case "www.github.com":
result.Host = "api.github.com"
isHosted := isHostedGitHubURL(c.ConfigURL)
// Enterprise
default:
result.Host = c.ConfigURL.Host
result.Path = "/api/v3"
if isHosted {
result.Host = fmt.Sprintf("api.%s", c.ConfigURL.Host)
result.Path = ""
if strings.EqualFold("www.github.com", c.ConfigURL.Host) {
// re-routing www.github.com to api.github.com
result.Host = "api.github.com"
}
}
result.Path += path
return result
}
func isHostedGitHubURL(u *url.URL) bool {
_, forceGhes := os.LookupEnv("GITHUB_ACTIONS_FORCE_GHES")
if forceGhes {
return false
}
return strings.EqualFold(u.Host, "github.com") ||
strings.EqualFold(u.Host, "www.github.com") ||
strings.EqualFold(u.Host, "github.localhost") ||
strings.HasSuffix(u.Host, ".ghe.com")
}

View File

@@ -3,6 +3,7 @@ package actions_test
import (
"errors"
"net/url"
"os"
"strings"
"testing"
@@ -117,6 +118,16 @@ func TestGitHubConfig(t *testing.T) {
IsHosted: false,
},
},
{
configURL: "https://my-ghes.ghe.com/org/",
expected: &actions.GitHubConfig{
Scope: actions.GitHubScopeOrganization,
Enterprise: "",
Organization: "org",
Repository: "",
IsHosted: true,
},
},
}
for _, test := range tests {
@@ -151,9 +162,35 @@ func TestGitHubConfig_GitHubAPIURL(t *testing.T) {
t.Run("when hosted", func(t *testing.T) {
config, err := actions.ParseGitHubConfigFromURL("https://github.com/org/repo")
require.NoError(t, err)
assert.True(t, config.IsHosted)
result := config.GitHubAPIURL("/some/path")
assert.Equal(t, "https://api.github.com/some/path", result.String())
})
t.Run("when not hosted", func(t *testing.T) {})
t.Run("when hosted with ghe.com", func(t *testing.T) {
config, err := actions.ParseGitHubConfigFromURL("https://github.ghe.com/org/repo")
require.NoError(t, err)
assert.True(t, config.IsHosted)
result := config.GitHubAPIURL("/some/path")
assert.Equal(t, "https://api.github.ghe.com/some/path", result.String())
})
t.Run("when not hosted", func(t *testing.T) {
config, err := actions.ParseGitHubConfigFromURL("https://ghes.com/org/repo")
require.NoError(t, err)
assert.False(t, config.IsHosted)
result := config.GitHubAPIURL("/some/path")
assert.Equal(t, "https://ghes.com/api/v3/some/path", result.String())
})
t.Run("when not hosted with ghe.com", func(t *testing.T) {
os.Setenv("GITHUB_ACTIONS_FORCE_GHES", "1")
defer os.Unsetenv("GITHUB_ACTIONS_FORCE_GHES")
config, err := actions.ParseGitHubConfigFromURL("https://test.ghe.com/org/repo")
require.NoError(t, err)
assert.False(t, config.IsHosted)
result := config.GitHubAPIURL("/some/path")
assert.Equal(t, "https://test.ghe.com/api/v3/some/path", result.String())
})
}


@@ -84,6 +84,8 @@ func (c *Config) NewClient() (*Client, error) {
return nil, fmt.Errorf("enterprise url incorrect: %v", err)
}
tr.BaseURL = githubAPIURL
} else if c.URL != "" && tr.BaseURL != c.URL {
tr.BaseURL = c.URL
}
transport = tr
}

go.mod

@@ -1,6 +1,6 @@
module github.com/actions/actions-runner-controller
go 1.19
go 1.20
require (
github.com/bradleyfalzon/ghinstallation/v2 v2.1.0

hack/check-gh-chart-versions.sh (new executable file)

@@ -0,0 +1,42 @@
#!/bin/bash
# Checks the chart versions against an input version. Fails on mismatch.
#
# Usage:
# check-gh-chart-versions.sh <VERSION>
set -eo pipefail
TEXT_RED='\033[0;31m'
TEXT_RESET='\033[0m'
TEXT_GREEN='\033[0;32m'
target_version=$1
if [[ $# -eq 0 ]]; then
echo "Release version argument is required"
echo
echo "Usage: ${0} <VERSION>"
exit 1
fi
chart_dir="$(pwd)/charts"
controller_version=$(yq .version < "${chart_dir}/gha-runner-scale-set-controller/Chart.yaml")
controller_app_version=$(yq .appVersion < "${chart_dir}/gha-runner-scale-set-controller/Chart.yaml")
scaleset_version=$(yq .version < "${chart_dir}/gha-runner-scale-set/Chart.yaml")
scaleset_app_version=$(yq .appVersion < "${chart_dir}/gha-runner-scale-set/Chart.yaml")
if [[ "${controller_version}" != "${target_version}" ]] ||
[[ "${controller_app_version}" != "${target_version}" ]] ||
[[ "${scaleset_version}" != "${target_version}" ]] ||
[[ "${scaleset_app_version}" != "${target_version}" ]]; then
echo -e "${TEXT_RED}Chart versions do not match${TEXT_RESET}"
echo "Target version: ${target_version}"
echo "Controller version: ${controller_version}"
echo "Controller app version: ${controller_app_version}"
echo "Scale set version: ${scaleset_version}"
echo "Scale set app version: ${scaleset_app_version}"
exit 1
fi
echo -e "${TEXT_GREEN}Chart versions: ${controller_version}"
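
The script expects `yq` on the `PATH` and takes the release version as its only argument. A quick usage sketch (the version shown is only illustrative):

```bash
# Run from the repository root; exits non-zero and prints the mismatching
# versions when any chart version or appVersion differs from the argument.
./hack/check-gh-chart-versions.sh 0.23.3
```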


@@ -2,7 +2,7 @@
COMMIT=$(git rev-parse HEAD)
TAG=$(git describe --exact-match --abbrev=0 --tags "${COMMIT}" 2> /dev/null || true)
BRANCH=$(git branch | grep \* | cut -d ' ' -f2 | sed -e 's/[^a-zA-Z0-9+=._:/-]*//g' || true)
BRANCH=$(git branch | grep "\*" | cut -d ' ' -f2 | sed -e 's/[^a-zA-Z0-9+=._:/-]*//g' || true)
VERSION=""
if [ -z "$TAG" ]; then

main.go

@@ -45,6 +45,7 @@ import (
const (
defaultRunnerImage = "summerwind/actions-runner:latest"
defaultDockerImage = "docker:dind"
defaultDockerGID = "1001"
)
var scheme = runtime.NewScheme()
@@ -76,18 +77,15 @@ func main() {
autoScalingRunnerSetOnly bool
enableLeaderElection bool
disableAdmissionWebhook bool
runnerStatusUpdateHook bool
leaderElectionId string
port int
syncPeriod time.Duration
defaultScaleDownDelay time.Duration
runnerImage string
runnerImagePullSecrets stringSlice
runnerPodDefaults actionssummerwindnet.RunnerPodDefaults
dockerImage string
dockerRegistryMirror string
namespace string
logLevel string
logFormat string
@@ -108,10 +106,11 @@ func main() {
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.StringVar(&leaderElectionId, "leader-election-id", "actions-runner-controller", "Controller id for leader election.")
flag.StringVar(&runnerImage, "runner-image", defaultRunnerImage, "The image name of self-hosted runner container to use by default if one isn't defined in yaml.")
flag.StringVar(&dockerImage, "docker-image", defaultDockerImage, "The image name of docker sidecar container to use by default if one isn't defined in yaml.")
flag.StringVar(&runnerPodDefaults.RunnerImage, "runner-image", defaultRunnerImage, "The image name of self-hosted runner container to use by default if one isn't defined in yaml.")
flag.StringVar(&runnerPodDefaults.DockerImage, "docker-image", defaultDockerImage, "The image name of docker sidecar container to use by default if one isn't defined in yaml.")
flag.StringVar(&runnerPodDefaults.DockerGID, "docker-gid", defaultDockerGID, "The default GID of docker group in the docker sidecar container. Use 1001 for dockerd sidecars of Ubuntu 20.04 runners, 121 for Ubuntu 22.04.")
flag.Var(&runnerImagePullSecrets, "runner-image-pull-secret", "The default image-pull secret name for self-hosted runner container.")
flag.StringVar(&dockerRegistryMirror, "docker-registry-mirror", "", "The default Docker Registry Mirror used by runners.")
flag.StringVar(&runnerPodDefaults.DockerRegistryMirror, "docker-registry-mirror", "", "The default Docker Registry Mirror used by runners.")
flag.StringVar(&c.Token, "github-token", c.Token, "The personal access token of GitHub.")
flag.StringVar(&c.EnterpriseURL, "github-enterprise-url", c.EnterpriseURL, "Enterprise URL to be used for your GitHub API calls")
flag.Int64Var(&c.AppID, "github-app-id", c.AppID, "The application ID of GitHub App.")
@@ -122,7 +121,7 @@ func main() {
flag.StringVar(&c.BasicauthUsername, "github-basicauth-username", c.BasicauthUsername, "Username for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API")
flag.StringVar(&c.BasicauthPassword, "github-basicauth-password", c.BasicauthPassword, "Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API")
flag.StringVar(&c.RunnerGitHubURL, "runner-github-url", c.RunnerGitHubURL, "GitHub URL to be used by runners during registration")
flag.BoolVar(&runnerStatusUpdateHook, "runner-status-update-hook", false, "Use custom RBAC for runners (role, role binding and service account).")
flag.BoolVar(&runnerPodDefaults.UseRunnerStatusUpdateHook, "runner-status-update-hook", false, "Use custom RBAC for runners (role, role binding and service account).")
flag.DurationVar(&defaultScaleDownDelay, "default-scale-down-delay", actionssummerwindnet.DefaultScaleDownDelay, "The approximate delay for a scale down followed by a scale up, used to prevent flapping (down->up->down->... loop)")
flag.IntVar(&port, "port", 9443, "The port to which the admission webhook endpoint should bind")
flag.DurationVar(&syncPeriod, "sync-period", 1*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled.")
@@ -135,6 +134,8 @@ func main() {
flag.Var(&autoScalerImagePullSecrets, "auto-scaler-image-pull-secrets", "The default image-pull secret name for auto-scaler listener container.")
flag.Parse()
runnerPodDefaults.RunnerImagePullSecrets = runnerImagePullSecrets
log, err := logging.NewLogger(logLevel, logFormat)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: creating logger: %v\n", err)
@@ -170,6 +171,13 @@ func main() {
}
}
listenerPullPolicy := os.Getenv("CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY")
if ok := actionsgithubcom.SetListenerImagePullPolicy(listenerPullPolicy); ok {
log.Info("AutoscalingListener image pull policy changed", "ImagePullPolicy", listenerPullPolicy)
} else {
log.Info("Using default AutoscalingListener image pull policy", "ImagePullPolicy", actionsgithubcom.DefaultScaleSetListenerImagePullPolicy)
}
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
NewCache: newCache,
@@ -252,12 +260,7 @@ func main() {
Log: log.WithName("runner"),
Scheme: mgr.GetScheme(),
GitHubClient: multiClient,
DockerImage: dockerImage,
DockerRegistryMirror: dockerRegistryMirror,
UseRunnerStatusUpdateHook: runnerStatusUpdateHook,
// Defaults for self-hosted runner containers
RunnerImage: runnerImage,
RunnerImagePullSecrets: runnerImagePullSecrets,
RunnerPodDefaults: runnerPodDefaults,
}
if err = runnerReconciler.SetupWithManager(mgr); err != nil {
@@ -293,13 +296,8 @@ func main() {
Log: log.WithName("runnerset"),
Scheme: mgr.GetScheme(),
CommonRunnerLabels: commonRunnerLabels,
DockerImage: dockerImage,
DockerRegistryMirror: dockerRegistryMirror,
GitHubClient: multiClient,
// Defaults for self-hosted runner containers
RunnerImage: runnerImage,
RunnerImagePullSecrets: runnerImagePullSecrets,
UseRunnerStatusUpdateHook: runnerStatusUpdateHook,
RunnerPodDefaults: runnerPodDefaults,
}
if err = runnerSetReconciler.SetupWithManager(mgr); err != nil {
@@ -312,8 +310,9 @@ func main() {
"version", build.Version,
"default-scale-down-delay", defaultScaleDownDelay,
"sync-period", syncPeriod,
"default-runner-image", runnerImage,
"default-docker-image", dockerImage,
"default-runner-image", runnerPodDefaults.RunnerImage,
"default-docker-image", runnerPodDefaults.DockerImage,
"default-docker-gid", runnerPodDefaults.DockerGID,
"common-runnner-labels", commonRunnerLabels,
"leader-election-enabled", enableLeaderElection,
"leader-election-id", leaderElectionId,


@@ -136,12 +136,27 @@ func (reader *EventReader) ProcessWorkflowJobEvent(ctx context.Context, event in
// job_conclusion -> (neutral, success, skipped, cancelled, timed_out, action_required, failure)
githubWorkflowJobConclusionsTotal.With(extraLabel("job_conclusion", *e.WorkflowJob.Conclusion, labels)).Inc()
var (
exitCode = "na"
runTimeSeconds *float64
)
// We need to do our best not to fail the whole event processing
// when the user provided no GitHub API credentials.
// See https://github.com/actions/actions-runner-controller/issues/2424
if reader.GitHubClient != nil {
parseResult, err := reader.fetchAndParseWorkflowJobLogs(ctx, e)
if err != nil {
log.Error(err, "reading workflow job log")
return
} else {
log.Info("reading workflow_job logs", keysAndValues...)
}
exitCode = parseResult.ExitCode
s := parseResult.RunTime.Seconds()
runTimeSeconds = &s
log.WithValues(keysAndValues...).Info("reading workflow_job logs", "exit_code", exitCode)
}
if *e.WorkflowJob.Conclusion == "failure" {
@@ -167,18 +182,20 @@ func (reader *EventReader) ProcessWorkflowJobEvent(ctx context.Context, event in
}
if *conclusion == "timed_out" {
failedStep = fmt.Sprint(i)
parseResult.ExitCode = "timed_out"
exitCode = "timed_out"
break
}
}
githubWorkflowJobFailuresTotal.With(
extraLabel("failed_step", failedStep,
extraLabel("exit_code", parseResult.ExitCode, labels),
extraLabel("exit_code", exitCode, labels),
),
).Inc()
}
githubWorkflowJobRunDurationSeconds.With(extraLabel("job_conclusion", *e.WorkflowJob.Conclusion, labels)).Observe(parseResult.RunTime.Seconds())
if runTimeSeconds != nil {
githubWorkflowJobRunDurationSeconds.With(extraLabel("job_conclusion", *e.WorkflowJob.Conclusion, labels)).Observe(*runTimeSeconds)
}
}
}


@@ -6,8 +6,8 @@ DIND_ROOTLESS_RUNNER_NAME ?= ${DOCKER_USER}/actions-runner-dind-rootless
OS_IMAGE ?= ubuntu-22.04
TARGETPLATFORM ?= $(shell arch)
RUNNER_VERSION ?= 2.303.0
RUNNER_CONTAINER_HOOKS_VERSION ?= 0.2.0
RUNNER_VERSION ?= 2.304.0
RUNNER_CONTAINER_HOOKS_VERSION ?= 0.3.1
DOCKER_VERSION ?= 20.10.23
# default list of platforms for which multiarch image is built

Some files were not shown because too many files have changed in this diff.