Compare commits


86 Commits

Author SHA1 Message Date
Nikola Jokic
cf24ab584d Prepare 0.6.0 release (#2900) 2023-09-15 12:04:06 +02:00
Nikola Jokic
07bff8aa1e Extend the user agent and fix the build version for the listener app (#2892) 2023-09-14 20:10:49 +02:00
Nikola Jokic
ea2fb32e20 Extend and generate crds allowing listener pod spec change (#2758)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-09-14 15:33:29 +02:00
Nikola Jokic
6a022e5489 Fix chart test for name override (#2896) 2023-09-14 15:24:07 +02:00
Nicholas Hawkes
837a1cb850 Set the AutoscalingRunnerSet name to runnerScaleSetName (#2803) 2023-09-13 09:55:08 +02:00
github-actions[bot]
dce49a003d Updates: runner to v2.309.0 (#2876)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-09-12 16:31:51 +02:00
Lukas Beranek
c8216e1396 Fix missing \ in about-arc.md (#2866) 2023-09-07 13:52:25 +02:00
Andi Büchler
564c112b1a Fix trivial typos (#2856) 2023-09-07 13:51:41 +02:00
Francesco Virga
c7dce2bbb7 Documenting the runner container command in values.yaml (#2854) 2023-09-07 13:47:08 +02:00
Nikola Jokic
10d79342d7 Set restart policy on the runner pod to Never if restartPolicy is not set in template (#2787) 2023-09-07 13:39:08 +02:00
Jongwoo Han
64eafb58b6 Replace deprecated ::set-output with $GITHUB_OUTPUT (#2679) 2023-09-07 13:35:12 +02:00
mubashirusman
030efd82c5 Fix spacing in about-arc.md (#2790) 2023-09-07 12:24:55 +02:00
Kirill Bilchenko
f1d7c52253 bump appVersion to latest available app (#2840) 2023-08-30 15:01:31 +09:00
Jonathan Wiemers
76d622b86b feature: allow custom envornment variables in metricsservice (#2839) 2023-08-30 15:01:06 +09:00
Nathan Heaps
0b24b0d60b Add docs for setting the RUNNER_GRACEFUL_STOP_TIMEOUT env var on docker container (#2843) 2023-08-30 12:30:18 +09:00
Bassem Dghaidi
5e23c598a8 Move top level metrics property up (#2841) 2023-08-29 03:58:08 -04:00
Nikola Jokic
3652932780 Fix canary VERSION parameter (#2842) 2023-08-28 14:46:53 +02:00
Stefan Andres
94065d2fc5 [helm actions-runner-controller] Use namespaceSelector.matchExpression instead of matchLabels (#2830) 2023-08-28 14:24:20 +09:00
jb-2020
b1cc4da5dc Switch git-lfs source to packagecloud (#2838) 2023-08-28 14:23:57 +09:00
Lukas Hauser
8b7bfa5ffb Fix - Actually Enable Sets in addition to Slices in env (#2828) 2023-08-28 13:48:29 +09:00
Nikola Jokic
52fc819339 Fix parsing AcquireJob MessageQueueTokenExpiredError (#2837) 2023-08-25 20:35:01 +02:00
Bassem Dghaidi
215b245881 Upgrade e2e tests to latest version (0.5.0) (#2826) 2023-08-21 17:09:16 +02:00
Bassem Dghaidi
a3df23b07c Add grafana dashboard sample (#2825) 2023-08-21 16:31:55 +02:00
Bassem Dghaidi
f5c69654e7 Revert back the helm chart renaming hotfix (#2823) 2023-08-21 15:44:20 +02:00
Nikola Jokic
abc0b678d3 Revert chart name and use helper constant to trim the name base (#2824)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-08-21 15:36:14 +02:00
Bassem Dghaidi
963ab2a748 Fix workflow after chart renaming (#2822) 2023-08-21 14:28:55 +02:00
Bassem Dghaidi
8a41a596b6 Prepare 0.5.0 release (#2783) 2023-08-21 14:10:36 +02:00
Bassem Dghaidi
e10c437f46 Move gha-* docs out of preview (#2779) 2023-08-21 14:06:12 +02:00
Nikola Jokic
a0a3916c80 Provide scale-set listener metrics (#2559)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-08-21 13:50:07 +02:00
Nikola Jokic
1c360d7e26 Document customization for containerModes (#2777) 2023-08-18 11:03:28 +02:00
github-actions[bot]
20bb860a37 Updates: runner to v2.308.0 (#2814)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-08-15 12:53:03 +02:00
Nikola Jokic
6a75bc0880 Trim gha-runner-scale-set to gha-rs in names and remove role type suffixes (#2706) 2023-08-09 11:11:45 +02:00
Lukas Hauser
78271000c0 Logs - Add missing formatting (#2780) 2023-08-09 17:54:24 +09:00
Juliet Boyd
a36b0e58b0 Clarify multiple metrics in docs (#2712)
Co-authored-by: Dylan Boyd <5061312+dylanjboyd@users.noreply.github.com>
2023-08-09 17:53:39 +09:00
Nikola Jokic
336e11a4e9 Fix scaling back to 0 after min runners were set to number > 0 (#2742) 2023-08-09 10:32:08 +02:00
github-actions[bot]
dcb64f0b9e Updates: runner to v2.307.1 (#2778)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-07-26 20:02:33 +02:00
Nikola Jokic
0dadfc4d37 ADR: Customize listener pod (#2752) 2023-07-25 16:47:26 +02:00
Thorsten Wildberger
dc58f6ba13 feat: allow more dockerd options (#2701) 2023-07-25 13:59:49 +09:00
arielly-parussulo
06cbd632b8 add interval and timeout configuration for the actions-runner-controler serviceMonitors (#2654)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-07-25 13:59:41 +09:00
Paweł Rein
9f33ae1507 fixed indent in a README example (#2725) 2023-07-25 13:45:44 +09:00
Ekaterina Sobolevskaia
63a6b5a7f0 add opportunity write dnsPolicy for controller by helm values (#2708) 2023-07-25 13:38:13 +09:00
marcin-motyl
fddc5bf1c8 Fix deployment & service values in actionsMetrics (#2683) 2023-07-25 09:56:20 +09:00
Daniel Kubat
d90ce2bed5 Upgrade Docker Compose to v2.20.0 (#2738) 2023-07-25 09:54:09 +09:00
Gavin Williams
cd996e7c27 Fix panic: slice bounds out of range when runner spec contains volumeMounts. (#2720)
Signed-off-by: Gavin Williams <gavin.williams@machinemax.com>
2023-07-25 09:53:50 +09:00
Lars Lange
297442975e fix: remove callbacks resulting in scales due to incomplete response (#2671)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-07-25 09:04:54 +09:00
dependabot[bot]
5271f316e6 Bump golang.org/x/net from 0.11.0 to 0.12.0 (#2750)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-18 12:07:33 +02:00
dependabot[bot]
9845a934f4 Bump github.com/cloudflare/circl from 1.1.0 to 1.3.3 (#2628)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-07-14 13:48:27 +02:00
github-actions[bot]
e0a7e142e0 Updates: runner to v2.306.0 (#2727)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-07-07 14:48:40 +02:00
dependabot[bot]
f9a11a8b0b chore(deps): bump github.com/stretchr/testify from 1.8.2 to 1.8.4 (#2716)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-07-06 12:41:55 +02:00
Nikola Jokic
fde1893494 Add status check before deserializing runner-registration response (#2699) 2023-07-05 21:09:07 +02:00
Nikola Jokic
6fe8008640 Add configurable log format to values.yaml and propagate it to listener (#2686) 2023-07-05 21:06:42 +02:00
Yusuke Kuoka
2fee26ddce chore: Set build version on make-runscaleset (#2713) 2023-07-03 11:52:04 +02:00
marcin-motyl
685f7162a4 Fix serviceMonitor labels in actionsMetrics (#2682) 2023-07-01 13:59:44 +09:00
Lars Lange
d134dee14b fix: template test of service account (#2705) 2023-06-28 10:24:49 +02:00
dependabot[bot]
c33ce998f4 chore(deps): bump github.com/onsi/ginkgo/v2 from 2.9.1 to 2.11.0 (#2689)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-06-27 13:14:31 +02:00
kahirokunn
78a93566af chore: remove 16 characters from -service-account (#2567)
Signed-off-by: kahirokunn <okinakahiro@gmail.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-06-27 12:32:47 +02:00
Rose Soriano
81dea9b3dc Fix more broken links in docs (#2473)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-06-23 08:54:13 -04:00
Nikola Jokic
7ca3df3605 fix chart test (#2694) 2023-06-21 08:43:03 -04:00
kahirokunn
2343cd2d7b chore(gha-runner-scale-set): update indentation of initContainers (#2638) 2023-06-21 13:50:02 +02:00
Timm Drevensek
cf18cb3fb0 Adapt role name to prevent namespace collision (#2617) 2023-06-20 17:35:53 +02:00
Bassem Dghaidi
ae8b27a9a3 Apply the label "runners update" on runner update PRs (#2680) 2023-06-16 09:11:58 -04:00
dependabot[bot]
58ee5e8c4e chore(deps): bump github.com/onsi/ginkgo/v2 from 2.9.0 to 2.9.1 (#2401)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-06-15 14:56:53 +02:00
dependabot[bot]
fade63a663 chore(deps): bump go.uber.org/multierr from 1.7.0 to 1.10.0 (#2400)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-06-15 14:05:35 +02:00
Nikola Jokic
ac4056f85b Upgrade golang.org/x/net to 0.11 (#2676) 2023-06-15 13:38:55 +02:00
github-actions[bot]
462d044604 Updates: runner to v2.305.0 (#2674)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-06-15 06:07:09 -04:00
Nikola Jokic
94934819c4 Trim repo/org/enterprise to 63 characters in label values (#2657) 2023-06-09 20:57:20 +02:00
Nuru
aac811f210 Update unconsumed HRA capacity reservation's expiration more frequently and consistently (#2502)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-05-30 09:04:57 +09:00
Thang Le
e7ec736738 Use head_branch metric (#2549)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-05-28 16:36:55 +09:00
Daniel Hobley
90ea691e72 feat: allow for modifying var-run mount maximum size limit (#2624) 2023-05-27 11:47:23 +09:00
robert lestak
32a653c0ca enable passing docker-gid in helm chart (#2574) 2023-05-27 11:33:46 +09:00
Vincent Rivellino
c7b2dd1764 fix: labels on github webhook service template (#2582) 2023-05-27 11:33:20 +09:00
Changliang Wu
80af7fc125 feat: support configure docker insecure registry with env (#2606) 2023-05-27 11:32:46 +09:00
Armin Becher
34909f0cf1 Fix typo in HorizontalRunnerAutoscaler (#2563)
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2023-05-27 11:22:44 +09:00
Bassem Dghaidi
8afef51c8b Add DrainJobsMode (aka UpdateStrategy feature) (#2569) 2023-05-23 07:42:30 -04:00
Bassem Dghaidi
032443fcfd Fix workflows concurrency group names (#2611) 2023-05-22 07:16:38 -04:00
Nikola Jokic
91c8991835 Scale Set Metrics ADR (#2568)
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
2023-05-18 15:37:41 +02:00
Nikola Jokic
c5ebe750dc Discard logs on helm chart tests (#2607) 2023-05-18 14:15:05 +02:00
Bassem Dghaidi
34fdbf1231 Add concurrency limits on all workflows to eliminate wasted cycles (#2603) 2023-05-18 04:55:03 -04:00
Bassem Dghaidi
44e9b7d8eb Add new architecture diagram (#2598) 2023-05-17 08:36:16 -04:00
Bassem Dghaidi
7ab516fdab Update CONTRIBUTING.md with new contribution guidelines and release process documentation (#2596)
Co-authored-by: John Sudol <24583161+johnsudol@users.noreply.github.com>
2023-05-17 07:42:35 -04:00
github-actions[bot]
e571df52b5 Updates: container-hooks to v0.3.2 (#2597)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-17 05:57:23 -04:00
Bassem Dghaidi
706ec17bf4 Fix broken chart validation workflows (#2589) 2023-05-15 10:12:03 -04:00
Bassem Dghaidi
30355f742b Apply naming convention to workflows (#2581)
Co-authored-by: John Sudol <24583161+johnsudol@users.noreply.github.com>
2023-05-15 08:31:18 -04:00
Yusuke Kuoka
8a5fb6ccb7 Bump chart version to v0.23.3 for ARC v0.27.4 (#2577) 2023-05-12 09:10:59 -04:00
github-actions[bot]
e930ba6e98 Updates: container-hooks to v0.3.1 (#2580)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-12 05:55:09 -04:00
Bassem Dghaidi
5ba3805a3f Fix update runners scheduled workflow to check for container-hooks upgrades (#2576) 2023-05-12 05:52:24 -04:00
160 changed files with 23277 additions and 1681 deletions

.gitattributes (new file)

@@ -0,0 +1 @@
*.png filter=lfs diff=lfs merge=lfs -text


@@ -23,6 +23,14 @@ inputs:
arc-controller-namespace:
description: 'The namespace of the configured gha-runner-scale-set-controller'
required: true
wait-to-finish:
description: 'Wait for the workflow run to finish'
required: true
default: "true"
wait-to-running:
description: 'Wait for the workflow run to start running'
required: true
default: "false"
runs:
using: "composite"
@@ -111,14 +119,43 @@ runs:
- name: Generate summary about the triggered workflow run
shell: bash
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Triggered workflow run** |
|:--------------------------:|
| ${{steps.query_workflow.outputs.workflow_run_url}} |
EOF
- name: Wait for workflow to start running
if: inputs.wait-to-running == 'true' && inputs.wait-to-finish == 'false'
uses: actions/github-script@v6
with:
script: |
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms))
}
const owner = '${{inputs.repo-owner}}'
const repo = '${{inputs.repo-name}}'
const workflow_run_id = ${{steps.query_workflow.outputs.workflow_run}}
const workflow_job_id = ${{steps.query_workflow.outputs.workflow_job}}
let count = 0
while (count++<10) {
await sleep(30 * 1000);
let getRunResponse = await github.rest.actions.getWorkflowRun({
owner: owner,
repo: repo,
run_id: workflow_run_id
})
console.log(`${getRunResponse.data.html_url}: ${getRunResponse.data.status} (${getRunResponse.data.conclusion})`);
if (getRunResponse.data.status == 'in_progress') {
console.log(`Workflow run is in progress.`)
return
}
}
core.setFailed(`The triggered workflow run didn't start properly using ${{inputs.arc-name}}`)
- name: Wait for workflow to finish successfully
if: inputs.wait-to-finish == 'true'
uses: actions/github-script@v6
with:
script: |
@@ -151,10 +188,15 @@ runs:
}
core.setFailed(`The triggered workflow run didn't finish properly using ${{inputs.arc-name}}`)
- name: cleanup
if: inputs.wait-to-finish == 'true'
shell: bash
run: |
helm uninstall ${{ inputs.arc-name }} --namespace ${{inputs.arc-namespace}} --debug
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n ${{inputs.arc-name}} -l app.kubernetes.io/instance=${{ inputs.arc-name }}
- name: Gather logs and cleanup
shell: bash
if: always()
run: |
helm uninstall ${{ inputs.arc-name }} --namespace ${{inputs.arc-namespace}} --debug
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n ${{inputs.arc-name}} -l app.kubernetes.io/instance=${{ inputs.arc-name }}
kubectl logs deployment/arc-gha-runner-scale-set-controller -n ${{inputs.arc-controller-namespace}}
kubectl logs deployment/arc-gha-rs-controller -n ${{inputs.arc-controller-namespace}}
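The two inputs added above let callers choose between waiting for the triggered run to merely start or to run to completion. A minimal caller sketch, assuming illustrative values for the repository and workflow inputs (the `uses:` path and input names come from this diff; everything marked illustrative does not appear in the compare):

```yaml
# Hypothetical invocation of the extended composite action;
# wait-to-running / wait-to-finish are the inputs added above.
- name: Trigger run and wait only until it starts
  uses: ./.github/actions/execute-assert-arc-e2e
  with:
    auth-token: ${{ steps.setup.outputs.token }}   # illustrative
    repo-owner: my-org                             # illustrative
    repo-name: my-repo                             # illustrative
    workflow-file: my-workflow.yaml                # illustrative
    arc-name: my-scale-set                         # illustrative
    arc-namespace: "arc-runners"
    arc-controller-namespace: "arc-systems"
    wait-to-running: "true"
    wait-to-finish: "false"
```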


@@ -1,4 +1,4 @@
name: Publish Helm Chart
name: Publish ARC Helm Charts
# Revert to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
@@ -8,7 +8,7 @@ on:
- master
paths:
- 'charts/**'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/arc-publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
@@ -28,6 +28,10 @@ env:
permissions:
contents: write
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: true
jobs:
lint-chart:
name: Lint Chart
@@ -66,7 +70,7 @@ jobs:
run: |
changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
echo "changed=true" >> $GITHUB_OUTPUT
fi
- name: Run chart-testing (lint)
@@ -171,6 +175,7 @@ jobs:
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--index-path ${{ github.workspace }}/index.yaml \
--token ${{ secrets.GITHUB_TOKEN }} \
--push \
--pages-branch 'gh-pages' \
--pages-index-path 'index.yaml'
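Several hunks in this compare apply the same `::set-output` deprecation fix (see #2679 in the commit list above). The pattern in isolation, assembled from the hunk above:

```yaml
# Deprecated: outputs were set via a magic stdout command:
#   echo "::set-output name=changed::true"
# Current: append key=value to the file referenced by $GITHUB_OUTPUT.
- name: Run chart-testing (list-changed)
  id: list-changed
  run: |
    changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
    if [[ -n "$changed" ]]; then
      echo "changed=true" >> $GITHUB_OUTPUT
    fi
# Consumers are unchanged either way:
#   ${{ steps.list-changed.outputs.changed }}
```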


@@ -1,4 +1,4 @@
name: Publish ARC
name: Publish ARC Image
# Revert to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
@@ -18,28 +18,32 @@ on:
default: false
permissions:
contents: write
packages: write
env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: true
jobs:
release-controller:
name: Release
runs-on: ubuntu-latest
# gha-runner-scale-set has its own release workflow.
# We don't want to publish a new actions-runner-controller image
# when we release gha-runner-scale-set.
if: ${{ !startsWith(github.event.inputs.release_tag_name, 'gha-runner-scale-set-') }}
steps:
- name: Checkout
uses: actions/checkout@v3
- uses: actions/setup-go@v3
- uses: actions/setup-go@v4
with:
go-version: '1.18.2'
go-version-file: 'go.mod'
- name: Install tools
run: |


@@ -1,4 +1,4 @@
name: Runners
name: Release ARC Runner Images
# Revert to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
@@ -10,7 +10,7 @@ on:
- 'master'
paths:
- 'runner/VERSION'
- '.github/workflows/release-runners.yaml'
- '.github/workflows/arc-release-runners.yaml'
env:
# Safeguard to prevent pushing images to registeries after build
@@ -18,7 +18,10 @@ env:
TARGET_ORG: actions-runner-controller
TARGET_WORKFLOW: release-runners.yaml
DOCKER_VERSION: 20.10.23
RUNNER_CONTAINER_HOOKS_VERSION: 0.2.0
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: true
jobs:
build-runners:
@@ -27,10 +30,12 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Get runner version
id: runner_version
id: versions
run: |
version=$(echo -n $(cat runner/VERSION))
echo runner_version=$version >> $GITHUB_OUTPUT
runner_current_version="$(echo -n $(cat runner/VERSION | grep 'RUNNER_VERSION=' | cut -d '=' -f2))"
container_hooks_current_version="$(echo -n $(cat runner/VERSION | grep 'RUNNER_CONTAINER_HOOKS_VERSION=' | cut -d '=' -f2))"
echo runner_version=$runner_current_version >> $GITHUB_OUTPUT
echo container_hooks_version=$container_hooks_current_version >> $GITHUB_OUTPUT
- name: Get Token
id: get_workflow_token
@@ -42,7 +47,8 @@ jobs:
- name: Trigger Build And Push Runner Images To Registries
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
RUNNER_VERSION: ${{ steps.versions.outputs.runner_version }}
CONTAINER_HOOKS_VERSION: ${{ steps.versions.outputs.container_hooks_version }}
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
@@ -51,20 +57,21 @@ jobs:
gh workflow run ${{ env.TARGET_WORKFLOW }} -R ${{ env.TARGET_ORG }}/releases \
-f runner_version=${{ env.RUNNER_VERSION }} \
-f docker_version=${{ env.DOCKER_VERSION }} \
-f runner_container_hooks_version=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }} \
-f runner_container_hooks_version=${{ env.CONTAINER_HOOKS_VERSION }} \
-f sha='${{ github.sha }}' \
-f push_to_registries=${{ env.PUSH_TO_REGISTRIES }}
- name: Job summary
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
RUNNER_VERSION: ${{ steps.versions.outputs.runner_version }}
CONTAINER_HOOKS_VERSION: ${{ steps.versions.outputs.container_hooks_version }}
run: |
echo "The [release-runners.yaml](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/release-runners.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- runner_version: ${{ env.RUNNER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- docker_version: ${{ env.DOCKER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- runner_container_hooks_version: ${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- runner_container_hooks_version: ${{ env.CONTAINER_HOOKS_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- sha: ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
echo "- push_to_registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY


@@ -0,0 +1,149 @@
# This workflows polls releases from actions/runner and in case of a new one it
# updates files containing runner version and opens a pull request.
name: Runner Updates Check (Scheduled Job)
on:
schedule:
# run daily
- cron: "0 9 * * *"
workflow_dispatch:
jobs:
# check_versions compares our current version and the latest available runner
# version and sets them as outputs.
check_versions:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
runner_current_version: ${{ steps.runner_versions.outputs.runner_current_version }}
runner_latest_version: ${{ steps.runner_versions.outputs.runner_latest_version }}
container_hooks_current_version: ${{ steps.container_hooks_versions.outputs.container_hooks_current_version }}
container_hooks_latest_version: ${{ steps.container_hooks_versions.outputs.container_hooks_latest_version }}
steps:
- uses: actions/checkout@v3
- name: Get runner current and latest versions
id: runner_versions
run: |
CURRENT_VERSION="$(echo -n $(cat runner/VERSION | grep 'RUNNER_VERSION=' | cut -d '=' -f2))"
echo "Current version: $CURRENT_VERSION"
echo runner_current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo runner_latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
- name: Get container-hooks current and latest versions
id: container_hooks_versions
run: |
CURRENT_VERSION="$(echo -n $(cat runner/VERSION | grep 'RUNNER_CONTAINER_HOOKS_VERSION=' | cut -d '=' -f2))"
echo "Current version: $CURRENT_VERSION"
echo container_hooks_current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner-container-hooks | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo container_hooks_latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
# check_pr checks if a PR for the same update already exists. It only runs if
# runner latest version != our current version. If no existing PR is found,
# it sets a PR name as output.
check_pr:
runs-on: ubuntu-latest
needs: check_versions
if: needs.check_versions.outputs.runner_current_version != needs.check_versions.outputs.runner_latest_version || needs.check_versions.outputs.container_hooks_current_version != needs.check_versions.outputs.container_hooks_latest_version
outputs:
pr_name: ${{ steps.pr_name.outputs.pr_name }}
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: debug
run:
echo "RUNNER_CURRENT_VERSION=${{ needs.check_versions.outputs.runner_current_version }}"
echo "RUNNER_LATEST_VERSION=${{ needs.check_versions.outputs.runner_latest_version }}"
echo "CONTAINER_HOOKS_CURRENT_VERSION=${{ needs.check_versions.outputs.container_hooks_current_version }}"
echo "CONTAINER_HOOKS_LATEST_VERSION=${{ needs.check_versions.outputs.container_hooks_latest_version }}"
- uses: actions/checkout@v3
- name: PR Name
id: pr_name
env:
RUNNER_CURRENT_VERSION: ${{ needs.check_versions.outputs.runner_current_version }}
RUNNER_LATEST_VERSION: ${{ needs.check_versions.outputs.runner_latest_version }}
CONTAINER_HOOKS_CURRENT_VERSION: ${{ needs.check_versions.outputs.container_hooks_current_version }}
CONTAINER_HOOKS_LATEST_VERSION: ${{ needs.check_versions.outputs.container_hooks_latest_version }}
# Generate a PR name with the following title:
# Updates: runner to v2.304.0 and container-hooks to v0.3.1
run: |
RUNNER_MESSAGE="runner to v${RUNNER_LATEST_VERSION}"
CONTAINER_HOOKS_MESSAGE="container-hooks to v${CONTAINER_HOOKS_LATEST_VERSION}"
PR_NAME="Updates:"
if [ "$RUNNER_CURRENT_VERSION" != "$RUNNER_LATEST_VERSION" ]
then
PR_NAME="$PR_NAME $RUNNER_MESSAGE"
fi
if [ "$CONTAINER_HOOKS_CURRENT_VERSION" != "$CONTAINER_HOOKS_LATEST_VERSION" ]
then
PR_NAME="$PR_NAME $CONTAINER_HOOKS_MESSAGE"
fi
result=$(gh pr list --search "$PR_NAME" --json number --jq ".[].number" --limit 1)
if [ -z "$result" ]
then
echo "No existing PRs found, setting output with pr_name=$PR_NAME"
echo pr_name=$PR_NAME >> $GITHUB_OUTPUT
else
echo "Found a PR with title '$PR_NAME' already existing: ${{ github.server_url }}/${{ github.repository }}/pull/$result"
fi
# update_version updates runner version in the files listed below, commits
# the changes and opens a pull request as `github-actions` bot.
update_version:
runs-on: ubuntu-latest
needs:
- check_versions
- check_pr
if: needs.check_pr.outputs.pr_name
permissions:
pull-requests: write
contents: write
actions: write
env:
GH_TOKEN: ${{ github.token }}
RUNNER_CURRENT_VERSION: ${{ needs.check_versions.outputs.runner_current_version }}
RUNNER_LATEST_VERSION: ${{ needs.check_versions.outputs.runner_latest_version }}
CONTAINER_HOOKS_CURRENT_VERSION: ${{ needs.check_versions.outputs.container_hooks_current_version }}
CONTAINER_HOOKS_LATEST_VERSION: ${{ needs.check_versions.outputs.container_hooks_latest_version }}
PR_NAME: ${{ needs.check_pr.outputs.pr_name }}
steps:
- uses: actions/checkout@v3
- name: New branch
run: git checkout -b update-runner-"$(date +%Y-%m-%d)"
- name: Update files
run: |
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" runner/VERSION
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" runner/Makefile
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" Makefile
sed -i "s/$RUNNER_CURRENT_VERSION/$RUNNER_LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" runner/VERSION
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" runner/Makefile
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" Makefile
sed -i "s/$CONTAINER_HOOKS_CURRENT_VERSION/$CONTAINER_HOOKS_LATEST_VERSION/g" test/e2e/e2e_test.go
- name: Commit changes
run: |
# from https://github.com/orgs/community/discussions/26560
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
git add .
git commit -m "$PR_NAME"
git push -u origin HEAD
- name: Create pull request
run: gh pr create -f -l "runners update"
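The grep/cut parsing above implies that runner/VERSION now carries one KEY=VALUE line per component rather than a bare version string; the file itself is not shown in this compare, so the layout below is an assumption (version numbers are illustrative, taken from commit messages earlier in the list):

```yaml
# Assumed runner/VERSION contents (values illustrative):
#   RUNNER_VERSION=2.309.0
#   RUNNER_CONTAINER_HOOKS_VERSION=0.3.2
- name: Extract versions (same parsing as the steps above)
  run: |
    runner=$(grep 'RUNNER_VERSION=' runner/VERSION | cut -d '=' -f2)
    hooks=$(grep 'RUNNER_CONTAINER_HOOKS_VERSION=' runner/VERSION | cut -d '=' -f2)
    echo "runner=$runner container-hooks=$hooks"
```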


@@ -6,7 +6,7 @@ on:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/arc-validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
@@ -14,7 +14,7 @@ on:
push:
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/arc-validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
@@ -27,6 +27,13 @@ env:
permissions:
contents: read
concurrency:
# This will make sure we only apply the concurrency limits on pull requests
# but not pushes to master branch by making the concurrency group name unique
# for pushes
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
validate-chart:
name: Lint Chart
@@ -65,14 +72,14 @@ jobs:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
uses: helm/chart-testing-action@v2.4.0
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
echo "changed=true" >> $GITHUB_OUTPUT
fi
- name: Run chart-testing (lint)
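Two distinct concurrency shapes recur across these workflow diffs: release/publish workflows serialize on the workflow name alone, while validation workflows key the group on `github.head_ref` so only pull-request runs cancel each other, with pushes falling back to the unique `run_id`. Side by side (two separate workflow files, shown as two YAML documents):

```yaml
# Pattern 1: publish/release workflows - one active run per workflow.
concurrency:
  group: ${{ github.workflow }}
  cancel-in-progress: true
---
# Pattern 2: PR validation - cancel superseded runs of the same PR only;
# pushes get a unique group (run_id), so they are never cancelled.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true
```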


@@ -1,4 +1,4 @@
name: Validate Runners
name: Validate ARC Runners
on:
pull_request:
@@ -12,6 +12,13 @@ on:
permissions:
contents: read
concurrency:
# This will make sure we only apply the concurrency limits on pull requests
# but not pushes to master branch by making the concurrency group name unique
# for pushes
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
shellcheck:
name: runner / shellcheck


@@ -1,4 +1,4 @@
name: CI ARC E2E Linux VM Test
name: (gha) E2E Tests
on:
push:
@@ -16,7 +16,14 @@ env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: arc_e2e_test_dummy
IMAGE_NAME: "arc-test-image"
IMAGE_VERSION: "0.4.0"
IMAGE_VERSION: "0.6.0"
concurrency:
# This will make sure we only apply the concurrency limits on pull requests
# but not pushes to master branch by making the concurrency group name unique
# for pushes
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
default-setup:
@@ -51,21 +58,21 @@ jobs:
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
@@ -142,21 +149,21 @@ jobs:
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
@@ -231,21 +238,21 @@ jobs:
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
@@ -326,21 +333,21 @@ jobs:
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
kubectl wait --timeout=30s --for=condition=ready pod -n openebs -l name=openebs-localpv-provisioner
- name: Install gha-runner-scale-set
@@ -420,21 +427,21 @@ jobs:
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
@@ -521,21 +528,21 @@ jobs:
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
@@ -616,21 +623,21 @@ jobs:
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
@@ -703,3 +710,173 @@ jobs:
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
update-strategy-tests:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-sleepy-matrix.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
--set flags.updateStrategy="eventual" \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-rs-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-rs-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-rs-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-rs-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Trigger long running jobs and wait for runners to pick them up
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
wait-to-running: "true"
wait-to-finish: "false"
- name: Upgrade the gha-runner-scale-set
shell: bash
run: |
helm upgrade --install "${{ steps.install_arc.outputs.ARC_NAME }}" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{ env.TARGET_REPO }}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set template.spec.containers[0].name="runner" \
--set template.spec.containers[0].image="ghcr.io/actions/actions-runner:latest" \
--set template.spec.containers[0].command={"/home/runner/run.sh"} \
--set template.spec.containers[0].env[0].name="TEST" \
--set template.spec.containers[0].env[0].value="E2E TESTS" \
./charts/gha-runner-scale-set \
--debug
- name: Assert that the listener is deleted while jobs are running
shell: bash
run: |
count=0
while true; do
LISTENER_COUNT="$(kubectl get pods -l actions.github.com/scale-set-name=${{ steps.install_arc.outputs.ARC_NAME }} -n arc-systems --field-selector=status.phase=Running -o=jsonpath='{.items}' | jq 'length')"
RUNNERS_COUNT="$(kubectl get pods -l app.kubernetes.io/component=runner -n arc-runners --field-selector=status.phase=Running -o=jsonpath='{.items}' | jq 'length')"
RESOURCES="$(kubectl get pods -A)"
if [ "$LISTENER_COUNT" -eq 0 ]; then
echo "Listener has been deleted"
echo "$RESOURCES"
exit 0
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener to be deleted"
echo "$RESOURCES"
exit 1
fi
echo "Waiting for listener to be deleted"
echo "Listener count: $LISTENER_COUNT target: 0 | Runners count: $RUNNERS_COUNT target: 3"
sleep 1
count=$((count+1))
done
- name: Assert that the listener goes back up after the jobs are done
shell: bash
run: |
count=0
while true; do
LISTENER_COUNT="$(kubectl get pods -l actions.github.com/scale-set-name=${{ steps.install_arc.outputs.ARC_NAME }} -n arc-systems --field-selector=status.phase=Running -o=jsonpath='{.items}' | jq 'length')"
RUNNERS_COUNT="$(kubectl get pods -l app.kubernetes.io/component=runner -n arc-runners --field-selector=status.phase=Running -o=jsonpath='{.items}' | jq 'length')"
RESOURCES="$(kubectl get pods -A)"
if [ "$LISTENER_COUNT" -eq 1 ]; then
echo "Listener is up!"
echo "$RESOURCES"
exit 0
fi
if [ "$count" -ge 120 ]; then
echo "Timeout waiting for listener to be recreated"
echo "$RESOURCES"
exit 1
fi
echo "Waiting for listener to be recreated"
echo "Listener count: $LISTENER_COUNT target: 1 | Runners count: $RUNNERS_COUNT target: 0"
sleep 1
count=$((count+1))
done
- name: Gather logs and cleanup
shell: bash
if: always()
run: |
helm uninstall "${{ steps.install_arc.outputs.ARC_NAME }}" --namespace "arc-runners" --debug
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n "${{ steps.install_arc.outputs.ARC_NAME }}" -l app.kubernetes.io/instance="${{ steps.install_arc.outputs.ARC_NAME }}"
kubectl logs deployment/arc-gha-rs-controller -n "arc-systems"
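The new `update-strategy-tests` job exercises the controller's `eventual` update strategy (introduced with DrainJobsMode in #2569, per the commit list above): on a scale-set upgrade the listener is torn down first, running jobs drain, and the listener is then recreated, which is exactly what the two assert steps check. Enabling it amounts to one extra controller flag; a minimal sketch mirroring the install step in this job:

```yaml
# Minimal controller install with the update strategy under test;
# the --set flags mirror the helm values used by the job above.
- name: Install controller (eventual update strategy)
  run: |
    helm install arc \
      --namespace arc-systems \
      --create-namespace \
      --set flags.updateStrategy="eventual" \
      ./charts/gha-runner-scale-set-controller
```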


@@ -1,4 +1,4 @@
name: Publish Runner Scale Set Controller Charts
name: (gha) Publish Helm Charts
on:
workflow_dispatch:
@@ -33,7 +33,11 @@ env:
HELM_VERSION: v3.8.0
permissions:
packages: write
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: true
jobs:
build-push-image:
@@ -101,7 +105,7 @@ jobs:
- name: Job summary
run: |
echo "The [publish-runner-scale-set.yaml](https://github.com/actions/actions-runner-controller/blob/main/.github/workflows/publish-runner-scale-set.yaml) workflow run was completed successfully!" >> $GITHUB_STEP_SUMMARY
echo "The [gha-publish-chart.yaml](https://github.com/actions/actions-runner-controller/blob/main/.github/workflows/gha-publish-chart.yaml) workflow run was completed successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY


@@ -1,4 +1,4 @@
name: Validate Helm Chart (gha-runner-scale-set-controller and gha-runner-scale-set)
name: (gha) Validate Helm Charts
on:
pull_request:
@@ -6,13 +6,13 @@ on:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '.github/workflows/gha-validate-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '.github/workflows/gha-validate-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
workflow_dispatch:
@@ -23,6 +23,13 @@ env:
permissions:
contents: read
concurrency:
# This will make sure we only apply the concurrency limits on pull requests
# but not pushes to master branch by making the concurrency group name unique
# for pushes
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
validate-chart:
name: Lint Chart
@@ -61,23 +68,7 @@ jobs:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Set up latest version chart-testing
run: |
echo 'deb [trusted=yes] https://repo.goreleaser.com/apt/ /' | sudo tee /etc/apt/sources.list.d/goreleaser.list
sudo apt update
sudo apt install goreleaser
git clone https://github.com/helm/chart-testing
cd chart-testing
unset CT_CONFIG_DIR
goreleaser build --clean --skip-validate
./dist/chart-testing_linux_amd64_v1/ct version
echo 'Adding ct directory to PATH...'
echo "$RUNNER_TEMP/chart-testing/dist/chart-testing_linux_amd64_v1" >> "$GITHUB_PATH"
echo 'Setting CT_CONFIG_DIR...'
echo "CT_CONFIG_DIR=$RUNNER_TEMP/chart-testing/etc" >> "$GITHUB_ENV"
working-directory: ${{ runner.temp }}
uses: helm/chart-testing-action@v2.4.0
- name: Run chart-testing (list-changed)
id: list-changed
@@ -85,7 +76,7 @@ jobs:
ct version
changed=$(ct list-changed --config charts/.ci/ct-config-gha.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
echo "changed=true" >> $GITHUB_OUTPUT
fi
- name: Run chart-testing (lint)


@@ -1,4 +1,4 @@
name: Publish Canary Image
name: Publish Canary Images
# Revert to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
@@ -11,19 +11,19 @@ on:
- '.github/actions/**'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/e2e-test-dispatch-workflow.yaml'
- '.github/workflows/e2e-test-linux-vm.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/publish-runner-scale-set.yaml'
- '.github/workflows/release-runners.yaml'
- '.github/workflows/run-codeql.yaml'
- '.github/workflows/run-first-interaction.yaml'
- '.github/workflows/run-stale.yaml'
- '.github/workflows/update-runners.yaml'
- '.github/workflows/gha-e2e-tests.yaml'
- '.github/workflows/arc-publish.yaml'
- '.github/workflows/arc-publish-chart.yaml'
- '.github/workflows/gha-publish-chart.yaml'
- '.github/workflows/arc-release-runners.yaml'
- '.github/workflows/global-run-codeql.yaml'
- '.github/workflows/global-run-first-interaction.yaml'
- '.github/workflows/global-run-stale.yaml'
- '.github/workflows/arc-update-runners-scheduled.yaml'
- '.github/workflows/validate-arc.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/validate-gha-chart.yaml'
- '.github/workflows/validate-runners.yaml'
- '.github/workflows/arc-validate-chart.yaml'
- '.github/workflows/gha-validate-chart.yaml'
- '.github/workflows/arc-validate-runners.yaml'
- '.github/dependabot.yml'
- '.github/RELEASE_NOTE_TEMPLATE.md'
- 'runner/**'
@@ -37,6 +37,10 @@ permissions:
contents: read
packages: write
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: true
env:
# Safeguard to prevent pushing images to registeries after build
PUSH_TO_REGISTRIES: true
@@ -87,7 +91,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
@@ -97,14 +101,14 @@ jobs:
# Normalization is needed because upper case characters are not allowed in the repository name
# and the short sha is needed for image tagging
- name: Resolve parameters
id: resolve_parameters
run: |
echo "INFO: Resolving short sha"
echo "short_sha=$(git rev-parse --short ${{ github.ref }})" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
@@ -120,10 +124,10 @@ jobs:
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=canary-"${{ github.ref }}"
build-args: VERSION=canary-${{ steps.resolve_parameters.outputs.short_sha }}
push: ${{ env.PUSH_TO_REGISTRIES }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -10,6 +10,13 @@ on:
schedule:
- cron: '30 1 * * 0'
concurrency:
# This will make sure we only apply the concurrency limits on pull requests
# but not pushes to master branch by making the concurrency group name unique
# for pushes
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
analyze:
name: Analyze


@@ -1,4 +1,4 @@
name: first-interaction
name: First Interaction
on:
issues:


@@ -8,7 +8,6 @@ on:
- '**.go'
- 'go.mod'
- 'go.sum'
pull_request:
paths:
- '.github/workflows/go.yaml'
@@ -19,6 +18,13 @@ on:
permissions:
contents: read
concurrency:
# This will make sure we only apply the concurrency limits on pull requests
# but not pushes to master branch by making the concurrency group name unique
# for pushes
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
fmt:
runs-on: ubuntu-latest
@@ -72,9 +78,11 @@ jobs:
run: git diff --exit-code
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
curl -D headers.txt -fsL "https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-1.26.1-linux-amd64.tar.gz" -o kubebuilder-tools
echo "$(grep -i etag headers.txt -m 1 | cut -d'"' -f2) kubebuilder-tools" > sum
md5sum -c sum
tar -zvxf kubebuilder-tools
sudo mv kubebuilder /usr/local/
- name: Run go tests
run: |
go test -short `go list ./... | grep -v ./test_e2e_arc`


@@ -1,109 +0,0 @@
# This workflows polls releases from actions/runner and in case of a new one it
# updates files containing runner version and opens a pull request.
name: Update runners
on:
schedule:
# run daily
- cron: "0 9 * * *"
workflow_dispatch:
jobs:
# check_versions compares our current version and the latest available runner
# version and sets them as outputs.
check_versions:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
current_version: ${{ steps.versions.outputs.current_version }}
latest_version: ${{ steps.versions.outputs.latest_version }}
steps:
- uses: actions/checkout@v3
- name: Get current and latest versions
id: versions
run: |
CURRENT_VERSION=$(echo -n $(cat runner/VERSION))
echo "Current version: $CURRENT_VERSION"
echo current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
# check_pr checks if a PR for the same update already exists. It only runs if
# runner latest version != our current version. If no existing PR is found,
# it sets a PR name as output.
check_pr:
runs-on: ubuntu-latest
needs: check_versions
if: needs.check_versions.outputs.current_version != needs.check_versions.outputs.latest_version
outputs:
pr_name: ${{ steps.pr_name.outputs.pr_name }}
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: debug
run:
echo ${{ needs.check_versions.outputs.current_version }}
echo ${{ needs.check_versions.outputs.latest_version }}
- uses: actions/checkout@v3
- name: PR Name
id: pr_name
env:
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
run: |
PR_NAME="Update runner to version ${LATEST_VERSION}"
result=$(gh pr list --search "$PR_NAME" --json number --jq ".[].number" --limit 1)
if [ -z "$result" ]
then
echo "No existing PRs found, setting output with pr_name=$PR_NAME"
echo pr_name=$PR_NAME >> $GITHUB_OUTPUT
else
echo "Found a PR with title '$PR_NAME' already existing: ${{ github.server_url }}/${{ github.repository }}/pull/$result"
fi
# update_version updates runner version in the files listed below, commits
# the changes and opens a pull request as `github-actions` bot.
update_version:
runs-on: ubuntu-latest
needs:
- check_versions
- check_pr
if: needs.check_pr.outputs.pr_name
permissions:
pull-requests: write
contents: write
actions: write
env:
GH_TOKEN: ${{ github.token }}
CURRENT_VERSION: ${{ needs.check_versions.outputs.current_version }}
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
PR_NAME: ${{ needs.check_pr.outputs.pr_name }}
steps:
- uses: actions/checkout@v3
- name: New branch
run: git checkout -b update-runner-$LATEST_VERSION
- name: Update files
run: |
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/VERSION
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" .github/workflows/e2e-test-linux-vm.yaml
- name: Commit changes
run: |
# from https://github.com/orgs/community/discussions/26560
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
git add .
git commit -m "$PR_NAME"
git push -u origin HEAD
- name: Create pull request
run: gh pr create -f

.gitignore

@@ -35,5 +35,4 @@ bin
.DS_STORE
/test-assets
/.tools


@@ -15,6 +15,13 @@
- [Opening the Pull Request](#opening-the-pull-request)
- [Helm Version Changes](#helm-version-changes)
- [Testing Controller Built from a Pull Request](#testing-controller-built-from-a-pull-request)
- [Release process](#release-process)
- [Workflow structure](#workflow-structure)
- [Releasing legacy actions-runner-controller image and helm charts](#releasing-legacy-actions-runner-controller-image-and-helm-charts)
- [Release actions-runner-controller runner images](#release-actions-runner-controller-runner-images)
- [Release gha-runner-scale-set-controller image and helm charts](#release-gha-runner-scale-set-controller-image-and-helm-charts)
- [Release actions/runner image](#release-actionsrunner-image)
- [Canary releases](#canary-releases)
## Welcome
@@ -25,14 +32,13 @@ reviewed and merged.
## Before contributing code
We welcome code patches, but to make sure things are well coordinated you should discuss any significant change before starting the work.
The maintainers ask that you signal your intention to contribute to the project using the issue tracker.
If there is an existing issue that you want to work on, please let us know so we can get it assigned to you.
If you noticed a bug or want to add a new feature, there are issue templates you can fill out.
We welcome code patches, but to make sure things are well coordinated you should discuss any significant change before starting the work. The maintainers ask that you signal your intention to contribute to the project using the issue tracker. If there is an existing issue that you want to work on, please let us know so we can get it assigned to you. If you noticed a bug or want to add a new feature, there are issue templates you can fill out.
When filing a feature request, the maintainers will review the change and give you a decision on whether we are willing to accept the feature into the project.
For significantly large and/or complex features, we may request that you write up an architectural decision record ([ADR](https://github.blog/2020-08-13-why-write-adrs/)) detailing the change.
Please use the [template](/adrs/0000-TEMPLATE.md) as guidance.
Please use the [template](/docs/adrs/yyyy-mm-dd-TEMPLATE) as guidance.
<!--
TODO: Add a pre-requisite section describing what developers should
@@ -45,6 +51,7 @@ Depending on what you are patching depends on how you should go about it.
Below are some guides on how to test patches locally as well as develop the controller and runners.
When submitting a PR for a change please provide evidence that your change works as we still need to work on improving the CI of the project.
Some resources are provided for helping achieve this, see this guide for details.
### Developing the Controller
@@ -130,7 +137,7 @@ GINKGO_FOCUS='[It] should create a new Runner resource from the specified templa
>
> If you want to stick with `snap`-provided `docker`, do not forget to set `TMPDIR` to somewhere under `$HOME`.
> Otherwise `kind load docker-image` fail while running `docker save`.
> See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap for more information.
> See <https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap> for more information.
To test your local changes against both PAT and App based authentication please run the `acceptance` make target with the authentication configuration details provided:
@@ -186,7 +193,7 @@ Before shipping your PR, please check the following items to make sure CI passes
- Run `go mod tidy` if you made changes to dependencies.
- Format the code using `gofmt`
- Run the `golangci-lint` tool locally.
- We recommend you use `make lint` to run the tool using a Docker container matching the CI version.
- We recommend you use `make lint` to run the tool using a Docker container matching the CI version.
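As a rough sketch, the checks above can be run locally along these lines (`make lint` is defined in this repository's Makefile; the other two commands are standard Go tooling):
```bash
# Sync go.mod/go.sum if dependencies changed
go mod tidy

# Format all Go sources in place
gofmt -l -w .

# Run golangci-lint in a Docker container matching the CI version
make lint
```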
### Opening the Pull Request
@@ -217,3 +224,146 @@ Please also note that you need to replace `$DOCKER_USER` with your own DockerHub
Only the maintainers can release a new version of actions-runner-controller, publish a new version of the helm charts, and runner images.
All release workflows have been moved to [actions-runner-controller/releases](https://github.com/actions-runner-controller/releases) since the packages are owned by the former organization.
### Workflow structure
Following the migration of actions-runner-controller into the GitHub `actions` organization, all of the workflows had to be modified to accommodate the move. The following table describes the workflows, their purpose, and their dependencies.
| Filename | Workflow name | Purpose |
|-----------------------------------|--------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| gha-e2e-tests.yaml | (gha) E2E Tests | Tests the Autoscaling Runner Set mode end to end. Coverage is restricted to this mode. Legacy modes are not tested. |
| go.yaml | Format, Lint, Unit Tests | Formats, lints and runs unit tests for the entire codebase. |
| arc-publish.yaml | Publish ARC Image | Uploads release/actions-runner-controller.yaml as an artifact to the newly created release and triggers the [build and publication of the controller image](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/publish-arc.yaml) |
| global-publish-canary.yaml | Publish Canary Images | Builds and publishes canary controller container images for both new and legacy modes. |
| arc-publish-chart.yaml | Publish ARC Helm Charts | Packages and publishes charts/actions-runner-controller (via GitHub Pages) |
| gha-publish-chart.yaml | (gha) Publish Helm Charts | Packages and publishes charts/gha-runner-scale-set-controller and charts/gha-runner-scale-set charts (OCI to GHCR) |
| arc-release-runners.yaml | Release ARC Runner Images | Triggers [release-runners.yaml](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/release-runners.yaml) which will build and push new runner images used with the legacy ARC modes. |
| global-run-codeql.yaml            | Run CodeQL                           | Runs CodeQL on the entire codebase                                                                                                                                                                                                                                |
| global-run-first-interaction.yaml | First Interaction                    | Informs first-time contributors what to expect when they open a new issue / PR                                                                                                                                                                                    |
| global-run-stale.yaml | Run Stale Bot | Closes issues / PRs without activity |
| arc-update-runners-scheduled.yaml | Runner Updates Check (Scheduled Job) | Polls [actions/runner](https://github.com/actions/runner) and [actions/runner-container-hooks](https://github.com/actions/runner-container-hooks) for new releases. If found, a PR is created to publish new runner images |
| arc-validate-chart.yaml           | Validate Helm Chart                  | Runs helm chart validators for charts/actions-runner-controller                                                                                                                                                                                                  |
| gha-validate-chart.yaml           | (gha) Validate Helm Charts           | Runs helm chart validators for charts/gha-runner-scale-set-controller and charts/gha-runner-scale-set charts                                                                                                                                                     |
| arc-validate-runners.yaml         | Validate ARC Runners                 | Runs validators for runners                                                                                                                                                                                                                                      |
There are 7 components that we release regularly:
1. legacy [actions-runner-controller controller image](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller)
2. legacy [actions-runner-controller helm charts](https://actions-runner-controller.github.io/actions-runner-controller/)
3. legacy actions-runner-controller runner images
1. [ubuntu-20.04](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller%2Factions-runner)
2. [ubuntu-22.04](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller%2Factions-runner)
3. [dind-ubuntu-20.04](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller%2Factions-runner-dind)
4. [dind-ubuntu-22.04](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller%2Factions-runner-dind)
5. [dind-rootless-ubuntu-20.04](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller%2Factions-runner-dind-rootless)
6. [dind-rootless-ubuntu-22.04](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller%2Factions-runner-dind-rootless)
4. [gha-runner-scale-set-controller image](https://github.com/actions/actions-runner-controller/pkgs/container/gha-runner-scale-set-controller)
5. [gha-runner-scale-set-controller helm charts](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller-charts%2Fgha-runner-scale-set-controller)
6. [gha-runner-scale-set runner helm charts](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller-charts%2Fgha-runner-scale-set)
7. [actions/runner image](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller%2Factions-runner)
#### Releasing legacy actions-runner-controller image and helm charts
1. Start by making sure the master branch is stable and all CI jobs are passing
2. Create a new release in <https://github.com/actions/actions-runner-controller/releases> (Draft a new release)
3. Bump up the `version` and `appVersion` in charts/actions-runner-controller/Chart.yaml - make sure the `version` matches the release version you just created. (Example: <https://github.com/actions/actions-runner-controller/pull/2577>)
4. When the workflows finish execution, you will see:
1. A new controller image published to: <https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller>
2. Helm charts published to: <https://github.com/actions-runner-controller/actions-runner-controller.github.io/tree/master/actions-runner-controller> (the index.yaml file is updated)
When a new release is created, the [Publish ARC Image](https://github.com/actions/actions-runner-controller/blob/master/.github/workflows/arc-publish.yaml) workflow is triggered.
```mermaid
flowchart LR
subgraph repository: actions/actions-runner-controller
event_a{{"release: published"}} -- triggers --> workflow_a["arc-publish.yaml"]
event_b{{"workflow_dispatch"}} -- triggers --> workflow_a["arc-publish.yaml"]
workflow_a["arc-publish.yaml"] -- uploads --> package["actions-runner-controller.tar.gz"]
end
subgraph repository: actions-runner-controller/releases
workflow_a["arc-publish.yaml"] -- triggers --> event_d{{"repository_dispatch"}} --> workflow_b["publish-arc.yaml"]
workflow_b["publish-arc.yaml"] -- push --> A["GHCR: \nactions-runner-controller/actions-runner-controller:*"]
workflow_b["publish-arc.yaml"] -- push --> B["DockerHub: \nsummerwind/actions-runner-controller:*"]
end
```
#### Release actions-runner-controller runner images
**Manual steps:**
1. Navigate to the [actions-runner-controller/releases](https://github.com/actions-runner-controller/releases) repository
2. Trigger [the release-runners.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/release-runners.yaml) workflow.
1. The list of input parameters for this workflow is defined in the table below (always inspect the workflow file for the latest version)
<!-- Table of Parameters -->
| Parameter | Description | Default |
|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|
| `runner_version` | The version of the [actions/runner](https://github.com/actions/runner) to use | `2.300.2` |
| `docker_version` | The version of docker to use | `20.10.12` |
| `runner_container_hooks_version` | The version of [actions/runner-container-hooks](https://github.com/actions/runner-container-hooks) to use | `0.2.0` |
| `sha`                            | The commit SHA from [actions/actions-runner-controller](https://github.com/actions/actions-runner-controller) to be used to build the runner images. This will be provided to `actions/checkout` and used to tag the container images | Empty string. |
| `push_to_registries` | Whether to push the images to the registries. Use false to test the build | false |
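As an illustrative sketch (the values below are simply the defaults from the table above, not a recommendation), the workflow can also be dispatched from the command line with the GitHub CLI:
```bash
# Dispatch a test build that does not push to the registries
gh workflow run release-runners.yaml \
  -R actions-runner-controller/releases \
  -f runner_version=2.300.2 \
  -f docker_version=20.10.12 \
  -f runner_container_hooks_version=0.2.0 \
  -f push_to_registries=false
```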
**Automated steps:**
```mermaid
flowchart LR
workflow["release-runners.yaml"] -- workflow_dispatch* --> workflow_b["release-runners.yaml"]
subgraph repository: actions/actions-runner-controller
runner_updates_check["arc-update-runners-scheduled.yaml"] -- "polls (daily)" --> runner_releases["actions/runner/releases"]
runner_updates_check -- creates --> runner_update_pr["PR: update /runner/VERSION"]
runner_update_pr --> runner_update_pr_merge{{"merge"}}
runner_update_pr_merge -- triggers --> workflow["release-runners.yaml"]
end
subgraph repository: actions-runner-controller/releases
workflow_b["release-runners.yaml"] -- push --> A["GHCR: \n actions-runner-controller/actions-runner:* \n actions-runner-controller/actions-runner-dind:* \n actions-runner-controller/actions-runner-dind-rootless:*"]
workflow_b["release-runners.yaml"] -- push --> B["DockerHub: \n summerwind/actions-runner:* \n summerwind/actions-runner-dind:* \n summerwind/actions-runner-dind-rootless:*"]
event_b{{"workflow_dispatch"}} -- triggers --> workflow_b["release-runners.yaml"]
end
```
#### Release gha-runner-scale-set-controller image and helm charts
1. Make sure the master branch is stable and all CI jobs are passing
1. Prepare a release PR (example: <https://github.com/actions/actions-runner-controller/pull/2467>)
1. Bump up the version of the chart in: charts/gha-runner-scale-set-controller/Chart.yaml
2. Bump up the version of the chart in: charts/gha-runner-scale-set/Chart.yaml
1. Make sure that the `version` and `appVersion` of both charts are always the same. These versions cannot diverge.
3. Update the quickstart guide to reflect the latest versions: docs/preview/gha-runner-scale-set-controller/README.md
4. Add a changelog to the PR as well as to the quickstart guide
1. Merge the release PR
1. Manually trigger the [(gha) Publish Helm Charts](https://github.com/actions/actions-runner-controller/actions/workflows/gha-publish-chart.yaml) workflow
1. Manually create a tag and release in [actions/actions-runner-controller](https://github.com/actions/actions-runner-controller/releases) with the format: `gha-runner-scale-set-x.x.x` where the version (x.x.x) matches that of the Helm chart
The [(gha) Publish Helm Charts](https://github.com/actions/actions-runner-controller/actions/workflows/gha-publish-chart.yaml) workflow accepts the input parameters below (always inspect the workflow file for the latest version):
| Parameter | Description | Default |
|-------------------------------------------------|--------------------------------------------------------------------------------------------------------|----------------|
| `ref` | The branch, tag or SHA to cut a release from. | default branch |
| `release_tag_name` | The tag of the controller image. This is not a git tag. | canary |
| `push_to_registries` | Push images to registries. Use false to test the build process. | false |
| `publish_gha_runner_scale_set_controller_chart` | Publish new helm chart for gha-runner-scale-set-controller. This will push the new OCI archive to GHCR | false |
| `publish_gha_runner_scale_set_chart` | Publish new helm chart for gha-runner-scale-set. This will push the new OCI archive to GHCR | false |
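As a hedged example (the values are illustrative; `x.x.x` stands for the chart version being released), a dry-run dispatch and a post-publish verification might look like:
```bash
# Dispatch a test run that builds everything but pushes nothing
gh workflow run gha-publish-chart.yaml \
  -R actions/actions-runner-controller \
  -f ref=master \
  -f release_tag_name=x.x.x \
  -f push_to_registries=false

# After a real publish, verify the OCI chart can be pulled from GHCR
helm pull oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
  --version x.x.x
```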
#### Release actions/runner image
A new runner image is built and published to <https://github.com/actions/runner/pkgs/container/actions-runner> whenever a new runner binary has been released. There's nothing to do here.
#### Canary releases
We publish canary images for both the legacy actions-runner-controller and the gha-runner-scale-set-controller.
```mermaid
flowchart LR
subgraph org: actions
event_a{{"push: [master]"}} -- triggers --> workflow_a["publish-canary.yaml"]
end
subgraph org: actions-runner-controller
workflow_a["publish-canary.yaml"] -- triggers --> event_d{{"repository_dispatch"}} --> workflow_b["publish-canary.yaml"]
workflow_b["publish-canary.yaml"] -- push --> A["GHCR: \nactions-runner-controller/actions-runner-controller:canary"]
workflow_b["publish-canary.yaml"] -- push --> B["DockerHub: \nsummerwind/actions-runner-controller:canary"]
end
```
1. [actions-runner-controller canary image](https://github.com/actions-runner-controller/actions-runner-controller/pkgs/container/actions-runner-controller)
2. [gha-runner-scale-set-controller image](https://github.com/actions/actions-runner-controller/pkgs/container/gha-runner-scale-set-controller)
These canary images are automatically built and released on each push to the master branch.
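For reference, the canary images can be pulled from the registries shown in the diagram above (the gha-runner-scale-set-controller image path is inferred from its package page and is an assumption here):
```bash
# Legacy controller canary, from GHCR and DockerHub
docker pull ghcr.io/actions-runner-controller/actions-runner-controller:canary
docker pull summerwind/actions-runner-controller:canary

# gha-runner-scale-set-controller canary (assumed image path)
docker pull ghcr.io/actions/gha-runner-scale-set-controller:canary
```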

View File

@@ -1,5 +1,5 @@
# Build the manager binary
FROM --platform=$BUILDPLATFORM golang:1.19.4 as builder
FROM --platform=$BUILDPLATFORM golang:1.20.7 as builder
WORKDIR /workspace
@@ -24,7 +24,7 @@ RUN go mod download
# With the above command,
# TARGETOS can be "linux", TARGETARCH can be "amd64", "arm64", and "arm", TARGETVARIANT can be "v7".
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT VERSION=dev
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT VERSION=dev COMMIT_SHA=dev
# We intentionally avoid `--mount=type=cache,mode=0777,target=/go/pkg/mod` in the `go mod download` and the `go build` runs
# to avoid https://github.com/moby/buildkit/issues/2334
@@ -36,8 +36,8 @@ ENV GOCACHE /build/${TARGETPLATFORM}/root/.cache/go-build
RUN --mount=target=. \
--mount=type=cache,mode=0777,target=${GOCACHE} \
export GOOS=${TARGETOS} GOARCH=${TARGETARCH} GOARM=${TARGETVARIANT#v} && \
go build -trimpath -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=${VERSION}'" -o /out/manager main.go && \
go build -trimpath -ldflags="-s -w" -o /out/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener && \
go build -trimpath -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=${VERSION}' -X 'github.com/actions/actions-runner-controller/build.CommitSHA=${COMMIT_SHA}'" -o /out/manager main.go && \
go build -trimpath -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=${VERSION}' -X 'github.com/actions/actions-runner-controller/build.CommitSHA=${COMMIT_SHA}'" -o /out/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener && \
go build -trimpath -ldflags="-s -w" -o /out/github-webhook-server ./cmd/githubwebhookserver && \
go build -trimpath -ldflags="-s -w" -o /out/actions-metrics-server ./cmd/actionsmetricsserver && \
go build -trimpath -ldflags="-s -w" -o /out/sleep ./cmd/sleep

View File

@@ -5,7 +5,8 @@ else
endif
DOCKER_USER ?= $(shell echo ${DOCKER_IMAGE_NAME} | cut -d / -f1)
VERSION ?= dev
RUNNER_VERSION ?= 2.304.0
COMMIT_SHA = $(shell git rev-parse HEAD)
RUNNER_VERSION ?= 2.309.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
@@ -67,7 +68,7 @@ endif
all: manager
lint:
docker run --rm -v $(PWD):/app -w /app golangci/golangci-lint:v1.49.0 golangci-lint run
docker run --rm -v $(PWD):/app -w /app golangci/golangci-lint:v1.54.2 golangci-lint run
GO_TEST_ARGS ?= -short
@@ -95,7 +96,8 @@ run: generate fmt vet manifests
run-scaleset: generate fmt vet
CONTROLLER_MANAGER_POD_NAMESPACE=default \
CONTROLLER_MANAGER_CONTAINER_IMAGE="${DOCKER_IMAGE_NAME}:${VERSION}" \
go run ./main.go --auto-scaling-runner-set-only
go run -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=$(VERSION)'" \
./main.go --auto-scaling-runner-set-only
# Install CRDs into a cluster
install: manifests
@@ -214,6 +216,7 @@ docker-buildx:
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
--build-arg VERSION=${VERSION} \
--build-arg COMMIT_SHA=${COMMIT_SHA} \
-t "${DOCKER_IMAGE_NAME}:${VERSION}" \
-f Dockerfile \
. ${PUSH_ARG}

View File

@@ -4,39 +4,40 @@
[![awesome-runners](https://img.shields.io/badge/listed%20on-awesome--runners-blue.svg)](https://github.com/jonico/awesome-runners)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/actions-runner-controller)](https://artifacthub.io/packages/search?repo=actions-runner-controller)
## About
Actions Runner Controller (ARC) is a Kubernetes operator that orchestrates and scales self-hosted runners for GitHub Actions.
With ARC, you can create runner scale sets that automatically scale based on the number of workflows running in your repository, organization, or enterprise. Because controlled runners can be ephemeral and based on containers, new runner instances can scale up or down rapidly and cleanly. For more information about autoscaling, see ["Autoscaling with self-hosted runners."](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/autoscaling-with-self-hosted-runners)
You can set up ARC on Kubernetes using Helm, then create and run a workflow that uses runner scale sets. For more information about runner scale sets, see ["Deploying runner scale sets with Actions Runner Controller."](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#runner-scale-set)
## People
`actions-runner-controller` is an open-source project currently developed and maintained in collaboration with the GitHub Actions team, external maintainers @mumoshu and @toast-gear, various [contributors](https://github.com/actions/actions-runner-controller/graphs/contributors), and the [awesome community](https://github.com/actions/actions-runner-controller/discussions).
Actions Runner Controller (ARC) is an open-source project currently developed and maintained in collaboration with the GitHub Actions team, external maintainers @mumoshu and @toast-gear, various [contributors](https://github.com/actions/actions-runner-controller/graphs/contributors), and the [awesome community](https://github.com/actions/actions-runner-controller/discussions).
If you think the project is awesome and is adding value to your business, please consider directly sponsoring [community maintainers](https://github.com/sponsors/actions-runner-controller) and individual contributors via GitHub Sponsors.
In case you are already the employer of one of the contributors, sponsoring via GitHub Sponsors might not be an option. Just support them by other means!
See [the sponsorship dashboard](https://github.com/sponsors/actions-runner-controller) for the former and the current sponsors.
## Status
Even though actions-runner-controller is used in production environments, it is still in its early stage of development, hence versioned 0.x.
actions-runner-controller complies with Semantic Versioning 2.0.0, in which v0.x means that there could be backward-incompatible changes in every release.
The documentation is kept in line with master@HEAD; we do our best to highlight any features that require a specific ARC version or higher, however this is not always easy due to there being many moving parts. Additionally, we actively do not retain compatibility with every GitHub Enterprise Server version nor every Kubernetes version, so you will need to ensure you stay current within a reasonable timespan.
## About
[GitHub Actions](https://github.com/features/actions) is a very useful tool for automating development. GitHub Actions jobs are run in the cloud by default, but you may want to run your jobs in your environment. [Self-hosted runner](https://github.com/actions/runner) can be used for such use cases, but requires the provisioning and configuration of a virtual machine instance. Instead if you already have a Kubernetes cluster, it makes more sense to run the self-hosted runner on top of it.
**actions-runner-controller** makes that possible. Just create a *Runner* resource on your Kubernetes, and it will run and operate the self-hosted runner for the specified repository. Combined with Kubernetes RBAC, you can also build simple Self-hosted runners as a Service.
## Getting Started
To give ARC a try with just a handful of commands, Please refer to the [Quickstart guide](/docs/quickstart.md).
For an overview of ARC, please refer to [About ARC](https://github.com/actions/actions-runner-controller/blob/master/docs/about-arc.md)
To give ARC a try with just a handful of commands, please refer to the [Quickstart guide](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller).
For more information, please refer to detailed documentation below!
For an overview of ARC, please refer to [About ARC](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller)
## Documentation
With the introduction of [autoscaling runner scale sets](https://github.com/actions/actions-runner-controller/discussions/2775), the existing [autoscaling modes](./docs/automatically-scaling-runners.md) are now legacy. The legacy modes have certain use cases and will continue to be maintained by the community only.
For further information on what is supported by GitHub and what's managed by the community, please refer to [this announcement discussion.](https://github.com/actions/actions-runner-controller/discussions/2775)
### Documentation
ARC documentation is available on [docs.github.com](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller).
### Legacy documentation
The following documentation is for the legacy autoscaling modes that continue to be maintained by the community:
- [Quickstart guide](/docs/quickstart.md)
- [About ARC](/docs/about-arc.md)

View File

@@ -304,3 +304,27 @@ If you noticed that it takes several minutes for sidecar dind container to be cr
**Solution**
The solution is to switch to faster storage; if you are experiencing this issue you are probably using HDD storage. Switching to SSD storage fixed the problem in my case. Most cloud providers have a list of storage options to choose from, so just pick something faster than your current disk; for on-prem clusters you will need to invest in some SSDs.
### Dockerd no space left on device
**Problem**
If you are running many containers on your runner, you might encounter an issue where the Docker daemon is unable to start new containers and you see the error `no space left on device`.
**Solution**
Add a `dockerVarRunVolumeSizeLimit` key in your runner's spec with a higher size limit (the default is 1M). For instance:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: github-runner
  namespace: github-system
spec:
  replicas: 6
  template:
    spec:
      dockerVarRunVolumeSizeLimit: 50M
      env: []
```
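A hypothetical way to roll this out, assuming the manifest above is saved as `runnerdeployment.yaml` (the filename is illustrative):
```bash
# Apply the updated RunnerDeployment; runner pods are recreated with
# the larger size limit on the /var/run emptyDir volume
kubectl apply -f runnerdeployment.yaml
```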

View File

@@ -52,9 +52,6 @@ type AutoscalingListenerSpec struct {
// Required
Image string `json:"image,omitempty"`
// Required
ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
// Required
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
@@ -63,6 +60,9 @@ type AutoscalingListenerSpec struct {
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
// +optional
Template *corev1.PodTemplateSpec `json:"template,omitempty"`
}
// AutoscalingListenerStatus defines the observed state of AutoscalingListener

View File

@@ -74,6 +74,9 @@ type AutoscalingRunnerSetSpec struct {
// Required
Template corev1.PodTemplateSpec `json:"template,omitempty"`
// +optional
ListenerTemplate *corev1.PodTemplateSpec `json:"listenerTemplate,omitempty"`
// +optional
// +kubebuilder:validation:Minimum:=0
MaxRunners *int `json:"maxRunners,omitempty"`

View File

@@ -103,6 +103,11 @@ func (in *AutoscalingListenerSpec) DeepCopyInto(out *AutoscalingListenerSpec) {
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
if in.Template != nil {
in, out := &in.Template, &out.Template
*out = new(v1.PodTemplateSpec)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerSpec.
@@ -203,6 +208,11 @@ func (in *AutoscalingRunnerSetSpec) DeepCopyInto(out *AutoscalingRunnerSetSpec)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
if in.ListenerTemplate != nil {
in, out := &in.ListenerTemplate, &out.ListenerTemplate
*out = new(v1.PodTemplateSpec)
(*in).DeepCopyInto(*out)
}
if in.MaxRunners != nil {
in, out := &in.MaxRunners, &out.MaxRunners
*out = new(int)

View File

@@ -22,7 +22,7 @@ import (
// HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerSpec struct {
// ScaleTargetRef sis the reference to scaled resource like RunnerDeployment
// ScaleTargetRef is the reference to scaled resource like RunnerDeployment
ScaleTargetRef ScaleTargetRef `json:"scaleTargetRef,omitempty"`
// MinReplicas is the minimum number of replicas the deployment is allowed to scale

View File

@@ -70,6 +70,8 @@ type RunnerConfig struct {
// +optional
DockerRegistryMirror *string `json:"dockerRegistryMirror,omitempty"`
// +optional
DockerVarRunVolumeSizeLimit *resource.Quantity `json:"dockerVarRunVolumeSizeLimit,omitempty"`
// +optional
VolumeSizeLimit *resource.Quantity `json:"volumeSizeLimit,omitempty"`
// +optional
VolumeStorageMedium *string `json:"volumeStorageMedium,omitempty"`

View File

@@ -436,6 +436,11 @@ func (in *RunnerConfig) DeepCopyInto(out *RunnerConfig) {
*out = new(string)
**out = **in
}
if in.DockerVarRunVolumeSizeLimit != nil {
in, out := &in.DockerVarRunVolumeSizeLimit, &out.DockerVarRunVolumeSizeLimit
x := (*in).DeepCopy()
*out = &x
}
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()

View File

@@ -2,3 +2,5 @@ package build
// These are overridden at build time using go build ldflags; "NA" is the fallback value
var Version = "NA"
var CommitSHA = "NA"

View File

@@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.23.2
version: 0.23.5
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.27.3
appVersion: 0.27.5
home: https://github.com/actions/actions-runner-controller

View File

@@ -35,13 +35,16 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `authSecret.github_basicauth_password` | Password for GitHub basic auth to use instead of PAT or GitHub APP in case it's running behind a proxy API | |
| `dockerRegistryMirror` | The default Docker Registry Mirror used by runners. | |
| `hostNetwork` | The "hostNetwork" of the controller container | false |
| `dnsPolicy` | The "dnsPolicy" of the controller container | ClusterFirst |
| `image.repository` | The "repository/image" of the controller container | summerwind/actions-runner-controller |
| `image.tag` | The tag of the controller container | |
| `image.actionsRunnerRepositoryAndTag` | The "repository/image" of the actions runner container | summerwind/actions-runner:latest |
| `image.actionsRunnerImagePullSecrets` | Optional image pull secrets to be included in the runner pod's ImagePullSecrets | |
| `image.dindSidecarRepositoryAndTag` | The "repository/image" of the dind sidecar container | docker:dind |
| `image.pullPolicy` | The pull policy of the controller image | IfNotPresent |
| `metrics.serviceMonitor` | Deploy serviceMonitor kind for for use with prometheus-operator CRDs | false |
| `metrics.serviceMonitor.enable`        | Deploy serviceMonitor kind for use with prometheus-operator CRDs                                            | false                                |
| `metrics.serviceMonitor.interval`      | Configure the interval at which Prometheus should scrape the controller's metrics                           | 1m                                   |
| `metrics.serviceMonitor.timeout`       | Configure the timeout of Prometheus scraping                                                                | 30s                                  |
| `metrics.serviceAnnotations` | Set annotations for the provisioned metrics service resource | |
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
@@ -148,7 +151,9 @@ All additional docs are kept in the `docs/` folder, this README is solely for do
| `actionsMetricsServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `actionsMetricsServer.ingress.tls` | Set tls configuration for ingress | |
| `actionsMetricsServer.ingress.ingressClassName` | Set ingress class name | |
| `actionsMetrics.serviceMonitor` | Deploy serviceMonitor kind for for use with prometheus-operator CRDs | false |
| `actionsMetrics.serviceMonitor.enable`          | Deploy serviceMonitor kind for use with prometheus-operator CRDs                                         | false          |
| `actionsMetrics.serviceMonitor.interval`        | Configure the interval at which Prometheus should scrape the controller's metrics                        | 1m             |
| `actionsMetrics.serviceMonitor.timeout`         | Configure the timeout of Prometheus scraping                                                             | 30s            |
| `actionsMetrics.serviceAnnotations` | Set annotations for the provisioned actions metrics service resource | |
| `actionsMetrics.port` | Set port of actions metrics service | 8443 |
| `actionsMetrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |

View File

@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: horizontalrunnerautoscalers.actions.summerwind.dev
spec:
@@ -113,7 +114,7 @@ spec:
description: ScaleDownDelaySecondsAfterScaleUp is the approximate delay for a scale down followed by a scale up Used to prevent flapping (down->up->down->... loop)
type: integer
scaleTargetRef:
description: ScaleTargetRef sis the reference to scaled resource like RunnerDeployment
description: ScaleTargetRef is the reference to scaled resource like RunnerDeployment
properties:
kind:
description: Kind is the type of resource being referenced
@@ -251,9 +252,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: runnerdeployments.actions.summerwind.dev
spec:
@@ -102,6 +103,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
template:
properties:
metadata:
@@ -183,6 +185,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -243,10 +246,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -289,6 +294,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -319,6 +325,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -374,6 +381,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -404,6 +412,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -458,6 +467,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -488,6 +498,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -543,6 +554,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -573,6 +585,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -634,6 +647,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -646,6 +660,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -665,6 +680,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -680,6 +696,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -700,6 +717,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -713,6 +731,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1441,6 +1460,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1453,6 +1473,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1472,6 +1493,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1487,6 +1509,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1497,6 +1520,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerVolumeMounts:
items:
description: VolumeMount describes a mounting of a Volume within a container.
@@ -1593,6 +1622,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1605,6 +1635,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1624,6 +1655,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1639,6 +1671,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1658,6 +1691,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1671,6 +1705,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
ephemeral:
@@ -1718,6 +1753,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1730,6 +1766,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1749,6 +1786,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1764,6 +1802,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1784,6 +1823,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1797,6 +1837,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2508,6 +2549,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
items:
@@ -2552,6 +2594,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2564,6 +2607,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2583,6 +2627,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2598,6 +2643,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2618,6 +2664,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2631,6 +2678,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3486,6 +3534,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -3498,6 +3547,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -3517,6 +3567,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -3532,6 +3583,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -3552,6 +3604,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -3565,6 +3618,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -4293,6 +4347,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -4317,7 +4372,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -4448,6 +4503,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -4470,6 +4526,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -4510,6 +4567,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -4526,6 +4584,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -4561,6 +4620,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4587,6 +4647,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4607,7 +4668,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -4656,8 +4717,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4739,6 +4801,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -4801,6 +4864,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -4916,6 +4980,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -5024,6 +5089,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -5044,6 +5110,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -5070,6 +5137,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -5105,6 +5173,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -5179,6 +5248,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -5208,6 +5278,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -5278,6 +5349,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -5385,9 +5457,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: runnerreplicasets.actions.summerwind.dev
spec:
@@ -84,6 +85,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
template:
properties:
metadata:
@@ -165,6 +167,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -225,10 +228,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -271,6 +276,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -301,6 +307,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -356,6 +363,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -386,6 +394,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -440,6 +449,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -470,6 +480,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -525,6 +536,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -555,6 +567,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -616,6 +629,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -628,6 +642,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -647,6 +662,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -662,6 +678,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -682,6 +699,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -695,6 +713,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1423,6 +1442,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1435,6 +1455,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1454,6 +1475,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1469,6 +1491,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1479,6 +1502,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerVolumeMounts:
items:
description: VolumeMount describes a mounting of a Volume within a container.
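The dockerVarRunVolumeSizeLimit field added in the hunk above is new in this release; judging by its name, it sizes the emptyDir backing /var/run for the docker sidecar, though this diff does not show the consuming code. A hedged sketch of setting it on a RunnerDeployment's runner template, assuming it sits alongside the other docker* runner options (resource names and values are illustrative):

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy            # illustrative name
spec:
  replicas: 1
  template:
    spec:
      repository: my-org/my-repo        # illustrative repository
      dockerVarRunVolumeSizeLimit: 1Mi  # Kubernetes quantity: integer bytes or suffixed string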
@@ -1575,6 +1604,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1587,6 +1617,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1606,6 +1637,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1621,6 +1653,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1640,6 +1673,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1653,6 +1687,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
ephemeral:
@@ -1700,6 +1735,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1712,6 +1748,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1731,6 +1768,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1746,6 +1784,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1766,6 +1805,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1779,6 +1819,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2490,6 +2531,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
items:
@@ -2534,6 +2576,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2546,6 +2589,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2565,6 +2609,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2580,6 +2625,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2600,6 +2646,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2613,6 +2660,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3468,6 +3516,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -3480,6 +3529,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -3499,6 +3549,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -3514,6 +3565,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -3534,6 +3586,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -3547,6 +3600,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -4275,6 +4329,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -4299,7 +4354,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -4430,6 +4485,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -4452,6 +4508,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -4492,6 +4549,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -4508,6 +4566,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -4543,6 +4602,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4569,6 +4629,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4589,7 +4650,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -4638,8 +4699,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4721,6 +4783,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -4783,6 +4846,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -4898,6 +4962,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -5006,6 +5071,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -5026,6 +5092,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -5052,6 +5119,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -5087,6 +5155,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -5161,6 +5230,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -5190,6 +5260,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -5260,6 +5331,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -5364,9 +5436,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: runners.actions.summerwind.dev
spec:
@@ -118,6 +119,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -178,10 +180,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -224,6 +228,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -254,6 +259,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -309,6 +315,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -339,6 +346,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -393,6 +401,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -423,6 +432,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -478,6 +488,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -508,6 +519,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -569,6 +581,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -581,6 +594,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -600,6 +614,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -615,6 +630,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -635,6 +651,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -648,6 +665,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1376,6 +1394,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1388,6 +1407,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1407,6 +1427,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1422,6 +1443,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1432,6 +1454,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerVolumeMounts:
items:
description: VolumeMount describes a mounting of a Volume within a container.
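The same dockerVarRunVolumeSizeLimit field lands in the runners CRD here. Per the int-or-string quantity pattern above, all of the following forms validate (values illustrative):

dockerVarRunVolumeSizeLimit: 1048576   # bare integer, interpreted as bytes
dockerVarRunVolumeSizeLimit: 1Mi       # binary suffix (2^20 bytes)
dockerVarRunVolumeSizeLimit: 1M        # decimal suffix (10^6 bytes)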
@@ -1528,6 +1556,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1540,6 +1569,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1559,6 +1589,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1574,6 +1605,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1593,6 +1625,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1606,6 +1639,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
ephemeral:
@@ -1653,6 +1687,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1665,6 +1700,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1684,6 +1720,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1699,6 +1736,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1719,6 +1757,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1732,6 +1771,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2443,6 +2483,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
items:
@@ -2487,6 +2528,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2499,6 +2541,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2518,6 +2561,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2533,6 +2577,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2553,6 +2598,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2566,6 +2612,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3421,6 +3468,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -3433,6 +3481,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -3452,6 +3501,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -3467,6 +3517,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -3487,6 +3538,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -3500,6 +3552,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -4228,6 +4281,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -4252,7 +4306,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -4383,6 +4437,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -4405,6 +4460,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -4445,6 +4501,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -4461,6 +4518,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -4496,6 +4554,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4522,6 +4581,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4542,7 +4602,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -4591,8 +4651,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4674,6 +4735,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -4736,6 +4798,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -4851,6 +4914,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -4959,6 +5023,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -4979,6 +5044,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -5005,6 +5071,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -5040,6 +5107,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -5114,6 +5182,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -5143,6 +5212,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -5213,6 +5283,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -5362,9 +5433,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
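
The recurring addition throughout these regenerated CRDs is `x-kubernetes-map-type: atomic`, which shows up once the CRDs are regenerated with controller-gen v0.11.3 (see the annotation bump below). The marker tells the API server to treat the annotated object as a single unit under server-side apply: a later apply replaces the whole map rather than merging individual keys. A minimal sketch of how the marker sits in a generated schema; the `selector` field here is illustrative, not copied from this CRD:

# Illustrative fragment of a generated CRD schema.
selector:
  description: A label query over a set of resources.
  properties:
    matchLabels:
      additionalProperties:
        type: string
      type: object
  type: object
  x-kubernetes-map-type: atomic  # server-side apply replaces this object wholesale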

@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: runnersets.actions.summerwind.dev
spec:
@@ -55,6 +56,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerdWithinRunnerContainer:
type: boolean
effectiveTime:
@@ -93,7 +100,7 @@ spec:
description: ordinals controls the numbering of replica indices in a StatefulSet. The default ordinals behavior assigns a "0" index to the first replica and increments the index by one for each additional replica requested. Using the ordinals field requires the StatefulSetStartOrdinal feature gate to be enabled, which is alpha.
properties:
start:
description: 'start is the number representing the first replica''s index. It may be used to number replicas from an alternate index (eg: 1-indexed) over the default 0-indexed names, or to orchestrate progressive movement of replicas from one StatefulSet to another. If set, replica indices will be in the range: [.spec.ordinals.start, .spec.ordinals.start + .spec.replicas). If unset, defaults to 0. Replica indices will be in the range: [0, .spec.replicas).'
format: int32
type: integer
type: object
@@ -154,6 +161,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
serviceAccountName:
type: string
serviceName:
@@ -246,6 +254,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -306,10 +315,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -352,6 +363,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -382,6 +394,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -437,6 +450,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -467,6 +481,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -521,6 +536,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -551,6 +567,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -606,6 +623,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -636,6 +654,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -697,6 +716,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -709,6 +729,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -728,6 +749,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -743,6 +765,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -763,6 +786,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -776,6 +800,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1521,6 +1546,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1533,6 +1559,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1552,6 +1579,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1567,6 +1595,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1587,6 +1616,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1600,6 +1630,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2311,6 +2342,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
description: 'List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/'
@@ -2356,6 +2388,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2368,6 +2401,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2387,6 +2421,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2402,6 +2437,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2422,6 +2458,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2435,6 +2472,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3367,6 +3405,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -3391,7 +3430,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -3492,6 +3531,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -3514,6 +3554,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -3554,6 +3595,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -3570,6 +3612,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -3605,6 +3648,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -3631,6 +3675,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -3651,7 +3696,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -3700,8 +3745,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3783,6 +3829,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -3845,6 +3892,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -3960,6 +4008,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -4068,6 +4117,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -4088,6 +4138,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4114,6 +4165,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4149,6 +4201,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -4223,6 +4276,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -4252,6 +4306,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -4322,6 +4377,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -4431,8 +4487,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -4514,6 +4571,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -4674,9 +4732,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
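
This RunnerSet CRD also gains a `dockerVarRunVolumeSizeLimit` field, typed as a Kubernetes int-or-string quantity (per the pattern shown above). A hedged fragment of a RunnerSet spec using it; the name and limit are illustrative, and required fields such as the selector and template are omitted:

# Illustrative RunnerSet fragment; only the new field is the point here.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
  name: example-runnerset
spec:
  dockerVarRunVolumeSizeLimit: 64Mi  # quantity string; a plain integer (bytes) also validates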

@@ -36,8 +36,8 @@ spec:
{{- end }}
containers:
- args:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
{{- $metricsHost := .Values.actionsMetrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.actionsMetrics.proxy.enabled | ternary "8080" .Values.actionsMetrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
{{- if .Values.actionsMetricsServer.logLevel }}
- "--log-level={{ .Values.actionsMetricsServer.logLevel }}"
@@ -111,10 +111,14 @@ spec:
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- if kindIs "slice" .Values.actionsMetricsServer.env }}
{{- toYaml .Values.actionsMetricsServer.env | nindent 8 }}
{{- else }}
{{- range $key, $val := .Values.actionsMetricsServer.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: actions-metrics-server
imagePullPolicy: {{ .Values.image.pullPolicy }}
@@ -122,8 +126,8 @@ spec:
- containerPort: 8000
name: http
protocol: TCP
{{- if not .Values.metrics.proxy.enabled }}
- containerPort: {{ .Values.metrics.port }}
{{- if not .Values.actionsMetrics.proxy.enabled }}
- containerPort: {{ .Values.actionsMetrics.port }}
name: metrics-port
protocol: TCP
{{- end }}
@@ -131,17 +135,17 @@ spec:
{{- toYaml .Values.actionsMetricsServer.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.actionsMetricsServer.securityContext | nindent 12 }}
{{- if .Values.metrics.proxy.enabled }}
{{- if .Values.actionsMetrics.proxy.enabled }}
- args:
- "--secure-listen-address=0.0.0.0:{{ .Values.metrics.port }}"
- "--secure-listen-address=0.0.0.0:{{ .Values.actionsMetrics.port }}"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: "{{ .Values.metrics.proxy.image.repository }}:{{ .Values.metrics.proxy.image.tag }}"
image: "{{ .Values.actionsMetrics.proxy.image.repository }}:{{ .Values.actionsMetrics.proxy.image.tag }}"
name: kube-rbac-proxy
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.metrics.port }}
- containerPort: {{ .Values.actionsMetrics.port }}
name: metrics-port
resources:
{{- toYaml .Values.resources | nindent 12 }}
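
The actions-metrics-server deployment above now takes its metrics listen address, port, and kube-rbac-proxy settings from `.Values.actionsMetrics` rather than the controller-wide `.Values.metrics`. A sketch of the values shape the template reads, with the image coordinates taken from this chart's values.yaml; `proxy.enabled` switches between kube-rbac-proxy fronting a 127.0.0.1:8080 upstream and a direct 0.0.0.0 listener:

# Values consumed by the template above (sketch).
actionsMetrics:
  port: 8443
  proxy:
    enabled: true
    image:
      repository: quay.io/brancz/kube-rbac-proxy
      tag: v0.13.1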

@@ -16,9 +16,9 @@ spec:
{{ range $_, $port := .Values.actionsMetricsServer.service.ports -}}
- {{ $port | toYaml | nindent 6 }}
{{- end }}
{{- if .Values.metrics.serviceMonitor }}
{{- if .Values.actionsMetrics.serviceMonitor.enable }}
- name: metrics-port
port: {{ .Values.metrics.port }}
port: {{ .Values.actionsMetrics.port }}
targetPort: metrics-port
{{- end }}
selector:

@@ -1,10 +1,10 @@
{{- if and .Values.actionsMetricsServer.enabled .Values.actionsMetrics.serviceMonitor }}
{{- if and .Values.actionsMetricsServer.enabled .Values.actionsMetrics.serviceMonitor.enable }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.actionsMetricsServer.serviceMonitorLabels }}
{{- with .Values.actionsMetrics.serviceMonitorLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller-actions-metrics-server.serviceMonitorName" . }}
@@ -19,6 +19,8 @@ spec:
tlsConfig:
insecureSkipVerify: true
{{- end }}
interval: {{ .Values.actionsMetrics.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.actionsMetrics.serviceMonitor.timeout }}
selector:
matchLabels:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 6 }}

@@ -1,4 +1,4 @@
{{- if .Values.metrics.serviceMonitor }}
{{- if .Values.metrics.serviceMonitor.enable }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
@@ -19,6 +19,8 @@ spec:
tlsConfig:
insecureSkipVerify: true
{{- end }}
interval: {{ .Values.metrics.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.metrics.serviceMonitor.timeout }}
selector:
matchLabels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 6 }}

@@ -70,6 +70,9 @@ spec:
{{- if .Values.logFormat }}
- "--log-format={{ .Values.logFormat }}"
{{- end }}
{{- if .Values.dockerGID }}
- "--docker-gid={{ .Values.dockerGID }}"
{{- end }}
command:
- "/manager"
env:
@@ -211,3 +214,6 @@ spec:
{{- if .Values.hostNetwork }}
hostNetwork: {{ .Values.hostNetwork }}
{{- end }}
{{- if .Values.dnsPolicy }}
dnsPolicy: {{ .Values.dnsPolicy }}
{{- end }}
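
Two optional knobs are threaded into the manager deployment above: `dockerGID`, forwarded to the manager as the `--docker-gid` flag, and `dnsPolicy`, which matters when the pod runs with `hostNetwork: true`. An illustrative values snippet combining them:

# Illustrative: custom docker group id plus a host-network-compatible DNS policy.
dockerGID: 121
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet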

@@ -5,7 +5,7 @@ metadata:
name: {{ include "actions-runner-controller-github-webhook-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
{{- if .Values.githubWebhookServer.service.annotations }}
annotations:
{{ toYaml .Values.githubWebhookServer.service.annotations | nindent 4 }}
@@ -16,7 +16,7 @@ spec:
{{ range $_, $port := .Values.githubWebhookServer.service.ports -}}
- {{ $port | toYaml | nindent 6 }}
{{- end }}
{{- if .Values.metrics.serviceMonitor }}
{{- if .Values.metrics.serviceMonitor.enable }}
- name: metrics-port
port: {{ .Values.metrics.port }}
targetPort: metrics-port

@@ -1,4 +1,4 @@
{{- if and .Values.githubWebhookServer.enabled .Values.metrics.serviceMonitor }}
{{- if and .Values.githubWebhookServer.enabled .Values.metrics.serviceMonitor.enable }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
@@ -19,6 +19,8 @@ spec:
tlsConfig:
insecureSkipVerify: true
{{- end }}
interval: {{ .Values.metrics.serviceMonitor.interval }}
scrapeTimeout: {{ .Values.metrics.serviceMonitor.timeout }}
selector:
matchLabels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 6 }}

@@ -19,7 +19,7 @@ webhooks:
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
@@ -50,7 +50,7 @@ webhooks:
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
@@ -81,7 +81,7 @@ webhooks:
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
@@ -112,7 +112,7 @@ webhooks:
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
@@ -156,7 +156,7 @@ webhooks:
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
@@ -187,7 +187,7 @@ webhooks:
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
@@ -218,7 +218,7 @@ webhooks:
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
kubernetes.io/metadata.name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
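
Every webhook's `namespaceSelector` above now matches on `kubernetes.io/metadata.name`, the immutable label the API server sets on every namespace (GA since Kubernetes 1.22), instead of a hand-applied `name` label, so single-namespace scoping works without labeling the namespace yourself. The rendered selector comes out as follows; the namespace name is illustrative:

namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: arc-system  # illustrative namespace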

@@ -70,7 +70,7 @@ rbac:
{}
# # This allows ARC to dynamically create a ServiceAccount and a Role for each Runner pod that uses "kubernetes" container mode,
# # by extending ARC's manager role to have the same permissions required by the pod that runs the runner agent in "kubernetes" container mode.
# # Without this, Kubernetes blocks ARC to create the role to prevent a priviledge escalation.
# # Without this, Kubernetes blocks ARC to create the role to prevent a privilege escalation.
# # See https://github.com/actions/actions-runner-controller/pull/1268/files#r917327010
# allowGrantingKubernetesContainerModePermissions: true
@@ -109,7 +109,10 @@ service:
# Metrics service resource
metrics:
serviceAnnotations: {}
serviceMonitor: false
serviceMonitor:
enable: false
timeout: 30s
interval: 1m
serviceMonitorLabels: {}
port: 8443
proxy:
@@ -148,8 +151,7 @@ podDisruptionBudget:
# PriorityClass: system-cluster-critical
priorityClassName: ""
env:
{}
# env:
# specify additional environment variables for the controller pod.
# It's possible to specify either key value pairs e.g.:
# http_proxy: "proxy.com:8080"
@@ -189,9 +191,17 @@ admissionWebHooks:
# https://github.com/actions/actions-runner-controller/issues/1005#issuecomment-993097155
#hostNetwork: true
# If you use `hostNetwork: true`, then you need dnsPolicy: ClusterFirstWithHostNet
# https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
#dnsPolicy: ClusterFirst
## specify log format for actions runner controller. Valid options are "text" and "json"
logFormat: text
# enable setting the docker group id for the runner container
# https://github.com/actions/actions-runner-controller/pull/2499
#dockerGID: 121
githubWebhookServer:
enabled: false
replicaCount: 1
@@ -292,7 +302,7 @@ githubWebhookServer:
# key: GITHUB_WEBHOOK_SECRET_TOKEN
# name: prod-gha-controller-webhook-token
# optional: true
env: {}
# env:
actionsMetrics:
serviceAnnotations: {}
@@ -300,7 +310,10 @@ actionsMetrics:
# as a part of the helm release.
# Do note that you also need actionsMetricsServer.enabled=true
# to deploy the actions-metrics-server whose k8s service is referenced by the service monitor.
serviceMonitor: false
serviceMonitor:
enable: false
timeout: 30s
interval: 1m
serviceMonitorLabels: {}
port: 8443
proxy:
@@ -308,6 +321,19 @@ actionsMetrics:
image:
repository: quay.io/brancz/kube-rbac-proxy
tag: v0.13.1
# specify additional environment variables for the webhook server pod.
# It's possible to specify either key value pairs e.g.:
# my_env_var: "some value"
# my_other_env_var: "other value"
# or a list of complete environment variable definitions e.g.:
# - name: GITHUB_WEBHOOK_SECRET_TOKEN
# valueFrom:
# secretKeyRef:
# key: GITHUB_WEBHOOK_SECRET_TOKEN
# name: prod-gha-controller-webhook-token
# optional: true
# env:
actionsMetricsServer:
enabled: false
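Rendered as actual values rather than comments, the list form described in the comment block above would be (names taken verbatim from that block):

env:
  - name: GITHUB_WEBHOOK_SECRET_TOKEN
    valueFrom:
      secretKeyRef:
        key: GITHUB_WEBHOOK_SECRET_TOKEN
        name: prod-gha-controller-webhook-token
        optional: true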

View File

@@ -15,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.4.0"
appVersion: "0.6.0"
home: https://github.com/actions/actions-runner-controller

View File

@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: ephemeralrunners.actions.github.com
spec:
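The controller-gen bump from v0.7.0 to v0.11.3 is what introduces the many `x-kubernetes-map-type: atomic` markers added throughout this CRD. The marker tells the API server to treat the annotated object as a single unit during server-side apply, so a field such as the sketch below (illustrative values) is replaced wholesale rather than merged field-by-field:

# governed by x-kubernetes-map-type: atomic: updating either sub-field
# via server-side apply replaces the whole secretKeyRef object
secretKeyRef:
  name: my-secret   # illustrative
  key: token        # illustrative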
@@ -82,6 +83,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
type: object
metadata:
@@ -195,6 +197,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -255,10 +258,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -301,6 +306,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -331,6 +337,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -386,6 +393,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -416,6 +424,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -470,6 +479,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -500,6 +510,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -555,6 +566,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -585,6 +597,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -646,6 +659,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -658,6 +672,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -677,6 +692,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -692,6 +708,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -712,6 +729,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -725,6 +743,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1470,6 +1489,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1482,6 +1502,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1501,6 +1522,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1516,6 +1538,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1536,6 +1559,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1549,6 +1573,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2260,6 +2285,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
description: 'List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/'
@@ -2305,6 +2331,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2317,6 +2344,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2336,6 +2364,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2351,6 +2380,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2371,6 +2401,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2384,6 +2415,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3316,6 +3348,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -3340,7 +3373,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -3441,6 +3474,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -3463,6 +3497,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -3503,6 +3538,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -3519,6 +3555,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -3554,6 +3591,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -3580,6 +3618,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -3600,7 +3639,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -3649,8 +3688,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3732,6 +3772,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -3794,6 +3835,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -3909,6 +3951,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -4017,6 +4060,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -4037,6 +4081,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4063,6 +4108,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4098,6 +4144,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -4172,6 +4219,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -4201,6 +4249,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -4271,6 +4320,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -4346,9 +4396,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: ephemeralrunnersets.actions.github.com
spec:
@@ -76,6 +77,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
type: object
metadata:
@@ -189,6 +191,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -249,10 +252,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -295,6 +300,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -325,6 +331,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -380,6 +387,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -410,6 +418,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -464,6 +473,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -494,6 +504,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -549,6 +560,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -579,6 +591,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -640,6 +653,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -652,6 +666,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -671,6 +686,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -686,6 +702,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -706,6 +723,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -719,6 +737,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1464,6 +1483,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1476,6 +1496,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1495,6 +1516,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1510,6 +1532,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1530,6 +1553,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1543,6 +1567,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2254,6 +2279,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
description: 'List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/'
@@ -2299,6 +2325,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2311,6 +2338,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2330,6 +2358,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2345,6 +2374,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2365,6 +2395,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2378,6 +2409,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3310,6 +3342,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -3334,7 +3367,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -3435,6 +3468,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -3457,6 +3491,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -3497,6 +3532,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -3513,6 +3549,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -3548,6 +3585,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -3574,6 +3612,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -3594,7 +3633,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -3643,8 +3682,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3726,6 +3766,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -3788,6 +3829,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -3903,6 +3945,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -4011,6 +4054,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -4031,6 +4075,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4057,6 +4102,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4092,6 +4138,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -4166,6 +4213,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -4195,6 +4243,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -4265,6 +4314,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -4323,9 +4373,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,3 +1,5 @@
Thank you for installing {{ .Chart.Name }}.
Your release is named {{ .Release.Name }}.
WARNING: the value specified under image.pullPolicy will be ignored and will no longer be applied to the listener pod spec as of gha-runner-scale-set-0.7.0. Please use the listenerTemplate in the gha-runner-scale-set chart to control the image pull policy of the listener.
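A minimal sketch of the replacement the warning points to, for the gha-runner-scale-set chart's values.yaml. The container name "listener" is an assumption here; the override is expected to merge by container name, so only the fields set below would change:

listenerTemplate:
  spec:
    containers:
      # Name assumed: it must match the listener container for the merge to apply.
      - name: listener
        imagePullPolicy: Always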

View File

@@ -1,8 +1,14 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "gha-base-name" -}}
gha-rs-controller
{{- end }}
{{- define "gha-runner-scale-set-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- default (include "gha-base-name" .) .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
@@ -14,7 +20,7 @@ If release name contains chart name it will be used as a full name.
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- $name := default (include "gha-base-name" .) .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
@@ -27,7 +33,7 @@ If release name contains chart name it will be used as a full name.
Create chart name and version as used by the chart label.
*/}}
{{- define "gha-runner-scale-set-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- printf "%s-%s" (include "gha-base-name" .) .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
@@ -39,7 +45,7 @@ helm.sh/chart: {{ include "gha-runner-scale-set-controller.chart" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/part-of: gha-rs-controller
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
@@ -51,6 +57,7 @@ Selector labels
*/}}
{{- define "gha-runner-scale-set-controller.selectorLabels" -}}
app.kubernetes.io/name: {{ include "gha-runner-scale-set-controller.name" . }}
app.kubernetes.io/namespace: {{ .Release.Namespace }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
@@ -73,35 +80,43 @@ Create the name of the service account to use
{{- end }}
{{- define "gha-runner-scale-set-controller.managerClusterRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-cluster-role
{{- include "gha-runner-scale-set-controller.fullname" . }}
{{- end }}
{{- define "gha-runner-scale-set-controller.managerClusterRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-cluster-rolebinding
{{- include "gha-runner-scale-set-controller.fullname" . }}
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-single-namespace-role
{{- include "gha-runner-scale-set-controller.fullname" . }}-single-namespace
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-single-namespace-rolebinding
{{- include "gha-runner-scale-set-controller.fullname" . }}-single-namespace
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceWatchRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-single-namespace-watch
{{- end }}
{{- define "gha-runner-scale-set-controller.managerSingleNamespaceWatchRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-single-namespace-watch
{{- end }}
{{- define "gha-runner-scale-set-controller.managerListenerRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-listener-role
{{- include "gha-runner-scale-set-controller.fullname" . }}-listener
{{- end }}
{{- define "gha-runner-scale-set-controller.managerListenerRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-manager-listener-rolebinding
{{- include "gha-runner-scale-set-controller.fullname" . }}-listener
{{- end }}
{{- define "gha-runner-scale-set-controller.leaderElectionRoleName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election-role
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election
{{- end }}
{{- define "gha-runner-scale-set-controller.leaderElectionRoleBinding" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election-rolebinding
{{- include "gha-runner-scale-set-controller.fullname" . }}-leader-election
{{- end }}
{{- define "gha-runner-scale-set-controller.imagePullSecretsNames" -}}
@@ -111,3 +126,7 @@ Create the name of the service account to use
{{- end }}
{{- $names | join ","}}
{{- end }}
{{- define "gha-runner-scale-set-controller.serviceMonitorName" -}}
{{- include "gha-runner-scale-set-controller.fullname" . }}-service-monitor
{{- end }}
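A sketch of how the renamed helpers resolve, assuming a release named "test-arc" and no name overrides (these values match the chart tests later in this diff):

# gha-runner-scale-set-controller.fullname                        => test-arc-gha-rs-controller
# gha-runner-scale-set-controller.managerListenerRoleName         => test-arc-gha-rs-controller-listener
# gha-runner-scale-set-controller.managerSingleNamespaceRoleName  => test-arc-gha-rs-controller-single-namespace
# gha-runner-scale-set-controller.leaderElectionRoleName          => test-arc-gha-rs-controller-leader-election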

View File

@@ -23,7 +23,7 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
app.kubernetes.io/part-of: gha-runner-scale-set-controller
app.kubernetes.io/part-of: gha-rs-controller
app.kubernetes.io/component: controller-manager
app.kubernetes.io/version: {{ .Chart.Version }}
{{- include "gha-runner-scale-set-controller.selectorLabels" . | nindent 8 }}
@@ -56,11 +56,34 @@ spec:
{{- with .Values.flags.logLevel }}
- "--log-level={{ . }}"
{{- end }}
{{- with .Values.flags.logFormat }}
- "--log-format={{ . }}"
{{- end }}
{{- with .Values.flags.watchSingleNamespace }}
- "--watch-single-namespace={{ . }}"
{{- end }}
{{- with .Values.flags.updateStrategy }}
- "--update-strategy={{ . }}"
{{- end }}
{{- if .Values.metrics }}
{{- with .Values.metrics }}
- "--listener-metrics-addr={{ .listenerAddr }}"
- "--listener-metrics-endpoint={{ .listenerEndpoint }}"
- "--metrics-addr={{ .controllerManagerAddr }}"
{{- end }}
{{- else }}
- "--listener-metrics-addr=0"
- "--listener-metrics-endpoint="
- "--metrics-addr=0"
{{- end }}
command:
- "/manager"
{{- with .Values.metrics }}
ports:
- containerPort: {{regexReplaceAll ":([0-9]+)" .controllerManagerAddr "${1}"}}
protocol: TCP
name: metrics
{{- end }}
env:
- name: CONTROLLER_MANAGER_CONTAINER_IMAGE
value: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"

View File

@@ -2,7 +2,7 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceWatchRoleName" . }}
namespace: {{ .Values.flags.watchSingleNamespace }}
rules:
- apiGroups:

View File

@@ -2,14 +2,14 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleBinding" . }}
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceWatchRoleBinding" . }}
namespace: {{ .Values.flags.watchSingleNamespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceRoleName" . }}
name: {{ include "gha-runner-scale-set-controller.managerSingleNamespaceWatchRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "gha-runner-scale-set-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
{{- end }}

View File

@@ -1,6 +1,7 @@
package tests
import (
"fmt"
"os"
"path/filepath"
"strings"
@@ -8,6 +9,7 @@ import (
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/logger"
"github.com/gruntwork-io/terratest/modules/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -33,6 +35,7 @@ func TestTemplate_CreateServiceAccount(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.annotations.foo": "bar",
@@ -46,7 +49,7 @@ func TestTemplate_CreateServiceAccount(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", serviceAccount.Name)
assert.Equal(t, "test-arc-gha-rs-controller", serviceAccount.Name)
assert.Equal(t, "bar", string(serviceAccount.Annotations["foo"]))
}
@@ -61,6 +64,7 @@ func TestTemplate_CreateServiceAccount_OverwriteName(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "overwritten-name",
@@ -90,6 +94,7 @@ func TestTemplate_CreateServiceAccount_CannotUseDefaultServiceAccount(t *testing
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "true",
"serviceAccount.name": "default",
@@ -113,6 +118,7 @@ func TestTemplate_NotCreateServiceAccount(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.name": "overwritten-name",
@@ -136,6 +142,7 @@ func TestTemplate_NotCreateServiceAccount_ServiceAccountNotSet(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "false",
"serviceAccount.annotations.foo": "bar",
@@ -158,6 +165,7 @@ func TestTemplate_CreateManagerClusterRole(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
@@ -168,7 +176,7 @@ func TestTemplate_CreateManagerClusterRole(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &managerClusterRole)
assert.Empty(t, managerClusterRole.Namespace, "ClusterRole should not have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-role", managerClusterRole.Name)
assert.Equal(t, "test-arc-gha-rs-controller", managerClusterRole.Name)
assert.Equal(t, 16, len(managerClusterRole.Rules))
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role.yaml"})
@@ -189,6 +197,7 @@ func TestTemplate_ManagerClusterRoleBinding(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "true",
},
@@ -201,9 +210,9 @@ func TestTemplate_ManagerClusterRoleBinding(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &managerClusterRoleBinding)
assert.Empty(t, managerClusterRoleBinding.Namespace, "ClusterRoleBinding should not have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-rolebinding", managerClusterRoleBinding.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-cluster-role", managerClusterRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerClusterRoleBinding.Subjects[0].Name)
assert.Equal(t, "test-arc-gha-rs-controller", managerClusterRoleBinding.Name)
assert.Equal(t, "test-arc-gha-rs-controller", managerClusterRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-rs-controller", managerClusterRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerClusterRoleBinding.Subjects[0].Namespace)
_, err = helm.RenderTemplateE(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_controller_role_binding.yaml"})
@@ -224,6 +233,7 @@ func TestTemplate_CreateManagerListenerRole(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
@@ -234,7 +244,7 @@ func TestTemplate_CreateManagerListenerRole(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &managerListenerRole)
assert.Equal(t, namespaceName, managerListenerRole.Namespace, "Role should have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-role", managerListenerRole.Name)
assert.Equal(t, "test-arc-gha-rs-controller-listener", managerListenerRole.Name)
assert.Equal(t, 4, len(managerListenerRole.Rules))
assert.Equal(t, "pods", managerListenerRole.Rules[0].Resources[0])
assert.Equal(t, "pods/status", managerListenerRole.Rules[1].Resources[0])
@@ -253,6 +263,7 @@ func TestTemplate_ManagerListenerRoleBinding(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "true",
},
@@ -265,9 +276,9 @@ func TestTemplate_ManagerListenerRoleBinding(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &managerListenerRoleBinding)
assert.Equal(t, namespaceName, managerListenerRoleBinding.Namespace, "RoleBinding should have a namespace")
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-rolebinding", managerListenerRoleBinding.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-listener-role", managerListenerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerListenerRoleBinding.Subjects[0].Name)
assert.Equal(t, "test-arc-gha-rs-controller-listener", managerListenerRoleBinding.Name)
assert.Equal(t, "test-arc-gha-rs-controller-listener", managerListenerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-rs-controller", managerListenerRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerListenerRoleBinding.Subjects[0].Namespace)
}
@@ -289,6 +300,7 @@ func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"image.tag": "dev",
},
@@ -301,29 +313,29 @@ func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Name)
assert.Equal(t, "gha-rs-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-rs-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, namespaceName, deployment.Labels["actions.github.com/controller-service-account-namespace"])
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.NotContains(t, deployment.Labels, "actions.github.com/controller-watch-single-namespace")
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "gha-rs-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 0)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Nil(t, deployment.Spec.Template.Spec.SecurityContext)
assert.Empty(t, deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
@@ -345,9 +357,16 @@ func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 2)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
expectedArgs := []string{
"--auto-scaling-runner-set-only",
"--log-level=debug",
"--log-format=text",
"--update-strategy=immediate",
"--metrics-addr=0",
"--listener-metrics-addr=0",
"--listener-metrics-endpoint=",
}
assert.ElementsMatch(t, expectedArgs, deployment.Spec.Template.Spec.Containers[0].Args)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 3)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
@@ -355,7 +374,6 @@ func TestTemplate_ControllerDeployment_Defaults(t *testing.T) {
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "IfNotPresent", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
@@ -384,6 +402,7 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"labels.foo": "bar",
"labels.github": "actions",
@@ -391,11 +410,11 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
"image.pullPolicy": "Always",
"image.tag": "dev",
"imagePullSecrets[0].name": "dockerhub",
"nameOverride": "gha-runner-scale-set-controller-override",
"fullnameOverride": "gha-runner-scale-set-controller-fullname-override",
"nameOverride": "gha-rs-controller-override",
"fullnameOverride": "gha-rs-controller-fullname-override",
"env[0].name": "ENV_VAR_NAME_1",
"env[0].value": "ENV_VAR_VALUE_1",
"serviceAccount.name": "gha-runner-scale-set-controller-sa",
"serviceAccount.name": "gha-rs-controller-sa",
"podAnnotations.foo": "bar",
"podSecurityContext.fsGroup": "1000",
"securityContext.runAsUser": "1000",
@@ -405,7 +424,10 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
"tolerations[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key": "foo",
"affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator": "bar",
"priorityClassName": "test-priority-class",
"priorityClassName": "test-priority-class",
"flags.updateStrategy": "eventual",
"flags.logLevel": "info",
"flags.logFormat": "json",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
@@ -416,22 +438,22 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "gha-runner-scale-set-controller-fullname-override", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs-controller-fullname-override", deployment.Name)
assert.Equal(t, "gha-rs-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-rs-controller-override", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "gha-rs-controller", deployment.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "bar", deployment.Labels["foo"])
assert.Equal(t, "actions", deployment.Labels["github"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs-controller-override", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller-override", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs-controller-override", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "bar", deployment.Spec.Template.Annotations["foo"])
@@ -442,7 +464,7 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 1)
assert.Equal(t, "dockerhub", deployment.Spec.Template.Spec.ImagePullSecrets[0].Name)
assert.Equal(t, "gha-runner-scale-set-controller-sa", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, "gha-rs-controller-sa", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.SecurityContext.FSGroup)
assert.Equal(t, "test-priority-class", deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
@@ -470,24 +492,31 @@ func TestTemplate_ControllerDeployment_Customize(t *testing.T) {
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
expectArgs := []string{
"--auto-scaling-runner-set-only",
"--auto-scaler-image-pull-secrets=dockerhub",
"--log-level=info",
"--log-format=json",
"--update-strategy=eventual",
"--listener-metrics-addr=0",
"--listener-metrics-endpoint=",
"--metrics-addr=0",
}
assert.ElementsMatch(t, expectArgs, deployment.Spec.Template.Spec.Containers[0].Args)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 4)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
assert.Equal(t, managerImage, deployment.Spec.Template.Spec.Containers[0].Env[0].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "Always", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
assert.Equal(t, "ENV_VAR_VALUE_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Value)
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "500m", deployment.Spec.Template.Spec.Containers[0].Resources.Limits.Cpu().String())
assert.True(t, *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsNonRoot)
assert.Equal(t, int64(1000), *deployment.Spec.Template.Spec.Containers[0].SecurityContext.RunAsUser)
@@ -508,6 +537,7 @@ func TestTemplate_EnableLeaderElectionRole(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"replicaCount": "2",
},
@@ -519,7 +549,7 @@ func TestTemplate_EnableLeaderElectionRole(t *testing.T) {
var leaderRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &leaderRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-role", leaderRole.Name)
assert.Equal(t, "test-arc-gha-rs-controller-leader-election", leaderRole.Name)
assert.Equal(t, namespaceName, leaderRole.Namespace)
}
@@ -534,6 +564,7 @@ func TestTemplate_EnableLeaderElectionRoleBinding(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"replicaCount": "2",
},
@@ -545,10 +576,10 @@ func TestTemplate_EnableLeaderElectionRoleBinding(t *testing.T) {
var leaderRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &leaderRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-rolebinding", leaderRoleBinding.Name)
assert.Equal(t, "test-arc-gha-rs-controller-leader-election", leaderRoleBinding.Name)
assert.Equal(t, namespaceName, leaderRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-leader-election-role", leaderRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", leaderRoleBinding.Subjects[0].Name)
assert.Equal(t, "test-arc-gha-rs-controller-leader-election", leaderRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-rs-controller", leaderRoleBinding.Subjects[0].Name)
}
func TestTemplate_EnableLeaderElection(t *testing.T) {
@@ -562,6 +593,7 @@ func TestTemplate_EnableLeaderElection(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"replicaCount": "2",
"image.tag": "dev",
@@ -575,7 +607,7 @@ func TestTemplate_EnableLeaderElection(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Name)
assert.Equal(t, int32(2), *deployment.Spec.Replicas)
@@ -587,11 +619,19 @@ func TestTemplate_EnableLeaderElection(t *testing.T) {
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 4)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--enable-leader-election", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--leader-election-id=test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.Containers[0].Args[2])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[3])
expectedArgs := []string{
"--auto-scaling-runner-set-only",
"--enable-leader-election",
"--leader-election-id=test-arc-gha-rs-controller",
"--log-level=debug",
"--log-format=text",
"--update-strategy=immediate",
"--listener-metrics-addr=0",
"--listener-metrics-endpoint=",
"--metrics-addr=0",
}
assert.ElementsMatch(t, expectedArgs, deployment.Spec.Template.Spec.Containers[0].Args)
}
func TestTemplate_ControllerDeployment_ForwardImagePullSecrets(t *testing.T) {
@@ -605,6 +645,7 @@ func TestTemplate_ControllerDeployment_ForwardImagePullSecrets(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"imagePullSecrets[0].name": "dockerhub",
"imagePullSecrets[1].name": "ghcr",
@@ -619,10 +660,18 @@ func TestTemplate_ControllerDeployment_ForwardImagePullSecrets(t *testing.T) {
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--auto-scaler-image-pull-secrets=dockerhub,ghcr", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[2])
expectedArgs := []string{
"--auto-scaling-runner-set-only",
"--auto-scaler-image-pull-secrets=dockerhub,ghcr",
"--log-level=debug",
"--log-format=text",
"--update-strategy=immediate",
"--listener-metrics-addr=0",
"--listener-metrics-endpoint=",
"--metrics-addr=0",
}
assert.ElementsMatch(t, expectedArgs, deployment.Spec.Template.Spec.Containers[0].Args)
}
func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
@@ -643,6 +692,7 @@ func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"image.tag": "dev",
"flags.watchSingleNamespace": "demo",
@@ -656,28 +706,28 @@ func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "gha-runner-scale-set-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Name)
assert.Equal(t, "gha-rs-controller-"+chart.Version, deployment.Labels["helm.sh/chart"])
assert.Equal(t, "gha-rs-controller", deployment.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Labels["app.kubernetes.io/instance"])
assert.Equal(t, chart.AppVersion, deployment.Labels["app.kubernetes.io/version"])
assert.Equal(t, "Helm", deployment.Labels["app.kubernetes.io/managed-by"])
assert.Equal(t, namespaceName, deployment.Labels["actions.github.com/controller-service-account-namespace"])
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Labels["actions.github.com/controller-service-account-name"])
assert.Equal(t, "demo", deployment.Labels["actions.github.com/controller-watch-single-namespace"])
assert.Equal(t, int32(1), *deployment.Spec.Replicas)
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs-controller", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Selector.MatchLabels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs-controller", deployment.Spec.Template.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-arc", deployment.Spec.Template.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "manager", deployment.Spec.Template.Annotations["kubectl.kubernetes.io/default-container"])
assert.Len(t, deployment.Spec.Template.Spec.ImagePullSecrets, 0)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Spec.Template.Spec.ServiceAccountName)
assert.Nil(t, deployment.Spec.Template.Spec.SecurityContext)
assert.Empty(t, deployment.Spec.Template.Spec.PriorityClassName)
assert.Equal(t, int64(10), *deployment.Spec.Template.Spec.TerminationGracePeriodSeconds)
@@ -699,10 +749,18 @@ func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Command, 1)
assert.Equal(t, "/manager", deployment.Spec.Template.Spec.Containers[0].Command[0])
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Args, 3)
assert.Equal(t, "--auto-scaling-runner-set-only", deployment.Spec.Template.Spec.Containers[0].Args[0])
assert.Equal(t, "--log-level=debug", deployment.Spec.Template.Spec.Containers[0].Args[1])
assert.Equal(t, "--watch-single-namespace=demo", deployment.Spec.Template.Spec.Containers[0].Args[2])
expectedArgs := []string{
"--auto-scaling-runner-set-only",
"--log-level=debug",
"--log-format=text",
"--watch-single-namespace=demo",
"--update-strategy=immediate",
"--listener-metrics-addr=0",
"--listener-metrics-endpoint=",
"--metrics-addr=0",
}
assert.ElementsMatch(t, expectedArgs, deployment.Spec.Template.Spec.Containers[0].Args)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 3)
assert.Equal(t, "CONTROLLER_MANAGER_CONTAINER_IMAGE", deployment.Spec.Template.Spec.Containers[0].Env[0].Name)
@@ -710,7 +768,6 @@ func TestTemplate_ControllerDeployment_WatchSingleNamespace(t *testing.T) {
assert.Equal(t, "CONTROLLER_MANAGER_POD_NAMESPACE", deployment.Spec.Template.Spec.Containers[0].Env[1].Name)
assert.Equal(t, "metadata.namespace", deployment.Spec.Template.Spec.Containers[0].Env[1].ValueFrom.FieldRef.FieldPath)
assert.Equal(t, "CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY", deployment.Spec.Template.Spec.Containers[0].Env[2].Name)
assert.Equal(t, "IfNotPresent", deployment.Spec.Template.Spec.Containers[0].Env[2].Value) // default value. Needs to align with controllers/actions.github.com/resourcebuilder.go
@@ -732,6 +789,7 @@ func TestTemplate_ControllerContainerEnvironmentVariables(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"env[0].Name": "ENV_VAR_NAME_1",
"env[0].Value": "ENV_VAR_VALUE_1",
@@ -752,7 +810,7 @@ func TestTemplate_ControllerContainerEnvironmentVariables(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &deployment)
assert.Equal(t, namespaceName, deployment.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", deployment.Name)
assert.Equal(t, "test-arc-gha-rs-controller", deployment.Name)
assert.Len(t, deployment.Spec.Template.Spec.Containers[0].Env, 7)
assert.Equal(t, "ENV_VAR_NAME_1", deployment.Spec.Template.Spec.Containers[0].Env[3].Name)
@@ -778,6 +836,7 @@ func TestTemplate_WatchSingleNamespace_NotCreateManagerClusterRole(t *testing.T)
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
@@ -799,6 +858,7 @@ func TestTemplate_WatchSingleNamespace_NotManagerClusterRoleBinding(t *testing.T
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"serviceAccount.create": "true",
"flags.watchSingleNamespace": "demo",
@@ -821,6 +881,7 @@ func TestTemplate_CreateManagerSingleNamespaceRole(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
@@ -832,7 +893,7 @@ func TestTemplate_CreateManagerSingleNamespaceRole(t *testing.T) {
var managerSingleNamespaceControllerRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceControllerRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceControllerRole.Name)
assert.Equal(t, "test-arc-gha-rs-controller-single-namespace", managerSingleNamespaceControllerRole.Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRole.Namespace)
assert.Equal(t, 10, len(managerSingleNamespaceControllerRole.Rules))
@@ -841,7 +902,7 @@ func TestTemplate_CreateManagerSingleNamespaceRole(t *testing.T) {
var managerSingleNamespaceWatchRole rbacv1.Role
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceWatchRole)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceWatchRole.Name)
assert.Equal(t, "test-arc-gha-rs-controller-single-namespace-watch", managerSingleNamespaceWatchRole.Name)
assert.Equal(t, "demo", managerSingleNamespaceWatchRole.Namespace)
assert.Equal(t, 14, len(managerSingleNamespaceWatchRole.Rules))
}
@@ -857,6 +918,7 @@ func TestTemplate_ManagerSingleNamespaceRoleBinding(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"flags.watchSingleNamespace": "demo",
},
@@ -868,10 +930,10 @@ func TestTemplate_ManagerSingleNamespaceRoleBinding(t *testing.T) {
var managerSingleNamespaceControllerRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceControllerRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-rolebinding", managerSingleNamespaceControllerRoleBinding.Name)
assert.Equal(t, "test-arc-gha-rs-controller-single-namespace", managerSingleNamespaceControllerRoleBinding.Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceControllerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerSingleNamespaceControllerRoleBinding.Subjects[0].Name)
assert.Equal(t, "test-arc-gha-rs-controller-single-namespace", managerSingleNamespaceControllerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-rs-controller", managerSingleNamespaceControllerRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerSingleNamespaceControllerRoleBinding.Subjects[0].Namespace)
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/manager_single_namespace_watch_role_binding.yaml"})
@@ -879,9 +941,81 @@ func TestTemplate_ManagerSingleNamespaceRoleBinding(t *testing.T) {
var managerSingleNamespaceWatchRoleBinding rbacv1.RoleBinding
helm.UnmarshalK8SYaml(t, output, &managerSingleNamespaceWatchRoleBinding)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-rolebinding", managerSingleNamespaceWatchRoleBinding.Name)
assert.Equal(t, "test-arc-gha-rs-controller-single-namespace-watch", managerSingleNamespaceWatchRoleBinding.Name)
assert.Equal(t, "demo", managerSingleNamespaceWatchRoleBinding.Namespace)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller-manager-single-namespace-role", managerSingleNamespaceWatchRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-runner-scale-set-controller", managerSingleNamespaceWatchRoleBinding.Subjects[0].Name)
assert.Equal(t, "test-arc-gha-rs-controller-single-namespace-watch", managerSingleNamespaceWatchRoleBinding.RoleRef.Name)
assert.Equal(t, "test-arc-gha-rs-controller", managerSingleNamespaceWatchRoleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, managerSingleNamespaceWatchRoleBinding.Subjects[0].Namespace)
}
func TestControllerDeployment_MetricsPorts(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set-controller")
require.NoError(t, err)
chartContent, err := os.ReadFile(filepath.Join(helmChartPath, "Chart.yaml"))
require.NoError(t, err)
chart := new(Chart)
err = yaml.Unmarshal(chartContent, chart)
require.NoError(t, err)
releaseName := "test-arc"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"image.tag": "dev",
"metrics.controllerManagerAddr": ":8080",
"metrics.listenerAddr": ":8081",
"metrics.listenerEndpoint": "/metrics",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/deployment.yaml"})
var deployment appsv1.Deployment
helm.UnmarshalK8SYaml(t, output, &deployment)
require.Len(t, deployment.Spec.Template.Spec.Containers, 1, "Expected one container")
container := deployment.Spec.Template.Spec.Containers[0]
assert.Len(t, container.Ports, 1)
port := container.Ports[0]
assert.Equal(t, corev1.Protocol("TCP"), port.Protocol)
assert.Equal(t, int32(8080), port.ContainerPort)
metricsFlags := map[string]*struct {
expect string
frequency int
}{
"--listener-metrics-addr": {
expect: ":8081",
},
"--listener-metrics-endpoint": {
expect: "/metrics",
},
"--metrics-addr": {
expect: ":8080",
},
}
for _, cmd := range container.Args {
s := strings.Split(cmd, "=")
if len(s) != 2 {
continue
}
flag, ok := metricsFlags[s[0]]
if !ok {
continue
}
flag.frequency++
assert.Equal(t, flag.expect, s[1])
}
for key, value := range metricsFlags {
assert.Equal(t, value.frequency, 1, fmt.Sprintf("frequency of %q is not 1", key))
}
}

View File

@@ -75,11 +75,41 @@ affinity: {}
# PriorityClass: system-cluster-critical
priorityClassName: ""
## If the `metrics:` object is not provided, or is commented out, the following flags
## will be applied to the controller-manager and listener pods with empty values:
## `--metrics-addr`, `--listener-metrics-addr`, `--listener-metrics-endpoint`.
## This disables metrics.
##
## To enable metrics, uncomment the following lines.
# metrics:
# controllerManagerAddr: ":8080"
# listenerAddr: ":8080"
# listenerEndpoint: "/metrics"
flags:
# Log level can be set here with one of the following values: "debug", "info", "warn", "error".
# Defaults to "debug".
## Log level can be set here with one of the following values: "debug", "info", "warn", "error".
## Defaults to "debug".
logLevel: "debug"
## Log format can be set with one of the following values: "text", "json"
## Defaults to "text"
logFormat: "text"
## Restricts the controller to only watch resources in the desired namespace.
## Defaults to watching all namespaces when unset.
# watchSingleNamespace: ""
## Defines how the controller should handle upgrades while it has running jobs.
##
## The strategies available are:
## - "immediate": (default) The controller will immediately apply the change, causing the
## recreation of the listener and ephemeral runner set. This can lead to
## overprovisioning of runners if there are pending / running jobs. This should not
## be a problem at small scale, but it could lead to a significant increase in
## resource usage if you have a lot of jobs running concurrently.
##
## - "eventual": The controller will remove the listener and ephemeral runner set
## immediately, but will not recreate them (to apply changes) until all
## pending / running jobs have completed.
## This can mean a longer wait before the change is applied, but it ensures
## that runners are never overprovisioned.
updateStrategy: "immediate"

View File

@@ -15,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.4.0"
appVersion: "0.6.0"
home: https://github.com/actions/dev-arc

View File

@@ -1,8 +1,13 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "gha-base-name" -}}
gha-rs
{{- end }}
{{- define "gha-runner-scale-set.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- default (include "gha-base-name" .) .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
@@ -11,7 +16,7 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gha-runner-scale-set.fullname" -}}
{{- $name := default .Chart.Name }}
{{- $name := default (include "gha-base-name" .) }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
@@ -19,7 +24,7 @@ If release name contains chart name it will be used as a full name.
Create chart name and version as used by the chart label.
*/}}
{{- define "gha-runner-scale-set.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- printf "%s-%s" (include "gha-base-name" .) .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
@@ -32,7 +37,7 @@ helm.sh/chart: {{ include "gha-runner-scale-set.chart" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/part-of: gha-runner-scale-set
app.kubernetes.io/part-of: gha-rs
actions.github.com/scale-set-name: {{ .Release.Name }}
actions.github.com/scale-set-namespace: {{ .Release.Namespace }}
{{- end }}
@@ -58,19 +63,19 @@ app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- define "gha-runner-scale-set.noPermissionServiceAccountName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-no-permission-service-account
{{- include "gha-runner-scale-set.fullname" . }}-no-permission
{{- end }}
{{- define "gha-runner-scale-set.kubeModeRoleName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-role
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode
{{- end }}
{{- define "gha-runner-scale-set.kubeModeRoleBindingName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-role-binding
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode
{{- end }}
{{- define "gha-runner-scale-set.kubeModeServiceAccountName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode-service-account
{{- include "gha-runner-scale-set.fullname" . }}-kube-mode
{{- end }}
{{- define "gha-runner-scale-set.dind-init-container" -}}
@@ -428,11 +433,11 @@ volumeMounts:
{{- end }}
{{- define "gha-runner-scale-set.managerRoleName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-manager-role
{{- include "gha-runner-scale-set.fullname" . }}-manager
{{- end }}
{{- define "gha-runner-scale-set.managerRoleBindingName" -}}
{{- include "gha-runner-scale-set.fullname" . }}-manager-role-binding
{{- include "gha-runner-scale-set.fullname" . }}-manager
{{- end }}
{{- define "gha-runner-scale-set.managerServiceAccountName" -}}
@@ -451,7 +456,7 @@ volumeMounts:
{{- $managerServiceAccountName := "" }}
{{- range $index, $deployment := (lookup "apps/v1" "Deployment" "" "").items }}
{{- if kindIs "map" $deployment.metadata.labels }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-runner-scale-set-controller" }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-rs-controller" }}
{{- if hasKey $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace" }}
{{- $singleNamespaceCounter = add $singleNamespaceCounter 1 }}
{{- $_ := set $singleNamespaceControllerDeployments (get $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace") $deployment}}
@@ -463,13 +468,13 @@ volumeMounts:
{{- end }}
{{- end }}
{{- if and (eq $multiNamespacesCounter 0) (eq $singleNamespaceCounter 0) }}
{{- fail "No gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "No gha-rs-controller deployment found using label (app.kubernetes.io/part-of=gha-rs-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if and (gt $multiNamespacesCounter 0) (gt $singleNamespaceCounter 0) }}
{{- fail "Found both gha-runner-scale-set-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "Found both gha-rs-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if gt $multiNamespacesCounter 1 }}
{{- fail "More than one gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "More than one gha-rs-controller deployment found using label (app.kubernetes.io/part-of=gha-rs-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if eq $multiNamespacesCounter 1 }}
{{- with $controllerDeployment.metadata }}
@@ -482,11 +487,11 @@ volumeMounts:
{{- $managerServiceAccountName = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-name") }}
{{- end }}
{{- else }}
{{- fail "No gha-runner-scale-set-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "No gha-rs-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- end }}
{{- if eq $managerServiceAccountName "" }}
{{- fail "No service account name found for gha-runner-scale-set-controller deployment using label (actions.github.com/controller-service-account-name), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "No service account name found for gha-rs-controller deployment using label (actions.github.com/controller-service-account-name), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- $managerServiceAccountName }}
{{- end }}
@@ -508,7 +513,7 @@ volumeMounts:
{{- $managerServiceAccountNamespace := "" }}
{{- range $index, $deployment := (lookup "apps/v1" "Deployment" "" "").items }}
{{- if kindIs "map" $deployment.metadata.labels }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-runner-scale-set-controller" }}
{{- if eq (get $deployment.metadata.labels "app.kubernetes.io/part-of") "gha-rs-controller" }}
{{- if hasKey $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace" }}
{{- $singleNamespaceCounter = add $singleNamespaceCounter 1 }}
{{- $_ := set $singleNamespaceControllerDeployments (get $deployment.metadata.labels "actions.github.com/controller-watch-single-namespace") $deployment}}
@@ -520,13 +525,13 @@ volumeMounts:
{{- end }}
{{- end }}
{{- if and (eq $multiNamespacesCounter 0) (eq $singleNamespaceCounter 0) }}
{{- fail "No gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "No gha-rs-controller deployment found using label (app.kubernetes.io/part-of=gha-rs-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if and (gt $multiNamespacesCounter 0) (gt $singleNamespaceCounter 0) }}
{{- fail "Found both gha-runner-scale-set-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "Found both gha-rs-controller installed with flags.watchSingleNamespace set and unset in cluster, this is not supported. Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if gt $multiNamespacesCounter 1 }}
{{- fail "More than one gha-runner-scale-set-controller deployment found using label (app.kubernetes.io/part-of=gha-runner-scale-set-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "More than one gha-rs-controller deployment found using label (app.kubernetes.io/part-of=gha-rs-controller). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- if eq $multiNamespacesCounter 1 }}
{{- with $controllerDeployment.metadata }}
@@ -539,11 +544,11 @@ volumeMounts:
{{- $managerServiceAccountNamespace = (get $controllerDeployment.metadata.labels "actions.github.com/controller-service-account-namespace") }}
{{- end }}
{{- else }}
{{- fail "No gha-runner-scale-set-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "No gha-rs-controller deployment that watch this namespace found using label (actions.github.com/controller-watch-single-namespace). Consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- end }}
{{- if eq $managerServiceAccountNamespace "" }}
{{- fail "No service account namespace found for gha-runner-scale-set-controller deployment using label (actions.github.com/controller-service-account-namespace), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- fail "No service account namespace found for gha-rs-controller deployment using label (actions.github.com/controller-service-account-namespace), consider setting controllerServiceAccount.name in values.yaml to be explicit if you think the discovery is wrong." }}
{{- end }}
{{- $managerServiceAccountNamespace }}
{{- end }}
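Both lookups above discover the controller Deployment by its app.kubernetes.io/part-of=gha-rs-controller label and fail on zero or multiple candidates. As the fail messages suggest, discovery can be bypassed by pointing at the controller's service account explicitly; a sketch (namespace and name are assumptions and must match your controller installation):

controllerServiceAccount:
  namespace: arc-systems
  name: arc-gha-rs-controller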

View File

@@ -7,7 +7,7 @@ metadata:
{{- if gt (len .Release.Namespace) 63 }}
{{ fail "Namespace must have up to 63 characters" }}
{{- end }}
name: {{ .Release.Name }}
name: {{ .Values.runnerScaleSetName | default .Release.Name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/component: "autoscaling-runner-set"
@@ -88,6 +88,11 @@ spec:
minRunners: {{ .Values.minRunners | int }}
{{- end }}
{{- with .Values.listenerTemplate}}
listenerTemplate:
{{- toYaml . | nindent 4}}
{{- end }}
template:
{{- with .Values.template.metadata }}
metadata:
@@ -106,6 +111,9 @@ spec:
{{ $key }}: {{ $val | toYaml | nindent 8 }}
{{- end }}
{{- end }}
{{- if not .Values.template.spec.restartPolicy }}
restartPolicy: Never
{{- end }}
{{- $containerMode := .Values.containerMode }}
{{- if eq $containerMode.type "kubernetes" }}
serviceAccountName: {{ default (include "gha-runner-scale-set.kubeModeServiceAccountName" .) .Values.template.spec.serviceAccountName }}
@@ -119,7 +127,7 @@ spec:
{{- include "gha-runner-scale-set.dind-init-container" . | nindent 8 }}
{{- end }}
{{- with .Values.template.spec.initContainers }}
{{- toYaml . | nindent 8 }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
containers:
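With the name override above, the AutoscalingRunnerSet resource is named after runnerScaleSetName when it is provided and only falls back to the release name, while any listenerTemplate block is now copied through verbatim. A hedged values.yaml fragment exercising the rename (the value mirrors the test expectations later in this diff):

runnerScaleSetName: test-runner-scale-set-name

With this set, the release name still appears as the app.kubernetes.io/instance label, but the AutoscalingRunnerSet itself renders as test-runner-scale-set-name.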

View File

@@ -10,6 +10,7 @@ import (
actionsgithubcom "github.com/actions/actions-runner-controller/controllers/actions.github.com"
"github.com/gruntwork-io/terratest/modules/helm"
"github.com/gruntwork-io/terratest/modules/k8s"
"github.com/gruntwork-io/terratest/modules/logger"
"github.com/gruntwork-io/terratest/modules/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -28,6 +29,7 @@ func TestTemplateRenderedGitHubSecretWithGitHubToken(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -43,7 +45,7 @@ func TestTemplateRenderedGitHubSecretWithGitHubToken(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &githubSecret)
assert.Equal(t, namespaceName, githubSecret.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-github-secret", githubSecret.Name)
assert.Equal(t, "test-runners-gha-rs-github-secret", githubSecret.Name)
assert.Equal(t, "gh_token12345", string(githubSecret.Data["github_token"]))
assert.Equal(t, "actions.github.com/cleanup-protection", githubSecret.Finalizers[0])
}
@@ -59,6 +61,7 @@ func TestTemplateRenderedGitHubSecretWithGitHubApp(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "10",
@@ -92,6 +95,7 @@ func TestTemplateRenderedGitHubSecretErrorWithMissingAuthInput(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "",
@@ -119,6 +123,7 @@ func TestTemplateRenderedGitHubSecretErrorWithMissingAppInput(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "10",
@@ -145,6 +150,7 @@ func TestTemplateNotRenderedGitHubSecretWithPredefinedSecret(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secret",
@@ -169,6 +175,7 @@ func TestTemplateRenderedSetServiceAccountToNoPermission(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -183,13 +190,13 @@ func TestTemplateRenderedSetServiceAccountToNoPermission(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-no-permission-service-account", serviceAccount.Name)
assert.Equal(t, "test-runners-gha-rs-no-permission", serviceAccount.Name)
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, "test-runners-gha-runner-scale-set-no-permission-service-account", ars.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, "test-runners-gha-rs-no-permission", ars.Spec.Template.Spec.ServiceAccountName)
assert.Empty(t, ars.Annotations[actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName]) // no finalizer protections in place
}
@@ -204,6 +211,7 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -219,7 +227,7 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &serviceAccount)
assert.Equal(t, namespaceName, serviceAccount.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-service-account", serviceAccount.Name)
assert.Equal(t, "test-runners-gha-rs-kube-mode", serviceAccount.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", serviceAccount.Finalizers[0])
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/kube_mode_role.yaml"})
@@ -227,7 +235,7 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &role)
assert.Equal(t, namespaceName, role.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-role", role.Name)
assert.Equal(t, "test-runners-gha-rs-kube-mode", role.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", role.Finalizers[0])
@@ -243,11 +251,11 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &roleBinding)
assert.Equal(t, namespaceName, roleBinding.Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-role-binding", roleBinding.Name)
assert.Equal(t, "test-runners-gha-rs-kube-mode", roleBinding.Name)
assert.Len(t, roleBinding.Subjects, 1)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-service-account", roleBinding.Subjects[0].Name)
assert.Equal(t, "test-runners-gha-rs-kube-mode", roleBinding.Subjects[0].Name)
assert.Equal(t, namespaceName, roleBinding.Subjects[0].Namespace)
assert.Equal(t, "test-runners-gha-runner-scale-set-kube-mode-role", roleBinding.RoleRef.Name)
assert.Equal(t, "test-runners-gha-rs-kube-mode", roleBinding.RoleRef.Name)
assert.Equal(t, "Role", roleBinding.RoleRef.Kind)
assert.Equal(t, "actions.github.com/cleanup-protection", serviceAccount.Finalizers[0])
@@ -255,7 +263,7 @@ func TestTemplateRenderedSetServiceAccountToKubeMode(t *testing.T) {
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
expectedServiceAccountName := "test-runners-gha-runner-scale-set-kube-mode-service-account"
expectedServiceAccountName := "test-runners-gha-rs-kube-mode"
assert.Equal(t, expectedServiceAccountName, ars.Spec.Template.Spec.ServiceAccountName)
assert.Equal(t, expectedServiceAccountName, ars.Annotations[actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName])
}
@@ -271,6 +279,7 @@ func TestTemplateRenderedUserProvideSetServiceAccount(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -303,6 +312,7 @@ func TestTemplateRenderedAutoScalingRunnerSet(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -320,14 +330,14 @@ func TestTemplateRenderedAutoScalingRunnerSet(t *testing.T) {
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "gha-rs", ars.Labels["app.kubernetes.io/part-of"])
assert.Equal(t, "autoscaling-runner-set", ars.Labels["app.kubernetes.io/component"])
assert.NotEmpty(t, ars.Labels["app.kubernetes.io/version"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-gha-runner-scale-set-github-secret", ars.Spec.GitHubConfigSecret)
assert.Equal(t, "test-runners-gha-rs-github-secret", ars.Spec.GitHubConfigSecret)
assert.Empty(t, ars.Spec.RunnerGroup, "RunnerGroup should be empty")
@@ -354,6 +364,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_RunnerScaleSetName(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -370,12 +381,12 @@ func TestTemplateRenderedAutoScalingRunnerSet_RunnerScaleSetName(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "test-runner-scale-set-name", ars.Name)
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "gha-rs", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, releaseName, ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-gha-runner-scale-set-github-secret", ars.Spec.GitHubConfigSecret)
assert.Equal(t, "test-runners-gha-rs-github-secret", ars.Spec.GitHubConfigSecret)
assert.Equal(t, "test-runner-scale-set-name", ars.Spec.RunnerScaleSetName)
assert.Empty(t, ars.Spec.RunnerGroup, "RunnerGroup should be empty")
@@ -403,6 +414,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_ProvideMetadata(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -450,6 +462,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_MaxRunnersValidationError(t *testi
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -477,6 +490,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_MinRunnersValidationError(t *testi
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -505,6 +519,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidationError(t *te
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -533,6 +548,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidationSameValue(t
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -564,6 +580,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidation_OnlyMin(t
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -594,6 +611,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunnersValidation_OnlyMax(t
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -627,6 +645,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_MinMaxRunners_FromValuesFile(t *te
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
ValuesFiles: []string{testValuesPath},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
@@ -654,6 +673,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_ExtraVolumes(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
@@ -674,6 +694,50 @@ func TestTemplateRenderedAutoScalingRunnerSet_ExtraVolumes(t *testing.T) {
assert.Equal(t, "/data", ars.Spec.Template.Spec.Volumes[2].HostPath.Path, "Volume host path should be /data")
}
func TestTemplateRenderedAutoScalingRunnerSet_DinD_ExtraInitContainers(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set")
require.NoError(t, err)
testValuesPath, err := filepath.Abs("../tests/values_dind_extra_init_containers.yaml")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
},
ValuesFiles: []string{testValuesPath},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Len(t, ars.Spec.Template.Spec.InitContainers, 3, "InitContainers should be 3")
assert.Equal(t, "kube-init", ars.Spec.Template.Spec.InitContainers[1].Name, "InitContainers[1] Name should be kube-init")
assert.Equal(t, "runner-image:latest", ars.Spec.Template.Spec.InitContainers[1].Image, "InitContainers[1] Image should be runner-image:latest")
assert.Equal(t, "sudo", ars.Spec.Template.Spec.InitContainers[1].Command[0], "InitContainers[1] Command[0] should be sudo")
assert.Equal(t, "chown", ars.Spec.Template.Spec.InitContainers[1].Command[1], "InitContainers[1] Command[1] should be chown")
assert.Equal(t, "-R", ars.Spec.Template.Spec.InitContainers[1].Command[2], "InitContainers[1] Command[2] should be -R")
assert.Equal(t, "1001:123", ars.Spec.Template.Spec.InitContainers[1].Command[3], "InitContainers[1] Command[3] should be 1001:123")
assert.Equal(t, "/home/runner/_work", ars.Spec.Template.Spec.InitContainers[1].Command[4], "InitContainers[1] Command[4] should be /home/runner/_work")
assert.Equal(t, "work", ars.Spec.Template.Spec.InitContainers[1].VolumeMounts[0].Name, "InitContainers[1] VolumeMounts[0] Name should be work")
assert.Equal(t, "/home/runner/_work", ars.Spec.Template.Spec.InitContainers[1].VolumeMounts[0].MountPath, "InitContainers[1] VolumeMounts[0] MountPath should be /home/runner/_work")
assert.Equal(t, "ls", ars.Spec.Template.Spec.InitContainers[2].Name, "InitContainers[2] Name should be ls")
assert.Equal(t, "ubuntu:latest", ars.Spec.Template.Spec.InitContainers[2].Image, "InitContainers[2] Image should be ubuntu:latest")
assert.Equal(t, "ls", ars.Spec.Template.Spec.InitContainers[2].Command[0], "InitContainers[2] Command[0] should be ls")
}
func TestTemplateRenderedAutoScalingRunnerSet_DinD_ExtraVolumes(t *testing.T) {
t.Parallel()
@@ -688,6 +752,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_DinD_ExtraVolumes(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
@@ -724,6 +789,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_K8S_ExtraVolumes(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
@@ -755,6 +821,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_EnableDinD(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -773,10 +840,10 @@ func TestTemplateRenderedAutoScalingRunnerSet_EnableDinD(t *testing.T) {
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-gha-runner-scale-set-github-secret", ars.Spec.GitHubConfigSecret)
assert.Equal(t, "test-runners-gha-rs-github-secret", ars.Spec.GitHubConfigSecret)
assert.Empty(t, ars.Spec.RunnerGroup, "RunnerGroup should be empty")
@@ -846,6 +913,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_EnableKubernetesMode(t *testing.T)
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -864,10 +932,10 @@ func TestTemplateRenderedAutoScalingRunnerSet_EnableKubernetesMode(t *testing.T)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "test-runners-gha-runner-scale-set-github-secret", ars.Spec.GitHubConfigSecret)
assert.Equal(t, "test-runners-gha-rs-github-secret", ars.Spec.GitHubConfigSecret)
assert.Empty(t, ars.Spec.RunnerGroup, "RunnerGroup should be empty")
assert.Nil(t, ars.Spec.MinRunners, "MinRunners should be nil")
@@ -892,6 +960,50 @@ func TestTemplateRenderedAutoScalingRunnerSet_EnableKubernetesMode(t *testing.T)
assert.NotNil(t, ars.Spec.Template.Spec.Volumes[0].Ephemeral, "Template.Spec should have 1 ephemeral volume")
}
func TestTemplateRenderedAutoscalingRunnerSet_ListenerPodTemplate(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set")
require.NoError(t, err)
testValuesPath, err := filepath.Abs("../tests/values_listener_template.yaml")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
},
ValuesFiles: []string{testValuesPath},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
require.NotNil(t, ars.Spec.ListenerTemplate, "ListenerPodTemplate should not be nil")
assert.Equal(t, ars.Spec.ListenerTemplate.Spec.Hostname, "example")
require.Len(t, ars.Spec.ListenerTemplate.Spec.Containers, 2, "ListenerPodTemplate should have 2 containers")
assert.Equal(t, ars.Spec.ListenerTemplate.Spec.Containers[0].Name, "listener")
assert.Equal(t, ars.Spec.ListenerTemplate.Spec.Containers[0].Image, "listener:latest")
assert.ElementsMatch(t, ars.Spec.ListenerTemplate.Spec.Containers[0].Command, []string{"/path/to/entrypoint"})
assert.Len(t, ars.Spec.ListenerTemplate.Spec.Containers[0].VolumeMounts, 1, "VolumeMounts should be 1")
assert.Equal(t, ars.Spec.ListenerTemplate.Spec.Containers[0].VolumeMounts[0].Name, "work")
assert.Equal(t, ars.Spec.ListenerTemplate.Spec.Containers[0].VolumeMounts[0].MountPath, "/home/example")
assert.Equal(t, ars.Spec.ListenerTemplate.Spec.Containers[1].Name, "side-car")
assert.Equal(t, ars.Spec.ListenerTemplate.Spec.Containers[1].Image, "nginx:latest")
}
func TestTemplateRenderedAutoScalingRunnerSet_UsePredefinedSecret(t *testing.T) {
t.Parallel()
@@ -903,6 +1015,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_UsePredefinedSecret(t *testing.T)
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -920,7 +1033,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_UsePredefinedSecret(t *testing.T)
assert.Equal(t, namespaceName, ars.Namespace)
assert.Equal(t, "test-runners", ars.Name)
assert.Equal(t, "gha-runner-scale-set", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "gha-rs", ars.Labels["app.kubernetes.io/name"])
assert.Equal(t, "test-runners", ars.Labels["app.kubernetes.io/instance"])
assert.Equal(t, "https://github.com/actions", ars.Spec.GitHubConfigUrl)
assert.Equal(t, "pre-defined-secrets", ars.Spec.GitHubConfigSecret)
@@ -937,6 +1050,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_ErrorOnEmptyPredefinedSecret(t *te
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "",
@@ -963,6 +1077,7 @@ func TestTemplateRenderedWithProxy(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -1026,6 +1141,7 @@ func TestTemplateRenderedWithTLS(t *testing.T) {
t.Run("providing githubServerTLS.runnerMountPath", func(t *testing.T) {
t.Run("mode: default", func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -1084,6 +1200,7 @@ func TestTemplateRenderedWithTLS(t *testing.T) {
t.Run("mode: dind", func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -1143,6 +1260,7 @@ func TestTemplateRenderedWithTLS(t *testing.T) {
t.Run("mode: kubernetes", func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -1204,6 +1322,7 @@ func TestTemplateRenderedWithTLS(t *testing.T) {
t.Run("without providing githubServerTLS.runnerMountPath", func(t *testing.T) {
t.Run("mode: default", func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -1258,6 +1377,7 @@ func TestTemplateRenderedWithTLS(t *testing.T) {
t.Run("mode: dind", func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -1313,6 +1433,7 @@ func TestTemplateRenderedWithTLS(t *testing.T) {
t.Run("mode: kubernetes", func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secrets",
@@ -1402,6 +1523,7 @@ func TestTemplateNamingConstraints(t *testing.T) {
for name, tc := range tt {
t.Run(name, func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: setValues,
KubectlOptions: k8s.NewKubectlOptions("", "", tc.namespaceName),
}
@@ -1423,6 +1545,7 @@ func TestTemplateRenderedGitHubConfigUrlEndsWIthSlash(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions/",
"githubConfigSecret.github_token": "gh_token12345",
@@ -1453,6 +1576,7 @@ func TestTemplate_CreateManagerRole(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -1468,7 +1592,7 @@ func TestTemplate_CreateManagerRole(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &managerRole)
assert.Equal(t, namespaceName, managerRole.Namespace, "namespace should match the namespace of the Helm release")
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role", managerRole.Name)
assert.Equal(t, "test-runners-gha-rs-manager", managerRole.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", managerRole.Finalizers[0])
assert.Equal(t, 6, len(managerRole.Rules))
@@ -1487,6 +1611,7 @@ func TestTemplate_CreateManagerRole_UseConfigMaps(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -1503,7 +1628,7 @@ func TestTemplate_CreateManagerRole_UseConfigMaps(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &managerRole)
assert.Equal(t, namespaceName, managerRole.Namespace, "namespace should match the namespace of the Helm release")
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role", managerRole.Name)
assert.Equal(t, "test-runners-gha-rs-manager", managerRole.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", managerRole.Finalizers[0])
assert.Equal(t, 7, len(managerRole.Rules))
assert.Equal(t, "configmaps", managerRole.Rules[6].Resources[0])
@@ -1520,6 +1645,7 @@ func TestTemplate_CreateManagerRoleBinding(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -1535,8 +1661,8 @@ func TestTemplate_CreateManagerRoleBinding(t *testing.T) {
helm.UnmarshalK8SYaml(t, output, &managerRoleBinding)
assert.Equal(t, namespaceName, managerRoleBinding.Namespace, "namespace should match the namespace of the Helm release")
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role-binding", managerRoleBinding.Name)
assert.Equal(t, "test-runners-gha-runner-scale-set-manager-role", managerRoleBinding.RoleRef.Name)
assert.Equal(t, "test-runners-gha-rs-manager", managerRoleBinding.Name)
assert.Equal(t, "test-runners-gha-rs-manager", managerRoleBinding.RoleRef.Name)
assert.Equal(t, "actions.github.com/cleanup-protection", managerRoleBinding.Finalizers[0])
assert.Equal(t, "arc", managerRoleBinding.Subjects[0].Name)
assert.Equal(t, "arc-system", managerRoleBinding.Subjects[0].Namespace)
@@ -1556,6 +1682,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_ExtraContainers(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
@@ -1589,6 +1716,53 @@ func TestTemplateRenderedAutoScalingRunnerSet_ExtraContainers(t *testing.T) {
assert.Equal(t, "192.0.2.1", ars.Spec.Template.Spec.DNSConfig.Nameservers[0], "DNS Nameserver should be set")
}
func TestTemplateRenderedAutoScalingRunnerSet_RestartPolicy(t *testing.T) {
t.Parallel()
// Path to the helm chart we will test
helmChartPath, err := filepath.Abs("../../gha-runner-scale-set")
require.NoError(t, err)
releaseName := "test-runners"
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output := helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"})
var ars v1alpha1.AutoscalingRunnerSet
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, corev1.RestartPolicyNever, ars.Spec.Template.Spec.RestartPolicy, "RestartPolicy should be Never")
options = &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
"template.spec.restartPolicy": "Always",
},
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
}
output = helm.RenderTemplate(t, options, helmChartPath, releaseName, []string{"templates/autoscalingrunnerset.yaml"}, "--debug")
helm.UnmarshalK8SYaml(t, output, &ars)
assert.Equal(t, corev1.RestartPolicyAlways, ars.Spec.Template.Spec.RestartPolicy, "RestartPolicy should be Always")
}
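The second render in this test passes template.spec.restartPolicy=Always through --set; the equivalent values.yaml stanza is sketched below. Omitting the key entirely yields the chart's new default of restartPolicy: Never, as the first half of the test asserts:

template:
  spec:
    restartPolicy: Always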
func TestTemplateRenderedAutoScalingRunnerSet_ExtraPodSpec(t *testing.T) {
t.Parallel()
@@ -1603,6 +1777,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_ExtraPodSpec(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
@@ -1636,6 +1811,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_DinDMergePodSpec(t *testing.T) {
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
@@ -1681,6 +1857,7 @@ func TestTemplateRenderedAutoScalingRunnerSet_KubeModeMergePodSpec(t *testing.T)
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"controllerServiceAccount.name": "arc",
"controllerServiceAccount.namespace": "arc-system",
@@ -1722,6 +1899,7 @@ func TestTemplateRenderedAutoscalingRunnerSetAnnotation_GitHubSecret(t *testing.
annotationExpectedTests := map[string]*helm.Options{
"GitHub token": {
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -1731,6 +1909,7 @@ func TestTemplateRenderedAutoscalingRunnerSetAnnotation_GitHubSecret(t *testing.
KubectlOptions: k8s.NewKubectlOptions("", "", namespaceName),
},
"GitHub app": {
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_app_id": "10",
@@ -1755,6 +1934,7 @@ func TestTemplateRenderedAutoscalingRunnerSetAnnotation_GitHubSecret(t *testing.
t.Run("Annotation should not be set", func(t *testing.T) {
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret": "pre-defined-secret",
@@ -1782,6 +1962,7 @@ func TestTemplateRenderedAutoscalingRunnerSetAnnotation_KubernetesModeCleanup(t
namespaceName := "test-" + strings.ToLower(random.UniqueId())
options := &helm.Options{
Logger: logger.Discard,
SetValues: map[string]string{
"githubConfigUrl": "https://github.com/actions",
"githubConfigSecret.github_token": "gh_token12345",
@@ -1797,12 +1978,12 @@ func TestTemplateRenderedAutoscalingRunnerSetAnnotation_KubernetesModeCleanup(t
helm.UnmarshalK8SYaml(t, output, &autoscalingRunnerSet)
annotationValues := map[string]string{
actionsgithubcom.AnnotationKeyGitHubSecretName: "test-runners-gha-runner-scale-set-github-secret",
actionsgithubcom.AnnotationKeyManagerRoleName: "test-runners-gha-runner-scale-set-manager-role",
actionsgithubcom.AnnotationKeyManagerRoleBindingName: "test-runners-gha-runner-scale-set-manager-role-binding",
actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName: "test-runners-gha-runner-scale-set-kube-mode-service-account",
actionsgithubcom.AnnotationKeyKubernetesModeRoleName: "test-runners-gha-runner-scale-set-kube-mode-role",
actionsgithubcom.AnnotationKeyKubernetesModeRoleBindingName: "test-runners-gha-runner-scale-set-kube-mode-role-binding",
actionsgithubcom.AnnotationKeyGitHubSecretName: "test-runners-gha-rs-github-secret",
actionsgithubcom.AnnotationKeyManagerRoleName: "test-runners-gha-rs-manager",
actionsgithubcom.AnnotationKeyManagerRoleBindingName: "test-runners-gha-rs-manager",
actionsgithubcom.AnnotationKeyKubernetesModeServiceAccountName: "test-runners-gha-rs-kube-mode",
actionsgithubcom.AnnotationKeyKubernetesModeRoleName: "test-runners-gha-rs-kube-mode",
actionsgithubcom.AnnotationKeyKubernetesModeRoleBindingName: "test-runners-gha-rs-kube-mode",
}
for annotation, value := range annotationValues {

View File

@@ -0,0 +1,17 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
template:
spec:
initContainers:
- name: kube-init
image: runner-image:latest
command: ["sudo", "chown", "-R", "1001:123", "/home/runner/_work"]
volumeMounts:
- name: work
mountPath: /home/runner/_work
- name: ls
image: ubuntu:latest
command: ["ls"]
containerMode:
type: dind
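Rendered against the chart, this values file should produce three init containers: the chart's own dind init container first, then the two user-supplied ones, matching the index assertions in the test above. A hedged sketch of the resulting pod spec fragment (the first entry's name is chart-generated and assumed here for illustration):

initContainers:
  - name: init-dind-externals   # injected by containerMode.type=dind; name assumed
    ...
  - name: kube-init
    image: runner-image:latest
    command: ["sudo", "chown", "-R", "1001:123", "/home/runner/_work"]
  - name: ls
    image: ubuntu:latest
    command: ["ls"]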

View File

@@ -0,0 +1,15 @@
githubConfigUrl: https://github.com/actions/actions-runner-controller
githubConfigSecret:
github_token: test
listenerTemplate:
spec:
hostname: "example"
containers:
- name: listener
image: listener:latest
command: ["/path/to/entrypoint"]
volumeMounts:
- name: work
mountPath: /home/example
- name: side-car
image: nginx:latest
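As the values.yaml comments later in this diff explain, the container named listener in this file is merged into the chart-generated listener container (here overriding its image and command), while every other entry, such as side-car, is carried over unchanged as a side-car. A compact restatement of the name-matching rule:

listenerTemplate:
  spec:
    containers:
      - name: listener       # merged into the chart-generated listener container
      - name: anything-else  # any other name: applied as-is as a side-car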

View File

@@ -36,10 +36,10 @@ githubConfigSecret:
# - example.com
# - example.org
## maxRunners is the max number of runners the auto scaling runner set will scale up to.
## maxRunners is the max number of runners the autoscaling runner set will scale up to.
# maxRunners: 5
## minRunners is the min number of runners the auto scaling runner set will scale down to.
## minRunners is the min number of runners the autoscaling runner set will scale down to.
# minRunners: 0
# runnerGroup: "default"
@@ -68,6 +68,12 @@ githubConfigSecret:
# key: ca.crt
# runnerMountPath: /usr/local/share/ca-certificates/
## Container mode is an object that provides out-of-the-box configuration
## for dind and kubernetes mode. The template will be modified as documented under the
## template object.
##
## If any customization is required for dind or kubernetes mode, containerMode should remain
## empty, and configuration should be applied to the template.
# containerMode:
# type: "dind" ## type can be set to dind or kubernetes
# ## the following is required when containerMode.type=kubernetes
@@ -79,7 +85,25 @@ githubConfigSecret:
# requests:
# storage: 1Gi
## listenerTemplate is the PodSpec for each listener Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
# listenerTemplate:
# spec:
# containers:
# # Use this section to append additional configuration to the listener container.
# # If you change the name of the container, the configuration will not be applied to the listener,
# # and it will be treated as a side-car container.
# - name: listener
# securityContext:
# runAsUser: 1000
# # Use this section to add the configuration of a side-car container.
# # Comment it out or remove it if you don't need it.
# # Spec for this container will be applied as is without any modifications.
# - name: side-car
# image: example-sidecar
## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
## template.spec will be modified if you change the container mode
## with containerMode.type=dind, we will populate the template.spec with the following pod spec
@@ -95,6 +119,7 @@ template:
## containers:
## - name: runner
## image: ghcr.io/actions/actions-runner:latest
## command: ["/home/runner/run.sh"]
## env:
## - name: DOCKER_HOST
## value: tcp://localhost:2376
@@ -133,6 +158,7 @@ template:
## containers:
## - name: runner
## image: ghcr.io/actions/actions-runner:latest
## command: ["/home/runner/run.sh"]
## env:
## - name: ACTIONS_RUNNER_CONTAINER_HOOKS
## value: /home/runner/k8s/index.js
@@ -157,9 +183,9 @@ template:
## storage: 1Gi
spec:
containers:
- name: runner
image: ghcr.io/actions/actions-runner:latest
command: ["/home/runner/run.sh"]
- name: runner
image: ghcr.io/actions/actions-runner:latest
command: ["/home/runner/run.sh"]
## Optional controller service account that needs to have the required Role and RoleBinding
## to operate this gha-runner-scale-set installation.

View File

@@ -114,7 +114,14 @@ func createSession(ctx context.Context, logger *logr.Logger, client actions.Acti
return runnerScaleSetSession, initialMessage, nil
}
return runnerScaleSetSession, nil, nil
initialMessage := &actions.RunnerScaleSetMessage{
MessageId: 0,
MessageType: "RunnerScaleSetJobMessages",
Statistics: runnerScaleSetSession.Statistics,
Body: "",
}
return runnerScaleSetSession, initialMessage, nil
}
func (m *AutoScalerClient) Close() error {

View File

@@ -37,7 +37,7 @@ func TestCreateSession(t *testing.T) {
require.NoError(t, err, "Error creating autoscaler client")
assert.Equal(t, session, session, "Session is not correct")
assert.Nil(t, asClient.initialMessage, "Initial message should be nil")
assert.NotNil(t, asClient.initialMessage, "Initial message should not be nil")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should be 0")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
@@ -188,7 +188,7 @@ func TestCreateSession_RetrySessionConflict(t *testing.T) {
require.NoError(t, err, "Error creating autoscaler client")
assert.Equal(t, session, session, "Session is not correct")
assert.Nil(t, asClient.initialMessage, "Initial message should be nil")
assert.NotNil(t, asClient.initialMessage, "Initial message should not be nil")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should be 0")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
}
@@ -334,6 +334,14 @@ func TestGetRunnerScaleSetMessage(t *testing.T) {
return nil
})
assert.NoError(t, err, "Error getting message")
assert.Equal(t, int64(0), asClient.lastMessageId, "Initial message")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.NoError(t, err, "Error getting message")
assert.Equal(t, int64(1), asClient.lastMessageId, "Last message id should be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
@@ -371,13 +379,21 @@ func TestGetRunnerScaleSetMessage_HandleFailed(t *testing.T) {
})
require.NoError(t, err, "Error creating autoscaler client")
// read initial message
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.NoError(t, err, "Error getting message")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return fmt.Errorf("error")
})
assert.ErrorContains(t, err, "handle message failed. error", "Error getting message")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should be updated")
assert.Equal(t, int64(0), asClient.lastMessageId, "Last message id should not be updated")
assert.True(t, mockActionsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockSessionClient.AssertExpectations(t), "All expectations should be met")
}
@@ -513,6 +529,12 @@ func TestGetRunnerScaleSetMessage_RetryUntilGetMessage(t *testing.T) {
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.NoError(t, err, "Error getting initial message")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
@@ -550,6 +572,12 @@ func TestGetRunnerScaleSetMessage_ErrorOnGetMessage(t *testing.T) {
})
require.NoError(t, err, "Error creating autoscaler client")
// process initial message
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
return nil
})
assert.NoError(t, err, "Error getting initial message")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
return fmt.Errorf("Should not be called")
})
@@ -592,6 +620,12 @@ func TestDeleteRunnerScaleSetMessage_Error(t *testing.T) {
})
require.NoError(t, err, "Error creating autoscaler client")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil
})
assert.NoError(t, err, "Error getting initial message")
err = asClient.GetRunnerScaleSetMessage(ctx, func(msg *actions.RunnerScaleSetMessage) error {
logger.Info("Message received", "messageId", msg.MessageId, "messageType", msg.MessageType, "body", msg.Body)
return nil

View File

@@ -3,6 +3,7 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"math"
"strings"
@@ -25,6 +26,31 @@ type Service struct {
kubeManager KubernetesManager
settings *ScaleSettings
currentRunnerCount int
metricsExporter metricsExporter
errs []error
}
func WithPrometheusMetrics(conf RunnerScaleSetListenerConfig) func(*Service) {
return func(svc *Service) {
parsedURL, err := actions.ParseGitHubConfigFromURL(conf.ConfigureUrl)
if err != nil {
svc.errs = append(svc.errs, err)
}
svc.metricsExporter.withBaseLabels(baseLabels{
scaleSetName: conf.EphemeralRunnerSetName,
scaleSetNamespace: conf.EphemeralRunnerSetNamespace,
enterprise: parsedURL.Enterprise,
organization: parsedURL.Organization,
repository: parsedURL.Repository,
})
}
}
func WithLogger(logger logr.Logger) func(*Service) {
return func(s *Service) {
s.logger = logger.WithName("service")
}
}
func NewService(
@@ -33,13 +59,13 @@ func NewService(
manager KubernetesManager,
settings *ScaleSettings,
options ...func(*Service),
) *Service {
) (*Service, error) {
s := &Service{
ctx: ctx,
rsClient: rsClient,
kubeManager: manager,
settings: settings,
currentRunnerCount: 0,
currentRunnerCount: -1, // force patch on startup
logger: logr.FromContextOrDiscard(ctx),
}
@@ -47,18 +73,14 @@ func NewService(
option(s)
}
return s
if len(s.errs) > 0 {
return nil, errors.Join(s.errs...)
}
return s, nil
}
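Because NewService now returns an error alongside the service, collecting anything the functional options queued into errs, callers check the result before starting the message loop. A minimal sketch of the new call shape, where ctx, rsClient, kubeManager, settings, conf, and logger stand in for values the caller already holds:

svc, err := NewService(
    ctx,
    rsClient,
    kubeManager,
    settings,
    WithPrometheusMetrics(conf), // may queue a URL-parse error that NewService surfaces
    WithLogger(logger),
)
if err != nil {
    return fmt.Errorf("failed to create service: %w", err)
}
return svc.Start()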
func (s *Service) Start() error {
if s.settings.MinRunners > 0 {
s.logger.Info("scale to match minimal runners.")
err := s.scaleForAssignedJobCount(0)
if err != nil {
return fmt.Errorf("could not scale to match minimal runners. %w", err)
}
}
for {
s.logger.Info("waiting for message...")
select {
@@ -89,11 +111,17 @@ func (s *Service) processMessage(message *actions.RunnerScaleSetMessage) error {
"busy runners", message.Statistics.TotalBusyRunners,
"idle runners", message.Statistics.TotalIdleRunners)
s.metricsExporter.publishStatistics(message.Statistics)
if message.MessageType != "RunnerScaleSetJobMessages" {
s.logger.Info("skip message with unknown message type.", "messageType", message.MessageType)
return nil
}
if message.MessageId == 0 && message.Body == "" { // initial message with statistics only
return s.scaleForAssignedJobCount(message.Statistics.TotalAssignedJobs)
}
var batchedMessages []json.RawMessage
if err := json.NewDecoder(strings.NewReader(message.Body)).Decode(&batchedMessages); err != nil {
return fmt.Errorf("could not decode job messages. %w", err)
@@ -114,27 +142,54 @@ func (s *Service) processMessage(message *actions.RunnerScaleSetMessage) error {
if err := json.Unmarshal(message, &jobAvailable); err != nil {
return fmt.Errorf("could not decode job available message. %w", err)
}
s.logger.Info("job available message received.", "RequestId", jobAvailable.RunnerRequestId)
s.logger.Info(
"job available message received.",
"RequestId",
jobAvailable.RunnerRequestId,
)
availableJobs = append(availableJobs, jobAvailable.RunnerRequestId)
case "JobAssigned":
var jobAssigned actions.JobAssigned
if err := json.Unmarshal(message, &jobAssigned); err != nil {
return fmt.Errorf("could not decode job assigned message. %w", err)
}
s.logger.Info("job assigned message received.", "RequestId", jobAssigned.RunnerRequestId)
s.logger.Info(
"job assigned message received.",
"RequestId",
jobAssigned.RunnerRequestId,
)
// s.metricsExporter.publishJobAssigned(&jobAssigned)
case "JobStarted":
var jobStarted actions.JobStarted
if err := json.Unmarshal(message, &jobStarted); err != nil {
return fmt.Errorf("could not decode job started message. %w", err)
}
s.logger.Info("job started message received.", "RequestId", jobStarted.RunnerRequestId, "RunnerId", jobStarted.RunnerId)
s.logger.Info(
"job started message received.",
"RequestId",
jobStarted.RunnerRequestId,
"RunnerId",
jobStarted.RunnerId,
)
s.metricsExporter.publishJobStarted(&jobStarted)
s.updateJobInfoForRunner(jobStarted)
case "JobCompleted":
var jobCompleted actions.JobCompleted
if err := json.Unmarshal(message, &jobCompleted); err != nil {
return fmt.Errorf("could not decode job completed message. %w", err)
}
s.logger.Info("job completed message received.", "RequestId", jobCompleted.RunnerRequestId, "Result", jobCompleted.Result, "RunnerId", jobCompleted.RunnerId, "RunnerName", jobCompleted.RunnerName)
s.logger.Info(
"job completed message received.",
"RequestId",
jobCompleted.RunnerRequestId,
"Result",
jobCompleted.Result,
"RunnerId",
jobCompleted.RunnerId,
"RunnerName",
jobCompleted.RunnerName,
)
s.metricsExporter.publishJobCompleted(&jobCompleted)
default:
s.logger.Info("unknown job message type.", "messageType", messageType.MessageType)
}
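Each branch of this switch decodes one element of Body, which is itself a JSON array of job messages keyed by messageType. A hedged sketch of driving processMessage directly, modeled on the unit tests later in this diff (the JSON key casing inside Body is an assumption for illustration):

err := svc.processMessage(&actions.RunnerScaleSetMessage{
    MessageId:   1,
    MessageType: "RunnerScaleSetJobMessages",
    Statistics:  &actions.RunnerScaleSetStatistic{TotalAssignedJobs: 1},
    Body:        `[{"messageType":"JobAssigned","runnerRequestId":3}]`,
})
// decode failures, AcquireJobs errors, and scale errors are all wrapped into err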
@@ -150,13 +205,15 @@ func (s *Service) processMessage(message *actions.RunnerScaleSetMessage) error {
func (s *Service) scaleForAssignedJobCount(count int) error {
targetRunnerCount := int(math.Max(math.Min(float64(s.settings.MaxRunners), float64(count)), float64(s.settings.MinRunners)))
s.metricsExporter.publishDesiredRunners(targetRunnerCount)
if targetRunnerCount != s.currentRunnerCount {
s.logger.Info("try scale runner request up/down base on assigned job count",
"assigned job", count,
"decision", targetRunnerCount,
"min", s.settings.MinRunners,
"max", s.settings.MaxRunners,
"currentRunnerCount", s.currentRunnerCount)
"currentRunnerCount", s.currentRunnerCount,
)
err := s.kubeManager.ScaleEphemeralRunnerSet(s.ctx, s.settings.Namespace, s.settings.ResourceName, targetRunnerCount)
if err != nil {
return fmt.Errorf("could not scale ephemeral runner set (%s/%s). %w", s.settings.Namespace, s.settings.ResourceName, err)
@@ -177,7 +234,8 @@ func (s *Service) updateJobInfoForRunner(jobInfo actions.JobStarted) {
"workflowRef", jobInfo.JobWorkflowRef,
"workflowRunId", jobInfo.WorkflowRunId,
"jobDisplayName", jobInfo.JobDisplayName,
"requestId", jobInfo.RunnerRequestId)
"requestId", jobInfo.RunnerRequestId,
)
err := s.kubeManager.UpdateEphemeralRunnerWithJobInfo(s.ctx, s.settings.Namespace, jobInfo.RunnerName, jobInfo.OwnerName, jobInfo.RepositoryName, jobInfo.JobWorkflowRef, jobInfo.JobDisplayName, jobInfo.WorkflowRunId, jobInfo.RunnerRequestId)
if err != nil {
s.logger.Error(err, "could not update ephemeral runner with job info", "runnerName", jobInfo.RunnerName, "requestId", jobInfo.RunnerRequestId)

View File

@@ -21,7 +21,7 @@ func TestNewService(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -36,6 +36,7 @@ func TestNewService(t *testing.T) {
},
)
require.NoError(t, err)
assert.Equal(t, logger, service.logger)
}
@@ -47,7 +48,7 @@ func TestStart(t *testing.T) {
require.NoError(t, log_err, "Error creating logger")
ctx, cancel := context.WithCancel(context.Background())
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -61,9 +62,11 @@ func TestStart(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockRsClient.On("GetRunnerScaleSetMessage", service.ctx, mock.Anything).Run(func(args mock.Arguments) { cancel() }).Return(nil).Once()
err := service.Start()
err = service.Start()
assert.NoError(t, err, "Unexpected error")
assert.True(t, mockRsClient.AssertExpectations(t), "All expectations should be met")
@@ -72,13 +75,14 @@ func TestStart(t *testing.T) {
func TestStart_ScaleToMinRunners(t *testing.T) {
mockRsClient := &MockRunnerScaleSetClient{}
mockKubeManager := &MockKubernetesManager{}
logger, log_err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
logger = logger.WithName(t.Name())
require.NoError(t, log_err, "Error creating logger")
ctx, cancel := context.WithCancel(context.Background())
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -92,11 +96,17 @@ func TestStart_ScaleToMinRunners(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockRsClient.On("GetRunnerScaleSetMessage", ctx, mock.Anything).Run(func(args mock.Arguments) {
_ = service.scaleForAssignedJobCount(5)
}).Return(nil)
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 5).Run(func(args mock.Arguments) { cancel() }).Return(nil).Once()
err := service.Start()
err = service.Start()
assert.NoError(t, err, "Unexpected error")
assert.True(t, mockRsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockKubeManager.AssertExpectations(t), "All expectations should be met")
}
@@ -110,7 +120,7 @@ func TestStart_ScaleToMinRunnersFailed(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -124,11 +134,16 @@ func TestStart_ScaleToMinRunnersFailed(t *testing.T) {
s.logger = logger
},
)
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 5).Return(fmt.Errorf("error")).Once()
require.NoError(t, err)
err := service.Start()
c := mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 5).Return(fmt.Errorf("error")).Once()
mockRsClient.On("GetRunnerScaleSetMessage", ctx, mock.Anything).Run(func(args mock.Arguments) {
_ = service.scaleForAssignedJobCount(5)
}).Return(c.ReturnArguments.Get(0))
assert.ErrorContains(t, err, "could not scale to match minimal runners", "Unexpected error")
err = service.Start()
assert.ErrorContains(t, err, "could not get and process message", "Unexpected error")
assert.True(t, mockRsClient.AssertExpectations(t), "All expectations should be met")
assert.True(t, mockKubeManager.AssertExpectations(t), "All expectations should be met")
}
@@ -141,7 +156,7 @@ func TestStart_GetMultipleMessages(t *testing.T) {
require.NoError(t, log_err, "Error creating logger")
ctx, cancel := context.WithCancel(context.Background())
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -155,10 +170,12 @@ func TestStart_GetMultipleMessages(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockRsClient.On("GetRunnerScaleSetMessage", service.ctx, mock.Anything).Return(nil).Times(5)
mockRsClient.On("GetRunnerScaleSetMessage", service.ctx, mock.Anything).Run(func(args mock.Arguments) { cancel() }).Return(nil).Once()
err := service.Start()
err = service.Start()
assert.NoError(t, err, "Unexpected error")
assert.True(t, mockRsClient.AssertExpectations(t), "All expectations should be met")
@@ -174,7 +191,7 @@ func TestStart_ErrorOnMessage(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -188,10 +205,12 @@ func TestStart_ErrorOnMessage(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockRsClient.On("GetRunnerScaleSetMessage", service.ctx, mock.Anything).Return(nil).Times(2)
mockRsClient.On("GetRunnerScaleSetMessage", service.ctx, mock.Anything).Return(fmt.Errorf("error")).Once()
err := service.Start()
err = service.Start()
assert.ErrorContains(t, err, "could not get and process message. error", "Unexpected error")
assert.True(t, mockRsClient.AssertExpectations(t), "All expectations should be met")
@@ -207,7 +226,7 @@ func TestProcessMessage_NoStatistic(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -221,8 +240,9 @@ func TestProcessMessage_NoStatistic(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
err := service.processMessage(&actions.RunnerScaleSetMessage{
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "test",
Body: "test",
@@ -242,7 +262,7 @@ func TestProcessMessage_IgnoreUnknownMessageType(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -256,8 +276,9 @@ func TestProcessMessage_IgnoreUnknownMessageType(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
err := service.processMessage(&actions.RunnerScaleSetMessage{
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "unknown",
Statistics: &actions.RunnerScaleSetStatistic{
@@ -280,7 +301,7 @@ func TestProcessMessage_InvalidBatchMessageJson(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -295,7 +316,9 @@ func TestProcessMessage_InvalidBatchMessageJson(t *testing.T) {
},
)
err := service.processMessage(&actions.RunnerScaleSetMessage{
require.NoError(t, err)
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "RunnerScaleSetJobMessages",
Statistics: &actions.RunnerScaleSetStatistic{
@@ -318,7 +341,7 @@ func TestProcessMessage_InvalidJobMessageJson(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -332,8 +355,9 @@ func TestProcessMessage_InvalidJobMessageJson(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
err := service.processMessage(&actions.RunnerScaleSetMessage{
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "RunnerScaleSetJobMessages",
Statistics: &actions.RunnerScaleSetStatistic{
@@ -356,7 +380,7 @@ func TestProcessMessage_MultipleMessages(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -370,10 +394,12 @@ func TestProcessMessage_MultipleMessages(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockRsClient.On("AcquireJobsForRunnerScaleSet", ctx, mock.MatchedBy(func(ids []int64) bool { return ids[0] == 3 && ids[1] == 4 })).Return(nil).Once()
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 2).Run(func(args mock.Arguments) { cancel() }).Return(nil).Once()
err := service.processMessage(&actions.RunnerScaleSetMessage{
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "RunnerScaleSetJobMessages",
Statistics: &actions.RunnerScaleSetStatistic{
@@ -397,7 +423,7 @@ func TestProcessMessage_AcquireJobsFailed(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -411,9 +437,11 @@ func TestProcessMessage_AcquireJobsFailed(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockRsClient.On("AcquireJobsForRunnerScaleSet", ctx, mock.MatchedBy(func(ids []int64) bool { return ids[0] == 1 })).Return(fmt.Errorf("error")).Once()
err := service.processMessage(&actions.RunnerScaleSetMessage{
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "RunnerScaleSetJobMessages",
Statistics: &actions.RunnerScaleSetStatistic{
@@ -437,7 +465,7 @@ func TestScaleForAssignedJobCount_DeDupScale(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -451,9 +479,11 @@ func TestScaleForAssignedJobCount_DeDupScale(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 2).Return(nil).Once()
err := service.scaleForAssignedJobCount(2)
err = service.scaleForAssignedJobCount(2)
require.NoError(t, err, "Unexpected error")
err = service.scaleForAssignedJobCount(2)
require.NoError(t, err, "Unexpected error")
@@ -476,7 +506,7 @@ func TestScaleForAssignedJobCount_ScaleWithinMinMax(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -490,13 +520,15 @@ func TestScaleForAssignedJobCount_ScaleWithinMinMax(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 1).Return(nil).Once()
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 3).Return(nil).Once()
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 5).Return(nil).Once()
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 1).Return(nil).Once()
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 5).Return(nil).Once()
err := service.scaleForAssignedJobCount(0)
err = service.scaleForAssignedJobCount(0)
require.NoError(t, err, "Unexpected error")
err = service.scaleForAssignedJobCount(3)
require.NoError(t, err, "Unexpected error")
@@ -521,7 +553,7 @@ func TestScaleForAssignedJobCount_ScaleFailed(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -535,9 +567,11 @@ func TestScaleForAssignedJobCount_ScaleFailed(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
mockKubeManager.On("ScaleEphemeralRunnerSet", ctx, service.settings.Namespace, service.settings.ResourceName, 2).Return(fmt.Errorf("error"))
err := service.scaleForAssignedJobCount(2)
err = service.scaleForAssignedJobCount(2)
assert.ErrorContains(t, err, "could not scale ephemeral runner set (namespace/resource). error", "Unexpected error")
assert.True(t, mockRsClient.AssertExpectations(t), "All expectations should be met")
@@ -553,7 +587,7 @@ func TestProcessMessage_JobStartedMessage(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -567,12 +601,14 @@ func TestProcessMessage_JobStartedMessage(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
service.currentRunnerCount = 1
mockKubeManager.On("UpdateEphemeralRunnerWithJobInfo", ctx, service.settings.Namespace, "runner1", "owner1", "repo1", ".github/workflows/ci.yaml", "job1", int64(100), int64(3)).Run(func(args mock.Arguments) { cancel() }).Return(nil).Once()
mockRsClient.On("AcquireJobsForRunnerScaleSet", ctx, mock.MatchedBy(func(ids []int64) bool { return len(ids) == 0 })).Return(nil).Once()
err := service.processMessage(&actions.RunnerScaleSetMessage{
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "RunnerScaleSetJobMessages",
Statistics: &actions.RunnerScaleSetStatistic{
@@ -596,7 +632,7 @@ func TestProcessMessage_JobStartedMessageIgnoreRunnerUpdateError(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
service := NewService(
service, err := NewService(
ctx,
mockRsClient,
mockKubeManager,
@@ -610,12 +646,14 @@ func TestProcessMessage_JobStartedMessageIgnoreRunnerUpdateError(t *testing.T) {
s.logger = logger
},
)
require.NoError(t, err)
service.currentRunnerCount = 1
mockKubeManager.On("UpdateEphemeralRunnerWithJobInfo", ctx, service.settings.Namespace, "runner1", "owner1", "repo1", ".github/workflows/ci.yaml", "job1", int64(100), int64(3)).Run(func(args mock.Arguments) { cancel() }).Return(fmt.Errorf("error")).Once()
mockRsClient.On("AcquireJobsForRunnerScaleSet", ctx, mock.MatchedBy(func(ids []int64) bool { return len(ids) == 0 })).Return(nil).Once()
err := service.processMessage(&actions.RunnerScaleSetMessage{
err = service.processMessage(&actions.RunnerScaleSetMessage{
MessageId: 1,
MessageType: "RunnerScaleSetJobMessages",
Statistics: &actions.RunnerScaleSetStatistic{
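The test hunks above all track a single API change: NewService now returns (*Service, error) and accepts functional options, so every call site constructs the service, checks the construction error with require.NoError, and then reuses err for later calls (hence `err =` instead of `err :=`). A minimal sketch of the pattern, using the fixture names from the hunks above (mockRsClient, mockKubeManager, scaleSettings, and logger come from each test's setup, so this fragment is not compilable on its own):

service, err := NewService(
	ctx,
	mockRsClient,
	mockKubeManager,
	scaleSettings,
	func(s *Service) { s.logger = logger }, // functional option
)
require.NoError(t, err) // construction can now fail

err = service.Start() // reuse err; `:=` here would redeclare it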

View File

@@ -25,13 +25,17 @@ import (
"os"
"os/signal"
"syscall"
"time"
"github.com/actions/actions-runner-controller/build"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/actions/actions-runner-controller/logging"
"github.com/go-logr/logr"
"github.com/kelseyhightower/envconfig"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/net/http/httpproxy"
"golang.org/x/sync/errgroup"
)
type RunnerScaleSetListenerConfig struct {
@@ -45,19 +49,34 @@ type RunnerScaleSetListenerConfig struct {
MaxRunners int `split_words:"true"`
MinRunners int `split_words:"true"`
RunnerScaleSetId int `split_words:"true"`
RunnerScaleSetName string `split_words:"true"`
ServerRootCA string `split_words:"true"`
LogLevel string `split_words:"true"`
LogFormat string `split_words:"true"`
MetricsAddr string `split_words:"true"`
MetricsEndpoint string `split_words:"true"`
}
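These fields are populated from the environment by envconfig.Process("github", &rc) below: the split_words tag turns each CamelCase field name into an underscore-separated one, and the "github" prefix is prepended, so MetricsAddr is read from GITHUB_METRICS_ADDR. A small self-contained sketch of the mapping, covering only the two metrics fields with illustrative values:

package main

import (
	"fmt"
	"os"

	"github.com/kelseyhightower/envconfig"
)

type listenerConfig struct {
	MetricsAddr     string `split_words:"true"` // read from GITHUB_METRICS_ADDR
	MetricsEndpoint string `split_words:"true"` // read from GITHUB_METRICS_ENDPOINT
}

func main() {
	os.Setenv("GITHUB_METRICS_ADDR", ":8080") // illustrative values
	os.Setenv("GITHUB_METRICS_ENDPOINT", "/metrics")

	var c listenerConfig
	if err := envconfig.Process("github", &c); err != nil {
		panic(err)
	}
	fmt.Println(c.MetricsAddr, c.MetricsEndpoint) // :8080 /metrics
}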
func main() {
logger, err := logging.NewLogger(logging.LogLevelDebug, logging.LogFormatText)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: creating logger: %v\n", err)
var rc RunnerScaleSetListenerConfig
if err := envconfig.Process("github", &rc); err != nil {
fmt.Fprintf(os.Stderr, "Error: processing environment variables for RunnerScaleSetListenerConfig: %v\n", err)
os.Exit(1)
}
var rc RunnerScaleSetListenerConfig
if err := envconfig.Process("github", &rc); err != nil {
logger.Error(err, "Error: processing environment variables for RunnerScaleSetListenerConfig")
logLevel := string(logging.LogLevelDebug)
if rc.LogLevel != "" {
logLevel = rc.LogLevel
}
logFormat := string(logging.LogFormatText)
if rc.LogFormat != "" {
logFormat = rc.LogFormat
}
logger, err := logging.NewLogger(logLevel, logFormat)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: creating logger: %v\n", err)
os.Exit(1)
}
@@ -67,17 +86,95 @@ func main() {
os.Exit(1)
}
if err := run(rc, logger); err != nil {
logger.Error(err, "Run error")
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer stop()
g, ctx := errgroup.WithContext(ctx)
g.Go(func() error {
opts := runOptions{
serviceOptions: []func(*Service){
WithLogger(logger),
},
}
opts.serviceOptions = append(opts.serviceOptions, WithPrometheusMetrics(rc))
return run(ctx, rc, logger, opts)
})
if len(rc.MetricsAddr) != 0 {
g.Go(func() error {
metricsServer := metricsServer{
rc: rc,
logger: logger,
}
g.Go(func() error {
<-ctx.Done()
return metricsServer.shutdown()
})
return metricsServer.listenAndServe()
})
}
if err := g.Wait(); err != nil {
logger.Error(err, "Error encountered")
os.Exit(1)
}
}
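The shutdown wiring above is worth reading twice: the metrics goroutine spawns a sibling goroutine that blocks on ctx.Done() and then calls shutdown(), so a SIGINT/SIGTERM (or a failure in the listener goroutine, which cancels the errgroup context) also stops the HTTP server. A stripped-down, self-contained sketch of the same pattern (address and timeout are illustrative):

package main

import (
	"context"
	"errors"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"

	"golang.org/x/sync/errgroup"
)

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	srv := &http.Server{Addr: ":8080", Handler: http.NewServeMux()}

	g, ctx := errgroup.WithContext(ctx)
	g.Go(srv.ListenAndServe) // returns http.ErrServerClosed after Shutdown
	g.Go(func() error {
		<-ctx.Done() // signal received, or a sibling goroutine returned an error
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		return srv.Shutdown(shutdownCtx)
	})

	if err := g.Wait(); err != nil && !errors.Is(err, http.ErrServerClosed) {
		log.Fatal(err)
	}
}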
func run(rc RunnerScaleSetListenerConfig, logger logr.Logger) error {
// Create root context and hook with sigint and sigterm
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer stop()
type metricsServer struct {
rc RunnerScaleSetListenerConfig
logger logr.Logger
srv *http.Server
}
func (s *metricsServer) shutdown() error {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
return s.srv.Shutdown(ctx)
}
func (s *metricsServer) listenAndServe() error {
reg := prometheus.NewRegistry()
reg.MustRegister(
// availableJobs,
// acquiredJobs,
assignedJobs,
runningJobs,
registeredRunners,
busyRunners,
minRunners,
maxRunners,
desiredRunners,
idleRunners,
startedJobsTotal,
completedJobsTotal,
// jobQueueDurationSeconds,
jobStartupDurationSeconds,
jobExecutionDurationSeconds,
)
mux := http.NewServeMux()
mux.Handle(
s.rc.MetricsEndpoint,
promhttp.HandlerFor(reg, promhttp.HandlerOpts{Registry: reg}),
)
s.srv = &http.Server{
Addr: s.rc.MetricsAddr,
Handler: mux,
}
s.logger.Info("Starting metrics server", "address", s.srv.Addr)
return s.srv.ListenAndServe()
}
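Note that listenAndServe registers the collectors on a private prometheus.NewRegistry() rather than the package-global default registry, so the endpoint exports only the gha_* series wired up above. A small self-contained sketch of that wiring, scraped through httptest (the gauge here is an illustrative stand-in, not one of the real collectors):

package main

import (
	"fmt"
	"io"
	"net/http/httptest"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	reg := prometheus.NewRegistry()
	demo := prometheus.NewGauge(prometheus.GaugeOpts{
		Subsystem: "gha",
		Name:      "demo_assigned_jobs",
		Help:      "Illustrative stand-in for the real collectors.",
	})
	reg.MustRegister(demo)
	demo.Set(3)

	srv := httptest.NewServer(promhttp.HandlerFor(reg, promhttp.HandlerOpts{Registry: reg}))
	defer srv.Close()

	resp, err := srv.Client().Get(srv.URL + "/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // includes: gha_demo_assigned_jobs 3
}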
type runOptions struct {
serviceOptions []func(*Service)
}
func run(ctx context.Context, rc RunnerScaleSetListenerConfig, logger logr.Logger, opts runOptions) error {
// Create root context and hook with sigint and sigterm
creds := &actions.ActionsAuth{}
if rc.Token != "" {
creds.Token = rc.Token
@@ -93,8 +190,12 @@ func run(rc RunnerScaleSetListenerConfig, logger logr.Logger) error {
rc,
creds,
actions.WithLogger(logger),
actions.WithUserAgent(fmt.Sprintf("actions-runner-controller/%s", build.Version)),
)
actionsServiceClient.SetUserAgent(actions.UserAgentInfo{
Version: build.Version,
CommitSHA: build.CommitSHA,
ScaleSetID: rc.RunnerScaleSetId,
})
if err != nil {
return fmt.Errorf("failed to create an Actions Service client: %w", err)
}
@@ -119,9 +220,10 @@ func run(rc RunnerScaleSetListenerConfig, logger logr.Logger) error {
MinRunners: rc.MinRunners,
}
service := NewService(ctx, autoScalerClient, kubeManager, scaleSettings, func(s *Service) {
s.logger = logger.WithName("service")
})
service, err := NewService(ctx, autoScalerClient, kubeManager, scaleSettings, opts.serviceOptions...)
if err != nil {
return fmt.Errorf("failed to create new service: %v", err)
}
// Start listening for messages
if err = service.Start(); err != nil {

View File

@@ -0,0 +1,330 @@
package main
import (
"strconv"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/prometheus/client_golang/prometheus"
)
// label names
const (
labelKeyRunnerScaleSetName = "name"
labelKeyRunnerScaleSetNamespace = "namespace"
labelKeyEnterprise = "enterprise"
labelKeyOrganization = "organization"
labelKeyRepository = "repository"
labelKeyJobName = "job_name"
labelKeyJobWorkflowRef = "job_workflow_ref"
labelKeyEventName = "event_name"
labelKeyJobResult = "job_result"
labelKeyRunnerID = "runner_id"
labelKeyRunnerName = "runner_name"
)
const githubScaleSetSubsystem = "gha"
// labels
var (
scaleSetLabels = []string{
labelKeyRunnerScaleSetName,
labelKeyRepository,
labelKeyOrganization,
labelKeyEnterprise,
labelKeyRunnerScaleSetNamespace,
}
jobLabels = []string{
labelKeyRepository,
labelKeyOrganization,
labelKeyEnterprise,
labelKeyJobName,
labelKeyJobWorkflowRef,
labelKeyEventName,
}
completedJobsTotalLabels = append(jobLabels, labelKeyJobResult, labelKeyRunnerID, labelKeyRunnerName)
jobExecutionDurationLabels = append(jobLabels, labelKeyJobResult, labelKeyRunnerID, labelKeyRunnerName)
startedJobsTotalLabels = append(jobLabels, labelKeyRunnerID, labelKeyRunnerName)
jobStartupDurationLabels = append(jobLabels, labelKeyRunnerID, labelKeyRunnerName)
)
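The two label vocabularies drive series cardinality: the scale-set gauges carry five fixed labels, while the job counters and histograms add per-observation labels (runner id, runner name, and, for completed jobs, the result). A minimal self-contained sketch of writing to a vector keyed the same way as the scale-set gauges (label values are illustrative; registration with a registry is omitted for brevity):

package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	v := prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Subsystem: "gha", Name: "demo_gauge", Help: "demo"},
		[]string{"name", "namespace", "enterprise", "organization", "repository"},
	)
	v.With(prometheus.Labels{
		"name":         "my-scale-set", // runner scale set name
		"namespace":    "arc-runners",  // scale set namespace
		"enterprise":   "",             // illustrative: empty when not enterprise-scoped
		"organization": "my-org",
		"repository":   "my-repo",
	}).Set(3)
}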
// metrics
var (
// availableJobs = prometheus.NewGaugeVec(
// prometheus.GaugeOpts{
// Subsystem: githubScaleSetSubsystem,
// Name: "available_jobs",
// Help: "Number of jobs with `runs-on` matching the runner scale set name. Jobs are not yet assigned to the runner scale set.",
// },
// scaleSetLabels,
// )
//
// acquiredJobs = prometheus.NewGaugeVec(
// prometheus.GaugeOpts{
// Subsystem: githubScaleSetSubsystem,
// Name: "acquired_jobs",
// Help: "Number of jobs acquired by the scale set.",
// },
// scaleSetLabels,
// )
assignedJobs = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "assigned_jobs",
Help: "Number of jobs assigned to this scale set.",
},
scaleSetLabels,
)
runningJobs = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "running_jobs",
Help: "Number of jobs running (or about to be run).",
},
scaleSetLabels,
)
registeredRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "registered_runners",
Help: "Number of runners registered by the scale set.",
},
scaleSetLabels,
)
busyRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "busy_runners",
Help: "Number of registered runners running a job.",
},
scaleSetLabels,
)
minRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "min_runners",
Help: "Minimum number of runners.",
},
scaleSetLabels,
)
maxRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "max_runners",
Help: "Maximum number of runners.",
},
scaleSetLabels,
)
desiredRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "desired_runners",
Help: "Number of runners desired by the scale set.",
},
scaleSetLabels,
)
idleRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetSubsystem,
Name: "idle_runners",
Help: "Number of registered runners not running a job.",
},
scaleSetLabels,
)
startedJobsTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Subsystem: githubScaleSetSubsystem,
Name: "started_jobs_total",
Help: "Total number of jobs started.",
},
startedJobsTotalLabels,
)
completedJobsTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "completed_jobs_total",
Help: "Total number of jobs completed.",
Subsystem: githubScaleSetSubsystem,
},
completedJobsTotalLabels,
)
// jobQueueDurationSeconds = prometheus.NewHistogramVec(
// prometheus.HistogramOpts{
// Subsystem: githubScaleSetSubsystem,
// Name: "job_queue_duration_seconds",
// Help: "Time spent waiting for workflow jobs to get assigned to the scale set after queueing (in seconds).",
// Buckets: runtimeBuckets,
// },
// jobLabels,
// )
jobStartupDurationSeconds = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Subsystem: githubScaleSetSubsystem,
Name: "job_startup_duration_seconds",
Help: "Time spent waiting for workflow job to get started on the runner owned by the scale set (in seconds).",
Buckets: runtimeBuckets,
},
jobStartupDurationLabels,
)
jobExecutionDurationSeconds = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Subsystem: githubScaleSetSubsystem,
Name: "job_execution_duration_seconds",
Help: "Time spent executing workflow jobs by the scale set (in seconds).",
Buckets: runtimeBuckets,
},
jobExecutionDurationLabels,
)
)
var runtimeBuckets []float64 = []float64{
0.01,
0.05,
0.1,
0.5,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
12,
15,
18,
20,
25,
30,
40,
50,
60,
70,
80,
90,
100,
110,
120,
150,
180,
210,
240,
300,
360,
420,
480,
540,
600,
900,
1200,
1800,
2400,
3000,
3600,
}
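One bucket layout serves both duration histograms, which is why it stretches from 10ms up to an hour. Prometheus histogram buckets are cumulative: an observation is counted in every bucket whose upper bound is at least the observed value, plus the implicit +Inf bucket. A tiny self-contained illustration of that semantics, using a subset of the buckets above:

package main

import "fmt"

func main() {
	buckets := []float64{0.01, 0.1, 1, 10, 30, 50, 60, 300, 3600} // subset of runtimeBuckets
	value := 42.5                                                 // seconds, e.g. a job startup duration
	for _, ub := range buckets {
		if value <= ub {
			fmt.Printf("counted in le=%v\n", ub) // le=50, le=60, le=300, le=3600
		}
	}
}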
type metricsExporter struct {
// Initialized during creation.
baseLabels
}
type baseLabels struct {
scaleSetName string
scaleSetNamespace string
enterprise string
organization string
repository string
}
func (b *baseLabels) jobLabels(jobBase *actions.JobMessageBase) prometheus.Labels {
return prometheus.Labels{
labelKeyEnterprise: b.enterprise,
labelKeyOrganization: b.organization,
labelKeyRepository: b.repository,
labelKeyJobName: jobBase.JobDisplayName,
labelKeyJobWorkflowRef: jobBase.JobWorkflowRef,
labelKeyEventName: jobBase.EventName,
}
}
func (b *baseLabels) scaleSetLabels() prometheus.Labels {
return prometheus.Labels{
labelKeyRunnerScaleSetName: b.scaleSetName,
labelKeyRunnerScaleSetNamespace: b.scaleSetNamespace,
labelKeyEnterprise: b.enterprise,
labelKeyOrganization: b.organization,
labelKeyRepository: b.repository,
}
}
func (b *baseLabels) completedJobLabels(msg *actions.JobCompleted) prometheus.Labels {
l := b.jobLabels(&msg.JobMessageBase)
l[labelKeyRunnerID] = strconv.Itoa(msg.RunnerId)
l[labelKeyJobResult] = msg.Result
l[labelKeyRunnerName] = msg.RunnerName
return l
}
func (b *baseLabels) startedJobLabels(msg *actions.JobStarted) prometheus.Labels {
l := b.jobLabels(&msg.JobMessageBase)
l[labelKeyRunnerID] = strconv.Itoa(msg.RunnerId)
l[labelKeyRunnerName] = msg.RunnerName
return l
}
func (m *metricsExporter) withBaseLabels(base baseLabels) {
m.baseLabels = base
}
func (m *metricsExporter) publishStatistics(stats *actions.RunnerScaleSetStatistic) {
l := m.scaleSetLabels()
// availableJobs.With(l).Set(float64(stats.TotalAvailableJobs))
// acquiredJobs.With(l).Set(float64(stats.TotalAcquiredJobs))
assignedJobs.With(l).Set(float64(stats.TotalAssignedJobs))
runningJobs.With(l).Set(float64(stats.TotalRunningJobs))
registeredRunners.With(l).Set(float64(stats.TotalRegisteredRunners))
busyRunners.With(l).Set(float64(stats.TotalBusyRunners))
idleRunners.With(l).Set(float64(stats.TotalIdleRunners))
}
func (m *metricsExporter) publishJobStarted(msg *actions.JobStarted) {
l := m.startedJobLabels(msg)
startedJobsTotal.With(l).Inc()
startupDuration := msg.JobMessageBase.RunnerAssignTime.Unix() - msg.JobMessageBase.ScaleSetAssignTime.Unix()
jobStartupDurationSeconds.With(l).Observe(float64(startupDuration))
}
// func (m *metricsExporter) publishJobAssigned(msg *actions.JobAssigned) {
// l := m.jobLabels(&msg.JobMessageBase)
// queueDuration := msg.JobMessageBase.ScaleSetAssignTime.Unix() - msg.JobMessageBase.QueueTime.Unix()
// jobQueueDurationSeconds.With(l).Observe(float64(queueDuration))
// }
func (m *metricsExporter) publishJobCompleted(msg *actions.JobCompleted) {
l := m.completedJobLabels(msg)
completedJobsTotal.With(l).Inc()
executionDuration := msg.JobMessageBase.FinishTime.Unix() - msg.JobMessageBase.RunnerAssignTime.Unix()
jobExecutionDurationSeconds.With(l).Observe(float64(executionDuration))
}
func (m *metricsExporter) publishDesiredRunners(count int) {
desiredRunners.With(m.scaleSetLabels()).Set(float64(count))
}
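Both duration computations subtract Unix() timestamps, so observations land on whole-second boundaries even though runtimeBuckets starts at 10ms; the sub-second buckets can only ever see exact zeros. A self-contained illustration, assuming (as the Unix() calls suggest) that these message fields are time.Time values; the Sub-based variant is a hypothetical refinement, not part of this diff:

package main

import (
	"fmt"
	"time"
)

func main() {
	assigned := time.Date(2023, 9, 1, 10, 0, 0, 250_000_000, time.UTC) // ScaleSetAssignTime
	started := time.Date(2023, 9, 1, 10, 0, 0, 900_000_000, time.UTC)  // RunnerAssignTime

	// What the exporter above does: whole seconds via Unix().
	fmt.Println(started.Unix() - assigned.Unix()) // 0

	// Hypothetical sub-second alternative using time.Time arithmetic.
	fmt.Println(started.Sub(assigned).Seconds()) // 0.65
}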

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.16.0. DO NOT EDIT.
// Code generated by mockery v2.33.2. DO NOT EDIT.
package main
@@ -41,13 +41,12 @@ func (_m *MockKubernetesManager) UpdateEphemeralRunnerWithJobInfo(ctx context.Co
return r0
}
type mockConstructorTestingTNewMockKubernetesManager interface {
// NewMockKubernetesManager creates a new instance of MockKubernetesManager. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
// The first argument is typically a *testing.T value.
func NewMockKubernetesManager(t interface {
mock.TestingT
Cleanup(func())
}
// NewMockKubernetesManager creates a new instance of MockKubernetesManager. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewMockKubernetesManager(t mockConstructorTestingTNewMockKubernetesManager) *MockKubernetesManager {
}) *MockKubernetesManager {
mock := &MockKubernetesManager{}
mock.Mock.Test(t)

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.16.0. DO NOT EDIT.
// Code generated by mockery v2.33.2. DO NOT EDIT.
package main
@@ -43,13 +43,12 @@ func (_m *MockRunnerScaleSetClient) GetRunnerScaleSetMessage(ctx context.Context
return r0
}
type mockConstructorTestingTNewMockRunnerScaleSetClient interface {
// NewMockRunnerScaleSetClient creates a new instance of MockRunnerScaleSetClient. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
// The first argument is typically a *testing.T value.
func NewMockRunnerScaleSetClient(t interface {
mock.TestingT
Cleanup(func())
}
// NewMockRunnerScaleSetClient creates a new instance of MockRunnerScaleSetClient. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewMockRunnerScaleSetClient(t mockConstructorTestingTNewMockRunnerScaleSetClient) *MockRunnerScaleSetClient {
}) *MockRunnerScaleSetClient {
mock := &MockRunnerScaleSetClient{}
mock.Mock.Test(t)
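Both mocks were regenerated with mockery v2.33.2, which inlines the constructor's testing parameter instead of declaring a named interface for it. As the generated comment says, the constructor also registers a cleanup function that asserts the mock's expectations, so tests using these constructors do not need explicit AssertExpectations calls at the end. A usage sketch (identifiers come from the diffs above, so this fragment is not compilable on its own):

func TestWithGeneratedMock(t *testing.T) {
	client := NewMockRunnerScaleSetClient(t) // cleanup asserts expectations when the test ends
	client.On("GetRunnerScaleSetMessage", mock.Anything, mock.Anything).Return(nil).Once()
	// ... exercise the code under test ...
}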

View File

@@ -124,7 +124,7 @@ func main() {
if watchNamespace == "" {
logger.Info("-watch-namespace is empty. HorizontalRunnerAutoscalers in all the namespaces are watched, cached, and considered as scale targets.")
} else {
logger.Info("-watch-namespace is %q. Only HorizontalRunnerAutoscalers in %q are watched, cached, and considered as scale targets.", watchNamespace, watchNamespace)
logger.Info(fmt.Sprintf("-watch-namespace is %q. Only HorizontalRunnerAutoscalers in %q are watched, cached, and considered as scale targets.", watchNamespace, watchNamespace))
}
ctrl.SetLogger(logger)
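This is a genuine bug fix: logr's Info(msg string, keysAndValues ...interface{}) takes a message plus structured key/value pairs, not a printf format, so the old call logged the raw format string and treated the two watchNamespace arguments as a single key/value pair. Wrapping with fmt.Sprintf restores the intended message; a structured key/value call would be the other idiomatic option (illustrative fragment, same identifiers as the hunk above):

// printf-style, as in the fix above:
logger.Info(fmt.Sprintf("-watch-namespace is %q. Only HorizontalRunnerAutoscalers in %q are watched, cached, and considered as scale targets.", watchNamespace, watchNamespace))

// structured alternative:
logger.Info("Watching a single namespace for HorizontalRunnerAutoscalers", "watch-namespace", watchNamespace)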

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: ephemeralrunners.actions.github.com
spec:
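The substantive change in these CRD diffs is the controller-gen bump from v0.7.0 to v0.11.3; the many added x-kubernetes-map-type: atomic lines come from the newer generator propagating atomic map/struct-type markers on the embedded upstream Kubernetes pod-spec types, not from any edit to ARC's own API. For orientation, a sketch of the marker form controller-gen consumes on Go API types (the type and field here are hypothetical, not ARC code):

// Example is a hypothetical API type; controller-gen renders the marker
// below as `x-kubernetes-map-type: atomic` in the generated CRD schema.
type Example struct {
	// +mapType=atomic
	Selector map[string]string `json:"selector,omitempty"`
}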
@@ -82,6 +83,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
type: object
metadata:
@@ -195,6 +197,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -255,10 +258,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -301,6 +306,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -331,6 +337,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -386,6 +393,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -416,6 +424,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -470,6 +479,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -500,6 +510,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -555,6 +566,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -585,6 +597,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -646,6 +659,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -658,6 +672,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -677,6 +692,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -692,6 +708,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -712,6 +729,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -725,6 +743,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1470,6 +1489,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1482,6 +1502,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1501,6 +1522,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1516,6 +1538,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1536,6 +1559,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1549,6 +1573,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2260,6 +2285,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
description: 'List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/'
@@ -2305,6 +2331,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2317,6 +2344,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2336,6 +2364,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2351,6 +2380,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2371,6 +2401,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2384,6 +2415,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3316,6 +3348,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -3340,7 +3373,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -3441,6 +3474,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -3463,6 +3497,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -3503,6 +3538,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -3519,6 +3555,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -3554,6 +3591,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -3580,6 +3618,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -3600,7 +3639,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -3649,8 +3688,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3732,6 +3772,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -3794,6 +3835,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -3909,6 +3951,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -4017,6 +4060,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -4037,6 +4081,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4063,6 +4108,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4098,6 +4144,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -4172,6 +4219,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -4201,6 +4249,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -4271,6 +4320,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -4346,9 +4396,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -1,8 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
name: ephemeralrunnersets.actions.github.com
spec:
@@ -76,6 +77,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
type: object
metadata:
@@ -189,6 +191,7 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
weight:
description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
format: int32
@@ -249,10 +252,12 @@ spec:
type: object
type: array
type: object
x-kubernetes-map-type: atomic
type: array
required:
- nodeSelectorTerms
type: object
x-kubernetes-map-type: atomic
type: object
podAffinity:
description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
@@ -295,6 +300,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -325,6 +331,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -380,6 +387,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -410,6 +418,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -464,6 +473,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -494,6 +504,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -549,6 +560,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaceSelector:
description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
properties:
@@ -579,6 +591,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
namespaces:
description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
items:
@@ -640,6 +653,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -652,6 +666,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -671,6 +686,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -686,6 +702,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -706,6 +723,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -719,6 +737,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -1464,6 +1483,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -1476,6 +1496,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -1495,6 +1516,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -1510,6 +1532,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -1530,6 +1553,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -1543,6 +1567,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -2254,6 +2279,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
type: array
initContainers:
description: 'List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/'
@@ -2299,6 +2325,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels[''<KEY>'']`, `metadata.annotations[''<KEY>'']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.'
properties:
@@ -2311,6 +2338,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.'
properties:
@@ -2330,6 +2358,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
@@ -2345,6 +2374,7 @@ spec:
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
@@ -2365,6 +2395,7 @@ spec:
description: Specify whether the ConfigMap must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
type: string
@@ -2378,6 +2409,7 @@ spec:
description: Specify whether the Secret must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
type: object
type: array
image:
@@ -3310,6 +3342,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
items:
@@ -3334,7 +3367,7 @@ spec:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
type: string
whenUnsatisfiable:
description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.'
type: string
required:
- maxSkew
@@ -3435,6 +3468,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
type: string
@@ -3457,6 +3491,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeID:
description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
type: string
@@ -3497,6 +3532,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
csi:
description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
properties:
@@ -3513,6 +3549,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
readOnly:
description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
type: boolean
@@ -3548,6 +3585,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -3574,6 +3612,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -3594,7 +3633,7 @@ spec:
x-kubernetes-int-or-string: true
type: object
ephemeral:
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time."
properties:
volumeClaimTemplate:
description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). \n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil."
@@ -3643,8 +3682,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3726,6 +3766,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
@@ -3788,6 +3829,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- driver
type: object
@@ -3903,6 +3945,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
targetPortal:
description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
type: string
@@ -4011,6 +4054,7 @@ spec:
description: optional specify whether the ConfigMap or its keys must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
downwardAPI:
description: downwardAPI information about the downwardAPI data to project
properties:
@@ -4031,6 +4075,7 @@ spec:
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
mode:
description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.'
format: int32
@@ -4057,6 +4102,7 @@ spec:
required:
- resource
type: object
x-kubernetes-map-type: atomic
required:
- path
type: object
@@ -4092,6 +4138,7 @@ spec:
description: optional field specify whether the Secret or its key must be defined
type: boolean
type: object
x-kubernetes-map-type: atomic
serviceAccountToken:
description: serviceAccountToken is information about the serviceAccountToken data to project
properties:
@@ -4166,6 +4213,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
user:
description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
type: string
@@ -4195,6 +4243,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
sslEnabled:
description: sslEnabled Flag enable/disable SSL communication with Gateway, default false
type: boolean
@@ -4265,6 +4314,7 @@ spec:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
x-kubernetes-map-type: atomic
volumeName:
description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
type: string
@@ -4323,9 +4373,3 @@ spec:
subresources:
status: {}
preserveUnknownFields: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -113,7 +113,7 @@ spec:
description: ScaleDownDelaySecondsAfterScaleUp is the approximate delay for a scale down followed by a scale up. Used to prevent flapping (down->up->down->... loop)
type: integer
scaleTargetRef:
description: ScaleTargetRef sis the reference to scaled resource like RunnerDeployment
description: ScaleTargetRef is the reference to scaled resource like RunnerDeployment
properties:
kind:
description: Kind is the type of resource being referenced

View File

@@ -1497,6 +1497,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerVolumeMounts:
items:
description: VolumeMount describes a mounting of a Volume within a container.
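The new dockerVarRunVolumeSizeLimit column accepts the standard Kubernetes quantity syntax, which is what the int-or-string schema and the pattern above encode. A minimal Go sketch of which values that schema admits, using the stock apimachinery parser; the sample values are illustrative, not taken from the chart:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Each of these satisfies the quantity pattern declared in the CRD.
	for _, v := range []string{"1Gi", "500Mi", "1073741824"} {
		q, err := resource.ParseQuantity(v)
		if err != nil {
			fmt.Printf("%s rejected: %v\n", v, err)
			continue
		}
		fmt.Printf("%s = %d bytes\n", v, q.Value())
	}
}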

View File

@@ -1479,6 +1479,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerVolumeMounts:
items:
description: VolumeMount describes a mounting of a Volume within a container.

View File

@@ -1432,6 +1432,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerVolumeMounts:
items:
description: VolumeMount describes a mounting of a Volume within a container.

View File

@@ -55,6 +55,12 @@ spec:
type: integer
dockerRegistryMirror:
type: string
dockerVarRunVolumeSizeLimit:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
dockerdWithinRunnerContainer:
type: boolean
effectiveTime:

View File

@@ -56,8 +56,6 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CONTROLLER_MANAGER_LISTENER_IMAGE_PULL_POLICY
value: IfNotPresent
volumeMounts:
- name: controller-manager
mountPath: "/etc/actions-runner-controller"

View File

@@ -33,6 +33,8 @@ import (
"sigs.k8s.io/controller-runtime/pkg/source"
v1alpha1 "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/controllers/actions.github.com/metrics"
"github.com/actions/actions-runner-controller/github/actions"
hash "github.com/actions/actions-runner-controller/hash"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -49,6 +51,10 @@ type AutoscalingListenerReconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
// ListenerMetricsAddr is the address that the metrics endpoint binds to.
// If it is set to "0", the metrics server is not started.
ListenerMetricsAddr string
ListenerMetricsEndpoint string
resourceBuilder resourceBuilder
}
@@ -227,6 +233,11 @@ func (r *AutoscalingListenerReconciler) Reconcile(ctx context.Context, req ctrl.
return ctrl.Result{}, err
}
if err := r.publishRunningListener(autoscalingListener, false); err != nil {
// If publish fails, the URL is incorrect, which means the listener pod would never be able to start
return ctrl.Result{}, nil
}
// Create a listener pod in the controller namespace
log.Info("Creating a listener pod")
return r.createListenerPod(ctx, &autoscalingRunnerSet, autoscalingListener, serviceAccount, mirrorSecret, log)
@@ -242,6 +253,16 @@ func (r *AutoscalingListenerReconciler) Reconcile(ctx context.Context, req ctrl.
}
}
if listenerPod.Status.Phase == corev1.PodRunning {
if err := r.publishRunningListener(autoscalingListener, true); err != nil {
log.Error(err, "Unable to publish running listener", "namespace", listenerPod.Namespace, "name", listenerPod.Name)
// stop reconciling. We should never get to this point but if we do,
// listener won't be able to start up, and the crash from the pod should
// notify the reconciler again.
return ctrl.Result{}, nil
}
}
return ctrl.Result{}, nil
}
@@ -260,6 +281,9 @@ func (r *AutoscalingListenerReconciler) cleanupResources(ctx context.Context, au
return false, nil
case err != nil && !kerrors.IsNotFound(err):
return false, fmt.Errorf("failed to get listener pods: %v", err)
default: // NOT FOUND
_ = r.publishRunningListener(autoscalingListener, false) // If error is returned, we never published metrics so it is safe to ignore
}
logger.Info("Listener pod is deleted")
@@ -371,9 +395,22 @@ func (r *AutoscalingListenerReconciler) createListenerPod(ctx context.Context, a
envs = append(envs, env)
}
newPod := r.resourceBuilder.newScaleSetListenerPod(autoscalingListener, serviceAccount, secret, envs...)
var metricsConfig *listenerMetricsServerConfig
if r.ListenerMetricsAddr != "0" {
metricsConfig = &listenerMetricsServerConfig{
addr: r.ListenerMetricsAddr,
endpoint: r.ListenerMetricsEndpoint,
}
}
newPod, err := r.resourceBuilder.newScaleSetListenerPod(autoscalingListener, serviceAccount, secret, metricsConfig, envs...)
if err != nil {
logger.Error(err, "Failed to build listener pod")
return ctrl.Result{}, err
}
if err := ctrl.SetControllerReference(autoscalingListener, newPod, r.Scheme); err != nil {
logger.Error(err, "Failed to set controller reference")
return ctrl.Result{}, err
}
@@ -556,6 +593,30 @@ func (r *AutoscalingListenerReconciler) createRoleBindingForListener(ctx context
return ctrl.Result{Requeue: true}, nil
}
func (r *AutoscalingListenerReconciler) publishRunningListener(autoscalingListener *v1alpha1.AutoscalingListener, isUp bool) error {
githubConfigURL := autoscalingListener.Spec.GitHubConfigUrl
parsedURL, err := actions.ParseGitHubConfigFromURL(githubConfigURL)
if err != nil {
return err
}
commonLabels := metrics.CommonLabels{
Name: autoscalingListener.Name,
Namespace: autoscalingListener.Namespace,
Repository: parsedURL.Repository,
Organization: parsedURL.Organization,
Enterprise: parsedURL.Enterprise,
}
if isUp {
metrics.AddRunningListener(commonLabels)
} else {
metrics.SubRunningListener(commonLabels)
}
return nil
}
// SetupWithManager sets up the controller with the Manager.
func (r *AutoscalingListenerReconciler) SetupWithManager(mgr ctrl.Manager) error {
groupVersionIndexer := func(rawObj client.Object) []string {

View File

@@ -361,6 +361,150 @@ var _ = Describe("Test AutoScalingListener controller", func() {
})
})
var _ = Describe("Test AutoScalingListener customization", func() {
var ctx context.Context
var mgr ctrl.Manager
var autoscalingNS *corev1.Namespace
var autoscalingRunnerSet *actionsv1alpha1.AutoscalingRunnerSet
var configSecret *corev1.Secret
var autoscalingListener *actionsv1alpha1.AutoscalingListener
var runAsUser int64 = 1001
listenerPodTemplate := corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "listener",
ImagePullPolicy: corev1.PullAlways,
SecurityContext: &corev1.SecurityContext{
RunAsUser: &runAsUser,
},
},
{
Name: "sidecar",
ImagePullPolicy: corev1.PullIfNotPresent,
Image: "busybox",
},
},
SecurityContext: &corev1.PodSecurityContext{
RunAsUser: &runAsUser,
},
},
}
BeforeEach(func() {
ctx = context.Background()
autoscalingNS, mgr = createNamespace(GinkgoT(), k8sClient)
configSecret = createDefaultSecret(GinkgoT(), k8sClient, autoscalingNS.Name)
controller := &AutoscalingListenerReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Log: logf.Log,
}
err := controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
min := 1
max := 10
autoscalingRunnerSet = &actionsv1alpha1.AutoscalingRunnerSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-asrs",
Namespace: autoscalingNS.Name,
},
Spec: actionsv1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
GitHubConfigSecret: configSecret.Name,
MaxRunners: &max,
MinRunners: &min,
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "runner",
Image: "ghcr.io/actions/runner",
},
},
},
},
},
}
err = k8sClient.Create(ctx, autoscalingRunnerSet)
Expect(err).NotTo(HaveOccurred(), "failed to create AutoScalingRunnerSet")
autoscalingListener = &actionsv1alpha1.AutoscalingListener{
ObjectMeta: metav1.ObjectMeta{
Name: "test-asltest",
Namespace: autoscalingNS.Name,
},
Spec: actionsv1alpha1.AutoscalingListenerSpec{
GitHubConfigUrl: "https://github.com/owner/repo",
GitHubConfigSecret: configSecret.Name,
RunnerScaleSetId: 1,
AutoscalingRunnerSetNamespace: autoscalingRunnerSet.Namespace,
AutoscalingRunnerSetName: autoscalingRunnerSet.Name,
EphemeralRunnerSetName: "test-ers",
MaxRunners: 10,
MinRunners: 1,
Image: "ghcr.io/owner/repo",
Template: &listenerPodTemplate,
},
}
err = k8sClient.Create(ctx, autoscalingListener)
Expect(err).NotTo(HaveOccurred(), "failed to create AutoScalingListener")
startManagers(GinkgoT(), mgr)
})
Context("When creating a new AutoScalingListener", func() {
It("It should create customized pod with applied configuration", func() {
// Check if finalizer is added
created := new(actionsv1alpha1.AutoscalingListener)
Eventually(
func() (string, error) {
err := k8sClient.Get(ctx, client.ObjectKey{Name: autoscalingListener.Name, Namespace: autoscalingListener.Namespace}, created)
if err != nil {
return "", err
}
if len(created.Finalizers) == 0 {
return "", nil
}
return created.Finalizers[0], nil
},
autoscalingListenerTestTimeout,
autoscalingListenerTestInterval).Should(BeEquivalentTo(autoscalingListenerFinalizerName), "AutoScalingListener should have a finalizer")
// Check if pod is created
pod := new(corev1.Pod)
Eventually(
func() (string, error) {
err := k8sClient.Get(ctx, client.ObjectKey{Name: autoscalingListener.Name, Namespace: autoscalingListener.Namespace}, pod)
if err != nil {
return "", err
}
return pod.Name, nil
},
autoscalingListenerTestTimeout,
autoscalingListenerTestInterval,
).Should(BeEquivalentTo(autoscalingListener.Name), "Pod should be created")
Expect(pod.Spec.SecurityContext.RunAsUser).To(Equal(&runAsUser), "Pod should have the correct security context")
Expect(pod.Spec.Containers[0].Name).NotTo(Equal("listener"), "Pod should have the correct container name")
Expect(pod.Spec.Containers[0].SecurityContext.RunAsUser).To(Equal(&runAsUser), "Pod should have the correct security context")
Expect(pod.Spec.Containers[0].ImagePullPolicy).To(Equal(corev1.PullAlways), "Pod should have the correct image pull policy")
Expect(pod.Spec.Containers[1].Name).To(Equal("sidecar"), "Pod should have the correct container name")
Expect(pod.Spec.Containers[1].Image).To(Equal("busybox"), "Pod should have the correct image")
Expect(pod.Spec.Containers[1].ImagePullPolicy).To(Equal(corev1.PullIfNotPresent), "Pod should have the correct image pull policy")
})
})
})
var _ = Describe("Test AutoScalingListener controller with proxy", func() {
var ctx context.Context
var mgr ctrl.Manager

View File

@@ -49,6 +49,24 @@ const (
runnerScaleSetNameAnnotationKey = "runner-scale-set-name"
)
type UpdateStrategy string
// Defines how the controller should handle upgrades while jobs are running.
const (
// "immediate": (default) The controller will immediately apply the change causing the
// recreation of the listener and ephemeral runner set. This can lead to an
// overprovisioning of runners, if there are pending / running jobs. This should not
// be a problem at a small scale, but it could lead to a significant increase of
// resources if you have a lot of jobs running concurrently.
UpdateStrategyImmediate = UpdateStrategy("immediate")
// "eventual": The controller will remove the listener and ephemeral runner set
// immediately, but will not recreate them (to apply changes) until all
// pending / running jobs have completed.
// This can lead to a longer time to apply the change but it will ensure
// that you don't have any overprovisioning of runners.
UpdateStrategyEventual = UpdateStrategy("eventual")
)
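A hypothetical wiring sketch for this knob; the flag name, the helper name, and the fatal exit are assumptions for illustration (they are not taken from this diff), and the snippet assumes stdlib "flag" and "log" imports:

// parseUpdateStrategy (hypothetical helper) validates a command-line value
// against the two constants defined above before handing it to the reconciler.
func parseUpdateStrategy() UpdateStrategy {
	var s string
	flag.StringVar(&s, "update-strategy", string(UpdateStrategyImmediate),
		`update strategy: "immediate" or "eventual"`)
	flag.Parse()
	switch strategy := UpdateStrategy(s); strategy {
	case UpdateStrategyImmediate, UpdateStrategyEventual:
		return strategy
	default:
		log.Fatalf("invalid update strategy %q", s)
		return "" // unreachable
	}
}

The reconciler would then be constructed with UpdateStrategy: parseUpdateStrategy(), matching the field added to AutoscalingRunnerSetReconciler below.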
// AutoscalingRunnerSetReconciler reconciles a AutoscalingRunnerSet object
type AutoscalingRunnerSetReconciler struct {
client.Client
@@ -57,6 +75,7 @@ type AutoscalingRunnerSetReconciler struct {
ControllerNamespace string
DefaultRunnerScaleSetListenerImage string
DefaultRunnerScaleSetListenerImagePullSecrets []string
UpdateStrategy UpdateStrategy
ActionsClient actions.MultiClient
resourceBuilder resourceBuilder
@@ -218,7 +237,48 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
log.Info("Find existing ephemeral runner set", "name", runnerSet.Name, "specHash", runnerSet.Labels[labelKeyRunnerSpecHash])
}
// Make sure the AutoscalingListener is up and running in the controller namespace
listener := new(v1alpha1.AutoscalingListener)
listenerFound := true
if err := r.Get(ctx, client.ObjectKey{Namespace: r.ControllerNamespace, Name: scaleSetListenerName(autoscalingRunnerSet)}, listener); err != nil {
if !kerrors.IsNotFound(err) {
log.Error(err, "Failed to get AutoscalingListener resource")
return ctrl.Result{}, err
}
listenerFound = false
log.Info("AutoscalingListener does not exist.")
}
// Our listener pod is out of date, so we need to delete it so that it is recreated.
if listenerFound && (listener.Labels[labelKeyRunnerSpecHash] != autoscalingRunnerSet.ListenerSpecHash()) {
log.Info("RunnerScaleSetListener is out of date. Deleting it so that it is recreated", "name", listener.Name)
if err := r.Delete(ctx, listener); err != nil {
if kerrors.IsNotFound(err) {
return ctrl.Result{}, nil
}
log.Error(err, "Failed to delete AutoscalingListener resource")
return ctrl.Result{}, err
}
log.Info("Deleted RunnerScaleSetListener since existing one is out of date")
return ctrl.Result{}, nil
}
if desiredSpecHash != latestRunnerSet.Labels[labelKeyRunnerSpecHash] {
if r.drainingJobs(&latestRunnerSet.Status) {
log.Info("Latest runner set spec hash does not match the current autoscaling runner set. Waiting for the running and pending runners to finish:", "running", latestRunnerSet.Status.RunningEphemeralRunners, "pending", latestRunnerSet.Status.PendingEphemeralRunners)
log.Info("Scaling down the number of desired replicas to 0")
// We are in the process of draining the jobs. The listener has been deleted and the ephemeral runner set replicas
// need to scale down to 0
err := patch(ctx, r.Client, latestRunnerSet, func(obj *v1alpha1.EphemeralRunnerSet) {
obj.Spec.Replicas = 0
})
if err != nil {
log.Error(err, "Failed to patch runner set to set desired count to 0")
}
return ctrl.Result{}, err
}
log.Info("Latest runner set spec hash does not match the current autoscaling runner set. Creating a new runner set")
return r.createEphemeralRunnerSet(ctx, autoscalingRunnerSet, log)
}
@@ -234,30 +294,13 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
}
// Make sure the AutoscalingListener is up and running in the controller namespace
listener := new(v1alpha1.AutoscalingListener)
if err := r.Get(ctx, client.ObjectKey{Namespace: r.ControllerNamespace, Name: scaleSetListenerName(autoscalingRunnerSet)}, listener); err != nil {
if kerrors.IsNotFound(err) {
// We don't have a listener
log.Info("Creating a new AutoscalingListener for the runner set", "ephemeralRunnerSetName", latestRunnerSet.Name)
return r.createAutoScalingListenerForRunnerSet(ctx, autoscalingRunnerSet, latestRunnerSet, log)
if !listenerFound {
if r.drainingJobs(&latestRunnerSet.Status) {
log.Info("Creating a new AutoscalingListener is waiting for the running and pending runners to finish. Waiting for the running and pending runners to finish:", "running", latestRunnerSet.Status.RunningEphemeralRunners, "pending", latestRunnerSet.Status.PendingEphemeralRunners)
return ctrl.Result{}, nil
}
log.Error(err, "Failed to get AutoscalingListener resource")
return ctrl.Result{}, err
}
// Our listener pod is out of date, so we need to delete it to get a new recreate.
if listener.Labels[labelKeyRunnerSpecHash] != autoscalingRunnerSet.ListenerSpecHash() {
log.Info("RunnerScaleSetListener is out of date. Deleting it so that it is recreated", "name", listener.Name)
if err := r.Delete(ctx, listener); err != nil {
if kerrors.IsNotFound(err) {
return ctrl.Result{}, nil
}
log.Error(err, "Failed to delete AutoscalingListener resource")
return ctrl.Result{}, err
}
log.Info("Deleted RunnerScaleSetListener since existing one is out of date")
return ctrl.Result{}, nil
log.Info("Creating a new AutoscalingListener for the runner set", "ephemeralRunnerSetName", latestRunnerSet.Name)
return r.createAutoScalingListenerForRunnerSet(ctx, autoscalingRunnerSet, latestRunnerSet, log)
}
// Update the status of autoscaling runner set.
@@ -276,6 +319,16 @@ func (r *AutoscalingRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl
return ctrl.Result{}, nil
}
// Prevents overprovisioning of runners.
// We reach this code path when the runner scale set has been patched with a new runner spec but there are still running ephemeral runners.
// The safest approach is to wait for the running ephemeral runners to finish before creating a new runner set.
func (r *AutoscalingRunnerSetReconciler) drainingJobs(latestRunnerSetStatus *v1alpha1.EphemeralRunnerSetStatus) bool {
if r.UpdateStrategy == UpdateStrategyEventual && ((latestRunnerSetStatus.RunningEphemeralRunners + latestRunnerSetStatus.PendingEphemeralRunners) > 0) {
return true
}
return false
}
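A quick illustration of the gate with made-up status numbers; the helper name is hypothetical and the snippet assumes the package's existing fmt and v1alpha1 imports:

// exampleDrainGate (hypothetical) shows how the strategy and the runner
// counts together decide whether an update must wait.
func exampleDrainGate() {
	status := v1alpha1.EphemeralRunnerSetStatus{
		RunningEphemeralRunners: 2,
		PendingEphemeralRunners: 3,
	}
	r := &AutoscalingRunnerSetReconciler{UpdateStrategy: UpdateStrategyEventual}
	fmt.Println(r.drainingJobs(&status)) // true: 2+3 > 0, so the update waits

	r.UpdateStrategy = UpdateStrategyImmediate
	fmt.Println(r.drainingJobs(&status)) // false: immediate applies changes right away
}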
func (r *AutoscalingRunnerSetReconciler) cleanupListener(ctx context.Context, autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, logger logr.Logger) (done bool, err error) {
logger.Info("Cleaning up the listener")
var listener v1alpha1.AutoscalingListener
@@ -410,6 +463,12 @@ func (r *AutoscalingRunnerSetReconciler) createRunnerScaleSet(ctx context.Contex
}
}
actionsClient.SetUserAgent(actions.UserAgentInfo{
Version: build.Version,
CommitSHA: build.CommitSHA,
ScaleSetID: runnerScaleSet.Id,
})
logger.Info("Created/Reused a runner scale set", "id", runnerScaleSet.Id, "runnerGroupName", runnerScaleSet.RunnerGroupName)
if autoscalingRunnerSet.Annotations == nil {
autoscalingRunnerSet.Annotations = map[string]string{}

View File

@@ -42,6 +42,7 @@ const (
var _ = Describe("Test AutoScalingRunnerSet controller", Ordered, func() {
var ctx context.Context
var mgr ctrl.Manager
var controller *AutoscalingRunnerSetReconciler
var autoscalingNS *corev1.Namespace
var autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet
var configSecret *corev1.Secret
@@ -63,7 +64,7 @@ var _ = Describe("Test AutoScalingRunnerSet controller", Ordered, func() {
autoscalingNS, mgr = createNamespace(GinkgoT(), k8sClient)
configSecret = createDefaultSecret(GinkgoT(), k8sClient, autoscalingNS.Name)
controller := &AutoscalingRunnerSetReconciler{
controller = &AutoscalingRunnerSetReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Log: logf.Log,
@@ -424,6 +425,110 @@ var _ = Describe("Test AutoScalingRunnerSet controller", Ordered, func() {
})
})
Context("When updating an AutoscalingRunnerSet with running or pending jobs", func() {
It("It should wait for running and pending jobs to finish before applying the update. Update Strategy is set to eventual.", func() {
// Switch update strategy to eventual (drain jobs)
controller.UpdateStrategy = UpdateStrategyEventual
// Wait till the listener is created
listener := new(v1alpha1.AutoscalingListener)
Eventually(
func() error {
return k8sClient.Get(ctx, client.ObjectKey{Name: scaleSetListenerName(autoscalingRunnerSet), Namespace: autoscalingRunnerSet.Namespace}, listener)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(Succeed(), "Listener should be created")
// Wait till the ephemeral runner set is created
Eventually(
func() (int, error) {
runnerSetList := new(v1alpha1.EphemeralRunnerSetList)
err := k8sClient.List(ctx, runnerSetList, client.InNamespace(autoscalingRunnerSet.Namespace))
if err != nil {
return 0, err
}
return len(runnerSetList.Items), nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeEquivalentTo(1), "Only one EphemeralRunnerSet should be created")
runnerSetList := new(v1alpha1.EphemeralRunnerSetList)
err := k8sClient.List(ctx, runnerSetList, client.InNamespace(autoscalingRunnerSet.Namespace))
Expect(err).NotTo(HaveOccurred(), "failed to list EphemeralRunnerSet")
// Emulate running and pending jobs
runnerSet := runnerSetList.Items[0]
activeRunnerSet := runnerSet.DeepCopy()
activeRunnerSet.Status.CurrentReplicas = 6
activeRunnerSet.Status.FailedEphemeralRunners = 1
activeRunnerSet.Status.RunningEphemeralRunners = 2
activeRunnerSet.Status.PendingEphemeralRunners = 3
desiredStatus := v1alpha1.AutoscalingRunnerSetStatus{
CurrentRunners: activeRunnerSet.Status.CurrentReplicas,
State: "",
PendingEphemeralRunners: activeRunnerSet.Status.PendingEphemeralRunners,
RunningEphemeralRunners: activeRunnerSet.Status.RunningEphemeralRunners,
FailedEphemeralRunners: activeRunnerSet.Status.FailedEphemeralRunners,
}
err = k8sClient.Status().Patch(ctx, activeRunnerSet, client.MergeFrom(&runnerSet))
Expect(err).NotTo(HaveOccurred(), "Failed to patch runner set status")
Eventually(
func() (v1alpha1.AutoscalingRunnerSetStatus, error) {
updated := new(v1alpha1.AutoscalingRunnerSet)
err := k8sClient.Get(ctx, client.ObjectKey{Name: autoscalingRunnerSet.Name, Namespace: autoscalingRunnerSet.Namespace}, updated)
if err != nil {
return v1alpha1.AutoscalingRunnerSetStatus{}, fmt.Errorf("failed to get AutoScalingRunnerSet: %w", err)
}
return updated.Status, nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeEquivalentTo(desiredStatus), "AutoScalingRunnerSet status should be updated")
// Patch the AutoScalingRunnerSet image which should trigger
// the recreation of the Listener and EphemeralRunnerSet
patched := autoscalingRunnerSet.DeepCopy()
patched.Spec.Template.Spec = corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "runner",
Image: "ghcr.io/actions/abcd:1.1.1",
},
},
}
// patched.Spec.Template.Spec.PriorityClassName = "test-priority-class"
err = k8sClient.Patch(ctx, patched, client.MergeFrom(autoscalingRunnerSet))
Expect(err).NotTo(HaveOccurred(), "failed to patch AutoScalingRunnerSet")
autoscalingRunnerSet = patched.DeepCopy()
// The EphemeralRunnerSet should not be recreated
Consistently(
func() (string, error) {
runnerSetList := new(v1alpha1.EphemeralRunnerSetList)
err := k8sClient.List(ctx, runnerSetList, client.InNamespace(autoscalingRunnerSet.Namespace))
Expect(err).NotTo(HaveOccurred(), "failed to fetch AutoScalingRunnerSet")
return runnerSetList.Items[0].Name, nil
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(Equal(activeRunnerSet.Name), "The EphemeralRunnerSet should not be recreated")
// The listener should not be recreated
Consistently(
func() error {
return k8sClient.Get(ctx, client.ObjectKey{Name: scaleSetListenerName(autoscalingRunnerSet), Namespace: autoscalingRunnerSet.Namespace}, listener)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).ShouldNot(Succeed(), "Listener should not be recreated")
})
})
It("Should update Status on EphemeralRunnerSet status Update", func() {
ars := new(v1alpha1.AutoscalingRunnerSet)
Eventually(
@@ -783,8 +888,9 @@ var _ = Describe("Test client optional configuration", Ordered, func() {
Log: logf.Log,
ControllerNamespace: autoscalingNS.Name,
DefaultRunnerScaleSetListenerImage: "ghcr.io/actions/arc",
ActionsClient: actions.NewMultiClient("test", logr.Discard()),
ActionsClient: actions.NewMultiClient(logr.Discard()),
}
err := controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -974,7 +1080,7 @@ var _ = Describe("Test client optional configuration", Ordered, func() {
})
It("should be able to make requests to a server using root CAs", func() {
controller.ActionsClient = actions.NewMultiClient("test", logr.Discard())
controller.ActionsClient = actions.NewMultiClient(logr.Discard())
certsFolder := filepath.Join(
"../../",
@@ -1617,10 +1723,14 @@ var _ = Describe("Test resource version and build version mismatch", func() {
startManagers(GinkgoT(), mgr)
Eventually(func() bool {
ars := new(v1alpha1.AutoscalingRunnerSet)
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: autoscalingRunnerSet.Namespace, Name: autoscalingRunnerSet.Name}, ars)
return errors.IsNotFound(err)
}).Should(BeTrue())
Eventually(
func() bool {
ars := new(v1alpha1.AutoscalingRunnerSet)
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: autoscalingRunnerSet.Namespace, Name: autoscalingRunnerSet.Name}, ars)
return errors.IsNotFound(err)
},
autoscalingRunnerSetTestTimeout,
autoscalingRunnerSetTestInterval,
).Should(BeTrue())
})
})

View File

@@ -1,6 +1,9 @@
package actionsgithubcom
import corev1 "k8s.io/api/core/v1"
import (
"github.com/actions/actions-runner-controller/logging"
corev1 "k8s.io/api/core/v1"
)
const (
LabelKeyRunnerTemplateHash = "runner-template-hash"
@@ -60,5 +63,11 @@ const (
// to the listener when ImagePullPolicy is not specified
const DefaultScaleSetListenerImagePullPolicy = corev1.PullIfNotPresent
// DefaultScaleSetListenerLogLevel is the default log level applied
const DefaultScaleSetListenerLogLevel = string(logging.LogLevelDebug)
// DefaultScaleSetListenerLogFormat is the default log format applied
const DefaultScaleSetListenerLogFormat = string(logging.LogFormatText)
// resourceOwnerKey is a field selector matching the owner name of a particular resource
const resourceOwnerKey = ".metadata.controller"

View File

@@ -729,7 +729,7 @@ var _ = Describe("EphemeralRunner", func() {
It("uses an actions client with proxy transport", func() {
// Use an actual client
controller.ActionsClient = actions.NewMultiClient("test", logr.Discard())
controller.ActionsClient = actions.NewMultiClient(logr.Discard())
proxySuccessfulllyCalled := false
proxy := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -914,7 +914,7 @@ var _ = Describe("EphemeralRunner", func() {
server.StartTLS()
// Use an actual client
controller.ActionsClient = actions.NewMultiClient("test", logr.Discard())
controller.ActionsClient = actions.NewMultiClient(logr.Discard())
ephemeralRunner := newExampleRunner("test-runner", autoScalingNS.Name, configSecret.Name)
ephemeralRunner.Spec.GitHubConfigUrl = server.ConfigURLForOrg("my-org")

View File

@@ -25,6 +25,7 @@ import (
"strings"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/controllers/actions.github.com/metrics"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/go-logr/logr"
"go.uber.org/multierr"
@@ -50,6 +51,8 @@ type EphemeralRunnerSetReconciler struct {
Scheme *runtime.Scheme
ActionsClient actions.MultiClient
PublishMetrics bool
resourceBuilder resourceBuilder
}
@@ -163,6 +166,29 @@ func (r *EphemeralRunnerSetReconciler) Reconcile(ctx context.Context, req ctrl.R
"deleting", len(deletingEphemeralRunners),
)
if r.PublishMetrics {
githubConfigURL := ephemeralRunnerSet.Spec.EphemeralRunnerSpec.GitHubConfigUrl
parsedURL, err := actions.ParseGitHubConfigFromURL(githubConfigURL)
if err != nil {
log.Error(err, "Github Config URL is invalid", "URL", githubConfigURL)
// stop reconciling on this object
return ctrl.Result{}, nil
}
metrics.SetEphemeralRunnerCountsByStatus(
metrics.CommonLabels{
Name: ephemeralRunnerSet.Labels[LabelKeyGitHubScaleSetName],
Namespace: ephemeralRunnerSet.Labels[LabelKeyGitHubScaleSetNamespace],
Repository: parsedURL.Repository,
Organization: parsedURL.Organization,
Enterprise: parsedURL.Enterprise,
},
len(pendingEphemeralRunners),
len(runningEphemeralRunners),
len(failedEphemeralRunners),
)
}
// cleanup finished runners and proceed
var errs []error
for i := range finishedEphemeralRunners {

View File

@@ -753,7 +753,7 @@ var _ = Describe("Test EphemeralRunnerSet controller with proxy settings", func(
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Log: logf.Log,
ActionsClient: actions.NewMultiClient("test", logr.Discard()),
ActionsClient: actions.NewMultiClient(logr.Discard()),
}
err := controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -1052,7 +1052,7 @@ var _ = Describe("Test EphemeralRunnerSet controller with custom root CA", func(
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Log: logf.Log,
ActionsClient: actions.NewMultiClient("test", logr.Discard()),
ActionsClient: actions.NewMultiClient(logr.Discard()),
}
err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")

View File

@@ -0,0 +1,92 @@
package metrics
import (
"github.com/prometheus/client_golang/prometheus"
"sigs.k8s.io/controller-runtime/pkg/metrics"
)
var githubScaleSetControllerSubsystem = "gha_controller"
var labels = []string{
"name",
"namespace",
"repository",
"organization",
"enterprise",
}
type CommonLabels struct {
Name string
Namespace string
Repository string
Organization string
Enterprise string
}
func (l *CommonLabels) labels() prometheus.Labels {
return prometheus.Labels{
"name": l.Name,
"namespace": l.Namespace,
"repository": l.Repository,
"organization": l.Organization,
"enterprise": l.Enterprise,
}
}
var (
pendingEphemeralRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetControllerSubsystem,
Name: "pending_ephemeral_runners",
Help: "Number of ephemeral runners in a pending state.",
},
labels,
)
runningEphemeralRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetControllerSubsystem,
Name: "running_ephemeral_runners",
Help: "Number of ephemeral runners in a running state.",
},
labels,
)
failedEphemeralRunners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetControllerSubsystem,
Name: "failed_ephemeral_runners",
Help: "Number of ephemeral runners in a failed state.",
},
labels,
)
runningListeners = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Subsystem: githubScaleSetControllerSubsystem,
Name: "running_listeners",
Help: "Number of listeners in a running state.",
},
labels,
)
)
func RegisterMetrics() {
metrics.Registry.MustRegister(
pendingEphemeralRunners,
runningEphemeralRunners,
failedEphemeralRunners,
runningListeners,
)
}
func SetEphemeralRunnerCountsByStatus(commonLabels CommonLabels, pending, running, failed int) {
pendingEphemeralRunners.With(commonLabels.labels()).Set(float64(pending))
runningEphemeralRunners.With(commonLabels.labels()).Set(float64(running))
failedEphemeralRunners.With(commonLabels.labels()).Set(float64(failed))
}
func AddRunningListener(commonLabels CommonLabels) {
runningListeners.With(commonLabels.labels()).Set(1)
}
func SubRunningListener(commonLabels CommonLabels) {
runningListeners.With(commonLabels.labels()).Set(0)
}
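A minimal usage sketch built only from the functions defined above; the label values are illustrative:

// exampleMetricsUsage (hypothetical) registers the gauges and records a
// snapshot of runner and listener state.
func exampleMetricsUsage() {
	// Register the four gauges with the controller-runtime registry once,
	// at startup.
	RegisterMetrics()

	labels := CommonLabels{
		Name:         "my-scale-set", // illustrative values
		Namespace:    "arc-runners",
		Repository:   "repo",
		Organization: "owner",
	}

	// Counts are gauges, so each reconcile overwrites the previous values.
	SetEphemeralRunnerCountsByStatus(labels, 3, 2, 0) // pending, running, failed

	// Despite the Add/Sub names, the listener gauge is set to 1 or 0 per
	// label set, so it behaves as an up/down flag rather than a counter.
	AddRunningListener(labels)
	SubRunningListener(labels)
}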

View File

@@ -4,12 +4,14 @@ import (
"context"
"fmt"
"math"
"net"
"strconv"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/build"
"github.com/actions/actions-runner-controller/github/actions"
"github.com/actions/actions-runner-controller/hash"
"github.com/actions/actions-runner-controller/logging"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -31,14 +33,31 @@ var commonLabelKeys = [...]string{
LabelKeyGitHubRepository,
}
var reservedListenerContainerEnvKeys = map[string]struct{}{
"GITHUB_CONFIGURE_URL": {},
"GITHUB_EPHEMERAL_RUNNER_SET_NAMESPACE": {},
"GITHUB_EPHEMERAL_RUNNER_SET_NAME": {},
"GITHUB_MAX_RUNNERS": {},
"GITHUB_MIN_RUNNERS": {},
"GITHUB_RUNNER_SCALE_SET_ID": {},
"GITHUB_RUNNER_LOG_LEVEL": {},
"GITHUB_RUNNER_LOG_FORMAT": {},
"GITHUB_TOKEN": {},
"GITHUB_APP_ID": {},
"GITHUB_APP_INSTALLATION_ID": {},
"GITHUB_APP_PRIVATE_KEY": {},
}
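The diff does not show where this set is enforced; a hypothetical filter sketch (the helper name is assumed) of how user-supplied env vars might be screened against it, reusing the package's existing corev1 import:

// filterReservedEnv (hypothetical) drops any user-supplied variable whose
// name collides with a key the listener container reserves for itself.
func filterReservedEnv(envs []corev1.EnvVar) []corev1.EnvVar {
	out := make([]corev1.EnvVar, 0, len(envs))
	for _, e := range envs {
		if _, reserved := reservedListenerContainerEnvKeys[e.Name]; reserved {
			continue
		}
		out = append(out, e)
	}
	return out
}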
const labelValueKubernetesPartOf = "gha-runner-scale-set"
// scaleSetListenerImagePullPolicy is applied to all listeners
var scaleSetListenerLogLevel = DefaultScaleSetListenerLogLevel
var scaleSetListenerLogFormat = DefaultScaleSetListenerLogFormat
var scaleSetListenerImagePullPolicy = DefaultScaleSetListenerImagePullPolicy
func SetListenerImagePullPolicy(pullPolicy string) bool {
switch p := corev1.PullPolicy(pullPolicy); p {
case corev1.PullAlways, corev1.PullNever, corev1.PullIfNotPresent:
case corev1.PullAlways, corev1.PullIfNotPresent, corev1.PullNever:
scaleSetListenerImagePullPolicy = p
return true
default:
@@ -46,6 +65,24 @@ func SetListenerImagePullPolicy(pullPolicy string) bool {
}
}
func SetListenerLoggingParameters(level string, format string) bool {
switch level {
case logging.LogLevelDebug, logging.LogLevelInfo, logging.LogLevelWarn, logging.LogLevelError:
default:
return false
}
switch format {
case logging.LogFormatJSON, logging.LogFormatText:
default:
return false
}
scaleSetListenerLogLevel = level
scaleSetListenerLogFormat = format
return true
}
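Both setters report validity with a bool rather than an error; a short caller sketch, where the fatal handling is illustrative and the snippet assumes the package's existing "logging" import plus stdlib "log":

// exampleListenerConfigValidation (hypothetical) shows how a caller might
// treat a false return from either setter.
func exampleListenerConfigValidation() {
	if !SetListenerImagePullPolicy("IfNotPresent") {
		log.Fatal("invalid listener image pull policy")
	}
	if !SetListenerLoggingParameters(string(logging.LogLevelInfo), string(logging.LogFormatJSON)) {
		log.Fatal("invalid listener log level or format")
	}
}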
type resourceBuilder struct{}
func (b *resourceBuilder) newAutoScalingListener(autoscalingRunnerSet *v1alpha1.AutoscalingRunnerSet, ephemeralRunnerSet *v1alpha1.EphemeralRunnerSet, namespace, image string, imagePullSecrets []corev1.LocalObjectReference) (*v1alpha1.AutoscalingListener, error) {
@@ -63,26 +100,24 @@ func (b *resourceBuilder) newAutoScalingListener(autoscalingRunnerSet *v1alpha1.
effectiveMinRunners = *autoscalingRunnerSet.Spec.MinRunners
}
githubConfig, err := actions.ParseGitHubConfigFromURL(autoscalingRunnerSet.Spec.GitHubConfigUrl)
if err != nil {
return nil, fmt.Errorf("failed to parse github config from url: %v", err)
labels := map[string]string{
LabelKeyGitHubScaleSetNamespace: autoscalingRunnerSet.Namespace,
LabelKeyGitHubScaleSetName: autoscalingRunnerSet.Name,
LabelKeyKubernetesPartOf: labelValueKubernetesPartOf,
LabelKeyKubernetesComponent: "runner-scale-set-listener",
LabelKeyKubernetesVersion: autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion],
labelKeyRunnerSpecHash: autoscalingRunnerSet.ListenerSpecHash(),
}
if err := applyGitHubURLLabels(autoscalingRunnerSet.Spec.GitHubConfigUrl, labels); err != nil {
return nil, fmt.Errorf("failed to apply GitHub URL labels: %v", err)
}
autoscalingListener := &v1alpha1.AutoscalingListener{
ObjectMeta: metav1.ObjectMeta{
Name: scaleSetListenerName(autoscalingRunnerSet),
Namespace: namespace,
Labels: map[string]string{
LabelKeyGitHubScaleSetNamespace: autoscalingRunnerSet.Namespace,
LabelKeyGitHubScaleSetName: autoscalingRunnerSet.Name,
LabelKeyKubernetesPartOf: labelValueKubernetesPartOf,
LabelKeyKubernetesComponent: "runner-scale-set-listener",
LabelKeyKubernetesVersion: autoscalingRunnerSet.Labels[LabelKeyKubernetesVersion],
LabelKeyGitHubEnterprise: githubConfig.Enterprise,
LabelKeyGitHubOrganization: githubConfig.Organization,
LabelKeyGitHubRepository: githubConfig.Repository,
labelKeyRunnerSpecHash: autoscalingRunnerSet.ListenerSpecHash(),
},
Labels: labels,
},
Spec: v1alpha1.AutoscalingListenerSpec{
GitHubConfigUrl: autoscalingRunnerSet.Spec.GitHubConfigUrl,
@@ -94,17 +129,22 @@ func (b *resourceBuilder) newAutoScalingListener(autoscalingRunnerSet *v1alpha1.
MinRunners: effectiveMinRunners,
MaxRunners: effectiveMaxRunners,
Image: image,
ImagePullPolicy: scaleSetListenerImagePullPolicy,
ImagePullSecrets: imagePullSecrets,
Proxy: autoscalingRunnerSet.Spec.Proxy,
GitHubServerTLS: autoscalingRunnerSet.Spec.GitHubServerTLS,
Template: autoscalingRunnerSet.Spec.ListenerTemplate,
},
}
return autoscalingListener, nil
}
func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.AutoscalingListener, serviceAccount *corev1.ServiceAccount, secret *corev1.Secret, envs ...corev1.EnvVar) *corev1.Pod {
type listenerMetricsServerConfig struct {
addr string
endpoint string
}
func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.AutoscalingListener, serviceAccount *corev1.ServiceAccount, secret *corev1.Secret, metricsConfig *listenerMetricsServerConfig, envs ...corev1.EnvVar) (*corev1.Pod, error) {
listenerEnv := []corev1.EnvVar{
{
Name: "GITHUB_CONFIGURE_URL",
@@ -130,6 +170,18 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
Name: "GITHUB_RUNNER_SCALE_SET_ID",
Value: strconv.Itoa(autoscalingListener.Spec.RunnerScaleSetId),
},
{
Name: "GITHUB_RUNNER_SCALE_SET_NAME",
Value: autoscalingListener.Spec.AutoscalingRunnerSetName,
},
{
Name: "GITHUB_RUNNER_LOG_LEVEL",
Value: scaleSetListenerLogLevel,
},
{
Name: "GITHUB_RUNNER_LOG_FORMAT",
Value: scaleSetListenerLogFormat,
},
}
listenerEnv = append(listenerEnv, envs...)
@@ -189,6 +241,38 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
})
}
var ports []corev1.ContainerPort
if metricsConfig != nil && len(metricsConfig.addr) != 0 {
listenerEnv = append(
listenerEnv,
corev1.EnvVar{
Name: "GITHUB_METRICS_ADDR",
Value: metricsConfig.addr,
},
corev1.EnvVar{
Name: "GITHUB_METRICS_ENDPOINT",
Value: metricsConfig.endpoint,
},
)
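// metricsConfig.addr is a host:port string (for example ":8080"), so only
// the port component is parsed out below to expose a named container port.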
_, portStr, err := net.SplitHostPort(metricsConfig.addr)
if err != nil {
return nil, fmt.Errorf("failed to split host:port for metrics address: %v", err)
}
port, err := strconv.ParseInt(portStr, 10, 32)
if err != nil {
return nil, fmt.Errorf("failed to convert port %q to int32: %v", portStr, err)
}
ports = append(
ports,
corev1.ContainerPort{
ContainerPort: int32(port),
Protocol: corev1.ProtocolTCP,
Name: "metrics",
},
)
}
podSpec := corev1.PodSpec{
ServiceAccountName: serviceAccount.Name,
Containers: []corev1.Container{
@@ -196,10 +280,11 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
Name: autoscalingListenerContainerName,
Image: autoscalingListener.Spec.Image,
Env: listenerEnv,
ImagePullPolicy: autoscalingListener.Spec.ImagePullPolicy,
ImagePullPolicy: scaleSetListenerImagePullPolicy,
Command: []string{
"/github-runnerscaleset-listener",
},
Ports: ports,
},
},
ImagePullSecrets: autoscalingListener.Spec.ImagePullSecrets,
@@ -224,7 +309,128 @@ func (b *resourceBuilder) newScaleSetListenerPod(autoscalingListener *v1alpha1.A
Spec: podSpec,
}
return newRunnerScaleSetListenerPod
if autoscalingListener.Spec.Template != nil {
mergeListenerPodWithTemplate(newRunnerScaleSetListenerPod, autoscalingListener.Spec.Template)
}
return newRunnerScaleSetListenerPod, nil
}
func mergeListenerPodWithTemplate(pod *corev1.Pod, tmpl *corev1.PodTemplateSpec) {
// apply metadata
for k, v := range tmpl.Annotations {
if _, ok := pod.Annotations[k]; !ok {
pod.Annotations[k] = v
}
}
for k, v := range tmpl.Labels {
if _, ok := pod.Labels[k]; !ok {
pod.Labels[k] = v
}
}
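// Template metadata is applied additively: keys the controller already set
// on the pod win, so a listener template can add annotations and labels but
// cannot overwrite controller-managed ones such as the runner spec hash.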
// apply spec
// apply container
listenerContainer := &pod.Spec.Containers[0] // if this panics, we have bigger problems
for i := range tmpl.Spec.Containers {
c := &tmpl.Spec.Containers[i]
switch c.Name {
case "listener":
mergeListenerContainer(listenerContainer, c)
default:
pod.Spec.Containers = append(pod.Spec.Containers, *c)
}
}
// apply pod-related spec
// NOTE: fields that should be ignored
// - service account based fields
if tmpl.Spec.RestartPolicy != "" {
pod.Spec.RestartPolicy = tmpl.Spec.RestartPolicy
}
if tmpl.Spec.ImagePullSecrets != nil {
pod.Spec.ImagePullSecrets = tmpl.Spec.ImagePullSecrets
}
pod.Spec.Volumes = tmpl.Spec.Volumes
pod.Spec.InitContainers = tmpl.Spec.InitContainers
pod.Spec.EphemeralContainers = tmpl.Spec.EphemeralContainers
pod.Spec.TerminationGracePeriodSeconds = tmpl.Spec.TerminationGracePeriodSeconds
pod.Spec.ActiveDeadlineSeconds = tmpl.Spec.ActiveDeadlineSeconds
pod.Spec.DNSPolicy = tmpl.Spec.DNSPolicy
pod.Spec.NodeSelector = tmpl.Spec.NodeSelector
pod.Spec.NodeName = tmpl.Spec.NodeName
pod.Spec.HostNetwork = tmpl.Spec.HostNetwork
pod.Spec.HostPID = tmpl.Spec.HostPID
pod.Spec.HostIPC = tmpl.Spec.HostIPC
pod.Spec.ShareProcessNamespace = tmpl.Spec.ShareProcessNamespace
pod.Spec.SecurityContext = tmpl.Spec.SecurityContext
pod.Spec.Hostname = tmpl.Spec.Hostname
pod.Spec.Subdomain = tmpl.Spec.Subdomain
pod.Spec.Affinity = tmpl.Spec.Affinity
pod.Spec.SchedulerName = tmpl.Spec.SchedulerName
pod.Spec.Tolerations = tmpl.Spec.Tolerations
pod.Spec.HostAliases = tmpl.Spec.HostAliases
pod.Spec.PriorityClassName = tmpl.Spec.PriorityClassName
pod.Spec.Priority = tmpl.Spec.Priority
pod.Spec.DNSConfig = tmpl.Spec.DNSConfig
pod.Spec.ReadinessGates = tmpl.Spec.ReadinessGates
pod.Spec.RuntimeClassName = tmpl.Spec.RuntimeClassName
pod.Spec.EnableServiceLinks = tmpl.Spec.EnableServiceLinks
pod.Spec.PreemptionPolicy = tmpl.Spec.PreemptionPolicy
pod.Spec.Overhead = tmpl.Spec.Overhead
pod.Spec.TopologySpreadConstraints = tmpl.Spec.TopologySpreadConstraints
pod.Spec.SetHostnameAsFQDN = tmpl.Spec.SetHostnameAsFQDN
pod.Spec.OS = tmpl.Spec.OS
pod.Spec.HostUsers = tmpl.Spec.HostUsers
pod.Spec.SchedulingGates = tmpl.Spec.SchedulingGates
pod.Spec.ResourceClaims = tmpl.Spec.ResourceClaims
}
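A minimal fragment illustrating the merge semantics above, assuming listenerPod is the pod built by newScaleSetListenerPod; the image names and node selector are placeholders. A template container named "listener" is merged field-by-field into the controller's container, any other container is appended as a sidecar, and pod-level fields such as the node selector come from the template.
tmpl := &corev1.PodTemplateSpec{
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{"kubernetes.io/os": "linux"},
		Containers: []corev1.Container{
			// Merged into the controller's built-in listener container.
			{Name: "listener", Image: "ghcr.io/example/listener:custom"},
			// Any other name is appended as an extra (sidecar) container.
			{Name: "log-forwarder", Image: "ghcr.io/example/forwarder:latest"},
		},
	},
}
mergeListenerPodWithTemplate(listenerPod, tmpl)
// listenerPod now runs two containers, and its spec carries the template's node selector.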
func mergeListenerContainer(base, from *corev1.Container) {
// name should not be modified
if from.Image != "" {
base.Image = from.Image
}
if len(from.Command) > 0 {
base.Command = from.Command
}
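// Env vars whose names appear in reservedListenerContainerEnvKeys are dropped
// here, so a template cannot override controller-managed listener configuration.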
for _, v := range from.Env {
if _, ok := reservedListenerContainerEnvKeys[v.Name]; !ok {
base.Env = append(base.Env, v)
}
}
if from.ImagePullPolicy != "" {
base.ImagePullPolicy = from.ImagePullPolicy
}
base.Args = append(base.Args, from.Args...)
base.WorkingDir = from.WorkingDir
base.Ports = append(base.Ports, from.Ports...)
base.EnvFrom = append(base.EnvFrom, from.EnvFrom...)
base.Resources = from.Resources
base.VolumeMounts = append(base.VolumeMounts, from.VolumeMounts...)
base.VolumeDevices = append(base.VolumeDevices, from.VolumeDevices...)
base.LivenessProbe = from.LivenessProbe
base.ReadinessProbe = from.ReadinessProbe
base.StartupProbe = from.StartupProbe
base.Lifecycle = from.Lifecycle
base.TerminationMessagePath = from.TerminationMessagePath
base.TerminationMessagePolicy = from.TerminationMessagePolicy
base.SecurityContext = from.SecurityContext
base.Stdin = from.Stdin
base.StdinOnce = from.StdinOnce
base.TTY = from.TTY
}
func (b *resourceBuilder) newScaleSetListenerServiceAccount(autoscalingListener *v1alpha1.AutoscalingListener) *corev1.ServiceAccount {
@@ -323,7 +529,7 @@ func (b *resourceBuilder) newEphemeralRunnerSet(autoscalingRunnerSet *v1alpha1.A
}
runnerSpecHash := autoscalingRunnerSet.RunnerSetSpecHash()
newLabels := map[string]string{
labels := map[string]string{
labelKeyRunnerSpecHash: runnerSpecHash,
LabelKeyKubernetesPartOf: labelValueKubernetesPartOf,
LabelKeyKubernetesComponent: "runner-set",
@@ -332,7 +538,7 @@ func (b *resourceBuilder) newEphemeralRunnerSet(autoscalingRunnerSet *v1alpha1.A
LabelKeyGitHubScaleSetNamespace: autoscalingRunnerSet.Namespace,
}
if err := applyGitHubURLLabels(autoscalingRunnerSet.Spec.GitHubConfigUrl, newLabels); err != nil {
if err := applyGitHubURLLabels(autoscalingRunnerSet.Spec.GitHubConfigUrl, labels); err != nil {
return nil, fmt.Errorf("failed to apply GitHub URL labels: %v", err)
}
@@ -345,7 +551,7 @@ func (b *resourceBuilder) newEphemeralRunnerSet(autoscalingRunnerSet *v1alpha1.A
ObjectMeta: metav1.ObjectMeta{
GenerateName: autoscalingRunnerSet.ObjectMeta.Name + "-",
Namespace: autoscalingRunnerSet.ObjectMeta.Namespace,
Labels: newLabels,
Labels: labels,
Annotations: newAnnotations,
},
Spec: v1alpha1.EphemeralRunnerSetSpec{
@@ -545,14 +751,23 @@ func applyGitHubURLLabels(url string, labels map[string]string) error {
}
if len(githubConfig.Enterprise) > 0 {
labels[LabelKeyGitHubEnterprise] = githubConfig.Enterprise
labels[LabelKeyGitHubEnterprise] = trimLabelValue(githubConfig.Enterprise)
}
if len(githubConfig.Organization) > 0 {
labels[LabelKeyGitHubOrganization] = githubConfig.Organization
labels[LabelKeyGitHubOrganization] = trimLabelValue(githubConfig.Organization)
}
if len(githubConfig.Repository) > 0 {
labels[LabelKeyGitHubRepository] = githubConfig.Repository
labels[LabelKeyGitHubRepository] = trimLabelValue(githubConfig.Repository)
}
return nil
}
const trimLabelValueSuffix = "-trim"
func trimLabelValue(val string) string {
if len(val) > 63 {
return val[:63-len(trimLabelValueSuffix)] + trimLabelValueSuffix
}
return val
}
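A quick illustration of the trimming behavior (arbitrary value, fragment assumes fmt and strings are imported): Kubernetes caps label values at 63 characters, so an over-long value keeps its first 58 characters and gains the "-trim" suffix, landing exactly on the limit.
val := strings.Repeat("a", 64)       // 64 chars: just over the 63-char label limit
trimmed := trimLabelValue(val)       // first 58 chars + "-trim" = 63 chars
fmt.Println(len(trimmed))            // 63
fmt.Println(trimLabelValue("short")) // "short": values within the limit pass through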

View File

@@ -2,6 +2,8 @@ package actionsgithubcom
import (
"context"
"fmt"
"strings"
"testing"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
@@ -66,7 +68,8 @@ func TestLabelPropagation(t *testing.T) {
Name: "test",
},
}
listenerPod := b.newScaleSetListenerPod(listener, listenerServiceAccount, listenerSecret)
listenerPod, err := b.newScaleSetListenerPod(listener, listenerServiceAccount, listenerSecret, nil)
require.NoError(t, err)
assert.Equal(t, listenerPod.Labels, listener.Labels)
ephemeralRunner := b.newEphemeralRunner(ephemeralRunnerSet)
@@ -91,3 +94,70 @@ func TestLabelPropagation(t *testing.T) {
assert.Equal(t, ephemeralRunner.Labels[key], pod.Labels[key])
}
}
func TestGitHubURLTrimLabelValues(t *testing.T) {
enterprise := strings.Repeat("a", 64)
organization := strings.Repeat("b", 64)
repository := strings.Repeat("c", 64)
autoscalingRunnerSet := v1alpha1.AutoscalingRunnerSet{
ObjectMeta: metav1.ObjectMeta{
Name: "test-scale-set",
Namespace: "test-ns",
Labels: map[string]string{
LabelKeyKubernetesPartOf: labelValueKubernetesPartOf,
LabelKeyKubernetesVersion: "0.2.0",
},
Annotations: map[string]string{
runnerScaleSetIdAnnotationKey: "1",
AnnotationKeyGitHubRunnerGroupName: "test-group",
},
},
}
t.Run("org/repo", func(t *testing.T) {
autoscalingRunnerSet := autoscalingRunnerSet.DeepCopy()
autoscalingRunnerSet.Spec = v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: fmt.Sprintf("https://github.com/%s/%s", organization, repository),
}
var b resourceBuilder
ephemeralRunnerSet, err := b.newEphemeralRunnerSet(autoscalingRunnerSet)
require.NoError(t, err)
assert.Len(t, ephemeralRunnerSet.Labels[LabelKeyGitHubEnterprise], 0)
assert.Len(t, ephemeralRunnerSet.Labels[LabelKeyGitHubOrganization], 63)
assert.Len(t, ephemeralRunnerSet.Labels[LabelKeyGitHubRepository], 63)
assert.True(t, strings.HasSuffix(ephemeralRunnerSet.Labels[LabelKeyGitHubOrganization], trimLabelValueSuffix))
assert.True(t, strings.HasSuffix(ephemeralRunnerSet.Labels[LabelKeyGitHubRepository], trimLabelValueSuffix))
listener, err := b.newAutoScalingListener(autoscalingRunnerSet, ephemeralRunnerSet, autoscalingRunnerSet.Namespace, "test:latest", nil)
require.NoError(t, err)
assert.Len(t, listener.Labels[LabelKeyGitHubEnterprise], 0)
assert.Len(t, listener.Labels[LabelKeyGitHubOrganization], 63)
assert.Len(t, listener.Labels[LabelKeyGitHubRepository], 63)
assert.True(t, strings.HasSuffix(listener.Labels[LabelKeyGitHubOrganization], trimLabelValueSuffix))
assert.True(t, strings.HasSuffix(listener.Labels[LabelKeyGitHubRepository], trimLabelValueSuffix))
})
t.Run("enterprise", func(t *testing.T) {
autoscalingRunnerSet := autoscalingRunnerSet.DeepCopy()
autoscalingRunnerSet.Spec = v1alpha1.AutoscalingRunnerSetSpec{
GitHubConfigUrl: fmt.Sprintf("https://github.com/enterprises/%s", enterprise),
}
var b resourceBuilder
ephemeralRunnerSet, err := b.newEphemeralRunnerSet(autoscalingRunnerSet)
require.NoError(t, err)
assert.Len(t, ephemeralRunnerSet.Labels[LabelKeyGitHubEnterprise], 63)
assert.True(t, strings.HasSuffix(ephemeralRunnerSet.Labels[LabelKeyGitHubEnterprise], trimLabelValueSuffix))
assert.Len(t, ephemeralRunnerSet.Labels[LabelKeyGitHubOrganization], 0)
assert.Len(t, ephemeralRunnerSet.Labels[LabelKeyGitHubRepository], 0)
listener, err := b.newAutoScalingListener(autoscalingRunnerSet, ephemeralRunnerSet, autoscalingRunnerSet.Namespace, "test:latest", nil)
require.NoError(t, err)
assert.Len(t, listener.Labels[LabelKeyGitHubEnterprise], 63)
assert.True(t, strings.HasSuffix(listener.Labels[LabelKeyGitHubEnterprise], trimLabelValueSuffix))
assert.Len(t, listener.Labels[LabelKeyGitHubOrganization], 0)
assert.Len(t, listener.Labels[LabelKeyGitHubRepository], 0)
})
}

View File

@@ -11,7 +11,7 @@ import (
"github.com/actions/actions-runner-controller/apis/actions.summerwind.net/v1alpha1"
prometheus_metrics "github.com/actions/actions-runner-controller/controllers/actions.summerwind.net/metrics"
arcgithub "github.com/actions/actions-runner-controller/github"
"github.com/google/go-github/v47/github"
"github.com/google/go-github/v52/github"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
@@ -118,10 +118,10 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
}
var total, inProgress, queued, completed, unknown int
type callback func()
listWorkflowJobs := func(user string, repoName string, runID int64, fallback_cb callback) {
listWorkflowJobs := func(user string, repoName string, runID int64) {
if runID == 0 {
fallback_cb()
// should not happen in reality
r.Log.Info("Detected run with no runID of 0, ignoring the case and not scaling.", "repo_name", repoName, "run_id", runID)
return
}
opt := github.ListWorkflowJobsOptions{ListOptions: github.ListOptions{PerPage: 50}}
@@ -139,7 +139,8 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
opt.Page = resp.NextPage
}
if len(allJobs) == 0 {
fallback_cb()
// The GitHub API can return a run with an empty jobs array; such runs should be ignored
r.Log.Info("Detected run with no jobs, ignoring the case and not scaling.", "repo_name", repoName, "run_id", runID)
} else {
JOB:
for _, job := range allJobs {
@@ -201,9 +202,9 @@ func (r *HorizontalRunnerAutoscalerReconciler) suggestReplicasByQueuedAndInProgr
case "completed":
completed++
case "in_progress":
listWorkflowJobs(user, repoName, run.GetID(), func() { inProgress++ })
listWorkflowJobs(user, repoName, run.GetID())
case "queued":
listWorkflowJobs(user, repoName, run.GetID(), func() { queued++ })
listWorkflowJobs(user, repoName, run.GetID())
default:
unknown++
}

View File

@@ -61,8 +61,9 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
want int
err string
}{
// case_0
// Legacy functionality
// 3 demanded, max at 3
// 0 demanded due to zero runID, min at 2
{
repo: "test/valid",
min: intPtr(2),
@@ -70,9 +71,10 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
want: 2,
},
// Explicitly speified the default `self-hosted` label which is ignored by the simulator,
// case_1
// Explicitly specified the default `self-hosted` label which is ignored by the simulator,
// as we assume that GitHub Actions automatically associates the `self-hosted` label to every self-hosted runner.
// 3 demanded, max at 3
{
@@ -80,11 +82,17 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
labels: []string{"self-hosted"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"id": 1, "status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}]}"`,
workflowJobs: map[int]string{
1: `{"jobs": [{"status": "queued", "labels":["self-hosted"]}]}`,
2: `{"jobs": [{"status": "in_progress", "labels":["self-hosted"]}]}`,
3: `{"jobs": [{"status": "in_progress", "labels":["self-hosted"]}]}`,
},
want: 3,
},
// case_2
// 2 demanded, max at 3, currently 3, delay scaling down due to grace period
{
repo: "test/valid",
@@ -97,6 +105,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 3,
},
// case_3
// 3 demanded, max at 2
{
repo: "test/valid",
@@ -107,6 +116,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 2,
},
// case_4
// 2 demanded, min at 2
{
repo: "test/valid",
@@ -117,6 +127,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
},
// case_5
// 1 demanded, min at 2
{
repo: "test/valid",
@@ -127,6 +138,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 2,
},
// case_6
// 1 demanded, min at 2
{
repo: "test/valid",
@@ -137,6 +149,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
},
// case_7
// 1 demanded, min at 1
{
repo: "test/valid",
@@ -147,6 +160,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 1,
},
// case_8
// 1 demanded, min at 1
{
repo: "test/valid",
@@ -157,6 +171,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 1,
},
// case_9
// fixed at 3
{
repo: "test/valid",
@@ -166,9 +181,36 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
want: 1,
},
// Case for an empty GitHub Actions response - should not trigger scale up
{
description: "GitHub Actions Jobs Array is empty - no scale up",
repo: "test/valid",
min: intPtr(0),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
workflowJobs: map[int]string{
1: `{"jobs": []}`,
},
want: 0,
},
// Case for hosted GitHub Actions run
{
description: "Hosted GitHub Actions run - no scale up",
repo: "test/valid",
min: intPtr(0),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"id": 1, "status":"queued"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"id": 1, "status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
workflowJobs: map[int]string{
1: `{"jobs": [{"status":"queued"}]}`,
},
want: 0,
},
{
description: "Job-level autoscaling with no explicit runner label (runners have implicit self-hosted, requested self-hosted, 5 jobs from 3 workflows)",
repo: "test/valid",
@@ -422,7 +464,8 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
want int
err string
}{
// 3 demanded, max at 3
// case_0
// 0 demanded due to zero runID, min at 2
{
org: "test",
repos: []string{"valid"},
@@ -431,8 +474,9 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
want: 2,
},
// case_1
// 2 demanded, max at 3, currently 3, delay scaling down due to grace period
{
org: "test",
@@ -446,6 +490,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 3,
},
// case_2
// 3 demanded, max at 2
{
org: "test",
@@ -457,6 +502,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 2,
},
// case_3
// 2 demanded, min at 2
{
org: "test",
@@ -468,6 +514,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
},
// case_4
// 1 demanded, min at 2
{
org: "test",
@@ -479,6 +526,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 2,
},
// case_5
// 1 demanded, min at 2
{
org: "test",
@@ -512,6 +560,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 1,
},
// case_6
// fixed at 3
{
org: "test",
@@ -522,8 +571,9 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"},{"status":"in_progress"},{"status":"in_progress"}]}"`,
want: 3,
want: 1,
},
// case_7
// org runner, fixed at 3
{
org: "test",
@@ -534,8 +584,9 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"},{"status":"in_progress"},{"status":"in_progress"}]}"`,
want: 3,
want: 1,
},
// case_8
// org runner, 1 demanded, min at 1, no repos
{
org: "test",

View File

@@ -44,8 +44,8 @@ type scaleOperation struct {
// Add the scale target to the unbounded queue, blocking until the target is successfully added to the queue.
// All the targets in the queue are dequeued every 3 seconds, grouped by the HRA, and applied.
// In a happy path, batchScaler update each HRA only once, even though the HRA had two or more associated webhook events in the 3 seconds interval,
// which results in less K8s API calls and less HRA update conflicts in case your ARC installation receives a lot of webhook events
// In a happy path, batchScaler updates each HRA only once, even though the HRA had two or more associated webhook events in the 3 seconds interval,
// which results in fewer K8s API calls and fewer HRA update conflicts in case your ARC installation receives a lot of webhook events
func (s *batchScaler) Add(st *ScaleTarget) {
if st == nil {
return
@@ -142,87 +142,130 @@ func (s *batchScaler) batchScale(ctx context.Context, batch batchScaleOperation)
return err
}
now := time.Now()
copy, err := s.planBatchScale(ctx, batch, &hra, now)
if err != nil {
return err
}
if err := s.Client.Patch(ctx, copy, client.MergeFrom(&hra)); err != nil {
return fmt.Errorf("patching horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
}
func (s *batchScaler) planBatchScale(ctx context.Context, batch batchScaleOperation, hra *v1alpha1.HorizontalRunnerAutoscaler, now time.Time) (*v1alpha1.HorizontalRunnerAutoscaler, error) {
copy := hra.DeepCopy()
if hra.Spec.MaxReplicas != nil && len(copy.Spec.CapacityReservations) > *copy.Spec.MaxReplicas {
// We have more reservations than MaxReplicas, meaning that we previously
// could not scale up to meet a capacity demand because we had hit MaxReplicas.
// Therefore, there are reservations that are starved for capacity. We extend the
// expiration time on these starved reservations because the "duration" is meant
// to apply to reservations that have launched replicas, not replicas in the backlog.
// Of course, if MaxReplicas is nil, then there is no max to hit, and we do not need this adjustment.
// See https://github.com/actions/actions-runner-controller/issues/2254 for more context.
// Extend the expiration time of all the reservations not yet assigned to replicas.
//
// Note that we treat two scenarios as equivalent here.
// The first is where the number of reservations becomes greater than MaxReplicas.
// The second is where MaxReplicas becomes smaller than the number of reservations.
// Presuming HRA.spec.scaleTriggers[].duration means "the duration until the reservation expires after a corresponding runner was deployed",
// this treatment is correct.
//
// In other words, we settle on a capacity reservation's ExpirationTime only after the corresponding runner is "about to be" deployed.
// It's "about to be deployed" not "deployed" because we have no way to correlate a capacity reservation and the runner;
// the best we can do here is to simulate the desired behavior by reading MaxReplicas and assuming it will be equal to the number of active runners soon.
//
// Perhaps we could use RunnerDeployment.Status.Replicas or RunnerSet.Status.Replicas instead of the MaxReplicas as a better source of "the number of active runners".
// However, note that the status is not guaranteed to be up-to-date.
// It might not be that easy to decide which is better to use.
for i := *hra.Spec.MaxReplicas; i < len(copy.Spec.CapacityReservations); i++ {
// Let's say maxReplicas=3 and a workflow job with status=completed results in deleting the first capacity reservation,
// copy.Spec.CapacityReservations[i] where i=0.
// We are interested in at least four reservations and runners:
// i=0 - already included in the current desired replicas, but may be about to be deleted
// i=1-2 - already included in the current desired replicas
// i=3 - not yet included in the current desired replicas, might have been expired while waiting in the queue
//
// i=3 is especially important here: if we didn't reset the expiration time of this reservation,
// it might expire before it is assigned to a runner, due to the delay between the time the
// expiration timer starts and the time a runner becomes available.
//
// Why is there such a delay? Because of how ARC implements scale duration and expiration:
// The expiration timer starts when the reservation is created, while the runner is created only after
// the corresponding reservation fits within maxReplicas.
//
// We address that by resetting the expiration time for the fourth (i=3 in the above example)
// and subsequent reservations whenever a batch is run (which is when expired reservations get deleted).
// There is no guarantee that all the reservations have the same duration, and even if they did,
// at this point we have lost the reference to the duration that was intended.
// However, we can compute the intended duration from the existing interval.
//
// In other words, updating HRA.spec.scaleTriggers[].duration does not result in delaying capacity reservations' expiration any longer
// than the "intended" duration, which is the duration of the trigger when the reservation was created.
duration := copy.Spec.CapacityReservations[i].ExpirationTime.Time.Sub(copy.Spec.CapacityReservations[i].EffectiveTime.Time)
copy.Spec.CapacityReservations[i].EffectiveTime = metav1.Time{Time: now}
copy.Spec.CapacityReservations[i].ExpirationTime = metav1.Time{Time: now.Add(duration)}
}
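// Illustrative numbers: with maxReplicas=3 and a fourth reservation created
// at t=0s with ExpirationTime t=600s, the computed duration is 600s. If this
// batch runs at t=120s, the loop above resets EffectiveTime to 120s and
// ExpirationTime to 720s, so the backlog entry cannot expire merely for
// having waited behind the max-replicas cap.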
}
// Now we can filter out any expired reservations from consideration.
// This could leave us with 0 reservations left.
copy.Spec.CapacityReservations = getValidCapacityReservations(copy)
before := len(hra.Spec.CapacityReservations)
expired := before - len(copy.Spec.CapacityReservations)
var added, completed int
for _, scale := range batch.scaleOps {
amount := 1
amount := scale.trigger.Amount
if scale.trigger.Amount != 0 {
amount = scale.trigger.Amount
}
scale.log.V(2).Info("Adding capacity reservation", "amount", amount)
now := time.Now()
// We do not track if a webhook-based scale-down event matches an expired capacity reservation
// or a job for which the scale-up event was never received. This means that scale-down
// events could drive capacity reservations into the negative numbers if we let it.
// We ensure capacity never falls below zero, but that also means that the
// final number of capacity reservations depends on the order in which events come in.
// If capacity is at zero and we get a scale-down followed by a scale-up,
// the scale-down will be ignored and we will end up with a desired capacity of 1.
// However, if we get the scale-up first, the scale-down will drive desired capacity back to zero.
// This could be fixed by matching events' `workflow_job.run_id` with capacity reservations,
// but that would be a lot of work. So for now we allow for some slop, and hope that
// GitHub provides a better autoscaling solution soon.
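// Concretely: starting from zero reservations, the event order (-1, +1)
// ends at one reservation because the scale-down is clamped at zero,
// while (+1, -1) ends at zero.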
if amount > 0 {
copy.Spec.CapacityReservations = append(copy.Spec.CapacityReservations, v1alpha1.CapacityReservation{
EffectiveTime: metav1.Time{Time: now},
ExpirationTime: metav1.Time{Time: now.Add(scale.trigger.Duration.Duration)},
Replicas: amount,
})
scale.log.V(2).Info("Adding capacity reservation", "amount", amount)
// Parts of this function require that Spec.CapacityReservations.Replicas always equals 1.
// Enforce that rule no matter what the `amount` value is.
for i := 0; i < amount; i++ {
copy.Spec.CapacityReservations = append(copy.Spec.CapacityReservations, v1alpha1.CapacityReservation{
EffectiveTime: metav1.Time{Time: now},
ExpirationTime: metav1.Time{Time: now.Add(scale.trigger.Duration.Duration)},
Replicas: 1,
})
}
added += amount
} else if amount < 0 {
var reservations []v1alpha1.CapacityReservation
scale.log.V(2).Info("Removing capacity reservation", "amount", -amount)
var (
found bool
foundIdx int
)
for i, r := range copy.Spec.CapacityReservations {
r := r
if !found && r.Replicas+amount == 0 {
found = true
foundIdx = i
} else {
// Note that we nil-check max replicas because this "fix" is needed only when there is an upper limit on runners.
// In other words, you don't need to reset the effective time and expiration time when there is no MaxReplicas.
// That's because the desired replicas would already contain the reservation since its creation.
if found && copy.Spec.MaxReplicas != nil && i > foundIdx+*copy.Spec.MaxReplicas {
// Update newer CapacityReservations' time to now to trigger reconcile
// Without this, we might be stuck at minReplicas unnecessarily long.
// That is, we might not scale up after an ephemeral runner has been deleted
// until a new scale up, all runners finish, or after DefaultRunnerPodRecreationDelayAfterWebhookScale
// See https://github.com/actions/actions-runner-controller/issues/2254 for more context.
r.EffectiveTime = metav1.Time{Time: now}
// We also reset the scale trigger expiration time, so that you don't need to tweak
// the scale trigger duration depending on maxReplicas.
// A detailed explanation follows.
//
// Let's say maxReplicas=3 and a workflow job with status=canceled results in deleting the first capacity reservation, hence i=0.
// We are interested in at least four reservations and runners:
// i=0 - already included in the current desired replicas, but just got deleted
// i=1-2 - already included in the current desired replicas
// i=3 - not yet included in the current desired replicas, might have been expired while waiting in the queue
//
// i=3 is especially important here: if we didn't reset the expiration time of the 3rd reservation,
// it might expire before a corresponding runner is created, due to the delay between when the expiration timer starts and when the runner is created.
//
// Why is there such a delay? Because of how ARC implements scale duration and expiration...
// The expiration timer starts when the reservation is created, while the runner is created only after the corresponding reservation fits within maxReplicas.
//
// We address that by resetting the expiration time for the fourth (i=3 in the above example) and subsequent reservations when the first reservation gets cancelled.
r.ExpirationTime = metav1.Time{Time: now.Add(scale.trigger.Duration.Duration)}
}
reservations = append(reservations, r)
}
// Remove the requested number of reservations unless there are not that many left
if len(copy.Spec.CapacityReservations) > -amount {
copy.Spec.CapacityReservations = copy.Spec.CapacityReservations[-amount:]
} else {
copy.Spec.CapacityReservations = nil
}
copy.Spec.CapacityReservations = reservations
completed += amount
// This "completed" represents the number of completed and therefore removed runners in this batch,
// which is logged later.
// As the amount is negative for a scale-down trigger, we make the "completed" amount positive by negating the amount.
// That way, the user can see the number of removed runners (like 3), rather than the delta (like -3) in the number of runners.
completed -= amount
}
}
before := len(hra.Spec.CapacityReservations)
expired := before - len(copy.Spec.CapacityReservations)
after := len(copy.Spec.CapacityReservations)
s.Log.V(1).Info(
@@ -234,9 +277,5 @@ func (s *batchScaler) batchScale(ctx context.Context, batch batchScaleOperation)
"after", after,
)
if err := s.Client.Patch(ctx, copy, client.MergeFrom(&hra)); err != nil {
return fmt.Errorf("patching horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
return copy, nil
}

View File

@@ -0,0 +1,166 @@
package actionssummerwindnet
import (
"context"
"testing"
"time"
"github.com/actions/actions-runner-controller/apis/actions.summerwind.net/v1alpha1"
"github.com/go-logr/logr"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func TestPlanBatchScale(t *testing.T) {
s := &batchScaler{Log: logr.Discard()}
var (
expiry = 10 * time.Second
interval = 3 * time.Second
t0 = time.Now()
t1 = t0.Add(interval)
t2 = t1.Add(interval)
)
check := func(t *testing.T, amount int, newExpiry time.Duration, wantReservations []v1alpha1.CapacityReservation) {
t.Helper()
var (
op = batchScaleOperation{
scaleOps: []scaleOperation{
{
log: logr.Discard(),
trigger: v1alpha1.ScaleUpTrigger{
Amount: amount,
Duration: metav1.Duration{Duration: newExpiry},
},
},
},
}
hra = &v1alpha1.HorizontalRunnerAutoscaler{
Spec: v1alpha1.HorizontalRunnerAutoscalerSpec{
MaxReplicas: intPtr(1),
ScaleUpTriggers: []v1alpha1.ScaleUpTrigger{
{
Amount: 1,
Duration: metav1.Duration{Duration: newExpiry},
},
},
CapacityReservations: []v1alpha1.CapacityReservation{
{
EffectiveTime: metav1.NewTime(t0),
ExpirationTime: metav1.NewTime(t0.Add(expiry)),
Replicas: 1,
},
{
EffectiveTime: metav1.NewTime(t1),
ExpirationTime: metav1.NewTime(t1.Add(expiry)),
Replicas: 1,
},
},
},
}
)
want := hra.DeepCopy()
want.Spec.CapacityReservations = wantReservations
got, err := s.planBatchScale(context.Background(), op, hra, t2)
require.NoError(t, err)
require.Equal(t, want, got)
}
t.Run("scale up", func(t *testing.T) {
check(t, 1, expiry, []v1alpha1.CapacityReservation{
{
// This is kept based on t0 because it falls within maxReplicas
// i.e. the corresponding runner is assumed to be already deployed.
EffectiveTime: metav1.NewTime(t0),
ExpirationTime: metav1.NewTime(t0.Add(expiry)),
Replicas: 1,
},
{
// Updated from t1 to t2 because this reservation exceeded maxReplicas
EffectiveTime: metav1.NewTime(t2),
ExpirationTime: metav1.NewTime(t2.Add(expiry)),
Replicas: 1,
},
{
// This is based on t2 (=now) because it has been added just now.
EffectiveTime: metav1.NewTime(t2),
ExpirationTime: metav1.NewTime(t2.Add(expiry)),
Replicas: 1,
},
})
})
t.Run("scale up reuses previous scale trigger duration for extension", func(t *testing.T) {
newExpiry := expiry + time.Second
check(t, 1, newExpiry, []v1alpha1.CapacityReservation{
{
// This is kept based on t0 because it falls within maxReplicas
// i.e. the corresponding runner is assumed to be already deployed.
EffectiveTime: metav1.NewTime(t0),
ExpirationTime: metav1.NewTime(t0.Add(expiry)),
Replicas: 1,
},
{
// Updated from t1 to t2 because this reservation exceeded maxReplicas
EffectiveTime: metav1.NewTime(t2),
ExpirationTime: metav1.NewTime(t2.Add(expiry)),
Replicas: 1,
},
{
// This is based on t2 (=now) because it has been added just now.
EffectiveTime: metav1.NewTime(t2),
ExpirationTime: metav1.NewTime(t2.Add(newExpiry)),
Replicas: 1,
},
})
})
t.Run("scale down", func(t *testing.T) {
check(t, -1, expiry, []v1alpha1.CapacityReservation{
{
// Updated from t1 to t2 because this reservation exceeded maxReplicas
EffectiveTime: metav1.NewTime(t2),
ExpirationTime: metav1.NewTime(t2.Add(expiry)),
Replicas: 1,
},
})
})
t.Run("scale down is not affected by new scale trigger duration", func(t *testing.T) {
check(t, -1, expiry+time.Second, []v1alpha1.CapacityReservation{
{
// Updated from t1 to t2 because this reservation exceeded maxReplicas
EffectiveTime: metav1.NewTime(t2),
ExpirationTime: metav1.NewTime(t2.Add(expiry)),
Replicas: 1,
},
})
})
// TODO: Keep refreshing the expiry date even when there are no other scale down/up triggers before the expiration
t.Run("extension", func(t *testing.T) {
check(t, 0, expiry, []v1alpha1.CapacityReservation{
{
// This is kept based on t0 because it falls within maxReplicas
// i.e. the corresponding runner is assumed to be already deployed.
EffectiveTime: metav1.NewTime(t0),
ExpirationTime: metav1.NewTime(t0.Add(expiry)),
Replicas: 1,
},
{
// Updated from t1 to t2 because this reservation exceeded maxReplicas
EffectiveTime: metav1.NewTime(t2),
ExpirationTime: metav1.NewTime(t2.Add(expiry)),
Replicas: 1,
},
})
})
}
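Assuming the package sits under controllers/actions.summerwind.net (consistent with the package name above), the new table-driven test can be exercised in isolation with something like:
go test ./controllers/actions.summerwind.net/... -run TestPlanBatchScale -v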

View File

@@ -30,7 +30,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/go-logr/logr"
gogithub "github.com/google/go-github/v47/github"
gogithub "github.com/google/go-github/v52/github"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"

Some files were not shown because too many files have changed in this diff