Compare commits

..

63 Commits

Author SHA1 Message Date
mumoshu
e2d3489171 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-11-27 07:04:47 +00:00
mumoshu
90db051e3e Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-11-27 05:06:40 +00:00
mumoshu
41e135be59 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-08-30 06:02:42 +00:00
mumoshu
11938d728d Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-08-28 05:25:26 +00:00
Link-
bfd63fb6f9 Update index.yaml
Signed-off-by: Link- <Link-@users.noreply.github.com>
2023-05-12 13:19:52 +00:00
mumoshu
c70e760b19 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2023-04-17 13:01:55 +00:00
Link-
3c7e32eb9f Update index.yaml
Signed-off-by: Link- <Link-@users.noreply.github.com>
2023-04-06 12:05:57 +00:00
Yusuke Kuoka
ccc7b81b5c chart-repo: Update index.yaml to include 0.22.0 and 0.23.0 (#2452) 2023-03-30 05:22:21 -04:00
Jan van den Berg
701f8427a0 docs: typo + codefence fix (#2053)
* Little typo in PRIVATE_KEY_FILE_PATH.
* Code fence `shell script` does not work and does not render well, `shell` (just as above) does work.
2022-11-28 11:28:47 +00:00
toast-gear
20322eb2c9 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-10-25 19:13:34 +00:00
Yusuke Kuoka
72edcbba10 Show me as the verified publisher on ArtifactHub (#1815)
Ref #1502

See https://artifacthub.io/docs/topics/repositories/helm-charts/#helm-charts-repositories for more information on how ArtifactHub works for Helm charts and how we can set up this metadata file.

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
2022-09-20 18:49:15 +09:00
mumoshu
5396f9322b Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-09-13 00:10:15 +00:00
mumoshu
65b0cdc588 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-07-15 01:24:15 +00:00
mumoshu
b7dbf997ec Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-07-09 23:26:54 +00:00
mumoshu
c04f1daeab Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-07-05 02:20:10 +00:00
mumoshu
84b7abe2ce Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-06-15 02:35:41 +00:00
mumoshu
85422d15a8 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-06-03 13:02:23 +00:00
toast-gear
d3e9c43c34 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-29 12:55:04 +00:00
mumoshu
6a9b0d74fd Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-04-08 01:59:57 +00:00
toast-gear
3e1fbfa830 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-04-03 09:16:27 +00:00
toast-gear
c842be4501 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-03-29 06:47:43 +00:00
toast-gear
132867482f Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-03-16 07:58:36 +00:00
toast-gear
6a2a90164f Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2022-02-21 09:25:02 +00:00
mumoshu
f7952743e5 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-02-18 01:58:55 +00:00
mumoshu
1492a0d0f9 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-02-09 00:30:45 +00:00
mumoshu
26d9758452 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2022-02-08 03:57:19 +00:00
Callum Tait
6ac125f060 Revert "dics: typo (#1063)" (#1080)
This reverts commit 48af148297.
2022-01-28 22:28:44 +00:00
Ashith Wilson
48af148297 dics: typo (#1063) 2022-01-28 22:27:20 +00:00
toast-gear
818c1bd3dd Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-12-08 21:59:25 +00:00
toast-gear
a4c569f552 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-11-15 19:59:31 +00:00
toast-gear
55d5550ad4 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-10-18 21:06:27 +00:00
toast-gear
ddc29b1d38 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-10-02 09:05:47 +00:00
mumoshu
b684553da2 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-09-24 00:41:09 +00:00
mumoshu
ec8a74f219 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-09-15 00:39:32 +00:00
mumoshu
73e6a91de3 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-08-31 00:47:20 +00:00
mumoshu
e1c62ee5e5 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-07-03 06:17:45 +00:00
mumoshu
a192a76ca9 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-30 11:43:05 +00:00
mumoshu
a7d378ca09 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-30 00:54:28 +00:00
mumoshu
285cfd69cd Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-29 06:51:20 +00:00
mumoshu
c1fb952a94 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-27 07:51:38 +00:00
mumoshu
b1916a0e1a Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-11 00:21:50 +00:00
mumoshu
44972a284c Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-06-09 23:31:21 +00:00
toast-gear
6dd93508e7 Update index.yaml
Signed-off-by: toast-gear <toast-gear@users.noreply.github.com>
2021-06-08 18:25:40 +00:00
mumoshu
930efd244d Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-04-18 04:59:38 +00:00
mumoshu
60f577ea04 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-04-06 01:10:56 +00:00
mumoshu
31a16d3c2e Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-19 02:16:24 +00:00
mumoshu
c53a03372d Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-18 23:57:04 +00:00
mumoshu
e9caad7dec Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-18 01:37:25 +00:00
mumoshu
7a21693912 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-17 22:37:12 +00:00
mumoshu
942fc9fe00 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-16 01:53:11 +00:00
mumoshu
a2096046d5 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-14 01:22:24 +00:00
mumoshu
a7cb21605c Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-09 06:04:20 +00:00
mumoshu
c495ce47ed Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-05 11:28:36 +00:00
mumoshu
f1a1941455 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-03-03 00:22:03 +00:00
mumoshu
a19eab8382 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-26 00:27:48 +00:00
mumoshu
4ee7e5541f Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-18 11:21:46 +00:00
mumoshu
013d5bd2b2 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-17 23:44:33 +00:00
mumoshu
c1d36ebaef Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-16 08:17:08 +00:00
mumoshu
71eb2ae333 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-16 00:46:35 +00:00
mumoshu
dd1ad63ca9 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-02-07 07:47:05 +00:00
mumoshu
de7e37509c Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-01-29 00:30:24 +00:00
mumoshu
51918fecbe Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-01-25 00:15:39 +00:00
mumoshu
4fb7d154d6 Update index.yaml
Signed-off-by: mumoshu <mumoshu@users.noreply.github.com>
2021-01-24 06:37:49 +00:00
458 changed files with 15208 additions and 126735 deletions
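For reference, the same commit list and change summary can be reproduced locally with plain git. A minimal sketch, assuming the repository is cloned and `BASE` and `HEAD` stand in for the two compare endpoints (elided as `..` above):

```shell
# List the commits reachable from HEAD but not from BASE, newest first
git log --oneline BASE..HEAD

# Summarize changed files, additions, and deletions for the same range
git diff --shortstat BASE..HEAD
```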

View File

@@ -1,14 +0,0 @@
Makefile
acceptance
runner
hack
test-assets
config
charts
.github
.envrc
.env
*.md
*.txt
*.sh
test/e2e/.docker-build

View File

@@ -1,191 +0,0 @@
name: Bug Report
description: File a bug report
title: "<Please write what didn't work for you here>"
labels: ["bug", "needs triage"]
body:
- type: checkboxes
id: read-troubleshooting-guide
attributes:
label: Checks
description: Please check all the boxes below before submitting
options:
- label: I've already read https://github.com/actions/actions-runner-controller/blob/master/TROUBLESHOOTING.md and I'm sure my issue is not covered in the troubleshooting guide.
required: true
- label: I'm not using a custom entrypoint in my runner image
required: true
- type: input
id: controller-version
attributes:
label: Controller Version
description: Refer to semver-like release tags for controller versions. Any release tags prefixed with `actions-runner-controller-` are for chart releases
placeholder: ex. 0.18.2 or git commit ID
validations:
required: true
- type: input
id: chart-version
attributes:
label: Helm Chart Version
description: Run `helm list` and see what's shown under CHART VERSION. Any release tags prefixed with `actions-runner-controller-` are for chart releases
placeholder: ex. 0.11.0
- type: input
id: cert-manager-version
attributes:
label: CertManager Version
description: Run `kubectl get po -o yaml $CERT_MANAGER_POD` and see the image tag, or run `helm list` and see what's shown under APP VERSION for your cert-manager Helm release.
placeholder: ex. 1.8
- type: dropdown
id: deployment-method
attributes:
label: Deployment Method
description: Which deployment method did you use to install ARC?
options:
- Helm
- Kustomize
- ArgoCD
- Other
validations:
required: true
- type: textarea
id: cert-manager
attributes:
label: cert-manager installation
description: Confirm that you've installed cert-manager correctly by answering a few questions
placeholder: |
- Did you follow https://github.com/actions/actions-runner-controller#installation? If not, describe the installation process so that we can reproduce your environment.
- Are you sure you've installed cert-manager from an official source?
(Note that we won't provide user support for cert-manager itself. Make sure cert-manager is fully working before testing ARC or reporting a bug.)
validations:
required: true
- type: checkboxes
id: checks
attributes:
label: Checks
description: Please check all the boxes below before submitting
options:
- label: This isn't a question or user support case (For Q&A and community support, go to [Discussions](https://github.com/actions/actions-runner-controller/discussions). It might also be a good idea to contract with any of the contributors and maintainers if your business is critical and you therefore need priority support.)
required: true
- label: I've read [releasenotes](https://github.com/actions/actions-runner-controller/tree/master/docs/releasenotes) before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes
required: true
- label: My actions-runner-controller version (v0.x.y) does support the feature
required: true
- label: I've already upgraded ARC (including the CRDs, see charts/actions-runner-controller/docs/UPGRADING.md for details) to the latest and it didn't fix the issue
required: true
- label: I've migrated to the workflow job webhook event (if you're using webhook-driven scaling)
required: true
- type: textarea
id: resource-definitions
attributes:
label: Resource Definitions
description: "Add copy(s) of your resource definition(s) (RunnerDeployment or RunnerSet, and HorizontalRunnerAutoscaler. If RunnerSet, also include the StorageClass being used)"
render: yaml
placeholder: |
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example
spec:
#snip
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
name: example
spec:
#snip
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: example
provisioner: ...
reclaimPolicy: ...
volumeBindingMode: ...
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name:
spec:
#snip
validations:
required: true
- type: textarea
id: reproduction-steps
attributes:
label: To Reproduce
description: "Steps to reproduce the behavior"
render: markdown
placeholder: |
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
validations:
required: true
- type: textarea
id: actual-behavior
attributes:
label: Describe the bug
description: Also tell us, what did happen?
placeholder: A clear and concise description of what happened.
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Describe the expected behavior
description: Also tell us, what did you expect to happen?
placeholder: A clear and concise description of what the expected behavior is.
validations:
required: true
- type: textarea
id: controller-logs
attributes:
label: Whole Controller Logs
description: "NEVER EVER OMIT THIS! Include logs from `actions-runner-controller`'s controller-manager pod. Don't omit the parts you think irrelevant!"
render: shell
placeholder: |
PROVIDE THE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab controller logs:
# Set NS according to your setup
NS=actions-runner-system
# Grab the pod name and set it to $POD_NAME
kubectl -n $NS get po
kubectl -n $NS logs $POD_NAME > arc.log
validations:
required: true
- type: textarea
id: runner-pod-logs
attributes:
label: Whole Runner Pod Logs
description: "Include logs from runner pod(s). Please don't omit the parts you think irrelevant!"
render: shell
placeholder: |
PROVIDE THE WHOLE LOGS VIA A GIST LINK (https://gist.github.com/), NOT DIRECTLY IN THIS TEXT AREA
To grab the runner pod logs:
# Set NS according to your setup. It should match your RunnerDeployment's metadata.namespace.
NS=default
# Grab the name of the problematic runner pod and set it to $POD_NAME
kubectl -n $NS get po
kubectl -n $NS logs $POD_NAME -c runner > runnerpod_runner.log
kubectl -n $NS logs $POD_NAME -c docker > runnerpod_docker.log
If any of the containers are getting terminated immediately, try adding `--previous` to the `kubectl logs` command to obtain logs emitted before the termination.
validations:
required: true
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: |
Add any other context about the problem here.
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.

View File

@@ -1,14 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Sponsor ARC Maintainers
about: If your business relies on the continued maintenance of actions-runner-controller, please consider sponsoring the project and the maintainers.
url: https://github.com/actions/actions-runner-controller/tree/master/CODEOWNERS
- name: Ideas and Feature Requests
about: Wanna request a feature? Create a discussion and collect :+1:s first.
url: https://github.com/actions/actions-runner-controller/discussions/new?category=ideas
- name: Questions and User Support
about: Need support using ARC? We use Discussions as the place to provide community support.
url: https://github.com/actions/actions-runner-controller/discussions/new?category=questions
- name: Need Paid Support?
about: Consider contracting with any of the actions-runner-controller maintainers and contributors.
url: https://github.com/actions/actions-runner-controller/tree/master/CODEOWNERS

View File

@@ -1,21 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
labels: ["enhancement", "needs triage"]
title: ''
assignees: ''
---
### What would you like added?
*A clear and concise description of what you want to happen.*
Note: Feature requests to integrate vendor-specific cloud tools (e.g. `awscli`, `gcloud-sdk`, `azure-cli`) will likely be rejected as the Runner image aims to be vendor-agnostic.
### Why is this needed?
*A clear and concise description of any alternative solutions or features you've considered.*
### Additional context
*Add any other context or screenshots about the feature request here.*

View File

@@ -1,34 +0,0 @@
# Release Note Template
This is the template of actions-runner-controller's release notes.
Whenever a new release is made, I start by manually copy-pasting this template into the GitHub UI for creating the release.
I then walk through all the changes, take some time to think about the best one-sentence explanation of each change for the users, write it all up,
and click the publish button.
If you think you can improve future release notes in any way, please do submit a pull request to change the template below.
Note that even though it looks like a Go template, I don't use any templating to generate the changelog.
It's just that I'm used to reading and interpreting Go templates myself, rather than having a computer program do it :)
**Title**:
```
v{{ .Version }}: {{ .TitlesOfImportantChanges }}
```
**Body**:
```
**CAUTION:** If you're using the Helm chart, be sure to review changes to the CRDs and upgrade the CRDs manually! Helm installs CRDs only when installing a chart; it doesn't automatically upgrade CRDs. Otherwise you'll end up with troubles like #427, #467, and #468. Please refer to the [UPGRADING](charts/actions-runner-controller/docs/UPGRADING.md) docs for the latest process.
This release includes the following changes from contributors. Thank you!
- @{{ .GitHubUser }} fixed {{ .Feature }} to not break when ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} enhanced {{ .Feature }} to ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} added {{ .Feature }} for ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} fixed {{ .Topic }} in the documentation so that ... (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} added {{ .Topic }} to the documentation (#{{ .PullRequestNumber }})
- @{{ .GitHubUser }} improved the documentation about {{ .Topic }} to also cover ... (#{{ .PullRequestNumber }})
```

View File

@@ -1,160 +0,0 @@
name: 'Execute and Assert ARC E2E Test Action'
description: 'Queue an E2E test workflow run and assert that it succeeds'
inputs:
auth-token:
description: 'GitHub access token used to queue the workflow run'
required: true
repo-owner:
description: "The repository owner name that has the test workflow file, ex: actions"
required: true
repo-name:
description: "The repository name that has the test workflow file, ex: test"
required: true
workflow-file:
description: 'The file name of the workflow yaml, ex: test.yml'
required: true
arc-name:
description: 'The name of the configured gha-runner-scale-set'
required: true
arc-namespace:
description: 'The namespace of the configured gha-runner-scale-set'
required: true
arc-controller-namespace:
description: 'The namespace of the configured gha-runner-scale-set-controller'
required: true
runs:
using: "composite"
steps:
- name: Queue test workflow
shell: bash
id: queue_workflow
run: |
queue_time=`date +%FT%TZ`
echo "queue_time=$queue_time" >> $GITHUB_OUTPUT
curl -X POST https://api.github.com/repos/${{inputs.repo-owner}}/${{inputs.repo-name}}/actions/workflows/${{inputs.workflow-file}}/dispatches \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${{inputs.auth-token}}" \
-d '{"ref": "main", "inputs": { "arc_name": "${{inputs.arc-name}}" } }'
- name: Fetch workflow run & job ids
uses: actions/github-script@v6
id: query_workflow
with:
script: |
// Try to find the workflow run triggered by the previous step using the workflow_dispatch event:
// - Find recently created workflow runs in the test repository
// - For each workflow run, list its jobs and check whether a job's labels contain `inputs.arc-name`
// - Since inputs.arc-name should be unique per e2e workflow run, once we find a job with that label, we've found the workflow run we just triggered.
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms))
}
const owner = '${{inputs.repo-owner}}'
const repo = '${{inputs.repo-name}}'
const workflow_id = '${{inputs.workflow-file}}'
let workflow_run_id = 0
let workflow_job_id = 0
let workflow_run_html_url = ""
let count = 0
while (count++<12) {
await sleep(10 * 1000);
let listRunResponse = await github.rest.actions.listWorkflowRuns({
owner: owner,
repo: repo,
workflow_id: workflow_id,
created: '>${{steps.queue_workflow.outputs.queue_time}}'
})
if (listRunResponse.data.total_count > 0) {
console.log(`Found some new workflow runs for ${workflow_id}`)
for (let i = 0; i<listRunResponse.data.total_count; i++) {
let workflowRun = listRunResponse.data.workflow_runs[i]
console.log(`Check if workflow run ${workflowRun.id} is triggered by us.`)
let listJobResponse = await github.rest.actions.listJobsForWorkflowRun({
owner: owner,
repo: repo,
run_id: workflowRun.id
})
console.log(`Workflow run ${workflowRun.id} has ${listJobResponse.data.total_count} jobs.`)
if (listJobResponse.data.total_count > 0) {
for (let j = 0; j<listJobResponse.data.total_count; j++) {
let workflowJob = listJobResponse.data.jobs[j]
console.log(`Check if workflow job ${workflowJob.id} is triggered by us.`)
console.log(JSON.stringify(workflowJob.labels));
if (workflowJob.labels.includes('${{inputs.arc-name}}')) {
console.log(`Workflow job ${workflowJob.id} (Run id: ${workflowJob.run_id}) is triggered by us.`)
workflow_run_id = workflowJob.run_id
workflow_job_id = workflowJob.id
workflow_run_html_url = workflowRun.html_url
break
}
}
}
if (workflow_job_id > 0) {
break;
}
}
}
if (workflow_job_id > 0) {
break;
}
}
if (workflow_job_id == 0) {
core.setFailed(`Can't find a workflow run and job triggered with 'runs-on: ${{inputs.arc-name}}'`)
} else {
core.setOutput('workflow_run', workflow_run_id);
core.setOutput('workflow_job', workflow_job_id);
core.setOutput('workflow_run_url', workflow_run_html_url);
}
- name: Generate summary about the triggered workflow run
shell: bash
run: |
cat <<-EOF > $GITHUB_STEP_SUMMARY
| **Triggered workflow run** |
|:--------------------------:|
| ${{steps.query_workflow.outputs.workflow_run_url}} |
EOF
- name: Wait for workflow to finish successfully
uses: actions/github-script@v6
with:
script: |
// Wait up to 5 minutes and make sure the workflow run we triggered completed with result 'success'
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms))
}
const owner = '${{inputs.repo-owner}}'
const repo = '${{inputs.repo-name}}'
const workflow_run_id = ${{steps.query_workflow.outputs.workflow_run}}
const workflow_job_id = ${{steps.query_workflow.outputs.workflow_job}}
let count = 0
while (count++<10) {
await sleep(30 * 1000);
let getRunResponse = await github.rest.actions.getWorkflowRun({
owner: owner,
repo: repo,
run_id: workflow_run_id
})
console.log(`${getRunResponse.data.html_url}: ${getRunResponse.data.status} (${getRunResponse.data.conclusion})`);
if (getRunResponse.data.status == 'completed') {
if ( getRunResponse.data.conclusion == 'success') {
console.log(`Workflow run finished properly.`)
return
} else {
core.setFailed(`The triggered workflow run finished with result ${getRunResponse.data.conclusion}`)
return
}
}
}
core.setFailed(`The triggered workflow run didn't finish in time using ${{inputs.arc-name}}`)
- name: Gather logs and cleanup
shell: bash
if: always()
run: |
helm uninstall ${{ inputs.arc-name }} --namespace ${{inputs.arc-namespace}} --debug
kubectl wait --timeout=10s --for=delete AutoScalingRunnerSet -n ${{inputs.arc-namespace}} -l app.kubernetes.io/instance=${{ inputs.arc-name }}
kubectl logs deployment/arc-gha-runner-scale-set-controller -n ${{inputs.arc-controller-namespace}}

View File

@@ -1,63 +0,0 @@
name: 'Setup ARC E2E Test Action'
description: 'Build the controller image, create a minikube cluster, load the image, and exchange the ARC configuration token.'
inputs:
app-id:
description: 'GitHub App ID used to exchange for an access token'
required: true
app-pk:
description: "GitHub App private key for exchange access token"
required: true
image-name:
description: "Local docker image name for building"
required: true
image-tag:
description: "Tag of ARC Docker image for building"
required: true
target-org:
description: "The test organization for ARC e2e test"
required: true
outputs:
token:
description: 'Token to use for configuring ARC'
value: ${{steps.config-token.outputs.token}}
runs:
using: "composite"
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
# Pinning Buildx v0.9.1 and BuildKit v0.10.6:
# BuildKit v0.11 has a bug causing intermittent
# failures when pushing images to GHCR
version: v0.9.1
driver-opts: image=moby/buildkit:v0.10.6
- name: Build controller image
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=${{inputs.image-name}}
VERSION=${{inputs.image-tag}}
tags: |
${{inputs.image-name}}:${{inputs.image-tag}}
no-cache: true
- name: Create minikube cluster and load image
shell: bash
run: |
minikube start
minikube image load ${{inputs.image-name}}:${{inputs.image-tag}}
- name: Get configure token
id: config-token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ inputs.app-id }}
application_private_key: ${{ inputs.app-pk }}
organization: ${{ inputs.target-org}}

View File

@@ -1,47 +0,0 @@
name: "Setup Docker"
inputs:
username:
description: "Username"
required: true
password:
description: "Password"
required: true
ghcr_username:
description: "GHCR username. Usually set from the github.actor variable"
required: true
ghcr_password:
description: "GHCR password. Usually set from the secrets.GITHUB_TOKEN variable"
required: true
runs:
using: "composite"
steps:
- name: Get Short SHA
id: vars
run: |
echo "sha_short=${GITHUB_SHA::7}" >> $GITHUB_ENV
shell: bash
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: latest
- name: Login to DockerHub
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' && inputs.password != '' }}
uses: docker/login-action@v2
with:
username: ${{ inputs.username }}
password: ${{ inputs.password }}
- name: Login to GitHub Container Registry
if: ${{ github.event_name == 'release' || github.event_name == 'push' && github.ref == 'refs/heads/master' && inputs.ghcr_password != '' }}
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ inputs.ghcr_username }}
password: ${{ inputs.ghcr_password }}

View File

@@ -1,11 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "gomod" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"

64 .github/workflows/build-runner.yml vendored Normal file
View File

@@ -0,0 +1,64 @@
on:
pull_request:
branches:
- '**'
paths:
- 'runner/**'
- .github/workflows/build-runner.yml
push:
branches:
- master
paths:
- runner/patched/*
- runner/Dockerfile
- runner/dindrunner.Dockerfile
- runner/entrypoint.sh
- .github/workflows/build-runner.yml
name: Runner
jobs:
build:
runs-on: ubuntu-latest
name: Build ${{ matrix.name }}
strategy:
matrix:
include:
- name: actions-runner
dockerfile: Dockerfile
- name: actions-runner-dind
dockerfile: dindrunner.Dockerfile
env:
RUNNER_VERSION: 2.275.1
DOCKER_VERSION: 19.03.12
DOCKERHUB_USERNAME: ${{ github.repository_owner }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
if: ${{ github.event_name == 'push' }}
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
- name: Build [and Push]
uses: docker/build-push-action@v2
with:
context: ./runner
file: ./runner/${{ matrix.dockerfile }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
build-args: |
RUNNER_VERSION=${{ env.RUNNER_VERSION }}
DOCKER_VERSION=${{ env.DOCKER_VERSION }}
tags: |
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:latest

View File

@@ -1,705 +0,0 @@
name: CI ARC E2E Linux VM Test
on:
push:
branches:
- master
pull_request:
branches:
- master
workflow_dispatch:
permissions:
contents: read
env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: arc_e2e_test_dummy
IMAGE_NAME: "arc-test-image"
IMAGE_VERSION: "dev"
jobs:
default-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
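# Poll for up to 60s until the controller pod exists; kubectl wait alone would fail
# immediately if no pod matches the label selector yet.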
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
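# Compose a quasi-unique release name; it doubles as the scale set name that the test workflow targets via runs-on.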
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
single-namespace-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
kubectl create namespace arc-runners
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
--set flags.watchSingleNamespace=arc-runners \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
dind-mode-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: arc-test-dind-workflow.yaml
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="dind" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
kubernetes-mode-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-kubernetes-workflow.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
echo "Install openebs/dynamic-localpv-provisioner"
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace
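# openebs provides the dynamic local PV provisioner behind the openebs-hostpath StorageClass used for the kubernetes-mode work volume below.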
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
kubectl wait --timeout=30s --for=condition=ready pod -n openebs -l name=openebs-localpv-provisioner
- name: Install gha-runner-scale-set
id: install_arc
run: |
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set containerMode.type="kubernetes" \
--set containerMode.kubernetesModeWorkVolumeClaim.accessModes={"ReadWriteOnce"} \
--set containerMode.kubernetesModeWorkVolumeClaim.storageClassName="openebs-hostpath" \
--set containerMode.kubernetesModeWorkVolumeClaim.resources.requests.storage="1Gi" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
auth-proxy-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--name squid \
--publish 3128:3128 \
huangtingluo/squid-proxy:latest
kubectl create namespace arc-runners
kubectl create secret generic proxy-auth \
--namespace=arc-runners \
--from-literal=username=github \
--from-literal=password='actions'
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set proxy.https.credentialSecretRef="proxy-auth" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
anonymous-proxy-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--name squid \
--publish 3128:3128 \
ubuntu/squid:latest
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:3128" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"
self-signed-ca-setup:
runs-on: ubuntu-latest
timeout-minutes: 20
if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.id == github.repository_id
env:
WORKFLOW_FILE: "arc-test-workflow.yaml"
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.head_ref}}
- uses: ./.github/actions/setup-arc-e2e
id: setup
with:
app-id: ${{secrets.E2E_TESTS_ACCESS_APP_ID}}
app-pk: ${{secrets.E2E_TESTS_ACCESS_PK}}
image-name: ${{env.IMAGE_NAME}}
image-tag: ${{env.IMAGE_VERSION}}
target-org: ${{env.TARGET_ORG}}
- name: Install gha-runner-scale-set-controller
id: install_arc_controller
run: |
helm install arc \
--namespace "arc-systems" \
--create-namespace \
--set image.repository=${{ env.IMAGE_NAME }} \
--set image.tag=${{ env.IMAGE_VERSION }} \
./charts/gha-runner-scale-set-controller \
--debug
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for controller pod with label app.kubernetes.io/name=gha-runner-scale-set-controller"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
kubectl get pod -n arc-systems
kubectl describe deployment arc-gha-runner-scale-set-controller -n arc-systems
- name: Install gha-runner-scale-set
id: install_arc
run: |
docker run -d \
--rm \
--name mitmproxy \
--publish 8080:8080 \
-v ${{ github.workspace }}/mitmproxy:/home/mitmproxy/.mitmproxy \
mitmproxy/mitmproxy:latest \
mitmdump
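# mitmdump generates a CA certificate under ~/.mitmproxy on first start;
# poll until it shows up on the mounted host volume.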
count=0
while true; do
if [ -f "${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem" ]; then
echo "CA cert generated"
cat ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for mitmproxy generate its CA cert"
exit 1
fi
sleep 1
count=$((count+1))
done
sudo cp ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.pem ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt
sudo chown runner ${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt
kubectl create namespace arc-runners
kubectl -n arc-runners create configmap ca-cert --from-file="${{ github.workspace }}/mitmproxy/mitmproxy-ca-cert.crt"
kubectl -n arc-runners get configmap ca-cert -o yaml
ARC_NAME=${{github.job}}-$(date +'%M%S')$((($RANDOM + 100) % 100 + 1))
helm install "$ARC_NAME" \
--namespace "arc-runners" \
--create-namespace \
--set githubConfigUrl="https://github.com/${{ env.TARGET_ORG }}/${{env.TARGET_REPO}}" \
--set githubConfigSecret.github_token="${{ steps.setup.outputs.token }}" \
--set proxy.https.url="http://host.minikube.internal:8080" \
--set "proxy.noProxy[0]=10.96.0.1:443" \
--set "githubServerTLS.certificateFrom.configMapKeyRef.name=ca-cert" \
--set "githubServerTLS.certificateFrom.configMapKeyRef.key=mitmproxy-ca-cert.crt" \
--set "githubServerTLS.runnerMountPath=/usr/local/share/ca-certificates/" \
./charts/gha-runner-scale-set \
--debug
echo "ARC_NAME=$ARC_NAME" >> $GITHUB_OUTPUT
count=0
while true; do
POD_NAME=$(kubectl get pods -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME -o name)
if [ -n "$POD_NAME" ]; then
echo "Pod found: $POD_NAME"
break
fi
if [ "$count" -ge 60 ]; then
echo "Timeout waiting for listener pod with label actions.github.com/scale-set-name=$ARC_NAME"
exit 1
fi
sleep 1
count=$((count+1))
done
kubectl wait --timeout=30s --for=condition=ready pod -n arc-systems -l actions.github.com/scale-set-name=$ARC_NAME
kubectl get pod -n arc-systems
- name: Test ARC E2E
uses: ./.github/actions/execute-assert-arc-e2e
timeout-minutes: 10
with:
auth-token: ${{ steps.setup.outputs.token }}
repo-owner: ${{ env.TARGET_ORG }}
repo-name: ${{env.TARGET_REPO}}
workflow-file: ${{env.WORKFLOW_FILE}}
arc-name: ${{steps.install_arc.outputs.ARC_NAME}}
arc-namespace: "arc-runners"
arc-controller-namespace: "arc-systems"

View File

@@ -1,80 +0,0 @@
name: Go
on:
push:
branches:
- master
paths:
- '.github/workflows/go.yaml'
- '**.go'
- 'go.mod'
- 'go.sum'
pull_request:
paths:
- '.github/workflows/go.yaml'
- '**.go'
- 'go.mod'
- 'go.sum'
permissions:
contents: read
jobs:
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: fmt
run: go fmt ./...
- name: Check diff
run: git diff --exit-code
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
only-new-issues: true
version: v1.51.1
generate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
cache: false
- name: Generate
run: make generate
- name: Check diff
run: git diff --exit-code
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
- run: make manifests
- name: Check diff
run: git diff --exit-code
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_linux_amd64.tar.gz
tar zxvf kubebuilder_2.3.2_linux_amd64.tar.gz
sudo mv kubebuilder_2.3.2_linux_amd64 /usr/local/kubebuilder
- name: Run go tests
run: |
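# -short skips long-running tests, and the e2e package is excluded from the unit test run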
go test -short `go list ./... | grep -v ./test_e2e_arc`

View File

@@ -1,105 +0,0 @@
name: Publish ARC
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
release:
types:
- published
workflow_dispatch:
inputs:
release_tag_name:
description: 'Tag name of the release to publish'
required: true
push_to_registries:
description: 'Push images to registries'
required: true
type: boolean
default: false
permissions:
contents: write
packages: write
env:
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
jobs:
release-controller:
name: Release
runs-on: ubuntu-latest
# gha-runner-scale-set has its own release workflow.
# We don't want to publish a new actions-runner-controller image
# when we release gha-runner-scale-set.
if: ${{ !startsWith(github.event.inputs.release_tag_name, 'gha-runner-scale-set-') }}
steps:
- name: Checkout
uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: '1.18.2'
- name: Install tools
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash
sudo mv kustomize /usr/local/bin
curl -L -O https://github.com/tcnksm/ghr/releases/download/v0.13.0/ghr_v0.13.0_linux_amd64.tar.gz
tar zxvf ghr_v0.13.0_linux_amd64.tar.gz
sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
- name: Set version env variable
run: |
# Define the release tag name based on the event type
if [[ "${{ github.event_name }}" == "release" ]]; then
echo "VERSION=$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')" >> $GITHUB_ENV
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "VERSION=${{ inputs.release_tag_name }}" >> $GITHUB_ENV
fi
- name: Upload artifacts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
make github-release
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.TARGET_ORG }}
- name: Resolve push to registries
run: |
# Define the push to registries based on the event type
if [[ "${{ github.event_name }}" == "release" ]]; then
echo "PUSH_TO_REGISTRIES=true" >> $GITHUB_ENV
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "PUSH_TO_REGISTRIES=${{ inputs.push_to_registries }}" >> $GITHUB_ENV
fi
- name: Trigger Build And Push Images To Registries
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
# Trigger the workflow run
jq -n '{"event_type": "arc", "client_payload": {"release_tag_name": "${{ env.VERSION }}", "push_to_registries": "${{ env.PUSH_TO_REGISTRIES }}" }}' \
| gh api -X POST /repos/actions-runner-controller/releases/dispatches --input -
- name: Job summary
run: |
echo "The [publish-arc](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/publish-arc.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Release tag: ${{ env.VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- Push to registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/publish-arc.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/publish-arc.yaml)" >> $GITHUB_STEP_SUMMARY

View File

@@ -1,129 +0,0 @@
name: Publish Canary Image
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
push:
branches:
- master
paths-ignore:
- '**.md'
- '.github/actions/**'
- '.github/ISSUE_TEMPLATE/**'
- '.github/workflows/e2e-test-dispatch-workflow.yaml'
- '.github/workflows/e2e-test-linux-vm.yaml'
- '.github/workflows/publish-arc.yaml'
- '.github/workflows/publish-chart.yaml'
- '.github/workflows/publish-runner-scale-set.yaml'
- '.github/workflows/release-runners.yaml'
- '.github/workflows/run-codeql.yaml'
- '.github/workflows/run-first-interaction.yaml'
- '.github/workflows/run-stale.yaml'
- '.github/workflows/update-runners.yaml'
- '.github/workflows/validate-arc.yaml'
- '.github/workflows/validate-chart.yaml'
- '.github/workflows/validate-gha-chart.yaml'
- '.github/workflows/validate-runners.yaml'
- '.github/dependabot.yml'
- '.github/RELEASE_NOTE_TEMPLATE.md'
- 'runner/**'
- '.gitignore'
- 'PROJECT'
- 'LICENSE'
- 'Makefile'
# https://docs.github.com/en/rest/overview/permissions-required-for-github-apps
permissions:
contents: read
packages: write
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
jobs:
legacy-canary-build:
name: Build and Publish Legacy Canary Image
runs-on: ubuntu-latest
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
TARGET_ORG: actions-runner-controller
TARGET_REPO: actions-runner-controller
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.TARGET_ORG }}
- name: Trigger Build And Push Images To Registries
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
# Trigger the workflow run
jq -n '{"event_type": "canary", "client_payload": {"sha": "${{ github.sha }}", "push_to_registries": ${{ env.PUSH_TO_REGISTRIES }}}}' \
| gh api -X POST /repos/actions-runner-controller/releases/dispatches --input -
- name: Job summary
run: |
echo "The [publish-canary](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/publish-canary.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- sha: ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
echo "- Push to registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/publish-canary.yaml)" >> $GITHUB_STEP_SUMMARY
canary-build:
name: Build and Publish gha-runner-scale-set-controller Canary Image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Normalization is needed because upper case characters are not allowed in the repository name
# and the short sha is needed for image tagging
- name: Resolve parameters
id: resolve_parameters
run: |
echo "INFO: Resolving short sha"
echo "short_sha=$(git rev-parse --short ${{ github.ref }})" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
version: latest
# Unstable builds - run at your own risk
- name: Build and Push
uses: docker/build-push-action@v3
with:
context: .
file: ./Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=canary-"${{ github.ref }}"
push: ${{ env.PUSH_TO_REGISTRIES }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:canary-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max

View File

@@ -1,207 +0,0 @@
name: Publish Helm Chart
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
push:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/publish-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
- '!**.md'
workflow_dispatch:
inputs:
force:
description: 'Force publish even if the chart version is not bumped'
type: boolean
required: true
default: false
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.8.0
permissions:
contents: write
jobs:
lint-chart:
name: Lint Chart
runs-on: ubuntu-latest
outputs:
publish-chart: ${{ steps.publish-chart-step.outputs.publish }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
uses: azure/setup-helm@v3.4
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score - --ignore-test pod-networkpolicy --ignore-test deployment-has-poddisruptionbudget --ignore-test deployment-has-host-podantiaffinity --ignore-test container-security-context --ignore-test pod-probes --ignore-test container-image-tag --enable-optional-test container-security-context-privileged --enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
if: steps.list-changed.outputs.changed == 'true'
uses: helm/kind-action@v1.4.0
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
if: steps.list-changed.outputs.changed == 'true'
run: |
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: ct install --config charts/.ci/ct-config.yaml
# WARNING: This relies on the latest release being at the top of the JSON from GitHub and a clean chart.yaml
- name: Check if Chart Publish is Needed
id: publish-chart-step
run: |
CHART_TEXT=$(curl -fs https://raw.githubusercontent.com/${{ github.repository }}/master/charts/actions-runner-controller/Chart.yaml)
NEW_CHART_VERSION=$(echo "$CHART_TEXT" | grep version: | cut -d ' ' -f 2)
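# List the releases, keep only actions-runner-controller chart tags, and extract the chart version (the 4th dash-separated field)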
RELEASE_LIST=$(curl -fs https://api.github.com/repos/${{ github.repository }}/releases | jq .[].tag_name | grep actions-runner-controller | cut -d '"' -f 2 | cut -d '-' -f 4)
LATEST_RELEASED_CHART_VERSION=$(echo $RELEASE_LIST | cut -d ' ' -f 1)
echo "CHART_VERSION_IN_MASTER=$NEW_CHART_VERSION" >> $GITHUB_ENV
echo "LATEST_CHART_VERSION=$LATEST_RELEASED_CHART_VERSION" >> $GITHUB_ENV
# Always publish if force is true
if [[ $NEW_CHART_VERSION != $LATEST_RELEASED_CHART_VERSION || "${{ inputs.force }}" == "true" ]]; then
echo "publish=true" >> $GITHUB_OUTPUT
else
echo "publish=false" >> $GITHUB_OUTPUT
fi
- name: Job summary
run: |
echo "Chart linting has been completed." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- chart version in master: ${{ env.CHART_VERSION_IN_MASTER }}" >> $GITHUB_STEP_SUMMARY
echo "- latest chart version: ${{ env.LATEST_CHART_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- publish new chart: ${{ steps.publish-chart-step.outputs.publish }}" >> $GITHUB_STEP_SUMMARY
publish-chart:
if: needs.lint-chart.outputs.publish-chart == 'true'
needs: lint-chart
name: Publish Chart
runs-on: ubuntu-latest
permissions:
contents: write # for helm/chart-releaser-action to push chart release and create a release
env:
CHART_TARGET_ORG: actions-runner-controller
CHART_TARGET_REPO: actions-runner-controller.github.io
CHART_TARGET_BRANCH: master
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.CHART_TARGET_ORG }}
- name: Install chart-releaser
uses: helm/chart-releaser-action@v1.4.1
with:
install_only: true
install_dir: ${{ github.workspace }}/bin
- name: Package and upload release assets
run: |
cr package \
${{ github.workspace }}/charts/actions-runner-controller/ \
--package-path .cr-release-packages
cr upload \
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--package-path .cr-release-packages \
--token ${{ secrets.GITHUB_TOKEN }}
- name: Generate updated index.yaml
run: |
cr index \
--owner "$(echo ${{ github.repository }} | cut -d '/' -f 1)" \
--git-repo "$(echo ${{ github.repository }} | cut -d '/' -f 2)" \
--index-path ${{ github.workspace }}/index.yaml \
--push \
--pages-branch 'gh-pages' \
--pages-index-path 'index.yaml'
# Chart Release was never intended to publish to a different repo
# this workaround is intended to move the index.yaml to the target repo
# where the github pages are hosted
- name: Checkout target repository
uses: actions/checkout@v3
with:
repository: ${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}
path: ${{ env.CHART_TARGET_REPO }}
ref: ${{ env.CHART_TARGET_BRANCH }}
token: ${{ steps.get_workflow_token.outputs.token }}
- name: Copy index.yaml
run: |
cp ${{ github.workspace }}/index.yaml ${{ env.CHART_TARGET_REPO }}/actions-runner-controller/index.yaml
- name: Commit and push to target repository
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
git add .
git commit -m "Update index.yaml"
git push
working-directory: ${{ github.workspace }}/${{ env.CHART_TARGET_REPO }}
- name: Job summary
run: |
echo "New helm chart has been published" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "- New [index.yaml](https://github.com/${{ env.CHART_TARGET_ORG }}/${{ env.CHART_TARGET_REPO }}/tree/main/actions-runner-controller) pushed" >> $GITHUB_STEP_SUMMARY

View File

@@ -1,201 +0,0 @@
name: Publish Runner Scale Set Controller Charts
on:
workflow_dispatch:
inputs:
ref:
description: 'The branch, tag or SHA to cut a release from'
required: false
type: string
default: ''
release_tag_name:
description: 'The name to tag the controller image with'
required: true
type: string
default: 'canary'
push_to_registries:
description: 'Push images to registries'
required: true
type: boolean
default: false
publish_gha_runner_scale_set_controller_chart:
description: 'Publish new helm chart for gha-runner-scale-set-controller'
required: true
type: boolean
default: false
publish_gha_runner_scale_set_chart:
description: 'Publish new helm chart for gha-runner-scale-set'
required: true
type: boolean
default: false
env:
HELM_VERSION: v3.8.0
permissions:
packages: write
jobs:
build-push-image:
name: Build and push controller image
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
# Pinning v0.9.1 for Buildx and BuildKit v0.10.6,
# because BuildKit v0.11 has a bug causing intermittent
# failures when pushing images to GHCR
version: v0.9.1
driver-opts: image=moby/buildkit:v0.10.6
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build & push controller image
uses: docker/build-push-action@v3
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
build-args: VERSION=${{ inputs.release_tag_name }}
push: ${{ inputs.push_to_registries }}
tags: |
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}
ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/gha-runner-scale-set-controller:${{ inputs.release_tag_name }}-${{ steps.resolve_parameters.outputs.short_sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Job summary
run: |
echo "The [publish-runner-scale-set.yaml](https://github.com/actions/actions-runner-controller/blob/main/.github/workflows/publish-runner-scale-set.yaml) workflow run was completed successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- Release tag: ${{ inputs.release_tag_name }}" >> $GITHUB_STEP_SUMMARY
echo "- Push to registries: ${{ inputs.push_to_registries }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-gha-runner-scale-set-controller:
if: ${{ inputs.publish_gha_runner_scale_set_controller_chart == true }}
needs: build-push-image
name: Publish Helm chart for gha-runner-scale-set-controller
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for gha-runner-scale-set-controller
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set-controller/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set-controller/ --version="${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-controller-"${GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for gha-runner-scale-set-controller published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set-controller Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CONTROLLER_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY
publish-helm-chart-gha-runner-scale-set:
if: ${{ inputs.publish_gha_runner_scale_set_chart == true }}
needs: build-push-image
name: Publish Helm chart for gha-runner-scale-set
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
# If inputs.ref is empty, it'll resolve to the default branch
ref: ${{ inputs.ref }}
- name: Resolve parameters
id: resolve_parameters
run: |
resolvedRef="${{ inputs.ref }}"
if [ -z "$resolvedRef" ]
then
resolvedRef="${{ github.ref }}"
fi
echo "resolved_ref=$resolvedRef" >> $GITHUB_OUTPUT
echo "INFO: Resolving short SHA for $resolvedRef"
echo "short_sha=$(git rev-parse --short $resolvedRef)" >> $GITHUB_OUTPUT
echo "INFO: Normalizing repository name (lowercase)"
echo "repository_owner=$(echo ${{ github.repository_owner }} | tr '[:upper:]' '[:lower:]')" >> $GITHUB_OUTPUT
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Publish new helm chart for gha-runner-scale-set
run: |
echo ${{ secrets.GITHUB_TOKEN }} | helm registry login ghcr.io --username ${{ github.actor }} --password-stdin
GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=$(cat charts/gha-runner-scale-set/Chart.yaml | grep version: | cut -d " " -f 2)
echo "GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG=${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}" >> $GITHUB_ENV
helm package charts/gha-runner-scale-set/ --version="${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}"
helm push gha-runner-scale-set-"${GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG}".tgz oci://ghcr.io/${{ steps.resolve_parameters.outputs.repository_owner }}/actions-runner-controller-charts
- name: Job summary
run: |
echo "New helm chart for gha-runner-scale-set published successfully!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- Ref: ${{ steps.resolve_parameters.outputs.resolvedRef }}" >> $GITHUB_STEP_SUMMARY
echo "- Short SHA: ${{ steps.resolve_parameters.outputs.short_sha }}" >> $GITHUB_STEP_SUMMARY
echo "- gha-runner-scale-set Chart version: ${{ env.GHA_RUNNER_SCALE_SET_CHART_VERSION_TAG }}" >> $GITHUB_STEP_SUMMARY

View File

@@ -1,72 +0,0 @@
name: Runners
# Refer to https://github.com/actions-runner-controller/releases#releases
# for details on why we use this approach
on:
# We must do a trigger on a push: instead of a types: closed so GitHub Secrets
# are available to the workflow run
push:
branches:
- 'master'
paths:
- 'runner/VERSION'
- '.github/workflows/release-runners.yaml'
env:
# Safeguard to prevent pushing images to registries after build
PUSH_TO_REGISTRIES: true
TARGET_ORG: actions-runner-controller
TARGET_WORKFLOW: release-runners.yaml
DOCKER_VERSION: 20.10.23
RUNNER_CONTAINER_HOOKS_VERSION: 0.2.0
jobs:
build-runners:
name: Trigger Build and Push of Runner Images
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Get runner version
id: runner_version
run: |
version=$(echo -n $(cat runner/VERSION))
echo runner_version=$version >> $GITHUB_OUTPUT
- name: Get Token
id: get_workflow_token
uses: peter-murray/workflow-application-token-action@8e1ba3bf1619726336414f1014e37f17fbadf1db
with:
application_id: ${{ secrets.ACTIONS_ACCESS_APP_ID }}
application_private_key: ${{ secrets.ACTIONS_ACCESS_PK }}
organization: ${{ env.TARGET_ORG }}
- name: Trigger Build And Push Runner Images To Registries
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
run: |
# Authenticate
gh auth login --with-token <<< ${{ steps.get_workflow_token.outputs.token }}
# Trigger the workflow run
gh workflow run ${{ env.TARGET_WORKFLOW }} -R ${{ env.TARGET_ORG }}/releases \
-f runner_version=${{ env.RUNNER_VERSION }} \
-f docker_version=${{ env.DOCKER_VERSION }} \
-f runner_container_hooks_version=${{ env.RUNNER_CONTAINER_HOOKS_VERSION }} \
-f sha='${{ github.sha }}' \
-f push_to_registries=${{ env.PUSH_TO_REGISTRIES }}
- name: Job summary
env:
RUNNER_VERSION: ${{ steps.runner_version.outputs.runner_version }}
run: |
echo "The [release-runners.yaml](https://github.com/actions-runner-controller/releases/blob/main/.github/workflows/release-runners.yaml) workflow has been triggered!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Parameters:**" >> $GITHUB_STEP_SUMMARY
echo "- runner_version: ${{ env.RUNNER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- docker_version: ${{ env.DOCKER_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- runner_container_hooks_version: ${{ env.RUNNER_CONTAINER_HOOKS_VERSION }}" >> $GITHUB_STEP_SUMMARY
echo "- sha: ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
echo "- push_to_registries: ${{ env.PUSH_TO_REGISTRIES }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Status:**" >> $GITHUB_STEP_SUMMARY
echo "[https://github.com/actions-runner-controller/releases/actions/workflows/release-runners.yaml](https://github.com/actions-runner-controller/releases/actions/workflows/release-runners.yaml)" >> $GITHUB_STEP_SUMMARY

56
.github/workflows/release.yml vendored Normal file
View File

@@ -0,0 +1,56 @@
on:
release:
types: [published]
jobs:
build:
runs-on: ubuntu-latest
name: Release
env:
DOCKERHUB_USERNAME: ${{ github.repository_owner }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install tools
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash
sudo mv kustomize /usr/local/bin
curl -L -O https://github.com/tcnksm/ghr/releases/download/v0.13.0/ghr_v0.13.0_linux_amd64.tar.gz
tar zxvf ghr_v0.13.0_linux_amd64.tar.gz
sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
- name: Set version
run: echo "VERSION=$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')" >> $GITHUB_ENV
- name: Upload artifacts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: make github-release
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
- name: Build and Push
uses: docker/build-push-action@v2
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}

View File

@@ -1,32 +0,0 @@
name: Run CodeQL
on:
push:
branches:
- master
pull_request:
branches:
- master
schedule:
- cron: '30 1 * * 0'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v2
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2

View File

@@ -1,29 +0,0 @@
name: first-interaction
on:
issues:
types: [opened]
pull_request:
branches: [master]
types: [opened]
jobs:
check_for_first_interaction:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/first-interaction@main
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-message: |
Hello! Thank you for filing an issue.
The maintainers will triage your issue shortly.
In the meantime, please take a look at the [troubleshooting guide](https://github.com/actions/actions-runner-controller/blob/master/TROUBLESHOOTING.md) for bug reports.
If this is a feature request, please review our [contribution guidelines](https://github.com/actions/actions-runner-controller/blob/master/CONTRIBUTING.md).
pr-message: |
Hello! Thank you for your contribution.
Please review our [contribution guidelines](https://github.com/actions/actions-runner-controller/blob/master/CONTRIBUTING.md) to understand the project's testing and code conventions.

View File

@@ -1,25 +0,0 @@
name: Run Stale Bot
on:
schedule:
- cron: '30 1 * * *'
permissions:
contents: read
jobs:
stale:
name: Run Stale
runs-on: ubuntu-latest
permissions:
issues: write # for actions/stale to close stale issues
pull-requests: write # for actions/stale to close stale PRs
steps:
- uses: actions/stale@v6
with:
stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
# turn off stale for both issues and PRs
days-before-stale: -1
# turn stale back on for issues only
days-before-issue-stale: 30
days-before-issue-close: 14
exempt-issue-labels: 'pinned,security,enhancement,refactor,documentation,chore,bug,dependencies,needs-investigation'

27
.github/workflows/test.yaml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: CI
on:
pull_request:
branches:
- master
paths-ignore:
- 'runner/**'
jobs:
test:
runs-on: ubuntu-latest
name: Test
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Install kubebuilder
run: |
curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
- name: Run tests
run: make test
- name: Verify manifests are up-to-date
run: |
make manifests
git diff --exit-code

View File

@@ -1,109 +0,0 @@
# This workflow polls releases from actions/runner and, when a new one is
# found, updates the files containing the runner version and opens a pull request.
name: Update runners
on:
schedule:
# run daily
- cron: "0 9 * * *"
workflow_dispatch:
jobs:
# check_versions compares our current version and the latest available runner
# version and sets them as outputs.
check_versions:
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
outputs:
current_version: ${{ steps.versions.outputs.current_version }}
latest_version: ${{ steps.versions.outputs.latest_version }}
steps:
- uses: actions/checkout@v3
- name: Get current and latest versions
id: versions
run: |
CURRENT_VERSION=$(echo -n $(cat runner/VERSION))
echo "Current version: $CURRENT_VERSION"
echo current_version=$CURRENT_VERSION >> $GITHUB_OUTPUT
LATEST_VERSION=$(gh release list --exclude-drafts --exclude-pre-releases --limit 1 -R actions/runner | grep -oP '(?<=v)[0-9.]+' | head -1)
echo "Latest version: $LATEST_VERSION"
echo latest_version=$LATEST_VERSION >> $GITHUB_OUTPUT
# check_pr checks if a PR for the same update already exists. It only runs if
# runner latest version != our current version. If no existing PR is found,
# it sets a PR name as output.
check_pr:
runs-on: ubuntu-latest
needs: check_versions
if: needs.check_versions.outputs.current_version != needs.check_versions.outputs.latest_version
outputs:
pr_name: ${{ steps.pr_name.outputs.pr_name }}
env:
GH_TOKEN: ${{ github.token }}
steps:
- name: debug
run: |
echo ${{ needs.check_versions.outputs.current_version }}
echo ${{ needs.check_versions.outputs.latest_version }}
- uses: actions/checkout@v3
- name: PR Name
id: pr_name
env:
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
run: |
PR_NAME="Update runner to version ${LATEST_VERSION}"
result=$(gh pr list --search "$PR_NAME" --json number --jq ".[].number" --limit 1)
if [ -z "$result" ]
then
echo "No existing PRs found, setting output with pr_name=$PR_NAME"
echo pr_name=$PR_NAME >> $GITHUB_OUTPUT
else
echo "Found a PR with title '$PR_NAME' already existing: ${{ github.server_url }}/${{ github.repository }}/pull/$result"
fi
# update_version updates runner version in the files listed below, commits
# the changes and opens a pull request as `github-actions` bot.
update_version:
runs-on: ubuntu-latest
needs:
- check_versions
- check_pr
if: needs.check_pr.outputs.pr_name
permissions:
pull-requests: write
contents: write
actions: write
env:
GH_TOKEN: ${{ github.token }}
CURRENT_VERSION: ${{ needs.check_versions.outputs.current_version }}
LATEST_VERSION: ${{ needs.check_versions.outputs.latest_version }}
PR_NAME: ${{ needs.check_pr.outputs.pr_name }}
steps:
- uses: actions/checkout@v3
- name: New branch
run: git checkout -b update-runner-$LATEST_VERSION
- name: Update files
run: |
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/VERSION
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" runner/Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" Makefile
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" test/e2e/e2e_test.go
sed -i "s/$CURRENT_VERSION/$LATEST_VERSION/g" .github/workflows/e2e-test-linux-vm.yaml
- name: Commit changes
run: |
# from https://github.com/orgs/community/discussions/26560
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
git add .
git commit -m "$PR_NAME"
git push -u origin HEAD
- name: Create pull request
run: gh pr create -f

View File

@@ -1,96 +0,0 @@
name: Validate Helm Chart
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-chart.yaml'
- '!charts/actions-runner-controller/docs/**'
- '!**.md'
- '!charts/gha-runner-scale-set-controller/**'
- '!charts/gha-runner-scale-set/**'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.10.0
HELM_VERSION: v3.8.0
permissions:
contents: read
jobs:
validate-chart:
name: Lint Chart
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config.yaml
- name: Create kind cluster
uses: helm/kind-action@v1.4.0
if: steps.list-changed.outputs.changed == 'true'
# We need cert-manager already installed in the cluster because we assume the CRDs exist
- name: Install cert-manager
if: steps.list-changed.outputs.changed == 'true'
run: |
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: |
ct install --config charts/.ci/ct-config.yaml

View File

@@ -1,134 +0,0 @@
name: Validate Helm Chart (gha-runner-scale-set-controller and gha-runner-scale-set)
on:
pull_request:
branches:
- master
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
push:
paths:
- 'charts/**'
- '.github/workflows/validate-gha-chart.yaml'
- '!charts/actions-runner-controller/**'
- '!**.md'
workflow_dispatch:
env:
KUBE_SCORE_VERSION: 1.16.1
HELM_VERSION: v3.8.0
permissions:
contents: read
jobs:
validate-chart:
name: Lint Chart
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Helm
# Using https://github.com/Azure/setup-helm/releases/tag/v3.5
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78
with:
version: ${{ env.HELM_VERSION }}
- name: Set up kube-score
run: |
wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
chmod 755 kube-score
- name: Kube-score generated manifests
run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
--ignore-test pod-networkpolicy
--ignore-test deployment-has-poddisruptionbudget
--ignore-test deployment-has-host-podantiaffinity
--ignore-test container-security-context
--ignore-test pod-probes
--ignore-test container-image-tag
--enable-optional-test container-security-context-privileged
--enable-optional-test container-security-context-readonlyrootfilesystem
# python is a requirement for the chart-testing action below (supports yamllint among other tests)
- uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.3.1
- name: Set up latest version chart-testing
run: |
echo 'deb [trusted=yes] https://repo.goreleaser.com/apt/ /' | sudo tee /etc/apt/sources.list.d/goreleaser.list
sudo apt update
sudo apt install goreleaser
git clone https://github.com/helm/chart-testing
cd chart-testing
unset CT_CONFIG_DIR
goreleaser build --clean --skip-validate
./dist/chart-testing_linux_amd64_v1/ct version
echo 'Adding ct directory to PATH...'
echo "$RUNNER_TEMP/chart-testing/dist/chart-testing_linux_amd64_v1" >> "$GITHUB_PATH"
echo 'Setting CT_CONFIG_DIR...'
echo "CT_CONFIG_DIR=$RUNNER_TEMP/chart-testing/etc" >> "$GITHUB_ENV"
working-directory: ${{ runner.temp }}
- name: Run chart-testing (list-changed)
id: list-changed
run: |
ct version
changed=$(ct list-changed --config charts/.ci/ct-config-gha.yaml)
if [[ -n "$changed" ]]; then
echo "::set-output name=changed::true"
fi
- name: Run chart-testing (lint)
run: |
ct lint --config charts/.ci/ct-config-gha.yaml
- name: Set up docker buildx
uses: docker/setup-buildx-action@v2
if: steps.list-changed.outputs.changed == 'true'
with:
version: latest
- name: Build controller image
uses: docker/build-push-action@v3
if: steps.list-changed.outputs.changed == 'true'
with:
file: Dockerfile
platforms: linux/amd64
load: true
build-args: |
DOCKER_IMAGE_NAME=test-arc
VERSION=dev
tags: |
test-arc:dev
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Create kind cluster
uses: helm/kind-action@v1.4.0
if: steps.list-changed.outputs.changed == 'true'
with:
cluster_name: chart-testing
- name: Load image into cluster
if: steps.list-changed.outputs.changed == 'true'
run: |
export DOCKER_IMAGE_NAME=test-arc
export VERSION=dev
export IMG_RESULT=load
make docker-buildx
kind load docker-image test-arc:dev --name chart-testing
- name: Run chart-testing (install)
if: steps.list-changed.outputs.changed == 'true'
run: |
ct install --config charts/.ci/ct-config-gha.yaml

View File

@@ -1,45 +0,0 @@
name: Validate Runners
on:
pull_request:
branches:
- '**'
paths:
- 'runner/**'
- 'test/startup/**'
- '!**.md'
permissions:
contents: read
jobs:
shellcheck:
name: runner / shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: shellcheck
uses: reviewdog/action-shellcheck@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
path: "./runner"
pattern: |
*.sh
*.bash
update-status
# Make this consistent with `make shellcheck`
shellcheck_flags: "--shell bash --source-path runner"
exclude: "./.git/*"
check_all_files_with_shebangs: "false"
# Set this to "true" once we addressed all the shellcheck findings
fail_on_error: "false"
test-runner-entrypoint:
name: Test entrypoint
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Run tests
run: |
make acceptance/runner/startup

40
.github/workflows/wip.yml vendored Normal file
View File

@@ -0,0 +1,40 @@
on:
push:
branches:
- master
paths-ignore:
- "runner/**"
jobs:
build:
runs-on: ubuntu-latest
name: release-latest
env:
DOCKERHUB_USERNAME: ${{ github.repository_owner }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
- name: Build and Push
uses: docker/build-push-action@v2
with:
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest

11
.gitignore vendored
View File

@@ -1,4 +1,3 @@
# Deploy Assets
release
# Binaries for programs and plugins
@@ -16,22 +15,14 @@ bin
*.out
# Kubernetes Generated files - skip generated files, except for vendored files
!vendor/**/zz_generated.*
# editor and IDE paraphernalia
.vscode
.idea
*.swp
*.swo
*~
.envrc
.env
.test.env
*.pem
!github/actions/testdata/*.pem
# OS
.DS_STORE
/test-assets

View File

@@ -1,17 +0,0 @@
run:
timeout: 3m
output:
format: github-actions
linters-settings:
errcheck:
exclude-functions:
- (net/http.ResponseWriter).Write
- (*net/http.Server).Shutdown
- (*github.com/actions/actions-runner-controller/simulator.VisibleRunnerGroups).Add
- (*github.com/actions/actions-runner-controller/testing.Kind).Stop
issues:
exclude-rules:
- path: controllers/suite_test.go
linters:
- staticcheck
text: "SA1019"

View File

@@ -1,2 +0,0 @@
# actions-runner-controller maintainers
* @mumoshu @toast-gear @actions/actions-runtime @nikola-jokic

View File

@@ -1,74 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at opensource@github.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@@ -1,219 +0,0 @@
# Contribution Guide
- [Contribution Guide](#contribution-guide)
- [Welcome](#welcome)
- [Before contributing code](#before-contributing-code)
- [How to Contribute a Patch](#how-to-contribute-a-patch)
- [Developing the Controller](#developing-the-controller)
- [Developing the Runners](#developing-the-runners)
- [Tests](#tests)
- [Running Ginkgo Tests](#running-ginkgo-tests)
- [Running End to End Tests](#running-end-to-end-tests)
- [Rerunning a failed test](#rerunning-a-failed-test)
- [Testing in a non-kind cluster](#testing-in-a-non-kind-cluster)
- [Code conventions](#code-conventions)
- [Opening the Pull Request](#opening-the-pull-request)
- [Helm Version Changes](#helm-version-changes)
- [Testing Controller Built from a Pull Request](#testing-controller-built-from-a-pull-request)
## Welcome
This document is the single source of truth for how to contribute to the code base.
Feel free to browse the [open issues](https://github.com/actions/actions-runner-controller/issues) or file a new one, all feedback is welcome!
By reading this guide, we hope to give you all of the information you need to be able to pick up issues, contribute new features, and get your work
reviewed and merged.
## Before contributing code
We welcome code patches, but to make sure things are well coordinated you should discuss any significant change before starting the work.
The maintainers ask that you signal your intention to contribute to the project using the issue tracker.
If there is an existing issue that you want to work on, please let us know so we can get it assigned to you.
If you noticed a bug or want to add a new feature, there are issue templates you can fill out.
When filing a feature request, the maintainers will review the change and give you a decision on whether we are willing to accept the feature into the project.
For significantly large and/or complex features, we may request that you write up an architectural decision record ([ADR](https://github.blog/2020-08-13-why-write-adrs/)) detailing the change.
Please use the [template](/adrs/0000-TEMPLATE.md) as guidance.
<!--
TODO: Add a pre-requisite section describing what developers should
install in order get started on ARC.
-->
## How to Contribute a Patch
How you should go about a patch depends on what you are patching.
Below are some guides on how to test patches locally as well as develop the controller and runners.
When submitting a PR for a change, please provide evidence that your change works, as we still need to work on improving the CI of the project.
Some resources are provided to help achieve this; see this guide for details.
### Developing the Controller
Rerunning the whole acceptance test suite from scratch on every little change to the controller, the runner, and the chart would be counter-productive.
To make your development cycle faster, use the command below to rebuild, redeploy, and update all three:
```shell
# Let's assume we have all other envvars like DOCKER_USER, GITHUB_TOKEN already set.
# The below command will (re)build `actions-runner-controller:controller1` and `actions-runner:runner1`,
# load those into kind nodes, and then rerun kubectl or helm to install/upgrade the controller,
# and finally upgrade the runner deployment to use the new runner image.
#
# As helm 3 and kubectl are unable to recreate a pod when the tag doesn't change,
# you either need to bump VERSION and RUNNER_TAG on each run,
# or manually run `kubectl delete pod $POD` on respective pods for changes to actually take effect.
# Makefile
VERSION=controller1 \
RUNNER_TAG=runner1 \
make acceptance/pull acceptance/kind docker-build acceptance/load acceptance/deploy
```
If you've already deployed actions-runner-controller and only want to recreate pods to use the newer image, you can run:
```shell
# Makefile
NAME=$DOCKER_USER/actions-runner-controller \
make docker-build acceptance/load && \
kubectl -n actions-runner-system delete po $(kubectl -n actions-runner-system get po -ojsonpath={.items[*].metadata.name})
```
Similarly, if you'd like to recreate runner pods with the newer runner image, you can use the runner-specific [Makefile](runner/Makefile) to build and/or push new runner images:
```shell
# runner/Makefile
NAME=$DOCKER_USER/actions-runner make \
-C runner docker-{build,push}-ubuntu && \
(kubectl get po -ojsonpath={.items[*].metadata.name} | xargs -n1 kubectl delete po)
```
### Developing the Runners
#### Tests
A set of example pipelines (./acceptance/pipelines) is provided in this repository, which you can use to validate that your runners are working as expected.
When raising a PR please run the relevant suites to prove your change hasn't broken anything.
#### Running Ginkgo Tests
You can run the integration test suite that is written in Ginkgo with:
```shell
make test-with-deps
```
This will first install a few binaries required to set up the integration test environment and then run `go test` to start the Ginkgo tests.
If you don't want to use `make`, such as when you're running tests from your IDE, install the required binaries to `/usr/local/kubebuilder/bin`.
That's the directory in which controller-runtime's `envtest` framework locates the binaries.
```shell
sudo mkdir -p /usr/local/kubebuilder/bin
make kube-apiserver etcd
sudo mv test-assets/{etcd,kube-apiserver} /usr/local/kubebuilder/bin/
go test -v -run TestAPIs github.com/actions/actions-runner-controller/controllers/actions.summerwind.net
```
To run Ginkgo tests selectively, set the pattern of target test names to `GINKGO_FOCUS`.
All the Ginkgo tests that match `GINKGO_FOCUS` will be run.
```shell
GINKGO_FOCUS='[It] should create a new Runner resource from the specified template, add a another Runner on replicas increased, and removes all the replicas when set to 0' \
go test -v -run TestAPIs github.com/actions/actions-runner-controller/controllers/actions.summerwind.net
```
### Running End to End Tests
> **Notes for Ubuntu 20.04+ users**
>
> If you're using Ubuntu 20.04 or greater, you might have installed `docker` with `snap`.
>
> If you want to stick with `snap`-provided `docker`, do not forget to set `TMPDIR` to somewhere under `$HOME`.
> Otherwise `kind load docker-image` fails while running `docker save`.
> See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap for more information.
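>
> For example, a minimal setup (assuming `$HOME/tmp` as the scratch directory) might look like:
>
> ```shell
> # Point TMPDIR somewhere under $HOME so the snap-confined docker can read what `docker save` writes
> mkdir -p "$HOME/tmp"
> export TMPDIR="$HOME/tmp"
> ```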
To test your local changes against both PAT and App based authentication please run the `acceptance` make target with the authentication configuration details provided:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make acceptance
```
#### Rerunning a failed test
When one of the tests run by `make acceptance` fails, you'd probably like to rerun only the failed one.
You can do so with `make acceptance/run`, setting the `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm|kubectl` and `ACCEPTANCE_TEST_SECRET_TYPE=token|app` combination that failed (note that you only need to set the corresponding authentication configuration in this case).
In the example below, we rerun the test for the combination `ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token` only:
```shell
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make acceptance/run
```
#### Testing in a non-kind cluster
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
DOCKER_USER=*** \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build acceptance/setup \
acceptance/deploy \
acceptance/tests
```
### Code conventions
Before shipping your PR, please check the following items to make sure CI passes.
- Run `go mod tidy` if you made changes to dependencies.
- Format the code using `gofmt`
- Run the `golangci-lint` tool locally.
- We recommend you use `make lint` to run the tool using a Docker container matching the CI version.
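Taken together, a minimal pre-flight check before pushing might look like this (a sketch, run from the repository root):
```shell
# Tidy module files if you changed dependencies
go mod tidy
# Format all Go sources in place
gofmt -w .
# Run golangci-lint in a Docker container matching the CI version
make lint
```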
### Opening the Pull Request
Send the PR and add the related issue number to the description.
## Helm Version Changes
In general we ask you not to bump the version in your PR.
The maintainers will manage releases and publishing new charts.
## Testing Controller Built from a Pull Request
We always appreciate your help in testing open pull requests by deploying custom builds of actions-runner-controller onto your own environment, so that we are extra sure we didn't break anything.
This is especially true when the pull request is about GitHub Enterprise, both GHEC and GHES, as [maintainers don't have GitHub Enterprise environments for testing](docs/about-arc.md#github-enterprise-support).
The process would look like the below:
- Clone this repository locally
- Checkout the branch. If you use the `gh` command, run `gh pr checkout $PR_NUMBER`
- Run `NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push` for a custom container image build
- Update your actions-runner-controller's controller-manager deployment to use the new image, `$DOCKER_USER/actions-runner-controller:canary`
Please also note that you need to replace `$DOCKER_USER` with your own DockerHub account name.
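As a rough end-to-end sketch of that flow (the deployment and container names below are assumptions; adjust them for your install):
```shell
# Check out the pull request branch (requires the gh CLI)
gh pr checkout $PR_NUMBER
# Build and push a canary image containing the PR's changes
NAME=$DOCKER_USER/actions-runner-controller VERSION=canary make docker-build docker-push
# Point the controller-manager deployment at the custom image
kubectl -n actions-runner-system set image deployment/actions-runner-controller \
  manager=$DOCKER_USER/actions-runner-controller:canary
```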
## Release process
Only the maintainers can release a new version of actions-runner-controller and publish new versions of the helm charts and runner images.
All release workflows have been moved to [actions-runner-controller/releases](https://github.com/actions-runner-controller/releases) since the packages are owned by the former organization.

View File

@@ -1,46 +1,28 @@
# Build the manager binary
FROM --platform=$BUILDPLATFORM golang:1.19.4 as builder
FROM golang:1.15 as builder
ARG TARGETPLATFORM
WORKDIR /workspace
# Make it runnable on a distroless image/without libc
ENV CGO_ENABLED=0
ENV GO111MODULE=on \
CGO_ENABLED=0
# Copy the Go Modules manifests
COPY go.mod go.sum ./
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer.
#
# Also, we need to do this before setting TARGETPLATFORM/TARGETOS/TARGETARCH/TARGETVARIANT
# so that go mod cache is shared across platforms.
# and so that source changes don't invalidate our downloaded layer
RUN go mod download
# Copy the go source
# COPY . .
# Usage:
# docker buildx build --tag repo/img:tag -f ./Dockerfile . --platform linux/amd64,linux/arm64,linux/arm/v7
#
# With the above command,
# TARGETOS can be "linux", TARGETARCH can be "amd64", "arm64", and "arm", TARGETVARIANT can be "v7".
ARG TARGETPLATFORM TARGETOS TARGETARCH TARGETVARIANT VERSION=dev
# We intentionally avoid `--mount=type=cache,mode=0777,target=/go/pkg/mod` in the `go mod download` and the `go build` runs
# to avoid https://github.com/moby/buildkit/issues/2334
# We can use the docker layer cache so the build is fast enough anyway
# We also use per-platform GOCACHE for the same reason.
ENV GOCACHE /build/${TARGETPLATFORM}/root/.cache/go-build
COPY . .
# Build
RUN --mount=target=. \
--mount=type=cache,mode=0777,target=${GOCACHE} \
export GOOS=${TARGETOS} GOARCH=${TARGETARCH} GOARM=${TARGETVARIANT#v} && \
go build -trimpath -ldflags="-s -w -X 'github.com/actions/actions-runner-controller/build.Version=${VERSION}'" -o /out/manager main.go && \
go build -trimpath -ldflags="-s -w" -o /out/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener && \
go build -trimpath -ldflags="-s -w" -o /out/github-webhook-server ./cmd/githubwebhookserver && \
go build -trimpath -ldflags="-s -w" -o /out/actions-metrics-server ./cmd/actionsmetricsserver && \
go build -trimpath -ldflags="-s -w" -o /out/sleep ./cmd/sleep
RUN export GOOS=$(echo ${TARGETPLATFORM} | cut -d / -f1) && \
export GOARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) && \
GOARM=$(echo ${TARGETPLATFORM} | cut -d / -f3 | cut -c2-) && \
go build -a -o manager main.go
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
@@ -48,12 +30,8 @@ FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /out/manager .
COPY --from=builder /out/github-webhook-server .
COPY --from=builder /out/actions-metrics-server .
COPY --from=builder /out/github-runnerscaleset-listener .
COPY --from=builder /out/sleep .
COPY --from=builder /workspace/manager .
USER 65532:65532
USER nonroot:nonroot
ENTRYPOINT ["/manager"]

348
Makefile
View File

@@ -1,28 +1,11 @@
ifdef DOCKER_USER
DOCKER_IMAGE_NAME ?= ${DOCKER_USER}/actions-runner-controller
else
DOCKER_IMAGE_NAME ?= summerwind/actions-runner-controller
endif
DOCKER_USER ?= $(shell echo ${DOCKER_IMAGE_NAME} | cut -d / -f1)
VERSION ?= dev
RUNNER_VERSION ?= 2.303.0
TARGETPLATFORM ?= $(shell arch)
RUNNER_NAME ?= ${DOCKER_USER}/actions-runner
RUNNER_TAG ?= ${VERSION}
TEST_REPO ?= ${DOCKER_USER}/actions-runner-controller
TEST_ORG ?=
TEST_ORG_REPO ?=
TEST_EPHEMERAL ?= false
SYNC_PERIOD ?= 1m
USE_RUNNERSET ?=
KUBECONTEXT ?= kind-acceptance
CLUSTER ?= acceptance
CERT_MANAGER_VERSION ?= v1.1.1
KUBE_RBAC_PROXY_VERSION ?= v0.11.0
SHELLCHECK_VERSION ?= 0.8.0
NAME ?= summerwind/actions-runner-controller
VERSION ?= latest
# From https://github.com/VictoriaMetrics/operator/pull/44
YAML_DROP=$(YQ) delete --inplace
YAML_DROP_PREFIX=spec.validation.openAPIV3Schema.properties.spec.properties
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:generateEmbeddedObjectMeta=true"
CRD_OPTIONS ?= "crd:trivialVersions=true"
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@@ -31,22 +14,6 @@ else
GOBIN=$(shell go env GOBIN)
endif
TEST_ASSETS=$(PWD)/test-assets
TOOLS_PATH=$(PWD)/.tools
OS_NAME := $(shell uname -s | tr A-Z a-z)
# The etcd packages that coreos maintain use different extensions for each *nix OS on their github release page.
# ETCD_EXTENSION: the storage format file extension listed on the release page.
# EXTRACT_COMMAND: the appropriate CLI command for extracting this file format.
ifeq ($(OS_NAME), darwin)
ETCD_EXTENSION:=zip
EXTRACT_COMMAND:=unzip
else
ETCD_EXTENSION:=tar.gz
EXTRACT_COMMAND:=tar -xzf
endif
# default list of platforms for which multiarch image is built
ifeq (${PLATFORMS}, )
export PLATFORMS="linux/amd64,linux/arm64"
@@ -55,8 +22,8 @@ endif
# if IMG_RESULT is unspecified, by default the image will be pushed to registry
ifeq (${IMG_RESULT}, load)
export PUSH_ARG="--load"
# if load is specified, image will be built only for the build machine architecture.
export PLATFORMS="local"
# if load is specified, image will be built only for the build machine architecture.
export PLATFORMS="local"
else ifeq (${IMG_RESULT}, cache)
# if cache is specified, image will only be available in the build cache, it won't be pushed or loaded
# therefore no PUSH_ARG will be specified
@@ -66,40 +33,21 @@ endif
all: manager
lint:
docker run --rm -v $(PWD):/app -w /app golangci/golangci-lint:v1.49.0 golangci-lint run
GO_TEST_ARGS ?= -short
# Run tests
test: generate fmt vet manifests shellcheck
go test $(GO_TEST_ARGS) `go list ./... | grep -v ./test_e2e_arc` -coverprofile cover.out
go test -fuzz=Fuzz -fuzztime=10s -run=Fuzz* ./controllers/actions.summerwind.net
test-with-deps: kube-apiserver etcd kubectl
# See https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest#pkg-constants
TEST_ASSET_KUBE_APISERVER=$(KUBE_APISERVER_BIN) \
TEST_ASSET_ETCD=$(ETCD_BIN) \
TEST_ASSET_KUBECTL=$(KUBECTL_BIN) \
make test
test: generate fmt vet manifests
go test ./... -coverprofile cover.out
# Build manager binary
manager: generate fmt vet
go build -o bin/manager main.go
go build -o bin/github-runnerscaleset-listener ./cmd/githubrunnerscalesetlistener
# Run against the configured Kubernetes cluster in ~/.kube/config
run: generate fmt vet manifests
go run ./main.go
run-scaleset: generate fmt vet
CONTROLLER_MANAGER_POD_NAMESPACE=default \
CONTROLLER_MANAGER_CONTAINER_IMAGE="${DOCKER_IMAGE_NAME}:${VERSION}" \
go run ./main.go --auto-scaling-runner-set-only
# Install CRDs into a cluster
install: manifests
kustomize build config/crd | kubectl apply --server-side -f -
kustomize build config/crd | kubectl apply -f -
# Uninstall CRDs from a cluster
uninstall: manifests
@@ -107,86 +55,17 @@ uninstall: manifests
# Deploy controller in the configured Kubernetes cluster in ~/.kube/config
deploy: manifests
cd config/manager && kustomize edit set image controller=${DOCKER_IMAGE_NAME}:${VERSION}
kustomize build config/default | kubectl apply --server-side -f -
cd config/manager && kustomize edit set image controller=${NAME}:${VERSION}
kustomize build config/default | kubectl apply -f -
# Generate manifests e.g. CRD, RBAC etc.
manifests: manifests-gen-crds chart-crds
manifests: manifests-118 fix118 chart-crds
manifests-gen-crds: controller-gen yq
manifests-118: controller-gen
$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
for YAMLFILE in config/crd/bases/actions*.yaml; do \
$(YQ) '.spec.preserveUnknownFields = false' --inplace "$$YAMLFILE" ; \
done
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-type
make manifests-gen-crds-fix DELETE_KEY=x-kubernetes-list-map-keys
manifests-gen-crds-fix: DELETE_KEY ?=
manifests-gen-crds-fix:
#runners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runners.yaml
#runnerreplicasets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml
#runnerdeployments
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.sidecarContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.dockerdContainerResources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml
#runnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.volumeClaimTemplates.items.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.workVolumeClaimTemplate.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.summerwind.dev_runnersets.yaml
#autoscalingrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_autoscalingrunnersets.yaml
#ephemeralrunnersets
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.ephemeralRunnerSpec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunnersets.yaml
# ephemeralrunners
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.ephemeralContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.containers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.initContainers.items.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
$(YQ) 'del(.spec.versions[].schema.openAPIV3Schema.properties.spec.properties.spec.properties.volumes.items.properties.ephemeral.properties.volumeClaimTemplate.properties.spec.properties.resources.properties.claims.$(DELETE_KEY))' --inplace config/crd/bases/actions.github.com_ephemeralrunners.yaml
chart-crds:
cp config/crd/bases/*.yaml charts/actions-runner-controller/crds/
cp config/crd/bases/actions.github.com_autoscalingrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_autoscalinglisteners.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunnersets.yaml charts/gha-runner-scale-set-controller/crds/
cp config/crd/bases/actions.github.com_ephemeralrunners.yaml charts/gha-runner-scale-set-controller/crds/
rm charts/actions-runner-controller/crds/actions.github.com_autoscalingrunnersets.yaml
rm charts/actions-runner-controller/crds/actions.github.com_autoscalinglisteners.yaml
rm charts/actions-runner-controller/crds/actions.github.com_ephemeralrunnersets.yaml
rm charts/actions-runner-controller/crds/actions.github.com_ephemeralrunners.yaml
# Run go fmt against code
fmt:
@@ -196,36 +75,49 @@ fmt:
vet:
go vet ./...
# workaround for CRD issue with k8s 1.18 & controller-gen
# ref: https://github.com/kubernetes/kubernetes/issues/91395
fix118: yq
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).containers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).initContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).sidecarContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).ephemeralContainers.items.properties
# Generate code
generate: controller-gen
$(CONTROLLER_GEN) object:headerFile=./hack/boilerplate.go.txt paths="./..."
# Run shellcheck on runner scripts
shellcheck: shellcheck-install
$(TOOLS_PATH)/shellcheck --shell bash --source-path runner runner/*.sh
# Build the docker image
docker-build: test
docker build . -t ${NAME}:${VERSION}
# Push the docker image
docker-push:
docker push ${NAME}:${VERSION}
docker-buildx:
export DOCKER_CLI_EXPERIMENTAL=enabled ;\
export DOCKER_BUILDKIT=1
export DOCKER_CLI_EXPERIMENTAL=enabled
@if ! docker buildx ls | grep -q container-builder; then\
docker buildx create --platform ${PLATFORMS} --name container-builder --use;\
fi
docker buildx build --platform ${PLATFORMS} \
--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
--build-arg VERSION=${VERSION} \
-t "${DOCKER_IMAGE_NAME}:${VERSION}" \
-t "${NAME}:${VERSION}" \
-f Dockerfile \
. ${PUSH_ARG}
# Push the docker image
docker-push:
docker push ${DOCKER_IMAGE_NAME}:${VERSION}
docker push ${RUNNER_NAME}:${RUNNER_TAG}
# Generate the release manifest file
release: manifests
cd config/manager && kustomize edit set image controller=${DOCKER_IMAGE_NAME}:${VERSION}
cd config/manager && kustomize edit set image controller=${NAME}:${VERSION}
mkdir -p release
kustomize build config/default > release/actions-runner-controller.yaml
@@ -234,41 +126,19 @@ release/clean:
rm -rf release
.PHONY: acceptance
acceptance: release/clean acceptance/pull docker-build release
ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/run
ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/run
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/run
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/run
acceptance/run: acceptance/kind acceptance/load acceptance/setup acceptance/deploy acceptance/tests acceptance/teardown
acceptance: release/clean docker-build docker-push release
ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
acceptance/kind:
kind create cluster --name ${CLUSTER} --config acceptance/kind.yaml
# Set TMPDIR to somewhere under $HOME when you use docker installed with Ubuntu snap
# Otherwise `load docker-image` fails while running `docker save`.
# See https://kind.sigs.k8s.io/docs/user/known-issues/#docker-installed-with-snap
acceptance/load:
kind load docker-image ${DOCKER_IMAGE_NAME}:${VERSION} --name ${CLUSTER}
kind load docker-image quay.io/brancz/kube-rbac-proxy:$(KUBE_RBAC_PROXY_VERSION) --name ${CLUSTER}
kind load docker-image ${RUNNER_NAME}:${RUNNER_TAG} --name ${CLUSTER}
kind load docker-image docker:dind --name ${CLUSTER}
kind load docker-image quay.io/jetstack/cert-manager-controller:$(CERT_MANAGER_VERSION) --name ${CLUSTER}
kind load docker-image quay.io/jetstack/cert-manager-cainjector:$(CERT_MANAGER_VERSION) --name ${CLUSTER}
kind load docker-image quay.io/jetstack/cert-manager-webhook:$(CERT_MANAGER_VERSION) --name ${CLUSTER}
kubectl cluster-info --context ${KUBECONTEXT}
# Pull the docker images for acceptance
acceptance/pull:
docker pull quay.io/brancz/kube-rbac-proxy:$(KUBE_RBAC_PROXY_VERSION)
docker pull docker:dind
docker pull quay.io/jetstack/cert-manager-controller:$(CERT_MANAGER_VERSION)
docker pull quay.io/jetstack/cert-manager-cainjector:$(CERT_MANAGER_VERSION)
docker pull quay.io/jetstack/cert-manager-webhook:$(CERT_MANAGER_VERSION)
kind create cluster --name acceptance
kubectl cluster-info --context kind-acceptance
acceptance/setup:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/$(CERT_MANAGER_VERSION)/cert-manager.yaml #kubectl create namespace actions-runner-system
kubectl -n cert-manager wait deploy/cert-manager-cainjector --for condition=available --timeout 90s
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml #kubectl create namespace actions-runner-system
kubectl -n cert-manager wait deploy/cert-manager-cainjector --for condition=available --timeout 60s
kubectl -n cert-manager wait deploy/cert-manager-webhook --for condition=available --timeout 60s
kubectl -n cert-manager wait deploy/cert-manager --for condition=available --timeout 60s
kubectl create namespace actions-runner-system || true
@@ -276,39 +146,18 @@ acceptance/setup:
sleep 5
acceptance/teardown:
kind delete cluster --name ${CLUSTER}
acceptance/deploy:
DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME} DOCKER_USER=${DOCKER_USER} VERSION=${VERSION} RUNNER_NAME=${RUNNER_NAME} RUNNER_TAG=${RUNNER_TAG} TEST_REPO=${TEST_REPO} \
TEST_ORG=${TEST_ORG} TEST_ORG_REPO=${TEST_ORG_REPO} SYNC_PERIOD=${SYNC_PERIOD} \
USE_RUNNERSET=${USE_RUNNERSET} \
TEST_EPHEMERAL=${TEST_EPHEMERAL} \
acceptance/deploy.sh
kind delete cluster --name acceptance
acceptance/tests:
acceptance/deploy.sh
acceptance/checks.sh
acceptance/runner/startup:
cd test/startup/ && bash test.sh
# We use -count=1 instead of `go clean -testcache`
# See https://terratest.gruntwork.io/docs/testing-best-practices/avoid-test-caching/
.PHONY: e2e
e2e:
go test -count=1 -v -timeout 600s -run '^TestE2E$$' ./test/e2e
# Upload release file to GitHub.
github-release: release
ghr ${VERSION} release/
# Find or download controller-gen
#
# Note that controller-gen newer than 0.4.1 is needed for https://github.com/kubernetes-sigs/controller-tools/issues/444#issuecomment-680168439
# Otherwise we get errors like the below:
# Error: failed to install CRD crds/actions.summerwind.dev_runnersets.yaml: CustomResourceDefinition.apiextensions.k8s.io "runnersets.actions.summerwind.dev" is invalid: [spec.validation.openAPIV3Schema.properties[spec].properties[template].properties[spec].properties[containers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property, spec.validation.openAPIV3Schema.properties[spec].properties[template].properties[spec].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property]
#
# Note that controller-gen newer than 0.6.0 is needed due to https://github.com/kubernetes-sigs/controller-tools/issues/448
# Otherwise ObjectMeta embedded in Spec ends up empty in storage.
# find or download controller-gen
# download controller-gen if necessary
controller-gen:
ifeq (, $(shell which controller-gen))
ifeq (, $(wildcard $(GOBIN)/controller-gen))
@@ -317,7 +166,7 @@ ifeq (, $(wildcard $(GOBIN)/controller-gen))
CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
cd $$CONTROLLER_GEN_TMP_DIR ;\
go mod init tmp ;\
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0 ;\
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.3.0 ;\
rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
}
endif
@@ -337,93 +186,8 @@ ifeq (, $(wildcard $(GOBIN)/yq))
YQ_TMP_DIR=$$(mktemp -d) ;\
cd $$YQ_TMP_DIR ;\
go mod init tmp ;\
go install github.com/mikefarah/yq/v4@v4.25.3 ;\
go get github.com/mikefarah/yq/v3@3.4.0 ;\
rm -rf $$YQ_TMP_DIR ;\
}
endif
YQ=$(GOBIN)/yq
# find or download shellcheck
# download shellcheck if necessary
shellcheck-install:
ifeq (, $(wildcard $(TOOLS_PATH)/shellcheck))
echo "Downloading shellcheck"
@{ \
set -e ;\
SHELLCHECK_TMP_DIR=$$(mktemp -d) ;\
cd $$SHELLCHECK_TMP_DIR ;\
curl -LO https://github.com/koalaman/shellcheck/releases/download/v$(SHELLCHECK_VERSION)/shellcheck-v$(SHELLCHECK_VERSION).$(OS_NAME).x86_64.tar.xz ;\
tar Jxvf shellcheck-v$(SHELLCHECK_VERSION).$(OS_NAME).x86_64.tar.xz ;\
cd $(CURDIR) ;\
mkdir -p $(TOOLS_PATH) ;\
mv $$SHELLCHECK_TMP_DIR/shellcheck-v$(SHELLCHECK_VERSION)/shellcheck $(TOOLS_PATH)/ ;\
rm -rf $$SHELLCHECK_TMP_DIR ;\
}
endif
SHELLCHECK=$(TOOLS_PATH)/shellcheck
# find or download etcd
etcd:
ifeq (, $(shell which etcd))
ifeq (, $(wildcard $(TEST_ASSETS)/etcd))
@{ \
set -xe ;\
INSTALL_TMP_DIR=$$(mktemp -d) ;\
cd $$INSTALL_TMP_DIR ;\
wget https://github.com/coreos/etcd/releases/download/v3.4.22/etcd-v3.4.22-$(OS_NAME)-amd64.$(ETCD_EXTENSION);\
mkdir -p $(TEST_ASSETS) ;\
$(EXTRACT_COMMAND) etcd-v3.4.22-$(OS_NAME)-amd64.$(ETCD_EXTENSION) ;\
mv etcd-v3.4.22-$(OS_NAME)-amd64/etcd $(TEST_ASSETS)/etcd ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
ETCD_BIN=$(TEST_ASSETS)/etcd
else
ETCD_BIN=$(TEST_ASSETS)/etcd
endif
else
ETCD_BIN=$(shell which etcd)
endif
# find or download kube-apiserver
kube-apiserver:
ifeq (, $(shell which kube-apiserver))
ifeq (, $(wildcard $(TEST_ASSETS)/kube-apiserver))
@{ \
set -xe ;\
INSTALL_TMP_DIR=$$(mktemp -d) ;\
cd $$INSTALL_TMP_DIR ;\
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kube-apiserver $(TEST_ASSETS)/kube-apiserver ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
KUBE_APISERVER_BIN=$(TEST_ASSETS)/kube-apiserver
else
KUBE_APISERVER_BIN=$(TEST_ASSETS)/kube-apiserver
endif
else
KUBE_APISERVER_BIN=$(shell which kube-apiserver)
endif
# find or download kubectl
kubectl:
ifeq (, $(shell which kubectl))
ifeq (, $(wildcard $(TEST_ASSETS)/kubectl))
@{ \
set -xe ;\
INSTALL_TMP_DIR=$$(mktemp -d) ;\
cd $$INSTALL_TMP_DIR ;\
wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.2/kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mkdir -p $(TEST_ASSETS) ;\
tar zxvf kubebuilder_2.3.2_$(OS_NAME)_amd64.tar.gz ;\
mv kubebuilder_2.3.2_$(OS_NAME)_amd64/bin/kubectl $(TEST_ASSETS)/kubectl ;\
rm -rf $$INSTALL_TMP_DIR ;\
}
KUBECTL_BIN=$(TEST_ASSETS)/kubectl
else
KUBECTL_BIN=$(TEST_ASSETS)/kubectl
endif
else
KUBECTL_BIN=$(shell which kubectl)
endif

14
PROJECT
View File

@@ -1,5 +1,5 @@
domain: summerwind.dev
repo: github.com/actions/actions-runner-controller
repo: github.com/summerwind/actions-runner-controller
resources:
- group: actions
kind: Runner
@@ -10,16 +10,4 @@ resources:
- group: actions
kind: RunnerDeployment
version: v1alpha1
- group: actions
kind: AutoscalingRunnerSet
version: v1alpha1
- group: actions
kind: EphemeralRunnerSet
version: v1alpha1
- group: actions
kind: EphemeralRunner
version: v1alpha1
- group: actions
kind: AutoscalingListener
version: v1alpha1
version: "2"

587
README.md
View File

@@ -1,66 +1,551 @@
# Actions Runner Controller (ARC)
# actions-runner-controller
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/6061/badge)](https://bestpractices.coreinfrastructure.org/projects/6061)
[![awesome-runners](https://img.shields.io/badge/listed%20on-awesome--runners-blue.svg)](https://github.com/jonico/awesome-runners)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/actions-runner-controller)](https://artifacthub.io/packages/search?repo=actions-runner-controller)
This controller operates self-hosted runners for GitHub Actions on your Kubernetes cluster.
## People
`actions-runner-controller` is an open-source project currently developed and maintained in collaboration with maintainers @mumoshu and @toast-gear, various [contributors](https://github.com/actions/actions-runner-controller/graphs/contributors), and the [awesome community](https://github.com/actions/actions-runner-controller/discussions), mostly in their spare time.
If you think the project is awesome and it's becoming a basis for your important business, consider [sponsoring us](https://github.com/sponsors/actions-runner-controller)!
In case you are already the employer of one of the contributors, sponsoring via GitHub Sponsors might not be an option. Just support them by other means!
We don't have [any sponsors dedicated to this project](https://github.com/sponsors/actions-runner-controller) yet.
However, [HelloFresh](https://www.hellofreshgroup.com/en/) has recently started sponsoring @mumoshu for this project along with his other works. A part of their sponsorship will enable @mumoshu to add an E2E test to keep ARC even more reliable on AWS. Thank you for your sponsorship!
[<img src="https://user-images.githubusercontent.com/22009/170898715-07f02941-35ec-418b-8cd4-251b422fa9ac.png" width="219" height="71" />](https://careers.hellofresh.com/)
## Status
Even though actions-runner-controller is used in production environments, it is still in an early stage of development, hence versioned 0.x.
actions-runner-controller complies with Semantic Versioning 2.0.0, under which v0.x means that any release may contain backward-incompatible changes.
The documentation is kept in line with master@HEAD; we do our best to highlight any features that require a specific ARC version or higher, but this is not always easy given the many moving parts. Additionally, we do not actively retain compatibility with every GitHub Enterprise Server version or every Kubernetes version, so you will need to ensure you stay reasonably current.
## About
## Motivation
[GitHub Actions](https://github.com/features/actions) is a very useful tool for automating development. GitHub Actions jobs are run in the cloud by default, but you may want to run your jobs in your own environment. A [self-hosted runner](https://github.com/actions/runner) can be used for such use cases, but it requires provisioning and configuring a virtual machine instance. If you already have a Kubernetes cluster, it makes more sense to run the self-hosted runner on top of it.
**actions-runner-controller** makes that possible. Just create a *Runner* resource in your Kubernetes cluster, and it will run and operate the self-hosted runner for the specified repository. Combined with Kubernetes RBAC, you can also build a simple self-hosted-runners-as-a-service offering.
## Getting Started
To give ARC a try with just a handful of commands, please refer to the [Quickstart guide](/docs/quickstart.md).
## Installation
For an overview of ARC, please refer to [About ARC](https://github.com/actions/actions-runner-controller/blob/master/docs/about-arc.md)
actions-runner-controller uses [cert-manager](https://cert-manager.io/docs/installation/kubernetes/) for certificate management of its admission webhook. Make sure you have already installed cert-manager before you install actions-runner-controller. The installation instructions for cert-manager can be found below.
For more information, please refer to the detailed documentation below!
- [Installing cert-manager on Kubernetes](https://cert-manager.io/docs/installation/kubernetes/)
## Documentation
Install the custom resources and actions-runner-controller itself. This will create the actions-runner-system namespace in your Kubernetes cluster and deploy the required resources.
- [Quickstart guide](/docs/quickstart.md)
- [About ARC](/docs/about-arc.md)
- [Installing ARC](/docs/installing-arc.md)
- [Authenticating to the GitHub API](/docs/authenticating-to-the-github-api.md)
- [Deploying ARC runners](/docs/deploying-arc-runners.md)
- [Adding ARC runners to a repository, organization, or enterprise](/docs/choosing-runner-destination.md)
- [Automatically scaling runners](/docs/automatically-scaling-runners.md)
- [Using custom volumes](/docs/using-custom-volumes.md)
- [Using ARC runners in a workflow](/docs/using-arc-runners-in-a-workflow.md)
- [Managing access with runner groups](/docs/managing-access-with-runner-groups.md)
- [Configuring Windows runners](/docs/configuring-windows-runners.md)
- [Using ARC across organizations](/docs/using-arc-across-organizations.md)
- [Using entrypoint features](/docs/using-entrypoint-features.md)
- [Deploying alternative runners](/docs/deploying-alternative-runners.md)
- [Monitoring and troubleshooting](/docs/monitoring-and-troubleshooting.md)
```
kubectl apply -f https://github.com/summerwind/actions-runner-controller/releases/latest/download/actions-runner-controller.yaml
```
## Contributing
### GitHub Enterprise support
We welcome contributions from the community. For more details on contributing to the project (including requirements), please refer to "[Getting Started with Contributing](https://github.com/actions/actions-runner-controller/blob/master/CONTRIBUTING.md)."
If you use either GitHub Enterprise Cloud or Server (with a version recent enough to support Actions), you can use **actions-runner-controller** with those, too. Authentication works the same way as with public GitHub (repository and organization level).
## Troubleshooting
```shell
kubectl set env deploy controller-manager -c manager GITHUB_ENTERPRISE_URL=<GHEC/S URL> --namespace actions-runner-system
```
We are very happy to help you with any issues you have. Please refer to the "[Troubleshooting](https://github.com/actions/actions-runner-controller/blob/master/TROUBLESHOOTING.md)" section for common issues.
[Enterprise level](https://docs.github.com/en/enterprise-server@2.22/actions/hosting-your-own-runners/adding-self-hosted-runners#adding-a-self-hosted-runner-to-an-enterprise) runners do not work yet, as there is no API definition for them.
## Setting up authentication with GitHub API
There are two ways for actions-runner-controller to authenticate with the GitHub API:
1. Using a GitHub App.
2. Using a Personal Access Token.
Regardless of which authentication method you use, the same permissions are required; those permissions are:
- Repository: Administration (read/write)
- Repository: Actions (read)
- Organization: Self-hosted runners (read/write)
**NOTE: It is extremely important to only follow one of the sections below and not both.**
### Using GitHub App
You can create a GitHub App for either your account or any organization. If you want to create a GitHub App for your account, open the following link to the creation page, enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page.
- [Create GitHub Apps on your account](https://github.com/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write&actions=read)
If you want to create a GitHub App for your organization, replace the `:org` part of the following URL with your organization name before opening it. Then enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page to create a GitHub App.
- [Create GitHub Apps on your organization](https://github.com/organizations/:org/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write&organization_self_hosted_runners=write&actions=read)
You will see an *App ID* on the page of the GitHub App you created, as shown below; the value of this App ID will be used later.
<img width="750" alt="App ID" src="https://user-images.githubusercontent.com/230145/78968802-6e7c8880-7b40-11ea-8b08-0c1b8e6a15f0.png">
Download the private key file by pushing the "Generate a private key" button at the bottom of the GitHub App page. This file will also be used later.
<img width="750" alt="Generate a private key" src="https://user-images.githubusercontent.com/230145/78968805-71777900-7b40-11ea-97e6-55c48dfc44ac.png">
Go to the "Install App" tab on the left side of the page and install the GitHub App that you created for your account or organization.
<img width="750" alt="Install App" src="https://user-images.githubusercontent.com/230145/78968806-72100f80-7b40-11ea-810d-2bd3261e9d40.png">
When the installation is complete, you will be taken to a URL in one of the following formats; the last number in the URL will be used as the Installation ID later (for example, if the URL ends in `settings/installations/12345`, then the Installation ID is `12345`).
- `https://github.com/settings/installations/${INSTALLATION_ID}`
- `https://github.com/organizations/eventreactor/settings/installations/${INSTALLATION_ID}`
Finally, register the App ID (`APP_ID`), Installation ID (`INSTALLATION_ID`), and downloaded private key file (`PRIVATE_KEY_FILE_PATH`) to Kubernetes as a secret.
```shell
$ kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_app_id=${APP_ID} \
--from-literal=github_app_installation_id=${INSTALLATION_ID} \
--from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH}
```
### Using Personal Access Token
From an account that has `admin` privileges for the repository, create a [personal access token](https://github.com/settings/tokens) with `repo` scope. This token is used to register a self-hosted runner by *actions-runner-controller*.
Self-hosted runners in GitHub can either be connected to a single repository, or to a GitHub organization (so they are available to all repositories in the organization). This token is used to register a self-hosted runner by *actions-runner-controller*.
For adding a runner to a repository, the token should have `repo` scope. If the runner should be added to an organization, the token should have `admin:org` scope. Note that to use a Personal Access Token, you must issue the token with an account that has `admin` privileges (on the repository and/or the organization).
Open the Create Token page from the following link, grant the `repo` and/or `admin:org` scope, and press the "Generate Token" button at the bottom of the page to create the token.
- [Create personal access token](https://github.com/settings/tokens/new)
Register the created token (`GITHUB_TOKEN`) as a Kubernetes secret.
```shell
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_token=${GITHUB_TOKEN}
```
## Usage
There are two ways to use this controller:
- Manage runners one by one with `Runner`.
- Manage a set of runners with `RunnerDeployment`.
### Repository runners
To launch a single self-hosted runner, you need to create a manifest file that includes a *Runner* resource as follows. This example launches a self-hosted runner named *example-runner* for the *summerwind/actions-runner-controller* repository.
```yaml
# runner.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
name: example-runner
spec:
repository: summerwind/actions-runner-controller
env: []
```
Apply the created manifest file to your Kubernetes cluster.
```shell
$ kubectl apply -f runner.yaml
runner.actions.summerwind.dev/example-runner created
```
You can see that the Runner resource has been created.
```shell
$ kubectl get runners
NAME REPOSITORY STATUS
example-runner summerwind/actions-runner-controller Running
```
You can also see that the runner pod is running.
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-runner 2/2 Running 0 1m
```
The runner you created has been registered to your repository.
<img width="756" alt="Actions tab in your repository settings" src="https://user-images.githubusercontent.com/230145/73618667-8cbf9700-466c-11ea-80b6-c67e6d3f70e7.png">
Now you can use your self-hosted runner. See the [official documentation](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-self-hosted-runners-in-a-workflow) on how to run a job with it.
### Organization Runners
To add the runner to an organization, you only need to replace the `repository` field with `organization`, and the runner will register itself with the organization.
```yaml
# runner.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
name: example-org-runner
spec:
organization: your-organization-name
```
Now you can see the runner at the organization level (if you have organization owner permissions).
### RunnerDeployments
There are `RunnerReplicaSet` and `RunnerDeployment` resources that correspond to `ReplicaSet` and `Deployment`, but for `Runner`.
You usually need only `RunnerDeployment` rather than `RunnerReplicaSet`, as the former is for managing the latter.
```yaml
# runnerdeployment.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeploy
spec:
replicas: 2
template:
spec:
repository: mumoshu/actions-runner-controller-ci
env: []
```
Apply the manifest file to your cluster:
```shell
$ kubectl apply -f runner.yaml
runnerdeployment.actions.summerwind.dev/example-runnerdeploy created
```
You can see that 2 runners have been created as specified by `replicas: 2`:
```shell
$ kubectl get runners
NAME REPOSITORY STATUS
example-runnerdeploy2475h595fr mumoshu/actions-runner-controller-ci Running
example-runnerdeploy2475ht2qbr mumoshu/actions-runner-controller-ci Running
```
#### Autoscaling
`RunnerDeployment` can scale the number of runners between the `minReplicas` and `maxReplicas` fields, depending on pending workflow runs.
In the example below, the controller checks for pending workflow runs each sync period, and scales to e.g. 3 replicas if there are 3 pending jobs at sync time.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runner-deployment
spec:
template:
spec:
repository: summerwind/actions-runner-controller
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runner-deployment-autoscaler
spec:
scaleTargetRef:
name: example-runner-deployment
minReplicas: 1
maxReplicas: 3
metrics:
- type: TotalNumberOfQueuedAndInProgressWorkflowRuns
repositoryNames:
- summerwind/actions-runner-controller
```
The scale out performance is controlled via the manager container's startup `--sync-period` argument. The default value is 10 minutes, to prevent unconfigured deployments from rate-limiting themselves against the GitHub API. The period can be customised in the `config/default/manager_auth_proxy_patch.yaml` patch by those building the solution via the kustomize setup.
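As a hedged sketch only (the actual patch file contents vary between ARC versions), such a kustomize patch might look like the following, assuming the manager container is named `manager` inside the `controller-manager` Deployment:
```yaml
# Sketch of config/default/manager_auth_proxy_patch.yaml, not verbatim
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - "--metrics-addr=127.0.0.1:8080"
            - "--enable-leader-election"
            # Assumption: shorten the 10m default; more frequent syncs
            # mean more GitHub API calls, so mind your rate limit.
            - "--sync-period=5m"
```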
Additionally, the autoscaling feature has an anti-flapping option that prevents a periodic loop of scaling up and down.
By default, it doesn't scale down until a grace period of 10 minutes has passed after a scale up. The grace period can be configured by setting `scaleDownDelaySecondsAfterScaleOut`:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runner-deployment
spec:
template:
spec:
repository: summerwind/actions-runner-controller
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runner-deployment-autoscaler
spec:
scaleTargetRef:
name: example-runner-deployment
minReplicas: 1
maxReplicas: 3
scaleDownDelaySecondsAfterScaleOut: 60
metrics:
- type: TotalNumberOfQueuedAndInProgressWorkflowRuns
repositoryNames:
- summerwind/actions-runner-controller
```
If you do not want to manage an explicit list of repositories to scale, an alternative autoscaling scheme is `PercentageRunnersBusy`. The number of desired pods is evaluated by checking how many runners are currently busy and applying a scale-up or scale-down factor when certain thresholds are met. By setting the metric type to `PercentageRunnersBusy`, the HorizontalRunnerAutoscaler will query GitHub for the number of busy runners that live in the RunnerDeployment's namespace. The scale-up and scale-down thresholds are the percentages of busy runners at which the number of desired runners is re-evaluated. The scale-up and scale-down factors are the multiplicative factors applied to the current number of runners to calculate the number of desired runners. This scheme is especially useful if you want multiple controllers in various clusters, each responsible for scaling its own runner pods per namespace.
```yaml
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runner-deployment-autoscaler
spec:
scaleTargetRef:
name: example-runner-deployment
minReplicas: 1
maxReplicas: 3
scaleDownDelaySecondsAfterScaleOut: 60
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75'
scaleDownThreshold: '0.3'
scaleUpFactor: '1.4'
scaleDownFactor: '0.7'
```
## Runner with DinD
When using the default runner, the runner pod starts 2 containers: runner and DinD (Docker-in-Docker). This can create issues if a `LimitRange` is set on the namespace.
```yaml
# dindrunnerdeployment.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-dindrunnerdeploy
spec:
replicas: 2
template:
spec:
image: summerwind/actions-runner-dind
dockerdWithinRunnerContainer: true
repository: mumoshu/actions-runner-controller-ci
env: []
```
This also helps with resources, as you don't need to assign resources separately to the docker and runner containers.
## Additional tweaks
You can pass additional details through the spec. Here's an example of what you may want to do:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: actions-runner
namespace: default
spec:
replicas: 2
template:
spec:
nodeSelector:
node-role.kubernetes.io/test: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/test
operator: Exists
repository: mumoshu/actions-runner-controller-ci
image: custom-image/actions-runner:latest
imagePullPolicy: Always
resources:
limits:
cpu: "4.0"
memory: "8Gi"
requests:
cpu: "2.0"
memory: "4Gi"
# If set to false, there is no privileged container and you cannot use docker.
dockerEnabled: false
# If set to true, the runner pod contains only 1 container, which is expected to be able to run docker, too.
# The image summerwind/actions-runner-dind, or a custom one based on it, should be used when this is true.
dockerdWithinRunnerContainer: false
# Valid if dockerdWithinRunnerContainer is not true
dockerdContainerResources:
limits:
cpu: "4.0"
memory: "8Gi"
requests:
cpu: "2.0"
memory: "4Gi"
sidecarContainers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ROOT_PASSWORD
value: abcd1234
securityContext:
runAsUser: 0
# if workDir is not specified, the default working directory is /runner/_work
# this setting allows you to customize the working directory location
# for example, the below setting is the same as on the ubuntu-18.04 image
workDir: /home/runner/work
```
## Runner labels
To run a workflow job on a self-hosted runner, you can use the following syntax in your workflow:
```yaml
jobs:
release:
runs-on: self-hosted
```
When you have multiple kinds of self-hosted runners, you can distinguish between them using labels. In order to do so, you can specify one or more labels in your `Runner` or `RunnerDeployment` spec.
```yaml
# runnerdeployment.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: custom-runner
spec:
replicas: 1
template:
spec:
repository: summerwind/actions-runner-controller
labels:
- custom-runner
```
Once this spec is applied, you can observe the labels for your runner in the GitHub settings page for the repository or organization. You can now select a specific runner from your workflow by using the label in `runs-on`:
```yaml
jobs:
release:
runs-on: custom-runner
```
Note that if you specify `self-hosted` in your workflow, then this will run your job on _any_ self-hosted runner, regardless of the labels it has.
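If you want a job to run only on self-hosted runners that also carry your custom label, `runs-on` accepts an array of labels; a minimal sketch:
```yaml
jobs:
  release:
    # Routed only to self-hosted runners that have BOTH labels
    runs-on: [self-hosted, custom-runner]
```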
## Runner Groups
Runner groups can be used to limit which repositories are able to use the GitHub runners at an organization level. Runner groups have to be [created in GitHub first](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups) before they can be referenced.
To add the runner to the group `NewGroup`, specify the group in your `Runner` or `RunnerDeployment` spec.
```yaml
# runnerdeployment.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: custom-runner
spec:
replicas: 1
template:
spec:
group: NewGroup
```
## Using EKS IAM role for service accounts
`actions-runner-controller` v0.15.0 or later has support for EKS IAM role for service accounts.
As with regular pods and deployments, you first need an existing service account with the IAM role associated.
Create one using e.g. `eksctl`. You can refer to [the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) for more details.
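As a minimal sketch under assumed names (the cluster name, region, namespace, and policy ARN below are placeholders, not values from this project), an `eksctl` config that provisions an IAM-role-backed service account could look like:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster    # placeholder
  region: us-east-1   # placeholder
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: my-service-account
        namespace: default
      # Placeholder policy; attach whatever your workflows actually need
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
```
eksctl then creates the IAM role and annotates the service account with its ARN; see the EKS and eksctl documentation for the exact invocation.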
Once you have set up the service account, all you need to do is add `serviceAccountName` and `fsGroup` to any pods that use
the IAM-role-enabled service account.
For `RunnerDeployment`, you can set those two fields under the runner spec at `RunnerDeployment.Spec.Template`:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeploy
spec:
template:
spec:
repository: USER/REPO
serviceAccountName: my-service-account
securityContext:
fsGroup: 1447
```
## Software installed in the runner image
The GitHub hosted runners include a large amount of pre-installed software packages. For Ubuntu 18.04, this list can be found at <https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md>
The container image is based on Ubuntu 18.04, but it does not contain all of the software installed on the GitHub runners. It contains the following subset of packages from the GitHub runners:
- Basic CLI packages
- git (2.26)
- docker
- build-essentials
The virtual environments from GitHub contain a lot more software packages (different versions of Java, Node.js, Golang, .NET, etc.) which are not provided in the runner image. Most of these have dedicated setup actions which allow the tools to be installed on-demand in a workflow, for example: `actions/setup-java` or `actions/setup-node`.
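For instance, a workflow can pull Node.js in on demand instead of baking it into the runner image; a brief sketch (action and Node versions are arbitrary):
```yaml
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
      # Node.js is now on PATH for the remaining steps
      - run: node --version
```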
If there is a need to include packages in the runner image for which there is no setup action, then this can be achieved by building a custom container image for the runner. The easiest way is to start with the `summerwind/actions-runner` image and install the extra dependencies directly in the docker image:
```dockerfile
FROM summerwind/actions-runner:latest
RUN sudo apt update -y \
&& sudo apt install -y YOUR_PACKAGE \
&& sudo rm -rf /var/lib/apt/lists/*
```
You can then configure the runner to use a custom docker image by configuring the `image` field of a `Runner` or `RunnerDeployment`:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
name: custom-runner
spec:
repository: summerwind/actions-runner-controller
image: YOUR_CUSTOM_DOCKER_IMAGE
```
## Common Errors
### invalid header field value
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error {"controller": "runner", "request": "actions-runner-system/runner-deployment-dk7q8-dk5c9", "error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"}
```
**Solutions**<br />
Your base64'ed PAT token has a newline at the end; it needs to be created without a trailing `\n`:
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs, using the shell and the documented flags
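Alternatively, as a suggestion beyond what the docs above show, you can avoid manual base64 handling entirely by using `stringData` in a Secret manifest; Kubernetes encodes the value for you, so no trailing newline can sneak in:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: controller-manager
  namespace: actions-runner-system
stringData:
  # Kubernetes base64-encodes stringData values on admission
  github_token: YOUR_PAT_HERE  # placeholder
```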
# Developing
If you'd like to modify the controller for a fork or a contribution, I'd suggest using the following snippet for running
the acceptance test:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
NAME=$DOCKER_USER/actions-runner-controller \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make docker-build docker-push acceptance
```
Please follow the instructions explained in [Using Personal Access Token](#using-personal-access-token) to obtain
`GITHUB_TOKEN`, and those in [Using GitHub App](#using-github-app) to obtain `APP_ID`, `INSTALLATION_ID`, and
`PRIVATE_KEY_FILE_PATH`.
The test creates a one-off `kind` cluster, deploys `cert-manager` and `actions-runner-controller`, and
creates a `RunnerDeployment` custom resource for a public Git repository to confirm that the
controller is able to bring up a runner pod with the actions runner registration token installed.
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
NAME=$DOCKER_USER/actions-runner-controller \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build docker-push \
acceptance/setup acceptance/tests
```
# Alternatives
The following is a list of alternative solutions that may better fit you depending on your use-case:
- <https://github.com/evryfs/github-actions-runner-operator/>
Although the situation can change over time, as of writing this sentence, the benefits of using `actions-runner-controller` over the alternatives are:
- `actions-runner-controller` has the ability to autoscale runners based on the number of pending/in-progress jobs (#99)
- `actions-runner-controller` is able to gracefully stop runners (#103)
- `actions-runner-controller` has ARM support

View File

@@ -1,31 +0,0 @@
Thanks for helping make GitHub safe for everyone.
## Security
GitHub takes the security of our software products and services seriously, including all of the open source code repositories managed through our GitHub organizations, such as [GitHub](https://github.com/GitHub).
Even though [open source repositories are outside of the scope of our bug bounty program](https://bounty.github.com/index.html#scope) and therefore not eligible for bounty rewards, we will ensure that your finding gets passed along to the appropriate maintainers for remediation.
## Reporting Security Issues
If you believe you have found a security vulnerability in any GitHub-owned repository, please report it to us through coordinated disclosure.
**Please do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.**
Instead, please send an email to opensource-security[@]github.com.
Please include as much of the information listed below as you can to help us better understand and resolve the issue:
* The type of issue (e.g., buffer overflow, SQL injection, or cross-site scripting)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
## Policy
See [GitHub's Safe Harbor Policy](https://docs.github.com/en/github/site-policy/github-bug-bounty-program-legal-safe-harbor#1-safe-harbor-terms)

View File

@@ -1,306 +0,0 @@
# Troubleshooting
* [Tools](#tools)
* [Installation](#installation)
* [InternalError when calling webhook: context deadline exceeded](#internalerror-when-calling-webhook-context-deadline-exceeded)
* [Invalid header field value](#invalid-header-field-value)
* [Helm chart install failure: certificate signed by unknown authority](#helm-chart-install-failure-certificate-signed-by-unknown-authority)
* [Operations](#operations)
* [Stuck runner kind or backing pod](#stuck-runner-kind-or-backing-pod)
* [Delay in jobs being allocated to runners](#delay-in-jobs-being-allocated-to-runners)
* [Runner coming up before network available](#runner-coming-up-before-network-available)
* [Outgoing network action hangs indefinitely](#outgoing-network-action-hangs-indefinitely)
* [Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns](#unable-to-scale-to-zero-with-totalnumberofqueuedandinprogressworkflowruns)
* [Slow / failure to boot dind sidecar (default runner)](#slow--failure-to-boot-dind-sidecar-default-runner)
## Tools
A list of tools which are helpful for troubleshooting:
* [Kubernetes resources hierarchy parsing tool `kubectl-fields`](https://github.com/rewanthtammana/kubectl-fields)
* [Multi pod and container log tailing for Kubernetes `stern`](https://github.com/stern/stern)
## Installation
Troubleshooting runbooks that relate to ARC installation problems
### InternalError when calling webhook: context deadline exceeded
**Problem**
This issue can come up for various reasons like leftovers from previous installations or not being able to access the K8s service's clusterIP associated with the admission webhook server (of ARC).
```text
Internal error occurred: failed calling webhook "mutate.runnerdeployment.actions.summerwind.dev":
Post "https://actions-runner-controller-webhook.actions-runner-system.svc:443/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment?timeout=10s": context deadline exceeded
```
**Solution**
First, try the common solution of checking for webhook leftovers from previous installations:
1. ```bash
kubectl get validatingwebhookconfiguration -A
kubectl get mutatingwebhookconfiguration -A
```
2. If you see any webhooks related to actions-runner-controller, delete them:
```bash
kubectl delete mutatingwebhookconfiguration actions-runner-controller-mutating-webhook-configuration
kubectl delete validatingwebhookconfiguration actions-runner-controller-validating-webhook-configuration
```
If that didn't work, your Kubernetes control plane is probably unable to reach the cluster IP of the Service associated with ARC's admission webhook server. Common causes:
1. You're running the apiserver as a binary and didn't make Service cluster IPs available to the host network.
2. You're running the apiserver in a pod, but your pod network (i.e. the CNI plugin installation and configuration) is broken, so pods on the control-plane nodes (like kube-apiserver) can't reach ARC's admission webhook server pod(s) on the data-plane nodes. You can verify connectivity with the sketch after this list.
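A minimal sketch for confirming the network path, assuming the default Service name and namespace from the error message above (the probe pod name and image are illustrative):
```bash
# Check that the webhook Service exists and has ready endpoints
kubectl -n actions-runner-system get svc,endpoints actions-runner-controller-webhook

# Probe the webhook port from inside the cluster; any HTTP status code
# (even 4xx) proves the path works, while a timeout does not.
kubectl run webhook-probe --rm -i --restart=Never --image=curlimages/curl -- \
  curl -sk -o /dev/null -w '%{http_code}\n' --max-time 5 \
  https://actions-runner-controller-webhook.actions-runner-system.svc:443/
```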
Another possible cause is GKE's firewall settings: on a private GKE cluster, the control plane may be blocked from reaching the webhook port, and you'll run into the same error when trying to deploy runners.
To fix this, you may either:
1. Configure the webhook to use another port, such as 443 or 10250, [each of
which allows traffic by default](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules).
```sh
# With helm, you'd set `webhookPort` to the port number of your choice
# See https://github.com/actions/actions-runner-controller/pull/1410/files for more information
helm upgrade --install --namespace actions-runner-system --create-namespace \
--wait actions-runner-controller actions-runner-controller/actions-runner-controller \
--set webhookPort=10250
```
2. Set up a firewall rule to allow the master node to connect to the default
webhook port. The exact way to do this may vary, but the following script
should point you in the right direction:
```sh
# 1) Retrieve the network tag automatically given to the worker nodes
# NOTE: this only works if you have only one cluster in your GCP project. You will have to manually inspect the result of this command to find the tag for the cluster you want to target
WORKER_NODES_TAG=$(gcloud compute instances list --format='text(tags.items[0])' --filter='metadata.kubelet-config:*' | grep tags | awk '{print $2}' | sort | uniq)
# 2) Take note of the VPC network in which you deployed your cluster
# NOTE this only works if you have only one network in which you deploy your clusters
NETWORK=$(gcloud compute instances list --format='text(networkInterfaces[0].network)' --filter='metadata.kubelet-config:*' | grep networks | awk -F'/' '{print $NF}' | sort | uniq)
# 3) Get the master source ip block
SOURCE=$(gcloud container clusters describe <cluster-name> --region <region> | grep masterIpv4CidrBlock| cut -d ':' -f 2 | tr -d ' ')
gcloud compute firewall-rules create k8s-cert-manager --source-ranges $SOURCE --target-tags $WORKER_NODES_TAG --allow TCP:9443 --network $NETWORK
```
### Invalid header field value
**Problem**
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error
{
"controller": "runner",
"request": "actions-runner-system/runner-deployment-dk7q8-dk5c9",
"error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"
}
```
**Solution**
Your base64'ed PAT has a trailing newline; the secret needs to be created without a `\n` appended. Either:
* `echo -n $TOKEN | base64`
* Create the secret as described in the docs using the shell and documented flags
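To check an existing secret for the stray newline, a quick sketch (assuming the default secret name and namespace; a trailing newline shows up as `\n` in the output):
```bash
# Decode the stored token and inspect its last bytes
kubectl -n actions-runner-system get secret controller-manager \
  -o jsonpath='{.data.github_token}' | base64 -d | tail -c 8 | od -An -c
```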
### Helm chart install failure: certificate signed by unknown authority
**Problem**
```text
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
```
It fails while `helm` is creating one of the resources defined in the ARC chart, because cert-manager's webhook is not working correctly due to a missing or invalid CA certificate.
If you tail the logs of `cert-manager-cainjector`, you'll see it failing with an error like:
```text
$ kubectl -n cert-manager logs cert-manager-cainjector-7cdbb9c945-g6bt4
I0703 03:31:55.159339 1 start.go:91] "starting" version="v1.1.1" revision="3ac7418070e22c87fae4b22603a6b952f797ae96"
I0703 03:31:55.615061 1 leaderelection.go:243] attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election...
I0703 03:32:10.738039 1 leaderelection.go:253] successfully acquired lease kube-system/cert-manager-cainjector-leader-election
I0703 03:32:10.739941 1 recorder.go:52] cert-manager/controller-runtime/manager/events "msg"="Normal" "message"="cert-manager-cainjector-7cdbb9c945-g6bt4_88e4bc70-eded-4343-a6fb-0ddd6434eb55 became leader" "object"={"kind":"ConfigMap","namespace":"kube-system","name":"cert-manager-cainjector-leader-election","uid":"942a021e-364c-461a-978c-f54a95723cdc","apiVersion":"v1","resourceVersion":"1576"} "reason"="LeaderElection"
E0703 03:32:11.192128 1 start.go:119] cert-manager/ca-injector "msg"="manager goroutine exited" "error"=null
I0703 03:32:12.339197 1 request.go:645] Throttling request took 1.047437675s, request: GET:https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
E0703 03:32:13.143790 1 start.go:151] cert-manager/ca-injector "msg"="Error registering certificate based controllers. Retrying after 5 seconds." "error"="no matches for kind \"MutatingWebhookConfiguration\" in version \"admissionregistration.k8s.io/v1beta1\""
Error: error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
```
**Solution**
Your cluster runs Kubernetes 1.22 or greater, which no longer serves the legacy `admissionregistration.k8s.io/v1beta1` API, while your `cert-manager` is too old and still tries to use that legacy API.
Downgrading Kubernetes is rarely an option, so upgrade `cert-manager` to a more recent version that supports the specific Kubernetes version you're running.
See <https://cert-manager.io/docs/installation/supported-releases/> for the list of available cert-manager versions.
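If you originally installed cert-manager with Helm, the upgrade could look like the following sketch (the version shown is illustrative; pick one from the supported-releases page):
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Upgrade cert-manager in place; installCRDs keeps the CRDs managed by the chart
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.12.3 \
  --set installCRDs=true
```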
## Operations
Troubleshooting runbooks that relate to ARC operational problems
### Stuck runner kind or backing pod
**Problem**
Sometimes either the runner kind (`kubectl get runners`) or its underlying pod gets stuck in a terminating state for various reasons.
**Solution**
Remove the finalizer from the relevant runner kind or pod:
```bash
# Get all runner kinds and remove the finalizer
$ kubectl get runners --no-headers | awk '{print $1}' | xargs kubectl patch runner --type merge -p '{"metadata":{"finalizers":null}}'
# Get all pods that are stuck terminating and remove the finalizer
$ kubectl get pods | grep Terminating | awk '{print $1}' | xargs kubectl patch pod -p '{"metadata":{"finalizers":null}}'
```
_Note: these commands assume you have already selected the namespace your runners are in, and that the namespace is not shared with anything else._
### Delay in jobs being allocated to runners
**Problem**
ARC isn't involved in actually allocating jobs to runners; it is responsible for orchestrating runners and the runner lifecycle. It's not entirely clear why some people see large delays in job allocation, but it has been confirmed in [#1387](https://github.com/actions/actions-runner-controller/issues/1387#issuecomment-1122593984) that the runner's self-update process can cause them.
**Solution**
Disable the self-update process in your runner manifests:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeployment-with-sleep
spec:
template:
spec:
...
env:
- name: DISABLE_RUNNER_UPDATE
value: "true"
```
### Runner coming up before network available
**Problem**
If you're running your action runners on a service mesh like Istio, you might
have problems with runner configuration accompanied by logs like:
```text
....
runner Starting Runner listener with startup type: service
runner Started listener process
runner An error occurred: Not configured
runner Runner listener exited with error code 2
runner Runner listener exit with retryable error, re-launch runner in 5 seconds.
....
```
This happens because the `istio-proxy` sidecar has not finished configuring itself by the time the runner's configuration script tries to reach the network.
More broadly, there are many other circumstances where the runner pod coming up too early can cause issues.
**Solution**
> Added originally to help users with older istio instances.
> Newer Istio instances can use Istio's `holdApplicationUntilProxyStarts` attribute ([istio/istio#11130](https://github.com/istio/istio/issues/11130)) to avoid having to delay starting up the runner.
> Please read the discussion in [#592](https://github.com/actions/actions-runner-controller/pull/592) for more information.
You can add a delay to the runner's entrypoint script by setting the `STARTUP_DELAY_IN_SECONDS` environment variable on the runner pod. The script will then sleep for that many seconds before starting; this works with any runner kind.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeployment-with-sleep
spec:
template:
spec:
...
env:
- name: STARTUP_DELAY_IN_SECONDS
value: "5"
```
### Outgoing network action hangs indefinitely
**Problem**
Some outgoing network actions hang indefinitely. This can happen when your cluster does not give Docker the standard MTU of 1500. Check by running `ip link` in a pod that encounters the problem and reading the outgoing interface's MTU value; if it is smaller than 1500, try the following.
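For instance, to read the MTU from inside an affected runner pod (pod and container names are placeholders):
```bash
# Inspect the outgoing interface's MTU inside the pod
kubectl exec -it <runner-pod-name> -c runner -- ip link show eth0
```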
**Solution**
Add a `dockerMTU` key in your runner's spec with the value you read on the outgoing interface. For instance:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: github-runner
namespace: github-system
spec:
replicas: 6
template:
spec:
dockerMTU: 1400
repository: $username/$repo
env: []
```
If the issue persists, set the `ARC_DOCKER_MTU_PROPAGATION` environment variable to propagate the host MTU to networks created
by the GitHub runner. For instance:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: github-runner
namespace: github-system
spec:
replicas: 6
template:
spec:
dockerMTU: 1400
repository: $username/$repo
env:
- name: ARC_DOCKER_MTU_PROPAGATION
value: "true"
```
You can read the discussion regarding this issue in
[#1046](https://github.com/actions/actions-runner-controller/issues/1046).
### Unable to scale to zero with TotalNumberOfQueuedAndInProgressWorkflowRuns
**Problem**
HRA doesn't scale the RunnerDeployment to zero, even though you configured HRA correctly with the pull-based scaling metric `TotalNumberOfQueuedAndInProgressWorkflowRuns` and set `minReplicas: 0`.
**Solution**
You very likely have some dangling workflow jobs stuck in `queued` or `in_progress`, as seen in [#1057](https://github.com/actions/actions-runner-controller/issues/1057#issuecomment-1133439061).
Manually call [the "list workflow runs" API](https://docs.github.com/en/rest/actions/workflow-runs#list-workflow-runs-for-a-repository) and [remove the dangling workflow run(s)](https://docs.github.com/en/rest/actions/workflow-runs#delete-a-workflow-run).
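A sketch of that cleanup with the `gh` CLI, assuming it is installed and authenticated (replace `OWNER/REPO` and the run id with your values):
```bash
# List runs stuck in queued or in_progress
gh api repos/OWNER/REPO/actions/runs --paginate \
  -q '.workflow_runs[] | select(.status=="queued" or .status=="in_progress") | "\(.id) \(.status) \(.name)"'

# Delete a dangling run by id
gh api -X DELETE repos/OWNER/REPO/actions/runs/RUN_ID
```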
### Slow / failure to boot dind sidecar (default runner)
**Problem**
If it takes several minutes for the dind sidecar container to be created, or it exits with an error just after being created, you may be experiencing a disk performance issue. You might also see the message `failed to reserve container name` when scaling up multiple runners at once. SSH onto the Kubernetes node the problematic pods were scheduled on and use tools like `atop`, `htop`, or `iotop` to check I/O usage and the percentage of CPU time spent on iowait. If disk usage is high (80-100%) and iowait takes a significant chunk of your CPU time (normally it should not be higher than 10%), performance is being bottlenecked by a slow disk.
**Solution**
The solution is to switch to faster storage; if you are experiencing this issue you are probably using HDD storage, and switching to SSD storage fixed the problem in my case. Most cloud providers offer a range of storage options, so pick something faster than your current disk; for on-prem clusters you will need to invest in some SSDs.
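A quick way to confirm disk saturation on the node, assuming the `sysstat` package is available there:
```bash
# Sample extended disk stats three times, five seconds apart;
# %util near 100 and high await point at a saturated disk
iostat -xz 5 3

# Show the iowait share of CPU time (the "wa" column in the Cpu(s) line)
top -bn1 | head -n 5
```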

View File

@@ -1,100 +0,0 @@
#!/usr/bin/env bash
# See https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel/
kubectl create ns tunnel || :
kubectl -n tunnel delete secret tunnel-credentials || :
kubectl -n tunnel create secret generic tunnel-credentials \
--from-file=credentials.json=$HOME/.cloudflared/${TUNNEL_ID}.json || :
cat <<MANIFEST | kubectl -n tunnel ${OP} -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloudflared
spec:
selector:
matchLabels:
app: cloudflared
replicas: 2 # You could also consider elastic scaling for this deployment
template:
metadata:
labels:
app: cloudflared
spec:
containers:
- name: cloudflared
image: cloudflare/cloudflared:latest
args:
- tunnel
# Points cloudflared to the config file, which configures what
# cloudflared will actually do. This file is created by a ConfigMap
# below.
- --config
- /etc/cloudflared/config/config.yaml
- run
livenessProbe:
httpGet:
# Cloudflared has a /ready endpoint which returns 200 if and only if
# it has an active connection to the edge.
path: /ready
port: 2000
failureThreshold: 1
initialDelaySeconds: 10
periodSeconds: 10
volumeMounts:
- name: config
mountPath: /etc/cloudflared/config
readOnly: true
# Each tunnel has an associated "credentials file" which authorizes machines
# to run the tunnel. cloudflared will read this file from its local filesystem,
# and it'll be stored in a k8s secret.
- name: creds
mountPath: /etc/cloudflared/creds
readOnly: true
volumes:
- name: creds
secret:
secretName: tunnel-credentials
# Create a config.yaml file from the ConfigMap below.
- name: config
configMap:
name: cloudflared
items:
- key: config.yaml
path: config.yaml
---
# This ConfigMap is just a way to define the cloudflared config.yaml file in k8s.
# It's useful to define it in k8s, rather than as a stand-alone .yaml file, because
# this lets you use various k8s templating solutions (e.g. Helm charts) to
# parameterize your config, instead of just using string literals.
apiVersion: v1
kind: ConfigMap
metadata:
name: cloudflared
data:
config.yaml: |
# Name of the tunnel you want to run
tunnel: ${TUNNEL_NAME}
credentials-file: /etc/cloudflared/creds/credentials.json
# Serves the metrics server under /metrics and the readiness server under /ready
metrics: 0.0.0.0:2000
# Autoupdates applied in a k8s pod will be lost when the pod is removed or restarted, so
# autoupdate doesn't make sense in Kubernetes. However, outside of Kubernetes, we strongly
# recommend using autoupdate.
no-autoupdate: true
ingress:
# The first rule proxies traffic to the httpbin sample Service defined in app.yaml
- hostname: ${TUNNEL_HOSTNAME}
service: http://actions-runner-controller-actions-metrics-server.actions-runner-system:80
path: /metrics$
- hostname: ${TUNNEL_HOSTNAME}
service: http://actions-runner-controller-github-webhook-server.actions-runner-system:80
# This rule matches any traffic which didn't match a previous rule, and responds with HTTP 404.
- service: http_status:404
MANIFEST
kubectl -n tunnel delete po -l app=cloudflared || :
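A hypothetical invocation of this script, assuming the tunnel was created beforehand and the script is saved as `deploy-cloudflared.sh` (both names are illustrative):
```bash
# Create the tunnel once; cloudflared writes ~/.cloudflared/<TUNNEL_ID>.json
cloudflared tunnel create arc-webhook

# Deploy (or update) the in-cluster cloudflared Deployment
TUNNEL_ID=<id-printed-by-create> \
TUNNEL_NAME=arc-webhook \
TUNNEL_HOSTNAME=arc.example.com \
OP=apply \
bash deploy-cloudflared.sh
```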

View File

@@ -1,84 +1,29 @@
#!/usr/bin/env bash
set +e
set -e
repo_runnerdeployment_passed="skipped"
repo_runnerset_passed="skipped"
runner_name=
echo "Checking if RunnerDeployment repo test is set"
if [ "${TEST_REPO}" ] && [ ! "${USE_RUNNERSET}" ]; then
runner_name=
count=0
while [ $count -le 30 ]; do
echo "Finding Runner ..."
runner_name=$(kubectl get runner --output=jsonpath="{.items[*].metadata.name}")
if [ "${runner_name}" ]; then
while [ $count -le 30 ]; do
runner_pod_name=
echo "Found Runner \""${runner_name}"\""
echo "Finding underlying pod ..."
runner_pod_name=$(kubectl get pod --output=jsonpath="{.items[*].metadata.name}" | grep ${runner_name})
if [ "${runner_pod_name}" ]; then
echo "Found underlying pod \""${runner_pod_name}"\""
echo "Waiting for pod \""${runner_pod_name}"\" to become ready..."
kubectl wait pod/${runner_pod_name} --for condition=ready --timeout 270s
break 2
fi
sleep 1
let "count=count+1"
done
fi
sleep 1
let "count=count+1"
done
if [ $count -ge 30 ]; then
repo_runnerdeployment_passed=false
else
repo_runnerdeployment_passed=true
fi
echo "Checking if RunnerSet repo test is set"
elif [ "${TEST_REPO}" ] && [ "${USE_RUNNERSET}" ]; then
runnerset_name=
count=0
while [ $count -le 30 ]; do
echo "Finding RunnerSet ..."
runnerset_name=$(kubectl get runnerset --output=jsonpath="{.items[*].metadata.name}")
if [ "${runnerset_name}" ]; then
while [ $count -le 30 ]; do
runnerset_pod_name=
echo "Found RunnerSet \""${runnerset_name}"\""
echo "Finding underlying pod ..."
runnerset_pod_name=$(kubectl get pod --output=jsonpath="{.items[*].metadata.name}" | grep ${runnerset_name})
echo "BEFORE IF"
if [ "${runnerset_pod_name}" ]; then
echo "AFTER IF"
echo "Found underlying pod \""${runnerset_pod_name}"\""
echo "Waiting for pod \""${runnerset_pod_name}"\" to become ready..."
kubectl wait pod/${runnerset_pod_name} --for condition=ready --timeout 270s
break 2
fi
sleep 1
let "count=count+1"
done
fi
sleep 1
let "count=count+1"
done
if [ $count -ge 30 ]; then
repo_runnerset_passed=false
else
repo_runnerset_passed=true
fi
fi
while [ -z "${runner_name}" ]; do
echo Finding the runner... 1>&2
sleep 1
runner_name=$(kubectl get runner --output=jsonpath="{.items[*].metadata.name}")
done
if [ ${repo_runnerset_passed} == true ] || [ ${repo_runnerset_passed} == "skipped" ] && \
[ ${repo_runnerdeployment_passed} == true ] || [ ${repo_runnerdeployment_passed} == "skipped" ]; then
echo "INFO : All tests passed or skipped"
echo "RunnerSet Repo Test Status : ${repo_runnerset_passed}"
echo "RunnerDeployment Repo Test Status : ${repo_runnerdeployment_passed}"
else
echo "ERROR : Some tests failed"
echo "RunnerSet Repo Test Status : ${repo_runnerset_passed}"
echo "RunnerDeployment Repo Test Status : ${repo_runnerdeployment_passed}"
exit 1
fi
echo Found runner ${runner_name}.
pod_name=
while [ -z "${pod_name}" ]; do
echo Finding the runner pod... 1>&2
sleep 1
pod_name=$(kubectl get pod --output=jsonpath="{.items[*].metadata.name}" | grep ${runner_name})
done
echo Found pod ${pod_name}.
echo Waiting for pod ${runner_name} to become ready... 1>&2
kubectl wait pod/${runner_name} --for condition=ready --timeout 180s
echo All tests passed. 1>&2

View File

@@ -4,126 +4,39 @@ set -e
tpe=${ACCEPTANCE_TEST_SECRET_TYPE}
VALUES_FILE=${VALUES_FILE:-$(dirname $0)/values.yaml}
kubectl delete secret -n actions-runner-system controller-manager || :
if [ "${tpe}" == "token" ]; then
if ! kubectl get secret controller-manager -n actions-runner-system >/dev/null; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_token=${GITHUB_TOKEN:?GITHUB_TOKEN must not be empty}
fi
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_token=${GITHUB_TOKEN:?GITHUB_TOKEN must not be empty}
elif [ "${tpe}" == "app" ]; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_app_id=${APP_ID:?must not be empty} \
--from-literal=github_app_installation_id=${APP_INSTALLATION_ID:?must not be empty} \
--from-file=github_app_private_key=${APP_PRIVATE_KEY_FILE:?must not be empty}
--from-literal=github_app_installation_id=${INSTALLATION_ID:?must not be empty} \
--from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH:?must not be empty}
else
echo "ACCEPTANCE_TEST_SECRET_TYPE must be set to either \"token\" or \"app\"" 1>&2
exit 1
fi
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ]; then
kubectl -n actions-runner-system delete secret \
github-webhook-server || :
kubectl -n actions-runner-system create secret generic \
github-webhook-server \
--from-literal=github_token=${WEBHOOK_GITHUB_TOKEN:?WEBHOOK_GITHUB_TOKEN must not be empty}
else
echo 'Skipped deploying secret "github-webhook-server". Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
fi
if [ -n "${WEBHOOK_GITHUB_TOKEN}" ] && [ -z "${CREATE_SECRETS_USING_HELM}" ]; then
kubectl -n actions-runner-system delete secret \
actions-metrics-server || :
kubectl -n actions-runner-system create secret generic \
actions-metrics-server \
--from-literal=github_token=${WEBHOOK_GITHUB_TOKEN:?WEBHOOK_GITHUB_TOKEN must not be empty}
else
echo 'Skipped deploying secret "actions-metrics-server". Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
fi
tool=${ACCEPTANCE_TEST_DEPLOYMENT_TOOL}
TEST_ID=${TEST_ID:-default}
if [ "${tool}" == "helm" ]; then
set -v
CHART=${CHART:-charts/actions-runner-controller}
flags=()
if [ "${IMAGE_PULL_SECRET}" != "" ]; then
flags+=( --set imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set image.actionsRunnerImagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set githubWebhookServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
flags+=( --set actionsMetricsServer.imagePullSecrets[0].name=${IMAGE_PULL_SECRET})
fi
if [ "${WATCH_NAMESPACE}" != "" ]; then
flags+=( --set watchNamespace=${WATCH_NAMESPACE} --set singleNamespace=true)
fi
if [ "${CHART_VERSION}" != "" ]; then
flags+=( --version ${CHART_VERSION})
fi
if [ "${LOG_FORMAT}" != "" ]; then
flags+=( --set logFormat=${LOG_FORMAT})
flags+=( --set githubWebhookServer.logFormat=${LOG_FORMAT})
flags+=( --set actionsMetricsServer.logFormat=${LOG_FORMAT})
fi
if [ "${ADMISSION_WEBHOOKS_TIMEOUT}" != "" ]; then
flags+=( --set admissionWebHooks.timeoutSeconds=${ADMISSION_WEBHOOKS_TIMEOUT})
fi
if [ -n "${CREATE_SECRETS_USING_HELM}" ]; then
if [ -z "${WEBHOOK_GITHUB_TOKEN}" ]; then
echo 'Failed deploying secret "actions-metrics-server" using helm. Set WEBHOOK_GITHUB_TOKEN to deploy.' 1>&2
exit 1
fi
flags+=( --set actionsMetricsServer.secret.create=true)
flags+=( --set actionsMetricsServer.secret.github_token=${WEBHOOK_GITHUB_TOKEN})
fi
if [ -n "${GITHUB_WEBHOOK_SERVER_ENV_NAME}" ] && [ -n "${GITHUB_WEBHOOK_SERVER_ENV_VALUE}" ]; then
flags+=( --set githubWebhookServer.env[0].name=${GITHUB_WEBHOOK_SERVER_ENV_NAME})
flags+=( --set githubWebhookServer.env[0].value=${GITHUB_WEBHOOK_SERVER_ENV_VALUE})
fi
set -vx
helm upgrade --install actions-runner-controller \
${CHART} \
charts/actions-runner-controller \
-n actions-runner-system \
--create-namespace \
--set syncPeriod=${SYNC_PERIOD} \
--set authSecret.create=false \
--set image.repository=${NAME} \
--set image.tag=${VERSION} \
--set podAnnotations.test-id=${TEST_ID} \
--set githubWebhookServer.podAnnotations.test-id=${TEST_ID} \
--set actionsMetricsServer.podAnnotations.test-id=${TEST_ID} \
${flags[@]} --set image.imagePullPolicy=${IMAGE_PULL_POLICY} \
-f ${VALUES_FILE}
set +v
# To prevent `CustomResourceDefinition.apiextensions.k8s.io "runners.actions.summerwind.dev" is invalid: metadata.annotations: Too long: must have at most 262144 bytes`
# errors
kubectl create -f charts/actions-runner-controller/crds || kubectl replace -f charts/actions-runner-controller/crds
# This wait fails due to timeout when it's already in crashloopback and this update doesn't change the image tag.
# That's why we add `|| :`. With that we prevent stopping the script in case of timeout and
# proceed to delete (possibly in crashloopback and/or running with outdated image) pods so that they are recreated by K8s.
kubectl -n actions-runner-system wait deploy/actions-runner-controller --for condition=available --timeout 60s || :
--set syncPeriod=5m
kubectl -n actions-runner-system wait deploy/actions-runner-controller --for condition=available
else
kubectl apply \
-n actions-runner-system \
-f release/actions-runner-controller.yaml
kubectl -n actions-runner-system wait deploy/controller-manager --for condition=available --timeout 120s || :
kubectl -n actions-runner-system wait deploy/controller-manager --for condition=available --timeout 60s
fi
# Restart all ARC pods
kubectl -n actions-runner-system delete po -l app.kubernetes.io/name=actions-runner-controller
echo Waiting for all ARC pods to be up and running after restart
kubectl -n actions-runner-system wait deploy/actions-runner-controller --for condition=available --timeout 120s
# Adhocly wait for some time until actions-runner-controller's admission webhook gets ready
sleep 20
kubectl apply \
-f acceptance/testdata/runnerdeploy.yaml

View File

@@ -1,60 +0,0 @@
#!/usr/bin/env bash
set -e
OP=${OP:-apply}
RUNNER_LABEL=${RUNNER_LABEL:-self-hosted}
cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=${RUNNER_NAMESPACE} envsubst | kubectl apply -f -
if [ -n "${TEST_REPO}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerset envsubst | kubectl ${OP} -f -
else
echo "Running ${OP} runnerdeployment and hra. Set USE_RUNNERSET if you want to deploy runnerset instead."
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_ORG= RUNNER_MIN_REPLICAS=${REPO_RUNNER_MIN_REPLICAS} NAME=repo-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} for runnerdeployment and hra. Set TEST_REPO to "yourorg/yourrepo" to deploy."
fi
if [ -n "${TEST_ORG}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} NAME=org-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ORG_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ENTERPRISE= TEST_REPO= RUNNER_MIN_REPLICAS=${ORG_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ORG_GROUP} NAME=orggroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ORG_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on organizational runnerdeployment. Set TEST_ORG to ${OP}."
fi
if [ -n "${TEST_ENTERPRISE}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} NAME=enterprise-runnerdeploy envsubst | kubectl ${OP} -f -
fi
if [ -n "${TEST_ENTERPRISE_GROUP}" ]; then
if [ "${USE_RUNNERSET}" != "false" ]; then
cat acceptance/testdata/runnerset.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerset envsubst | kubectl ${OP} -f -
else
cat acceptance/testdata/runnerdeploy.envsubst.yaml | TEST_ORG= TEST_REPO= RUNNER_MIN_REPLICAS=${ENTERPRISE_RUNNER_MIN_REPLICAS} TEST_GROUP=${TEST_ENTERPRISE_GROUP} NAME=enterprisegroup-runnerdeploy envsubst | kubectl ${OP} -f -
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE_GROUP to ${OP}."
fi
else
echo "Skipped ${OP} on enterprise runnerdeployment. Set TEST_ENTERPRISE to ${OP}."
fi

View File

@@ -1,10 +0,0 @@
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 31000
hostPort: 31000
listenAddress: "0.0.0.0"
protocol: tcp
#- role: worker
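A sketch of booting a local cluster from this config, assuming it is saved as `kind.yaml` (the cluster name is arbitrary):
```bash
# The extraPortMappings above expose NodePort 31000 (the webhook server) on the host
kind create cluster --name arc-test --config kind.yaml
kubectl cluster-info --context kind-arc-test
```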

View File

@@ -1,36 +0,0 @@
name: EKS Integration Tests
on:
workflow_dispatch:
env:
IRSA_ROLE_ARN:
ASSUME_ROLE_ARN:
AWS_REGION:
jobs:
assume-role-in-runner-test:
runs-on: ['self-hosted', 'Linux']
steps:
- name: Test aws-actions/configure-aws-credentials Action
uses: aws-actions/configure-aws-credentials@v1
with:
aws-region: ${{ env.AWS_REGION }}
role-to-assume: ${{ env.ASSUME_ROLE_ARN }}
role-duration-seconds: 900
assume-role-in-container-test:
runs-on: ['self-hosted', 'Linux']
container:
image: amazon/aws-cli
env:
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
AWS_ROLE_ARN: ${{ env.IRSA_ROLE_ARN }}
volumes:
- /var/run/secrets/eks.amazonaws.com/serviceaccount/token:/var/run/secrets/eks.amazonaws.com/serviceaccount/token
steps:
- name: Test aws-actions/configure-aws-credentials Action in container
uses: aws-actions/configure-aws-credentials@v1
with:
aws-region: ${{ env.AWS_REGION }}
role-to-assume: ${{ env.ASSUME_ROLE_ARN }}
role-duration-seconds: 900

View File

@@ -1,83 +0,0 @@
name: Runner Integration Tests
on:
workflow_dispatch:
env:
ImageOS: ubuntu18 # Used by ruby/setup-ruby action | Update me for the runner OS version you are testing against
jobs:
run-step-in-container-test:
runs-on: ['self-hosted', 'Linux']
container:
image: alpine
steps:
- name: Test we are working in the container
run: |
if [[ $(sed -n '2p' < /etc/os-release | cut -d "=" -f2) != "alpine" ]]; then
echo "::error ::Failed OS detection test, could not match /etc/os-release with alpine. Are we really running in the container?"
echo "/etc/os-release below:"
cat /etc/os-release
exit 1
fi
setup-python-test:
runs-on: ['self-hosted', 'Linux']
steps:
- name: Print native Python environment
run: |
which python
python --version
- uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Test actions/setup-python works
run: |
VERSION=$(python --version 2>&1 | cut -d ' ' -f2 | cut -d '.' -f1-2)
if [[ $VERSION != '3.9' ]]; then
echo "Python version detected : $(python --version 2>&1)"
echo "::error ::Detected python failed setup version test, could not match version with version specified in the setup action"
exit 1
else
echo "Python version detected : $(python --version 2>&1)"
fi
setup-node-test:
runs-on: ['self-hosted', 'Linux']
steps:
- uses: actions/setup-node@v2
with:
node-version: '12'
- name: Test actions/setup-node works
run: |
VERSION=$(node --version | cut -c 2- | cut -d '.' -f1)
if [[ $VERSION != '12' ]]; then
echo "Node version detected : $(node --version 2>&1)"
echo "::error ::Detected node failed setup version test, could not match version with version specified in the setup action"
exit 1
else
echo "Node version detected : $(node --version 2>&1)"
fi
setup-ruby-test:
runs-on: ['self-hosted', 'Linux']
steps:
- uses: ruby/setup-ruby@v1
with:
ruby-version: 3.0
bundler-cache: true
- name: Test ruby/setup-ruby works
run: |
VERSION=$(ruby --version | cut -d ' ' -f2 | cut -d '.' -f1-2)
if [[ $VERSION != '3.0' ]]; then
echo "Ruby version detected : $(ruby --version 2>&1)"
echo "::error ::Detected ruby failed setup version test, could not match version with version specified in the setup action"
exit 1
else
echo "Ruby version detected : $(ruby --version 2>&1)"
fi
python-shell-test:
runs-on: ['self-hosted', 'Linux']
steps:
- name: Test Python shell works
run: |
import os
print(os.environ['PATH'])
shell: python

View File

@@ -1,86 +0,0 @@
# USAGE:
# cat acceptance/testdata/kubernetes_container_mode.envsubst.yaml | NAMESPACE=default envsubst | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-mode-runner
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list", "watch",]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "delete"]
# Needed to report test success by creating a ConfigMap from within a workflow job step
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: runner-status-updater
rules:
- apiGroups: ["actions.summerwind.dev"]
resources: ["runners/status"]
verbs: ["get", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
---
# To verify it's working, try:
# kubectl auth can-i --as system:serviceaccount:default:runner get pod
# If incomplete, workflows and jobs would fail with an error message like:
# Error: Error: The Service account needs the following permissions [{"group":"","verbs":["get","list","create","delete"],"resource":"pods","subresource":""},{"group":"","verbs":["get","create"],"resource":"pods","subresource":"exec"},{"group":"","verbs":["get","list","watch"],"resource":"pods","subresource":"log"},{"group":"batch","verbs":["get","list","create","delete"],"resource":"jobs","subresource":""},{"group":"","verbs":["create","delete","get","list"],"resource":"secrets","subresource":""}] on the pod resource in the 'default' namespace. Please contact your self hosted runner administrator.
# Error: Process completed with exit code 1.
apiVersion: rbac.authorization.k8s.io/v1
# This role binding grants the runner service account the permissions
# defined in the k8s-mode-runner ClusterRole within the runner namespace.
kind: RoleBinding
metadata:
name: runner-k8s-mode-runner
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: k8s-mode-runner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: runner-runner-status-updater
namespace: ${NAMESPACE}
subjects:
- kind: ServiceAccount
name: ${RUNNER_SERVICE_ACCOUNT_NAME}
namespace: ${NAMESPACE}
roleRef:
kind: ClusterRole
name: runner-status-updater
apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: org-runnerdeploy-runner-work-dir
labels:
content: org-runnerdeploy-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
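To double-check the grants after applying this manifest, a sketch along the lines of the `kubectl auth can-i` comment earlier in the file (the namespace and service account name are the template defaults; adjust to your values):
```bash
SA=system:serviceaccount:default:runner
for verb in get list create delete; do
  echo -n "pods/${verb}: "
  kubectl auth can-i --as "$SA" "$verb" pods
done
echo -n "pods exec (create): "
kubectl auth can-i --as "$SA" create pods --subresource=exec
echo -n "jobs (create): "
kubectl auth can-i --as "$SA" create jobs.batch
```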

View File

@@ -1,139 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-runner-work-dir
labels:
content: ${NAME}-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-rootless-dind-work-dir
labels:
content: ${NAME}-rootless-dind-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: ${NAME}
spec:
# replicas: 1
template:
spec:
enterprise: ${TEST_ENTERPRISE}
group: ${TEST_GROUP}
organization: ${TEST_ORG}
repository: ${TEST_REPO}
#
# Custom runner image
#
image: ${RUNNER_NAME}:${RUNNER_TAG}
imagePullPolicy: IfNotPresent
ephemeral: ${TEST_EPHEMERAL}
#
# dockerd within runner container
#
## Replace `mumoshu/actions-runner-dind:dev` with your dind image
#dockerdWithinRunnerContainer: true
#image: mumoshu/actions-runner-dind:dev
dockerdWithinRunnerContainer: ${RUNNER_DOCKERD_WITHIN_RUNNER_CONTAINER}
#
# Set the MTU used by dockerd-managed network interfaces (including docker-build-ubuntu)
#
#dockerMTU: 1450
#Runner group
# labels:
# - "mylabel 1"
# - "mylabel 2"
labels:
- "${RUNNER_LABEL}"
serviceAccountName: ${RUNNER_SERVICE_ACCOUNT_NAME}
terminationGracePeriodSeconds: ${RUNNER_TERMINATION_GRACE_PERIOD_SECONDS}
env:
- name: RUNNER_GRACEFUL_STOP_TIMEOUT
value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
- name: ROLLING_UPDATE_PHASE
value: "${ROLLING_UPDATE_PHASE}"
- name: ARC_DOCKER_MTU_PROPAGATION
value: "true"
# https://github.com/docker/docs/issues/8663
- name: DOCKER_DEFAULT_ADDRESS_POOL_BASE
value: "172.17.0.0/12"
- name: DOCKER_DEFAULT_ADDRESS_POOL_SIZE
value: "24"
- name: WAIT_FOR_DOCKER_SECONDS
value: "3"
dockerMTU: 1400
dockerEnv:
- name: RUNNER_GRACEFUL_STOP_TIMEOUT
value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
# Fix the following no space left errors with rootless-dind runners that can happen while running buildx build:
# ------
# > [4/5] RUN go mod download:
# ------
# ERROR: failed to solve: failed to prepare yxsw8lv9hqnuafzlfta244l0z: mkdir /home/runner/.local/share/docker/vfs/dir/yxsw8lv9hqnuafzlfta244l0z/usr/local/go/src/cmd/compile/internal/types2/testdata: no space left on device
# Error: Process completed with exit code 1.
#
volumeMounts:
- name: rootless-dind-work-dir
# Omit the /share/docker part of the /home/runner/.local/share/docker as
# that part is created by dockerd.
mountPath: /home/runner/.local
readOnly: false
volumes:
- name: rootless-dind-work-dir
ephemeral:
volumeClaimTemplate:
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "${NAME}-rootless-dind-work-dir"
resources:
requests:
storage: 3Gi
#
# Non-standard working directory
#
# workDir: "/"
# # Uncomment the below to enable the kubernetes container mode
# # See https://github.com/actions/actions-runner-controller#runner-with-k8s-jobs
containerMode: ${RUNNER_CONTAINER_MODE}
workVolumeClaimTemplate:
accessModes:
- ReadWriteOnce
storageClassName: "${NAME}-runner-work-dir"
resources:
requests:
storage: 10Gi
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: ${NAME}
spec:
scaleTargetRef:
name: ${NAME}
scaleUpTriggers:
- githubEvent:
workflowJob: {}
amount: 1
duration: "10m"
minReplicas: ${RUNNER_MIN_REPLICAS}
maxReplicas: 10
scaleDownDelaySecondsAfterScaleOut: ${RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT}

acceptance/testdata/runnerdeploy.yaml vendored Normal file
View File

@@ -0,0 +1,9 @@
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeploy
spec:
# replicas: 1
template:
spec:
repository: mumoshu/actions-runner-controller-ci
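A sketch of exercising this test manifest, assuming a cluster with the ARC CRDs installed:
```bash
kubectl apply -f acceptance/testdata/runnerdeploy.yaml
# Watch the deployment materialize into Runner resources and pods
kubectl get runnerdeployments,runners,pods
```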

View File

@@ -1,312 +0,0 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-runner-work-dir
labels:
content: ${NAME}-runner-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}
  # In kind environments, the provisioner writes to:
  #   /var/lib/docker/volumes/KIND_NODE_CONTAINER_VOL_ID/_data/local-path-provisioner/PV_NAME
  # It can grow to hundreds of gigabytes depending on what you cache in the test workflow.
  # Beware of `no space left on device` errors! If you encounter them, try:
  #   docker system prune
  #   docker buildx prune  #=> frees up /var/lib/docker/volumes/buildx_buildkit_container-builder0_state
  #   sudo rm -rf /var/lib/docker/volumes/KIND_NODE_CONTAINER_VOL_ID/_data/local-path-provisioner  #=> frees up local-path-provisioner's data
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-var-lib-docker
labels:
content: ${NAME}-var-lib-docker
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-cache
labels:
content: ${NAME}-cache
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-runner-tool-cache
labels:
content: ${NAME}-runner-tool-cache
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ${NAME}-rootless-dind-work-dir
labels:
content: ${NAME}-rootless-dind-work-dir
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
name: ${NAME}
spec:
  # MANDATORY because RunnerSet is based on StatefulSet: omitting it results in an error like:
  #   missing required field "selector" in dev.summerwind.actions.v1alpha1.RunnerSet.spec
  selector:
    matchLabels:
      app: ${NAME}
  # MANDATORY because RunnerSet is based on StatefulSet: omitting it results in an error like:
  #   missing required field "serviceName" in dev.summerwind.actions.v1alpha1.RunnerSet.spec
  serviceName: ${NAME}
#replicas: 1
  # From my limited testing, `ephemeral: true` is more reliable.
  # Sometimes, updating already deployed runners from `ephemeral: false` to `ephemeral: true` seems to
  # result in queued jobs hanging forever.
ephemeral: ${TEST_EPHEMERAL}
enterprise: ${TEST_ENTERPRISE}
group: ${TEST_GROUP}
organization: ${TEST_ORG}
repository: ${TEST_REPO}
#
# Custom runner image
#
image: ${RUNNER_NAME}:${RUNNER_TAG}
#
# dockerd within runner container
#
## Replace `mumoshu/actions-runner-dind:dev` with your dind image
#dockerdWithinRunnerContainer: true
dockerdWithinRunnerContainer: ${RUNNER_DOCKERD_WITHIN_RUNNER_CONTAINER}
#
# Set the MTU used by dockerd-managed network interfaces (including docker-build-ubuntu)
#
#dockerMTU: 1450
#Runner group
# labels:
# - "mylabel 1"
# - "mylabel 2"
labels:
- "${RUNNER_LABEL}"
#
# Non-standard working directory
#
# workDir: "/"
template:
metadata:
labels:
app: ${NAME}
spec:
serviceAccountName: ${RUNNER_SERVICE_ACCOUNT_NAME}
terminationGracePeriodSeconds: ${RUNNER_TERMINATION_GRACE_PERIOD_SECONDS}
containers:
# # Uncomment only when non-dind-runner / you're using docker sidecar
# - name: docker
# # Image is required for the dind sidecar definition within RunnerSet spec
# image: "docker:dind"
# env:
# - name: RUNNER_GRACEFUL_STOP_TIMEOUT
# value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
- name: runner
imagePullPolicy: IfNotPresent
env:
- name: RUNNER_GRACEFUL_STOP_TIMEOUT
value: "${RUNNER_GRACEFUL_STOP_TIMEOUT}"
- name: RUNNER_FEATURE_FLAG_EPHEMERAL
value: "${RUNNER_FEATURE_FLAG_EPHEMERAL}"
- name: GOMODCACHE
value: "/home/runner/.cache/go-mod"
- name: ROLLING_UPDATE_PHASE
value: "${ROLLING_UPDATE_PHASE}"
# PV-backed runner work dir
volumeMounts:
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# The volume and mount with the same names will be created by workVolumeClaimTemplate and the kubernetes container mode support.
# - name: work
# mountPath: /runner/_work
# Cache docker image layers, in case dockerdWithinRunnerContainer=true
- name: var-lib-docker
mountPath: /var/lib/docker
# Cache go modules and builds
# - name: gocache
# # Run `goenv | grep GOCACHE` to verify the path is correct for your env
# mountPath: /home/runner/.cache/go-build
# - name: gomodcache
# # Run `goenv | grep GOMODCACHE` to verify the path is correct for your env
# # mountPath: /home/runner/go/pkg/mod
- name: cache
# go: could not create module cache: stat /home/runner/.cache/go-mod: permission denied
mountPath: "/home/runner/.cache"
- name: runner-tool-cache
# This corresponds to our runner image's default setting of RUNNER_TOOL_CACHE=/opt/hostedtoolcache.
#
# In case you customize the envvar in both runner and docker containers of the runner pod spec,
# You'd need to change this mountPath accordingly.
#
# The tool cache directory is defined in actions/toolkit's tool-cache module:
# https://github.com/actions/toolkit/blob/2f164000dcd42fb08287824a3bc3030dbed33687/packages/tool-cache/src/tool-cache.ts#L621-L638
#
# Many setup-* actions like setup-go utilizes the tool-cache module to download and cache installed binaries:
# https://github.com/actions/setup-go/blob/56a61c9834b4a4950dbbf4740af0b8a98c73b768/src/installer.ts#L144
mountPath: "/opt/hostedtoolcache"
# Valid only when dockerdWithinRunnerContainer=false
# - name: docker
# # PV-backed runner work dir
# volumeMounts:
# - name: work
# mountPath: /runner/_work
# # Cache docker image layers, in case dockerdWithinRunnerContainer=false
# - name: var-lib-docker
# mountPath: /var/lib/docker
# # image: mumoshu/actions-runner-dind:dev
# # For buildx cache
# - name: cache
# mountPath: "/home/runner/.cache"
# For fixing no space left error on rootless dind runner
- name: rootless-dind-work-dir
# Omit the /share/docker part of the /home/runner/.local/share/docker as
# that part is created by dockerd.
mountPath: /home/runner/.local
readOnly: false
# Comment out the ephemeral work volume if you're going to test the kubernetes container mode
# volumes:
# - name: work
# ephemeral:
# volumeClaimTemplate:
# spec:
# accessModes:
# - ReadWriteOnce
# storageClassName: "${NAME}-runner-work-dir"
# resources:
# requests:
# storage: 10Gi
# Fix the following no space left errors with rootless-dind runners that can happen while running buildx build:
# ------
# > [4/5] RUN go mod download:
# ------
# ERROR: failed to solve: failed to prepare yxsw8lv9hqnuafzlfta244l0z: mkdir /home/runner/.local/share/docker/vfs/dir/yxsw8lv9hqnuafzlfta244l0z/usr/local/go/src/cmd/compile/internal/types2/testdata: no space left on device
# Error: Process completed with exit code 1.
#
volumes:
- name: rootless-dind-work-dir
ephemeral:
volumeClaimTemplate:
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "${NAME}-rootless-dind-work-dir"
resources:
requests:
storage: 3Gi
volumeClaimTemplates:
- metadata:
name: vol1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}
      ## It's unclear which provisioner supports auto-provisioning with a selector.
      ## At least the rancher local-path provisioner stopped with:
      ##   waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
# selector:
# matchLabels:
# runnerset-volume-id: ${NAME}-vol1
- metadata:
name: vol2
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}
# selector:
# matchLabels:
# runnerset-volume-id: ${NAME}-vol2
- metadata:
name: var-lib-docker
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}-var-lib-docker
- metadata:
name: cache
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}-cache
- metadata:
name: runner-tool-cache
    # It turns out labels don't distinguish PVs across PVCs, and the end
    # result is PVs being reused by the wrong PVCs. The correct approach
    # seems to be a distinct storage class per PVC template.
# labels:
# id: runner-tool-cache
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: ${NAME}-runner-tool-cache
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: ${NAME}
spec:
scaleTargetRef:
kind: RunnerSet
name: ${NAME}
scaleUpTriggers:
- githubEvent:
workflowJob: {}
amount: 1
duration: "10m"
minReplicas: ${RUNNER_MIN_REPLICAS}
maxReplicas: 10
scaleDownDelaySecondsAfterScaleOut: ${RUNNER_SCALE_DOWN_DELAY_SECONDS_AFTER_SCALE_OUT}
# Comment out the whole metrics if you'd like to solely test webhook-based scaling
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75'
scaleDownThreshold: '0.25'
scaleUpFactor: '2'
scaleDownFactor: '0.5'

View File

@@ -1,55 +0,0 @@
# Set actions-runner-controller settings for testing
logLevel: "-4"
imagePullSecrets: []
image:
# This needs to be an empty array rather than a single-item array with empty name.
# Otherwise you end up with the following error on helm-upgrade:
# Error: UPGRADE FAILED: failed to create patch: map: map[] does not contain declared merge key: name && failed to create patch: map: map[] does not contain declared merge key: name
actionsRunnerImagePullSecrets: []
runner:
statusUpdateHook:
enabled: true
rbac:
allowGrantingKubernetesContainerModePermissions: true
githubWebhookServer:
imagePullSecrets: []
logLevel: "-4"
enabled: true
labels: {}
replicaCount: 1
syncPeriod: 10m
useRunnerGroupsVisibility: true
secret:
enabled: true
# create: true
name: "github-webhook-server"
### GitHub Webhook Configuration
#github_webhook_secret_token: ""
service:
type: NodePort
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
nodePort: 31000
actionsMetricsServer:
imagePullSecrets: []
logLevel: "-4"
enabled: true
labels: {}
replicaCount: 1
secret:
enabled: true
# create: true
name: "actions-metrics-server"
### GitHub Webhook Configuration
#github_webhook_secret_token: ""
service:
type: NodePort
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
nodePort: 31001

View File

@@ -0,0 +1,122 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerSpec struct {
// ScaleTargetRef is the reference to the scaled resource, such as a RunnerDeployment
ScaleTargetRef ScaleTargetRef `json:"scaleTargetRef,omitempty"`
// MinReplicas is the minimum number of replicas the deployment is allowed to scale down to
// +optional
MinReplicas *int `json:"minReplicas,omitempty"`
// MaxReplicas is the maximum number of replicas the deployment is allowed to scale up to
// +optional
MaxReplicas *int `json:"maxReplicas,omitempty"`
// ScaleDownDelaySecondsAfterScaleUp is the approximate delay applied before a scale down, once a scale up has happened.
// Used to prevent flapping (down->up->down->... loop)
// +optional
ScaleDownDelaySecondsAfterScaleUp *int `json:"scaleDownDelaySecondsAfterScaleOut,omitempty"`
// Metrics is the collection of various metric targets to calculate desired number of runners
// +optional
Metrics []MetricSpec `json:"metrics,omitempty"`
}
type ScaleTargetRef struct {
Name string `json:"name,omitempty"`
}
type MetricSpec struct {
// Type is the type of metric to be used for autoscaling.
// The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
Type string `json:"type,omitempty"`
// RepositoryNames is the list of repository names to be used for calculating the metric.
// For example, a repository name is the REPO part of `github.com/USER/REPO`.
// +optional
RepositoryNames []string `json:"repositoryNames,omitempty"`
// ScaleUpThreshold is the percentage of busy runners greater than which will
// trigger the hpa to scale runners up.
// +optional
ScaleUpThreshold string `json:"scaleUpThreshold,omitempty"`
// ScaleDownThreshold is the percentage of busy runners less than which will
// trigger the hpa to scale the runners down.
// +optional
ScaleDownThreshold string `json:"scaleDownThreshold,omitempty"`
// ScaleUpFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be added.
// +optional
ScaleUpFactor string `json:"scaleUpFactor,omitempty"`
// ScaleDownFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be removed.
// +optional
ScaleDownFactor string `json:"scaleDownFactor,omitempty"`
}
type HorizontalRunnerAutoscalerStatus struct {
// ObservedGeneration is the most recent generation observed for the target. It corresponds to e.g.
// RunnerDeployment's generation, which is updated on mutation by the API Server.
// +optional
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas,omitempty"`
// +optional
LastSuccessfulScaleOutTime *metav1.Time `json:"lastSuccessfulScaleOutTime,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.minReplicas",name=Min,type=number
// +kubebuilder:printcolumn:JSONPath=".spec.maxReplicas",name=Max,type=number
// +kubebuilder:printcolumn:JSONPath=".status.desiredReplicas",name=Desired,type=number
// HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler API
type HorizontalRunnerAutoscaler struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec HorizontalRunnerAutoscalerSpec `json:"spec,omitempty"`
Status HorizontalRunnerAutoscalerStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// HorizontalRunnerAutoscalerList contains a list of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []HorizontalRunnerAutoscaler `json:"items"`
}
func init() {
SchemeBuilder.Register(&HorizontalRunnerAutoscaler{}, &HorizontalRunnerAutoscalerList{})
}
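For illustration, a manifest exercising the fields of this spec (names are placeholders; only fields present in the structs above are used):
```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra
spec:
  scaleTargetRef:
    name: example-runnerdeploy
  minReplicas: 0
  maxReplicas: 5
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - example-repo
EOF
```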

View File

@@ -0,0 +1,167 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// RunnerSpec defines the desired state of Runner
type RunnerSpec struct {
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+$`
Organization string `json:"organization,omitempty"`
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+/[^/]+$`
Repository string `json:"repository,omitempty"`
// +optional
Labels []string `json:"labels,omitempty"`
// +optional
Group string `json:"group,omitempty"`
// +optional
Containers []corev1.Container `json:"containers,omitempty"`
// +optional
DockerdContainerResources corev1.ResourceRequirements `json:"dockerdContainerResources,omitempty"`
// +optional
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
// +optional
VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
// +optional
EnvFrom []corev1.EnvFromSource `json:"envFrom,omitempty"`
// +optional
Image string `json:"image"`
// +optional
ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
// +optional
Env []corev1.EnvVar `json:"env,omitempty"`
// +optional
Volumes []corev1.Volume `json:"volumes,omitempty"`
// +optional
WorkDir string `json:"workDir,omitempty"`
// +optional
InitContainers []corev1.Container `json:"initContainers,omitempty"`
// +optional
SidecarContainers []corev1.Container `json:"sidecarContainers,omitempty"`
// +optional
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// +optional
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// +optional
AutomountServiceAccountToken *bool `json:"automountServiceAccountToken,omitempty"`
// +optional
SecurityContext *corev1.PodSecurityContext `json:"securityContext,omitempty"`
// +optional
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
// +optional
Affinity *corev1.Affinity `json:"affinity,omitempty"`
// +optional
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
// +optional
EphemeralContainers []corev1.EphemeralContainer `json:"ephemeralContainers,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
// +optional
DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
// +optional
DockerEnabled *bool `json:"dockerEnabled,omitempty"`
}
// ValidateRepository validates repository field.
func (rs *RunnerSpec) ValidateRepository() error {
// Organization and repository are mutually exclusive.
if len(rs.Organization) == 0 && len(rs.Repository) == 0 {
return errors.New("Spec needs organization or repository")
}
if len(rs.Organization) > 0 && len(rs.Repository) > 0 {
return errors.New("Spec cannot have both organization and repository")
}
return nil
}
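
A hedged Example-test sketch of the three outcomes, assuming it sits in this package with fmt imported (the Output lines follow directly from the error strings above):

func ExampleRunnerSpec_ValidateRepository() {
	fmt.Println((&RunnerSpec{}).ValidateRepository())                                                  // neither set
	fmt.Println((&RunnerSpec{Organization: "my-org"}).ValidateRepository())                            // exactly one set
	fmt.Println((&RunnerSpec{Organization: "my-org", Repository: "my-org/repo"}).ValidateRepository()) // both set
	// Output:
	// Spec needs organization or repository
	// <nil>
	// Spec cannot have both organization and repository
}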
// RunnerStatus defines the observed state of Runner
type RunnerStatus struct {
Registration RunnerStatusRegistration `json:"registration"`
Phase string `json:"phase"`
Reason string `json:"reason"`
Message string `json:"message"`
}
// RunnerStatusRegistration contains runner registration status
type RunnerStatusRegistration struct {
Organization string `json:"organization,omitempty"`
Repository string `json:"repository,omitempty"`
Labels []string `json:"labels,omitempty"`
Token string `json:"token"`
ExpiresAt metav1.Time `json:"expiresAt"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// Runner is the Schema for the runners API
type Runner struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerSpec `json:"spec,omitempty"`
Status RunnerStatus `json:"status,omitempty"`
}
func (r Runner) IsRegisterable() bool {
if r.Status.Registration.Repository != r.Spec.Repository {
return false
}
if r.Status.Registration.Token == "" {
return false
}
now := metav1.Now()
if r.Status.Registration.ExpiresAt.Before(&now) {
return false
}
return true
}
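
The registerable check requires a matching repository, a non-empty token, and an unexpired registration. A hedged in-package sketch with illustrative values (fmt and time imports assumed):

func ExampleRunner_IsRegisterable() {
	r := Runner{
		Spec: RunnerSpec{Repository: "my-org/my-repo"},
		Status: RunnerStatus{
			Registration: RunnerStatusRegistration{
				Repository: "my-org/my-repo",
				Token:      "reg-token",
				ExpiresAt:  metav1.NewTime(time.Now().Add(30 * time.Minute)),
			},
		},
	}
	fmt.Println(r.IsRegisterable())
	// Output: true
}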
// +kubebuilder:object:root=true
// RunnerList contains a list of Runner
type RunnerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Runner `json:"items"`
}
func init() {
SchemeBuilder.Register(&Runner{}, &RunnerList{})
}

View File

@@ -21,7 +21,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
@@ -34,7 +34,7 @@ func (r *Runner) SetupWebhookWithManager(mgr ctrl.Manager) error {
Complete()
}
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runner,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runners,versions=v1alpha1,name=mutate.runner.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runner,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runners,versions=v1alpha1,name=mutate.runner.actions.summerwind.dev
var _ webhook.Defaulter = &Runner{}
@@ -43,7 +43,7 @@ func (r *Runner) Default() {
// Nothing to do.
}
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runner,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runners,versions=v1alpha1,name=validate.runner.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runner,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runners,versions=v1alpha1,name=validate.runner.actions.summerwind.dev
var _ webhook.Validator = &Runner{}
@@ -66,7 +66,15 @@ func (r *Runner) ValidateDelete() error {
// Validate validates resource spec.
func (r *Runner) Validate() error {
errList := r.Spec.Validate(field.NewPath("spec"))
var (
errList field.ErrorList
err error
)
err = r.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "repository"), r.Spec.Repository, err.Error()))
}
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)
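
For reference, a self-contained sketch (names illustrative) showing how such a field.ErrorList surfaces as a typed Invalid API error that clients can branch on:

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

func main() {
	errs := field.ErrorList{
		field.Invalid(field.NewPath("spec", "repository"), "my-org/my-repo",
			"Spec cannot have both organization and repository"),
	}
	err := apierrors.NewInvalid(
		schema.GroupKind{Group: "actions.summerwind.dev", Kind: "Runner"},
		"example-runner", errs)
	fmt.Println(apierrors.IsInvalid(err)) // true: callers can detect the error type
	fmt.Println(err)
}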

View File

@@ -0,0 +1,73 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns = "TotalNumberOfQueuedAndInProgressWorkflowRuns"
AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy"
)
// RunnerDeploymentSpec defines the desired state of RunnerDeployment
type RunnerDeploymentSpec struct {
// +optional
// +nullable
Replicas *int `json:"replicas,omitempty"`
Template RunnerTemplate `json:"template"`
}
type RunnerDeploymentStatus struct {
AvailableReplicas int `json:"availableReplicas"`
ReadyReplicas int `json:"readyReplicas"`
// Replicas is the total number of desired, non-terminated, and up-to-date pods to be set for the primary RunnerSet.
// This doesn't include outdated pods while the deployment is being upgraded and the runnerset replaced.
// +optional
Replicas *int `json:"desiredReplicas,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.availableReplicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.readyReplicas",name=Ready,type=number
// RunnerDeployment is the Schema for the runnerdeployments API
type RunnerDeployment struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerDeploymentSpec `json:"spec,omitempty"`
Status RunnerDeploymentStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// RunnerDeploymentList contains a list of RunnerDeployment
type RunnerDeploymentList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RunnerDeployment `json:"items"`
}
func init() {
SchemeBuilder.Register(&RunnerDeployment{}, &RunnerDeploymentList{})
}
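
Note that the status field is named Replicas in Go but serializes as desiredReplicas. A short in-package sketch of constructing the resource (values illustrative):

replicas := 3
rd := RunnerDeployment{
	ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
	Spec: RunnerDeploymentSpec{
		// *int and +nullable, so an unset value is distinguishable from 0.
		Replicas: &replicas,
		Template: RunnerTemplate{},
	},
}
_ = rd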

View File

@@ -21,12 +21,12 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
// log is for logging in this package.
var runnerDeploymentLog = logf.Log.WithName("runnerdeployment-resource")
var runenrDeploymentLog = logf.Log.WithName("runnerdeployment-resource")
func (r *RunnerDeployment) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
@@ -34,7 +34,7 @@ func (r *RunnerDeployment) SetupWebhookWithManager(mgr ctrl.Manager) error {
Complete()
}
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerdeployments,versions=v1alpha1,name=mutate.runnerdeployment.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runnerdeployment,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerdeployments,versions=v1alpha1,name=mutate.runnerdeployment.actions.summerwind.dev
var _ webhook.Defaulter = &RunnerDeployment{}
@@ -43,19 +43,19 @@ func (r *RunnerDeployment) Default() {
// Nothing to do.
}
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runnerdeployment,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerdeployments,versions=v1alpha1,name=validate.runnerdeployment.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runnerdeployment,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerdeployments,versions=v1alpha1,name=validate.runnerdeployment.actions.summerwind.dev
var _ webhook.Validator = &RunnerDeployment{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerDeployment) ValidateCreate() error {
runnerDeploymentLog.Info("validate resource to be created", "name", r.Name)
runenrDeploymentLog.Info("validate resource to be created", "name", r.Name)
return r.Validate()
}
// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerDeployment) ValidateUpdate(old runtime.Object) error {
runnerDeploymentLog.Info("validate resource to be updated", "name", r.Name)
runenrDeploymentLog.Info("validate resource to be updated", "name", r.Name)
return r.Validate()
}
@@ -66,7 +66,15 @@ func (r *RunnerDeployment) ValidateDelete() error {
// Validate validates resource spec.
func (r *RunnerDeployment) Validate() error {
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
var (
errList field.ErrorList
err error
)
err = r.Spec.Template.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "template", "spec", "repository"), r.Spec.Template.Spec.Repository, err.Error()))
}
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -26,35 +26,12 @@ type RunnerReplicaSetSpec struct {
// +nullable
Replicas *int `json:"replicas,omitempty"`
// EffectiveTime is the time at which the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA and RunnerDeployment.
// The value is used to prevent the runnerreplicaset controller from unnecessarily recreating ephemeral runners
// based on a potentially outdated Replicas value.
//
// +optional
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime"`
// +optional
// +nullable
Selector *metav1.LabelSelector `json:"selector"`
Template RunnerTemplate `json:"template"`
Template RunnerTemplate `json:"template"`
}
type RunnerReplicaSetStatus struct {
// See K8s replicaset controller code for reference
// https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/controller/replicaset/replica_set_utils.go#L101-L106
// Replicas is the number of runners that are created and still being managed by this runner replica set.
// +optional
Replicas *int `json:"replicas"`
// ReadyReplicas is the number of runners that are created and Running.
ReadyReplicas *int `json:"readyReplicas"`
// AvailableReplicas is the number of runners that are created and Running.
// This is currently the same as ReadyReplicas but preserved for future use.
AvailableReplicas *int `json:"availableReplicas"`
AvailableReplicas int `json:"availableReplicas"`
ReadyReplicas int `json:"readyReplicas"`
}
type RunnerTemplate struct {
@@ -64,12 +41,10 @@ type RunnerTemplate struct {
}
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=rrs
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.availableReplicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.readyReplicas",name=Ready,type=number
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// RunnerReplicaSet is the Schema for the runnerreplicasets API
type RunnerReplicaSet struct {

View File

@@ -21,7 +21,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
"sigs.k8s.io/controller-runtime/pkg/webhook"
)
@@ -34,7 +34,7 @@ func (r *RunnerReplicaSet) SetupWebhookWithManager(mgr ctrl.Manager) error {
Complete()
}
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerreplicasets,versions=v1alpha1,name=mutate.runnerreplicaset.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
// +kubebuilder:webhook:path=/mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset,verbs=create;update,mutating=true,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerreplicasets,versions=v1alpha1,name=mutate.runnerreplicaset.actions.summerwind.dev
var _ webhook.Defaulter = &RunnerReplicaSet{}
@@ -43,7 +43,7 @@ func (r *RunnerReplicaSet) Default() {
// Nothing to do.
}
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runnerreplicaset,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerreplicasets,versions=v1alpha1,name=validate.runnerreplicaset.actions.summerwind.dev,sideEffects=None,admissionReviewVersions=v1beta1
// +kubebuilder:webhook:path=/validate-actions-summerwind-dev-v1alpha1-runnerreplicaset,verbs=create;update,mutating=false,failurePolicy=fail,groups=actions.summerwind.dev,resources=runnerreplicasets,versions=v1alpha1,name=validate.runnerreplicaset.actions.summerwind.dev
var _ webhook.Validator = &RunnerReplicaSet{}
@@ -66,7 +66,15 @@ func (r *RunnerReplicaSet) ValidateDelete() error {
// Validate validates resource spec.
func (r *RunnerReplicaSet) Validate() error {
errList := r.Spec.Template.Spec.Validate(field.NewPath("spec", "template", "spec"))
var (
errList field.ErrorList
err error
)
err = r.Spec.Template.Spec.ValidateRepository()
if err != nil {
errList = append(errList, field.Invalid(field.NewPath("spec", "template", "spec", "repository"), r.Spec.Template.Spec.Repository, err.Error()))
}
if len(errList) > 0 {
return apierrors.NewInvalid(r.GroupVersionKind().GroupKind(), r.Name, errList)

View File

@@ -1,4 +1,3 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
@@ -23,124 +22,9 @@ package v1alpha1
import (
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CacheEntry) DeepCopyInto(out *CacheEntry) {
*out = *in
in.ExpirationTime.DeepCopyInto(&out.ExpirationTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CacheEntry.
func (in *CacheEntry) DeepCopy() *CacheEntry {
if in == nil {
return nil
}
out := new(CacheEntry)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CapacityReservation) DeepCopyInto(out *CapacityReservation) {
*out = *in
in.ExpirationTime.DeepCopyInto(&out.ExpirationTime)
in.EffectiveTime.DeepCopyInto(&out.EffectiveTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CapacityReservation.
func (in *CapacityReservation) DeepCopy() *CapacityReservation {
if in == nil {
return nil
}
out := new(CapacityReservation)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CheckRunSpec) DeepCopyInto(out *CheckRunSpec) {
*out = *in
if in.Types != nil {
in, out := &in.Types, &out.Types
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Names != nil {
in, out := &in.Names, &out.Names
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Repositories != nil {
in, out := &in.Repositories, &out.Repositories
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CheckRunSpec.
func (in *CheckRunSpec) DeepCopy() *CheckRunSpec {
if in == nil {
return nil
}
out := new(CheckRunSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubAPICredentialsFrom) DeepCopyInto(out *GitHubAPICredentialsFrom) {
*out = *in
out.SecretRef = in.SecretRef
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubAPICredentialsFrom.
func (in *GitHubAPICredentialsFrom) DeepCopy() *GitHubAPICredentialsFrom {
if in == nil {
return nil
}
out := new(GitHubAPICredentialsFrom)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubEventScaleUpTriggerSpec) DeepCopyInto(out *GitHubEventScaleUpTriggerSpec) {
*out = *in
if in.CheckRun != nil {
in, out := &in.CheckRun, &out.CheckRun
*out = new(CheckRunSpec)
(*in).DeepCopyInto(*out)
}
if in.PullRequest != nil {
in, out := &in.PullRequest, &out.PullRequest
*out = new(PullRequestSpec)
(*in).DeepCopyInto(*out)
}
if in.Push != nil {
in, out := &in.Push, &out.Push
*out = new(PushSpec)
**out = **in
}
if in.WorkflowJob != nil {
in, out := &in.WorkflowJob, &out.WorkflowJob
*out = new(WorkflowJobSpec)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubEventScaleUpTriggerSpec.
func (in *GitHubEventScaleUpTriggerSpec) DeepCopy() *GitHubEventScaleUpTriggerSpec {
if in == nil {
return nil
}
out := new(GitHubEventScaleUpTriggerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HorizontalRunnerAutoscaler) DeepCopyInto(out *HorizontalRunnerAutoscaler) {
*out = *in
@@ -226,32 +110,6 @@ func (in *HorizontalRunnerAutoscalerSpec) DeepCopyInto(out *HorizontalRunnerAuto
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.ScaleUpTriggers != nil {
in, out := &in.ScaleUpTriggers, &out.ScaleUpTriggers
*out = make([]ScaleUpTrigger, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.CapacityReservations != nil {
in, out := &in.CapacityReservations, &out.CapacityReservations
*out = make([]CapacityReservation, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.ScheduledOverrides != nil {
in, out := &in.ScheduledOverrides, &out.ScheduledOverrides
*out = make([]ScheduledOverride, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerSpec.
@@ -276,18 +134,6 @@ func (in *HorizontalRunnerAutoscalerStatus) DeepCopyInto(out *HorizontalRunnerAu
in, out := &in.LastSuccessfulScaleOutTime, &out.LastSuccessfulScaleOutTime
*out = (*in).DeepCopy()
}
if in.CacheEntries != nil {
in, out := &in.CacheEntries, &out.CacheEntries
*out = make([]CacheEntry, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.ScheduledOverridesSummary != nil {
in, out := &in.ScheduledOverridesSummary, &out.ScheduledOverridesSummary
*out = new(string)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerStatus.
@@ -320,62 +166,6 @@ func (in *MetricSpec) DeepCopy() *MetricSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PullRequestSpec) DeepCopyInto(out *PullRequestSpec) {
*out = *in
if in.Types != nil {
in, out := &in.Types, &out.Types
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Branches != nil {
in, out := &in.Branches, &out.Branches
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PullRequestSpec.
func (in *PullRequestSpec) DeepCopy() *PullRequestSpec {
if in == nil {
return nil
}
out := new(PullRequestSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PushSpec) DeepCopyInto(out *PushSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PushSpec.
func (in *PushSpec) DeepCopy() *PushSpec {
if in == nil {
return nil
}
out := new(PushSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RecurrenceRule) DeepCopyInto(out *RecurrenceRule) {
*out = *in
in.UntilTime.DeepCopyInto(&out.UntilTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RecurrenceRule.
func (in *RecurrenceRule) DeepCopy() *RecurrenceRule {
if in == nil {
return nil
}
out := new(RecurrenceRule)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Runner) DeepCopyInto(out *Runner) {
*out = *in
@@ -403,66 +193,6 @@ func (in *Runner) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerConfig) DeepCopyInto(out *RunnerConfig) {
*out = *in
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Ephemeral != nil {
in, out := &in.Ephemeral, &out.Ephemeral
*out = new(bool)
**out = **in
}
if in.DockerdWithinRunnerContainer != nil {
in, out := &in.DockerdWithinRunnerContainer, &out.DockerdWithinRunnerContainer
*out = new(bool)
**out = **in
}
if in.DockerEnabled != nil {
in, out := &in.DockerEnabled, &out.DockerEnabled
*out = new(bool)
**out = **in
}
if in.DockerMTU != nil {
in, out := &in.DockerMTU, &out.DockerMTU
*out = new(int64)
**out = **in
}
if in.DockerRegistryMirror != nil {
in, out := &in.DockerRegistryMirror, &out.DockerRegistryMirror
*out = new(string)
**out = **in
}
if in.VolumeSizeLimit != nil {
in, out := &in.VolumeSizeLimit, &out.VolumeSizeLimit
x := (*in).DeepCopy()
*out = &x
}
if in.VolumeStorageMedium != nil {
in, out := &in.VolumeStorageMedium, &out.VolumeStorageMedium
*out = new(string)
**out = **in
}
if in.GitHubAPICredentialsFrom != nil {
in, out := &in.GitHubAPICredentialsFrom, &out.GitHubAPICredentialsFrom
*out = new(GitHubAPICredentialsFrom)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerConfig.
func (in *RunnerConfig) DeepCopy() *RunnerConfig {
if in == nil {
return nil
}
out := new(RunnerConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerDeployment) DeepCopyInto(out *RunnerDeployment) {
*out = *in
@@ -530,15 +260,6 @@ func (in *RunnerDeploymentSpec) DeepCopyInto(out *RunnerDeploymentSpec) {
*out = new(int)
**out = **in
}
if in.EffectiveTime != nil {
in, out := &in.EffectiveTime, &out.EffectiveTime
*out = (*in).DeepCopy()
}
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
*out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
}
@@ -555,26 +276,6 @@ func (in *RunnerDeploymentSpec) DeepCopy() *RunnerDeploymentSpec {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerDeploymentStatus) DeepCopyInto(out *RunnerDeploymentStatus) {
*out = *in
if in.AvailableReplicas != nil {
in, out := &in.AvailableReplicas, &out.AvailableReplicas
*out = new(int)
**out = **in
}
if in.ReadyReplicas != nil {
in, out := &in.ReadyReplicas, &out.ReadyReplicas
*out = new(int)
**out = **in
}
if in.UpdatedReplicas != nil {
in, out := &in.UpdatedReplicas, &out.UpdatedReplicas
*out = new(int)
**out = **in
}
if in.DesiredReplicas != nil {
in, out := &in.DesiredReplicas, &out.DesiredReplicas
*out = new(int)
**out = **in
}
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int)
@@ -624,173 +325,13 @@ func (in *RunnerList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerPodSpec) DeepCopyInto(out *RunnerPodSpec) {
*out = *in
in.DockerdContainerResources.DeepCopyInto(&out.DockerdContainerResources)
if in.DockerVolumeMounts != nil {
in, out := &in.DockerVolumeMounts, &out.DockerVolumeMounts
*out = make([]v1.VolumeMount, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.DockerEnv != nil {
in, out := &in.DockerEnv, &out.DockerEnv
*out = make([]v1.EnvVar, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Containers != nil {
in, out := &in.Containers, &out.Containers
*out = make([]v1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Env != nil {
in, out := &in.Env, &out.Env
*out = make([]v1.EnvVar, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.EnvFrom != nil {
in, out := &in.EnvFrom, &out.EnvFrom
*out = make([]v1.EnvFromSource, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
in.Resources.DeepCopyInto(&out.Resources)
if in.VolumeMounts != nil {
in, out := &in.VolumeMounts, &out.VolumeMounts
*out = make([]v1.VolumeMount, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Volumes != nil {
in, out := &in.Volumes, &out.Volumes
*out = make([]v1.Volume, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.EnableServiceLinks != nil {
in, out := &in.EnableServiceLinks, &out.EnableServiceLinks
*out = new(bool)
**out = **in
}
if in.InitContainers != nil {
in, out := &in.InitContainers, &out.InitContainers
*out = make([]v1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.AutomountServiceAccountToken != nil {
in, out := &in.AutomountServiceAccountToken, &out.AutomountServiceAccountToken
*out = new(bool)
**out = **in
}
if in.SidecarContainers != nil {
in, out := &in.SidecarContainers, &out.SidecarContainers
*out = make([]v1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.SecurityContext != nil {
in, out := &in.SecurityContext, &out.SecurityContext
*out = new(v1.PodSecurityContext)
(*in).DeepCopyInto(*out)
}
if in.ImagePullSecrets != nil {
in, out := &in.ImagePullSecrets, &out.ImagePullSecrets
*out = make([]v1.LocalObjectReference, len(*in))
copy(*out, *in)
}
if in.Affinity != nil {
in, out := &in.Affinity, &out.Affinity
*out = new(v1.Affinity)
(*in).DeepCopyInto(*out)
}
if in.Tolerations != nil {
in, out := &in.Tolerations, &out.Tolerations
*out = make([]v1.Toleration, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.TerminationGracePeriodSeconds != nil {
in, out := &in.TerminationGracePeriodSeconds, &out.TerminationGracePeriodSeconds
*out = new(int64)
**out = **in
}
if in.EphemeralContainers != nil {
in, out := &in.EphemeralContainers, &out.EphemeralContainers
*out = make([]v1.EphemeralContainer, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.HostAliases != nil {
in, out := &in.HostAliases, &out.HostAliases
*out = make([]v1.HostAlias, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.TopologySpreadConstraints != nil {
in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints
*out = make([]v1.TopologySpreadConstraint, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.RuntimeClassName != nil {
in, out := &in.RuntimeClassName, &out.RuntimeClassName
*out = new(string)
**out = **in
}
if in.DnsConfig != nil {
in, out := &in.DnsConfig, &out.DnsConfig
*out = new(v1.PodDNSConfig)
(*in).DeepCopyInto(*out)
}
if in.WorkVolumeClaimTemplate != nil {
in, out := &in.WorkVolumeClaimTemplate, &out.WorkVolumeClaimTemplate
*out = new(WorkVolumeClaimTemplate)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerPodSpec.
func (in *RunnerPodSpec) DeepCopy() *RunnerPodSpec {
if in == nil {
return nil
}
out := new(RunnerPodSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerReplicaSet) DeepCopyInto(out *RunnerReplicaSet) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerReplicaSet.
@@ -851,15 +392,6 @@ func (in *RunnerReplicaSetSpec) DeepCopyInto(out *RunnerReplicaSetSpec) {
*out = new(int)
**out = **in
}
if in.EffectiveTime != nil {
in, out := &in.EffectiveTime, &out.EffectiveTime
*out = (*in).DeepCopy()
}
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
*out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
}
@@ -876,21 +408,6 @@ func (in *RunnerReplicaSetSpec) DeepCopy() *RunnerReplicaSetSpec {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerReplicaSetStatus) DeepCopyInto(out *RunnerReplicaSetStatus) {
*out = *in
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int)
**out = **in
}
if in.ReadyReplicas != nil {
in, out := &in.ReadyReplicas, &out.ReadyReplicas
*out = new(int)
**out = **in
}
if in.AvailableReplicas != nil {
in, out := &in.AvailableReplicas, &out.AvailableReplicas
*out = new(int)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerReplicaSetStatus.
@@ -904,135 +421,120 @@ func (in *RunnerReplicaSetStatus) DeepCopy() *RunnerReplicaSetStatus {
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerSet) DeepCopyInto(out *RunnerSet) {
func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSet.
func (in *RunnerSet) DeepCopy() *RunnerSet {
if in == nil {
return nil
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make([]string, len(*in))
copy(*out, *in)
}
out := new(RunnerSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RunnerSet) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerSetList) DeepCopyInto(out *RunnerSetList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]RunnerSet, len(*in))
if in.Containers != nil {
in, out := &in.Containers, &out.Containers
*out = make([]v1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSetList.
func (in *RunnerSetList) DeepCopy() *RunnerSetList {
if in == nil {
return nil
in.DockerdContainerResources.DeepCopyInto(&out.DockerdContainerResources)
in.Resources.DeepCopyInto(&out.Resources)
if in.VolumeMounts != nil {
in, out := &in.VolumeMounts, &out.VolumeMounts
*out = make([]v1.VolumeMount, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
out := new(RunnerSetList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RunnerSetList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
if in.EnvFrom != nil {
in, out := &in.EnvFrom, &out.EnvFrom
*out = make([]v1.EnvFromSource, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerSetSpec) DeepCopyInto(out *RunnerSetSpec) {
*out = *in
in.RunnerConfig.DeepCopyInto(&out.RunnerConfig)
if in.EffectiveTime != nil {
in, out := &in.EffectiveTime, &out.EffectiveTime
*out = (*in).DeepCopy()
if in.Env != nil {
in, out := &in.Env, &out.Env
*out = make([]v1.EnvVar, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.WorkVolumeClaimTemplate != nil {
in, out := &in.WorkVolumeClaimTemplate, &out.WorkVolumeClaimTemplate
*out = new(WorkVolumeClaimTemplate)
if in.Volumes != nil {
in, out := &in.Volumes, &out.Volumes
*out = make([]v1.Volume, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.InitContainers != nil {
in, out := &in.InitContainers, &out.InitContainers
*out = make([]v1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.SidecarContainers != nil {
in, out := &in.SidecarContainers, &out.SidecarContainers
*out = make([]v1.Container, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.AutomountServiceAccountToken != nil {
in, out := &in.AutomountServiceAccountToken, &out.AutomountServiceAccountToken
*out = new(bool)
**out = **in
}
if in.SecurityContext != nil {
in, out := &in.SecurityContext, &out.SecurityContext
*out = new(v1.PodSecurityContext)
(*in).DeepCopyInto(*out)
}
in.StatefulSetSpec.DeepCopyInto(&out.StatefulSetSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSetSpec.
func (in *RunnerSetSpec) DeepCopy() *RunnerSetSpec {
if in == nil {
return nil
if in.ImagePullSecrets != nil {
in, out := &in.ImagePullSecrets, &out.ImagePullSecrets
*out = make([]v1.LocalObjectReference, len(*in))
copy(*out, *in)
}
out := new(RunnerSetSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerSetStatus) DeepCopyInto(out *RunnerSetStatus) {
*out = *in
if in.CurrentReplicas != nil {
in, out := &in.CurrentReplicas, &out.CurrentReplicas
*out = new(int)
if in.Affinity != nil {
in, out := &in.Affinity, &out.Affinity
*out = new(v1.Affinity)
(*in).DeepCopyInto(*out)
}
if in.Tolerations != nil {
in, out := &in.Tolerations, &out.Tolerations
*out = make([]v1.Toleration, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.EphemeralContainers != nil {
in, out := &in.EphemeralContainers, &out.EphemeralContainers
*out = make([]v1.EphemeralContainer, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.TerminationGracePeriodSeconds != nil {
in, out := &in.TerminationGracePeriodSeconds, &out.TerminationGracePeriodSeconds
*out = new(int64)
**out = **in
}
if in.ReadyReplicas != nil {
in, out := &in.ReadyReplicas, &out.ReadyReplicas
*out = new(int)
if in.DockerdWithinRunnerContainer != nil {
in, out := &in.DockerdWithinRunnerContainer, &out.DockerdWithinRunnerContainer
*out = new(bool)
**out = **in
}
if in.UpdatedReplicas != nil {
in, out := &in.UpdatedReplicas, &out.UpdatedReplicas
*out = new(int)
if in.DockerEnabled != nil {
in, out := &in.DockerEnabled, &out.DockerEnabled
*out = new(bool)
**out = **in
}
if in.DesiredReplicas != nil {
in, out := &in.DesiredReplicas, &out.DesiredReplicas
*out = new(int)
**out = **in
}
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSetStatus.
func (in *RunnerSetStatus) DeepCopy() *RunnerSetStatus {
if in == nil {
return nil
}
out := new(RunnerSetStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
*out = *in
in.RunnerConfig.DeepCopyInto(&out.RunnerConfig)
in.RunnerPodSpec.DeepCopyInto(&out.RunnerPodSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSpec.
@@ -1049,15 +551,6 @@ func (in *RunnerSpec) DeepCopy() *RunnerSpec {
func (in *RunnerStatus) DeepCopyInto(out *RunnerStatus) {
*out = *in
in.Registration.DeepCopyInto(&out.Registration)
if in.WorkflowStatus != nil {
in, out := &in.WorkflowStatus, &out.WorkflowStatus
*out = new(WorkflowStatus)
**out = **in
}
if in.LastRegistrationCheckTime != nil {
in, out := &in.LastRegistrationCheckTime, &out.LastRegistrationCheckTime
*out = (*in).DeepCopy()
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerStatus.
@@ -1122,113 +615,3 @@ func (in *ScaleTargetRef) DeepCopy() *ScaleTargetRef {
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleUpTrigger) DeepCopyInto(out *ScaleUpTrigger) {
*out = *in
if in.GitHubEvent != nil {
in, out := &in.GitHubEvent, &out.GitHubEvent
*out = new(GitHubEventScaleUpTriggerSpec)
(*in).DeepCopyInto(*out)
}
out.Duration = in.Duration
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaleUpTrigger.
func (in *ScaleUpTrigger) DeepCopy() *ScaleUpTrigger {
if in == nil {
return nil
}
out := new(ScaleUpTrigger)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScheduledOverride) DeepCopyInto(out *ScheduledOverride) {
*out = *in
in.StartTime.DeepCopyInto(&out.StartTime)
in.EndTime.DeepCopyInto(&out.EndTime)
if in.MinReplicas != nil {
in, out := &in.MinReplicas, &out.MinReplicas
*out = new(int)
**out = **in
}
in.RecurrenceRule.DeepCopyInto(&out.RecurrenceRule)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduledOverride.
func (in *ScheduledOverride) DeepCopy() *ScheduledOverride {
if in == nil {
return nil
}
out := new(ScheduledOverride)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretReference) DeepCopyInto(out *SecretReference) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SecretReference.
func (in *SecretReference) DeepCopy() *SecretReference {
if in == nil {
return nil
}
out := new(SecretReference)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkVolumeClaimTemplate) DeepCopyInto(out *WorkVolumeClaimTemplate) {
*out = *in
if in.AccessModes != nil {
in, out := &in.AccessModes, &out.AccessModes
*out = make([]v1.PersistentVolumeAccessMode, len(*in))
copy(*out, *in)
}
in.Resources.DeepCopyInto(&out.Resources)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkVolumeClaimTemplate.
func (in *WorkVolumeClaimTemplate) DeepCopy() *WorkVolumeClaimTemplate {
if in == nil {
return nil
}
out := new(WorkVolumeClaimTemplate)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkflowJobSpec) DeepCopyInto(out *WorkflowJobSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkflowJobSpec.
func (in *WorkflowJobSpec) DeepCopy() *WorkflowJobSpec {
if in == nil {
return nil
}
out := new(WorkflowJobSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkflowStatus) DeepCopyInto(out *WorkflowStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkflowStatus.
func (in *WorkflowStatus) DeepCopy() *WorkflowStatus {
if in == nil {
return nil
}
out := new(WorkflowStatus)
in.DeepCopyInto(out)
return out
}
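
These functions are generated by controller-gen so that controllers can safely mutate objects fetched from the shared informer cache. A hedged sketch, assuming k8sClient (a controller-runtime client.Client), ctx, and key are in scope:

var cached Runner
if err := k8sClient.Get(ctx, key, &cached); err != nil {
	return err
}
updated := cached.DeepCopy() // never mutate the cache's copy in place
updated.Spec.Labels = append(updated.Spec.Labels, "extra-label")
return k8sClient.Update(ctx, updated)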

View File

@@ -1,97 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// AutoscalingListenerSpec defines the desired state of AutoscalingListener
type AutoscalingListenerSpec struct {
// Required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// Required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// Required
RunnerScaleSetId int `json:"runnerScaleSetId,omitempty"`
// Required
AutoscalingRunnerSetNamespace string `json:"autoscalingRunnerSetNamespace,omitempty"`
// Required
AutoscalingRunnerSetName string `json:"autoscalingRunnerSetName,omitempty"`
// Required
EphemeralRunnerSetName string `json:"ephemeralRunnerSetName,omitempty"`
// Required
// +kubebuilder:validation:Minimum:=0
MaxRunners int `json:"maxRunners,omitempty"`
// Required
// +kubebuilder:validation:Minimum:=0
MinRunners int `json:"minRunners,omitempty"`
// Required
Image string `json:"image,omitempty"`
// Required
ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
// Required
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
}
// AutoscalingListenerStatus defines the observed state of AutoscalingListener
type AutoscalingListenerStatus struct{}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".spec.githubConfigUrl",name=GitHub Configure URL,type=string
//+kubebuilder:printcolumn:JSONPath=".spec.autoscalingRunnerSetNamespace",name=AutoscalingRunnerSet Namespace,type=string
//+kubebuilder:printcolumn:JSONPath=".spec.autoscalingRunnerSetName",name=AutoscalingRunnerSet Name,type=string
// AutoscalingListener is the Schema for the autoscalinglisteners API
type AutoscalingListener struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec AutoscalingListenerSpec `json:"spec,omitempty"`
Status AutoscalingListenerStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// AutoscalingListenerList contains a list of AutoscalingListener
type AutoscalingListenerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []AutoscalingListener `json:"items"`
}
func init() {
SchemeBuilder.Register(&AutoscalingListener{}, &AutoscalingListenerList{})
}
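
All of the listener's spec fields marked Required must be populated by the controller that creates it. A hedged in-package sketch with purely illustrative values:

listener := AutoscalingListener{
	ObjectMeta: metav1.ObjectMeta{Name: "example-listener", Namespace: "arc-systems"},
	Spec: AutoscalingListenerSpec{
		GitHubConfigUrl:               "https://github.com/my-org",               // illustrative
		GitHubConfigSecret:            "github-app-secret",                       // illustrative
		RunnerScaleSetId:              1,
		AutoscalingRunnerSetNamespace: "arc-runners",
		AutoscalingRunnerSetName:      "my-runner-set",
		EphemeralRunnerSetName:        "my-runner-set-abcde",
		MaxRunners:                    10,
		MinRunners:                    0,
		Image:                         "ghcr.io/example/listener:latest", // hypothetical image
		ImagePullPolicy:               corev1.PullIfNotPresent,
	},
}
_ = listener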

View File

@@ -1,289 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"crypto/x509"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/actions/actions-runner-controller/hash"
"golang.org/x/net/http/httpproxy"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".spec.minRunners",name=Minimum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".spec.maxRunners",name=Maximum Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.currentRunners",name=Current Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.state",name=State,type=string
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// AutoscalingRunnerSet is the Schema for the autoscalingrunnersets API
type AutoscalingRunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec AutoscalingRunnerSetSpec `json:"spec,omitempty"`
Status AutoscalingRunnerSetStatus `json:"status,omitempty"`
}
// AutoscalingRunnerSetSpec defines the desired state of AutoscalingRunnerSet
type AutoscalingRunnerSetSpec struct {
// Required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// Required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// +optional
RunnerGroup string `json:"runnerGroup,omitempty"`
// +optional
RunnerScaleSetName string `json:"runnerScaleSetName,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
// Required
Template corev1.PodTemplateSpec `json:"template,omitempty"`
// +optional
// +kubebuilder:validation:Minimum:=0
MaxRunners *int `json:"maxRunners,omitempty"`
// +optional
// +kubebuilder:validation:Minimum:=0
MinRunners *int `json:"minRunners,omitempty"`
}
type GitHubServerTLSConfig struct {
// Required
CertificateFrom *TLSCertificateSource `json:"certificateFrom,omitempty"`
}
func (c *GitHubServerTLSConfig) ToCertPool(keyFetcher func(name, key string) ([]byte, error)) (*x509.CertPool, error) {
if c.CertificateFrom == nil {
return nil, fmt.Errorf("certificateFrom not specified")
}
if c.CertificateFrom.ConfigMapKeyRef == nil {
return nil, fmt.Errorf("configMapKeyRef not specified")
}
cert, err := keyFetcher(c.CertificateFrom.ConfigMapKeyRef.Name, c.CertificateFrom.ConfigMapKeyRef.Key)
if err != nil {
return nil, fmt.Errorf(
"failed to fetch key %q in configmap %q: %w",
c.CertificateFrom.ConfigMapKeyRef.Key,
c.CertificateFrom.ConfigMapKeyRef.Name,
err,
)
}
systemPool, err := x509.SystemCertPool()
if err != nil {
return nil, fmt.Errorf("failed to get system cert pool: %w", err)
}
pool := systemPool.Clone()
if !pool.AppendCertsFromPEM(cert) {
return nil, fmt.Errorf("failed to parse certificate")
}
return pool, nil
}
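
ToCertPool delegates fetching to the caller, which keeps the type free of client dependencies. A hedged usage sketch, assuming tlsCfg (*GitHubServerTLSConfig), k8sClient, ctx, and ns are in scope, with corev1, types (k8s.io/apimachinery/pkg/types), and crypto/tls imported:

pool, err := tlsCfg.ToCertPool(func(name, key string) ([]byte, error) {
	var cm corev1.ConfigMap
	if err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns, Name: name}, &cm); err != nil {
		return nil, err
	}
	return []byte(cm.Data[key]), nil
})
if err != nil {
	return err
}
// Trust the system roots plus the fetched GitHub server certificate.
tlsConfig := &tls.Config{RootCAs: pool}
_ = tlsConfig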
type TLSCertificateSource struct {
// Required
ConfigMapKeyRef *corev1.ConfigMapKeySelector `json:"configMapKeyRef,omitempty"`
}
type ProxyConfig struct {
// +optional
HTTP *ProxyServerConfig `json:"http,omitempty"`
// +optional
HTTPS *ProxyServerConfig `json:"https,omitempty"`
// +optional
NoProxy []string `json:"noProxy,omitempty"`
}
func (c *ProxyConfig) toHTTPProxyConfig(secretFetcher func(string) (*corev1.Secret, error)) (*httpproxy.Config, error) {
config := &httpproxy.Config{
NoProxy: strings.Join(c.NoProxy, ","),
}
if c.HTTP != nil {
u, err := url.Parse(c.HTTP.Url)
if err != nil {
return nil, fmt.Errorf("failed to parse proxy http url %q: %w", c.HTTP.Url, err)
}
if c.HTTP.CredentialSecretRef != "" {
secret, err := secretFetcher(c.HTTP.CredentialSecretRef)
if err != nil {
return nil, fmt.Errorf(
"failed to get secret %s for http proxy: %w",
c.HTTP.CredentialSecretRef,
err,
)
}
u.User = url.UserPassword(
string(secret.Data["username"]),
string(secret.Data["password"]),
)
}
config.HTTPProxy = u.String()
}
if c.HTTPS != nil {
u, err := url.Parse(c.HTTPS.Url)
if err != nil {
return nil, fmt.Errorf("failed to parse proxy https url %q: %w", c.HTTPS.Url, err)
}
if c.HTTPS.CredentialSecretRef != "" {
secret, err := secretFetcher(c.HTTPS.CredentialSecretRef)
if err != nil {
return nil, fmt.Errorf(
"failed to get secret %s for https proxy: %w",
c.HTTPS.CredentialSecretRef,
err,
)
}
u.User = url.UserPassword(
string(secret.Data["username"]),
string(secret.Data["password"]),
)
}
config.HTTPSProxy = u.String()
}
return config, nil
}
func (c *ProxyConfig) ToSecretData(secretFetcher func(string) (*corev1.Secret, error)) (map[string][]byte, error) {
config, err := c.toHTTPProxyConfig(secretFetcher)
if err != nil {
return nil, err
}
data := map[string][]byte{}
data["http_proxy"] = []byte(config.HTTPProxy)
data["https_proxy"] = []byte(config.HTTPSProxy)
data["no_proxy"] = []byte(config.NoProxy)
return data, nil
}
func (c *ProxyConfig) ProxyFunc(secretFetcher func(string) (*corev1.Secret, error)) (func(*http.Request) (*url.URL, error), error) {
config, err := c.toHTTPProxyConfig(secretFetcher)
if err != nil {
return nil, err
}
proxyFunc := func(req *http.Request) (*url.URL, error) {
return config.ProxyFunc()(req.URL)
}
return proxyFunc, nil
}
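
ProxyFunc adapts the resolved configuration to the signature http.Transport expects, while ToSecretData maps the same settings onto the conventional http_proxy, https_proxy, and no_proxy keys for injection into runner containers. A hedged sketch, assuming cfg (*ProxyConfig) and secretFetcher are in scope:

proxyFn, err := cfg.ProxyFunc(secretFetcher)
if err != nil {
	return err
}
httpClient := &http.Client{
	Transport: &http.Transport{Proxy: proxyFn},
}
_ = httpClient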
type ProxyServerConfig struct {
// Required
Url string `json:"url,omitempty"`
// +optional
CredentialSecretRef string `json:"credentialSecretRef,omitempty"`
}
// AutoscalingRunnerSetStatus defines the observed state of AutoscalingRunnerSet
type AutoscalingRunnerSetStatus struct {
// +optional
CurrentRunners int `json:"currentRunners"`
// +optional
State string `json:"state"`
// EphemeralRunner counts, grouped by the stage the ephemeral runners are in, taken from the EphemeralRunnerSet
//+optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
func (ars *AutoscalingRunnerSet) ListenerSpecHash() string {
arsSpec := ars.Spec.DeepCopy()
spec := arsSpec
return hash.ComputeTemplateHash(&spec)
}
func (ars *AutoscalingRunnerSet) RunnerSetSpecHash() string {
type runnerSetSpec struct {
GitHubConfigUrl string
GitHubConfigSecret string
RunnerGroup string
RunnerScaleSetName string
Proxy *ProxyConfig
GitHubServerTLS *GitHubServerTLSConfig
Template corev1.PodTemplateSpec
}
spec := &runnerSetSpec{
GitHubConfigUrl: ars.Spec.GitHubConfigUrl,
GitHubConfigSecret: ars.Spec.GitHubConfigSecret,
RunnerGroup: ars.Spec.RunnerGroup,
RunnerScaleSetName: ars.Spec.RunnerScaleSetName,
Proxy: ars.Spec.Proxy,
GitHubServerTLS: ars.Spec.GitHubServerTLS,
Template: ars.Spec.Template,
}
return hash.ComputeTemplateHash(&spec)
}
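
A hedged sketch of how such spec hashes are typically consumed, comparing a stored hash with a recomputed one to decide whether a dependent resource must be recreated (the annotation key is hypothetical):

const runnerSetSpecHashAnnotation = "actions.github.com/runner-set-spec-hash" // hypothetical key

func runnerSetDrifted(ars *AutoscalingRunnerSet, annotations map[string]string) bool {
	return annotations[runnerSetSpecHashAnnotation] != ars.RunnerSetSpecHash()
}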
//+kubebuilder:object:root=true
// AutoscalingRunnerSetList contains a list of AutoscalingRunnerSet
type AutoscalingRunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []AutoscalingRunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&AutoscalingRunnerSet{}, &AutoscalingRunnerSetList{})
}

View File

@@ -1,133 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.githubConfigUrl",name="GitHub Config URL",type=string
// +kubebuilder:printcolumn:JSONPath=".status.runnerId",name=RunnerId,type=number
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.jobRepositoryName",name=JobRepository,type=string
// +kubebuilder:printcolumn:JSONPath=".status.jobWorkflowRef",name=JobWorkflowRef,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflowRunId",name=WorkflowRunId,type=number
// +kubebuilder:printcolumn:JSONPath=".status.jobDisplayName",name=JobDisplayName,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// EphemeralRunner is the Schema for the ephemeralrunners API
type EphemeralRunner struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec EphemeralRunnerSpec `json:"spec,omitempty"`
Status EphemeralRunnerStatus `json:"status,omitempty"`
}
// EphemeralRunnerSpec defines the desired state of EphemeralRunner
type EphemeralRunnerSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// +required
GitHubConfigUrl string `json:"githubConfigUrl,omitempty"`
// +required
GitHubConfigSecret string `json:"githubConfigSecret,omitempty"`
// +required
RunnerScaleSetId int `json:"runnerScaleSetId,omitempty"`
// +optional
Proxy *ProxyConfig `json:"proxy,omitempty"`
// +optional
ProxySecretRef string `json:"proxySecretRef,omitempty"`
// +optional
GitHubServerTLS *GitHubServerTLSConfig `json:"githubServerTLS,omitempty"`
// +required
corev1.PodTemplateSpec `json:",inline"`
}
// EphemeralRunnerStatus defines the observed state of EphemeralRunner
type EphemeralRunnerStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Ready turns true only if the runner is online.
// +optional
Ready bool `json:"ready"`
// Phase describes the phase the EphemeralRunner is in.
// The underlying type is a PodPhase, but the meaning is more restrictive.
//
// The PodFailed phase should be set only when the EphemeralRunner fails to start
// after multiple retries. It signals that this EphemeralRunner won't work,
// and manual inspection is required.
//
// The PodSucceeded phase should be set only once it is confirmed that the EphemeralRunner
// actually executed the job and has been removed from the service.
// +optional
Phase corev1.PodPhase `json:"phase,omitempty"`
// +optional
Reason string `json:"reason,omitempty"`
// +optional
Message string `json:"message,omitempty"`
// +optional
RunnerId int `json:"runnerId,omitempty"`
// +optional
RunnerName string `json:"runnerName,omitempty"`
// +optional
RunnerJITConfig string `json:"runnerJITConfig,omitempty"`
// +optional
Failures map[string]bool `json:"failures,omitempty"`
// +optional
JobRequestId int64 `json:"jobRequestId,omitempty"`
// +optional
JobRepositoryName string `json:"jobRepositoryName,omitempty"`
// +optional
JobWorkflowRef string `json:"jobWorkflowRef,omitempty"`
// +optional
WorkflowRunId int64 `json:"workflowRunId,omitempty"`
// +optional
JobDisplayName string `json:"jobDisplayName,omitempty"`
}
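
Given the restricted PodPhase semantics documented above, a consumer would branch roughly like this (er is an assumed *EphemeralRunner in scope):

switch er.Status.Phase {
case corev1.PodFailed:
	// Exhausted start retries; requires manual inspection.
case corev1.PodSucceeded:
	// Confirmed to have executed a job and been removed from the service.
default:
	// Still starting or running.
}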
//+kubebuilder:object:root=true
// EphemeralRunnerList contains a list of EphemeralRunner
type EphemeralRunnerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []EphemeralRunner `json:"items"`
}
func init() {
SchemeBuilder.Register(&EphemeralRunner{}, &EphemeralRunnerList{})
}

View File

@@ -1,75 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EphemeralRunnerSetSpec defines the desired state of EphemeralRunnerSet
type EphemeralRunnerSetSpec struct {
// Replicas is the number of desired EphemeralRunner resources in the k8s namespace.
Replicas int `json:"replicas,omitempty"`
EphemeralRunnerSpec EphemeralRunnerSpec `json:"ephemeralRunnerSpec,omitempty"`
}
// EphemeralRunnerSetStatus defines the observed state of EphemeralRunnerSet
type EphemeralRunnerSetStatus struct {
// CurrentReplicas is the number of currently running EphemeralRunner resources being managed by this EphemeralRunnerSet.
CurrentReplicas int `json:"currentReplicas"`
// EphemeralRunner counts, broken down by the stage the ephemeral runners are in.
// +optional
PendingEphemeralRunners int `json:"pendingEphemeralRunners"`
// +optional
RunningEphemeralRunners int `json:"runningEphemeralRunners"`
// +optional
FailedEphemeralRunners int `json:"failedEphemeralRunners"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name="DesiredReplicas",type="integer"
// +kubebuilder:printcolumn:JSONPath=".status.currentReplicas", name="CurrentReplicas",type="integer"
//+kubebuilder:printcolumn:JSONPath=".status.pendingEphemeralRunners",name=Pending Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.runningEphemeralRunners",name=Running Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.finishedEphemeralRunners",name=Finished Runners,type=integer
//+kubebuilder:printcolumn:JSONPath=".status.deletingEphemeralRunners",name=Deleting Runners,type=integer
// EphemeralRunnerSet is the Schema for the ephemeralrunnersets API
type EphemeralRunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec EphemeralRunnerSetSpec `json:"spec,omitempty"`
Status EphemeralRunnerSetStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// EphemeralRunnerSetList contains a list of EphemeralRunnerSet
type EphemeralRunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []EphemeralRunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&EphemeralRunnerSet{}, &EphemeralRunnerSetList{})
}
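
A companion sketch of an EphemeralRunnerSet, which pairs a desired replica count with an EphemeralRunnerSpec template; the names and count below are assumptions:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

// newExampleSet is a hypothetical helper; spec would be built as in the
// EphemeralRunner sketch above.
func newExampleSet(spec v1alpha1.EphemeralRunnerSpec) *v1alpha1.EphemeralRunnerSet {
	return &v1alpha1.EphemeralRunnerSet{
		ObjectMeta: metav1.ObjectMeta{Name: "example-set", Namespace: "arc-runners"},
		Spec: v1alpha1.EphemeralRunnerSetSpec{
			Replicas:            3,    // desired number of EphemeralRunner resources
			EphemeralRunnerSpec: spec, // template applied to each runner
		},
	}
}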

View File

@@ -1,36 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the actions.github.com v1alpha1 API group
// +kubebuilder:object:generate=true
// +groupName=actions.github.com
package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
var (
// GroupVersion is group version used to register these objects
GroupVersion = schema.GroupVersion{Group: "actions.github.com", Version: "v1alpha1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)
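
A typical wiring of this group-version into a controller-runtime scheme; a minimal sketch, assuming the standard client-go types are registered first:

package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"

	actionsv1alpha1 "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

// buildScheme registers both the built-in Kubernetes types and the
// actions.github.com/v1alpha1 types into one runtime.Scheme.
func buildScheme() (*runtime.Scheme, error) {
	scheme := runtime.NewScheme()
	if err := clientgoscheme.AddToScheme(scheme); err != nil {
		return nil, err
	}
	if err := actionsv1alpha1.AddToScheme(scheme); err != nil {
		return nil, err
	}
	return scheme, nil
}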

View File

@@ -1,118 +0,0 @@
package v1alpha1_test
import (
"net/http"
"testing"
corev1 "k8s.io/api/core/v1"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestProxyConfig_ToSecret(t *testing.T) {
config := &v1alpha1.ProxyConfig{
HTTP: &v1alpha1.ProxyServerConfig{
Url: "http://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
HTTPS: &v1alpha1.ProxyServerConfig{
Url: "https://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
NoProxy: []string{
"noproxy.example.com",
"noproxy2.example.com",
},
}
secretFetcher := func(string) (*corev1.Secret, error) {
return &corev1.Secret{
Data: map[string][]byte{
"username": []byte("username"),
"password": []byte("password"),
},
}, nil
}
result, err := config.ToSecretData(secretFetcher)
require.NoError(t, err)
require.NotNil(t, result)
assert.Equal(t, "http://username:password@proxy.example.com:8080", string(result["http_proxy"]))
assert.Equal(t, "https://username:password@proxy.example.com:8080", string(result["https_proxy"]))
assert.Equal(t, "noproxy.example.com,noproxy2.example.com", string(result["no_proxy"]))
}
func TestProxyConfig_ProxyFunc(t *testing.T) {
config := &v1alpha1.ProxyConfig{
HTTP: &v1alpha1.ProxyServerConfig{
Url: "http://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
HTTPS: &v1alpha1.ProxyServerConfig{
Url: "https://proxy.example.com:8080",
CredentialSecretRef: "my-secret",
},
NoProxy: []string{
"noproxy.example.com",
"noproxy2.example.com",
},
}
secretFetcher := func(string) (*corev1.Secret, error) {
return &corev1.Secret{
Data: map[string][]byte{
"username": []byte("username"),
"password": []byte("password"),
},
}, nil
}
result, err := config.ProxyFunc(secretFetcher)
require.NoError(t, err)
tests := []struct {
name string
in string
out string
}{
{
name: "http target",
in: "http://target.com",
out: "http://username:password@proxy.example.com:8080",
},
{
name: "https target",
in: "https://target.com",
out: "https://username:password@proxy.example.com:8080",
},
{
name: "no proxy",
in: "https://noproxy.example.com",
out: "",
},
{
name: "no proxy 2",
in: "https://noproxy2.example.com",
out: "",
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", test.in, nil)
require.NoError(t, err)
u, err := result(req)
require.NoError(t, err)
if test.out == "" {
assert.Nil(t, u)
return
}
assert.Equal(t, test.out, u.String())
})
}
}
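
Outside of tests, the function returned by ProxyFunc plugs directly into an http.Transport. A sketch with a stubbed secret fetcher; a real fetcher would read the referenced Secret from the cluster, and the static credentials here are assumptions:

package main

import (
	"net/http"

	corev1 "k8s.io/api/core/v1"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func newProxiedClient(cfg *v1alpha1.ProxyConfig) (*http.Client, error) {
	// Stubbed fetcher: returns fixed credentials instead of a cluster lookup.
	fetch := func(name string) (*corev1.Secret, error) {
		return &corev1.Secret{Data: map[string][]byte{
			"username": []byte("user"),
			"password": []byte("pass"),
		}}, nil
	}
	proxyFunc, err := cfg.ProxyFunc(fetch)
	if err != nil {
		return nil, err
	}
	// http.Transport.Proxy consults proxyFunc for every outgoing request.
	return &http.Client{Transport: &http.Transport{Proxy: proxyFunc}}, nil
}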

View File

@@ -1,105 +0,0 @@
package v1alpha1_test
import (
"crypto/tls"
"crypto/x509"
"net/http"
"os"
"path/filepath"
"testing"
"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
"github.com/actions/actions-runner-controller/github/actions/testserver"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
)
func TestGitHubServerTLSConfig_ToCertPool(t *testing.T) {
t.Run("returns an error if CertificateFrom not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: nil,
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "certificateFrom not specified")
})
t.Run("returns an error if CertificateFrom.ConfigMapKeyRef not specified", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{},
}
pool, err := c.ToCertPool(nil)
assert.Nil(t, pool)
require.Error(t, err)
assert.Equal(t, err.Error(), "configMapKeyRef not specified")
})
t.Run("returns a valid cert pool with correct configuration", func(t *testing.T) {
c := &v1alpha1.GitHubServerTLSConfig{
CertificateFrom: &v1alpha1.TLSCertificateSource{
ConfigMapKeyRef: &v1.ConfigMapKeySelector{
LocalObjectReference: v1.LocalObjectReference{
Name: "name",
},
Key: "key",
},
},
}
certsFolder := filepath.Join(
"../../../",
"github",
"actions",
"testdata",
)
fetcher := func(name, key string) ([]byte, error) {
cert, err := os.ReadFile(filepath.Join(certsFolder, "rootCA.crt"))
require.NoError(t, err)
pool := x509.NewCertPool()
ok := pool.AppendCertsFromPEM(cert)
assert.True(t, ok)
return cert, nil
}
pool, err := c.ToCertPool(fetcher)
require.NoError(t, err)
require.NotNil(t, pool)
// can be used to communicate with a server
serverSuccessfullyCalled := false
server := testserver.NewUnstarted(t, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
serverSuccessfullyCalled = true
w.WriteHeader(http.StatusOK)
}))
cert, err := tls.LoadX509KeyPair(
filepath.Join(certsFolder, "server.crt"),
filepath.Join(certsFolder, "server.key"),
)
require.NoError(t, err)
server.TLS = &tls.Config{Certificates: []tls.Certificate{cert}}
server.StartTLS()
client := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
},
},
}
_, err = client.Get(server.URL)
assert.NoError(t, err)
assert.True(t, serverSuccessfullyCalled)
})
}
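
The resulting pool is typically installed as the RootCAs of a TLS client, much as the test above does. A sketch where the fetcher is stubbed to read a local PEM file instead of a ConfigMap key (the path is an assumption):

package main

import (
	"crypto/tls"
	"net/http"
	"os"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func newTLSClient(cfg *v1alpha1.GitHubServerTLSConfig, caPath string) (*http.Client, error) {
	// Stand-in for the ConfigMap lookup used in the controller.
	fetch := func(name, key string) ([]byte, error) {
		return os.ReadFile(caPath)
	}
	pool, err := cfg.ToCertPool(fetch)
	if err != nil {
		return nil, err
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}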

View File

@@ -1,523 +0,0 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
"k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListener) DeepCopyInto(out *AutoscalingListener) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListener.
func (in *AutoscalingListener) DeepCopy() *AutoscalingListener {
if in == nil {
return nil
}
out := new(AutoscalingListener)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingListener) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerList) DeepCopyInto(out *AutoscalingListenerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]AutoscalingListener, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerList.
func (in *AutoscalingListenerList) DeepCopy() *AutoscalingListenerList {
if in == nil {
return nil
}
out := new(AutoscalingListenerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingListenerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerSpec) DeepCopyInto(out *AutoscalingListenerSpec) {
*out = *in
if in.ImagePullSecrets != nil {
in, out := &in.ImagePullSecrets, &out.ImagePullSecrets
*out = make([]v1.LocalObjectReference, len(*in))
copy(*out, *in)
}
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerSpec.
func (in *AutoscalingListenerSpec) DeepCopy() *AutoscalingListenerSpec {
if in == nil {
return nil
}
out := new(AutoscalingListenerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingListenerStatus) DeepCopyInto(out *AutoscalingListenerStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingListenerStatus.
func (in *AutoscalingListenerStatus) DeepCopy() *AutoscalingListenerStatus {
if in == nil {
return nil
}
out := new(AutoscalingListenerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSet) DeepCopyInto(out *AutoscalingRunnerSet) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSet.
func (in *AutoscalingRunnerSet) DeepCopy() *AutoscalingRunnerSet {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingRunnerSet) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetList) DeepCopyInto(out *AutoscalingRunnerSetList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]AutoscalingRunnerSet, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetList.
func (in *AutoscalingRunnerSetList) DeepCopy() *AutoscalingRunnerSetList {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *AutoscalingRunnerSetList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetSpec) DeepCopyInto(out *AutoscalingRunnerSetSpec) {
*out = *in
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
if in.MaxRunners != nil {
in, out := &in.MaxRunners, &out.MaxRunners
*out = new(int)
**out = **in
}
if in.MinRunners != nil {
in, out := &in.MinRunners, &out.MinRunners
*out = new(int)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetSpec.
func (in *AutoscalingRunnerSetSpec) DeepCopy() *AutoscalingRunnerSetSpec {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AutoscalingRunnerSetStatus) DeepCopyInto(out *AutoscalingRunnerSetStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AutoscalingRunnerSetStatus.
func (in *AutoscalingRunnerSetStatus) DeepCopy() *AutoscalingRunnerSetStatus {
if in == nil {
return nil
}
out := new(AutoscalingRunnerSetStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunner) DeepCopyInto(out *EphemeralRunner) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunner.
func (in *EphemeralRunner) DeepCopy() *EphemeralRunner {
if in == nil {
return nil
}
out := new(EphemeralRunner)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunner) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerList) DeepCopyInto(out *EphemeralRunnerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]EphemeralRunner, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerList.
func (in *EphemeralRunnerList) DeepCopy() *EphemeralRunnerList {
if in == nil {
return nil
}
out := new(EphemeralRunnerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSet) DeepCopyInto(out *EphemeralRunnerSet) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSet.
func (in *EphemeralRunnerSet) DeepCopy() *EphemeralRunnerSet {
if in == nil {
return nil
}
out := new(EphemeralRunnerSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerSet) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetList) DeepCopyInto(out *EphemeralRunnerSetList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]EphemeralRunnerSet, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetList.
func (in *EphemeralRunnerSetList) DeepCopy() *EphemeralRunnerSetList {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *EphemeralRunnerSetList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetSpec) DeepCopyInto(out *EphemeralRunnerSetSpec) {
*out = *in
in.EphemeralRunnerSpec.DeepCopyInto(&out.EphemeralRunnerSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetSpec.
func (in *EphemeralRunnerSetSpec) DeepCopy() *EphemeralRunnerSetSpec {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSetStatus) DeepCopyInto(out *EphemeralRunnerSetStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSetStatus.
func (in *EphemeralRunnerSetStatus) DeepCopy() *EphemeralRunnerSetStatus {
if in == nil {
return nil
}
out := new(EphemeralRunnerSetStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerSpec) DeepCopyInto(out *EphemeralRunnerSpec) {
*out = *in
if in.Proxy != nil {
in, out := &in.Proxy, &out.Proxy
*out = new(ProxyConfig)
(*in).DeepCopyInto(*out)
}
if in.GitHubServerTLS != nil {
in, out := &in.GitHubServerTLS, &out.GitHubServerTLS
*out = new(GitHubServerTLSConfig)
(*in).DeepCopyInto(*out)
}
in.PodTemplateSpec.DeepCopyInto(&out.PodTemplateSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerSpec.
func (in *EphemeralRunnerSpec) DeepCopy() *EphemeralRunnerSpec {
if in == nil {
return nil
}
out := new(EphemeralRunnerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *EphemeralRunnerStatus) DeepCopyInto(out *EphemeralRunnerStatus) {
*out = *in
if in.Failures != nil {
in, out := &in.Failures, &out.Failures
*out = make(map[string]bool, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EphemeralRunnerStatus.
func (in *EphemeralRunnerStatus) DeepCopy() *EphemeralRunnerStatus {
if in == nil {
return nil
}
out := new(EphemeralRunnerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubServerTLSConfig) DeepCopyInto(out *GitHubServerTLSConfig) {
*out = *in
if in.CertificateFrom != nil {
in, out := &in.CertificateFrom, &out.CertificateFrom
*out = new(TLSCertificateSource)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubServerTLSConfig.
func (in *GitHubServerTLSConfig) DeepCopy() *GitHubServerTLSConfig {
if in == nil {
return nil
}
out := new(GitHubServerTLSConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProxyConfig) DeepCopyInto(out *ProxyConfig) {
*out = *in
if in.HTTP != nil {
in, out := &in.HTTP, &out.HTTP
*out = new(ProxyServerConfig)
**out = **in
}
if in.HTTPS != nil {
in, out := &in.HTTPS, &out.HTTPS
*out = new(ProxyServerConfig)
**out = **in
}
if in.NoProxy != nil {
in, out := &in.NoProxy, &out.NoProxy
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyConfig.
func (in *ProxyConfig) DeepCopy() *ProxyConfig {
if in == nil {
return nil
}
out := new(ProxyConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProxyServerConfig) DeepCopyInto(out *ProxyServerConfig) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProxyServerConfig.
func (in *ProxyServerConfig) DeepCopy() *ProxyServerConfig {
if in == nil {
return nil
}
out := new(ProxyServerConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TLSCertificateSource) DeepCopyInto(out *TLSCertificateSource) {
*out = *in
if in.ConfigMapKeyRef != nil {
in, out := &in.ConfigMapKeyRef, &out.ConfigMapKeyRef
*out = new(v1.ConfigMapKeySelector)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TLSCertificateSource.
func (in *TLSCertificateSource) DeepCopy() *TLSCertificateSource {
if in == nil {
return nil
}
out := new(TLSCertificateSource)
in.DeepCopyInto(out)
return out
}
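
A small sketch of why these generated deep copies matter: mutating a copy must not leak into the original, even through map or pointer fields:

package main

import (
	"fmt"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func main() {
	orig := &v1alpha1.EphemeralRunnerStatus{Failures: map[string]bool{"pod-1": true}}
	cp := orig.DeepCopy()
	cp.Failures["pod-2"] = true                       // only the copy sees this entry
	fmt.Println(len(orig.Failures), len(cp.Failures)) // 1 2
}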

View File

@@ -1,269 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerSpec struct {
// ScaleTargetRef is the reference to the scaled resource, such as a RunnerDeployment
ScaleTargetRef ScaleTargetRef `json:"scaleTargetRef,omitempty"`
// MinReplicas is the minimum number of replicas the deployment is allowed to scale
// +optional
MinReplicas *int `json:"minReplicas,omitempty"`
// MaxReplicas is the maximum number of replicas the deployment is allowed to scale
// +optional
MaxReplicas *int `json:"maxReplicas,omitempty"`
// ScaleDownDelaySecondsAfterScaleUp is the approximate delay before a scale down, after a preceding scale up.
// It is used to prevent flapping (down->up->down->... loops).
// +optional
ScaleDownDelaySecondsAfterScaleUp *int `json:"scaleDownDelaySecondsAfterScaleOut,omitempty"`
// Metrics is the collection of various metric targets to calculate desired number of runners
// +optional
Metrics []MetricSpec `json:"metrics,omitempty"`
// ScaleUpTriggers is an experimental feature to increase the desired replicas by 1
// on each webhook request received by the webhookBasedAutoscaler.
//
// This feature requires you to also enable and deploy the webhookBasedAutoscaler onto your cluster.
//
// Note that the added runners remain until the next sync period at least,
// and they may or may not be used by GitHub Actions depending on the timing.
// They are intended to be used to gain "resource slack" immediately after you
// receive a webhook from GitHub, so that you can loosely expect MinReplicas runners to be always available.
ScaleUpTriggers []ScaleUpTrigger `json:"scaleUpTriggers,omitempty"`
CapacityReservations []CapacityReservation `json:"capacityReservations,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
// ScheduledOverrides is the list of ScheduledOverride.
// It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule.
// An override that appears earlier in the list takes precedence over later ones.
// +optional
ScheduledOverrides []ScheduledOverride `json:"scheduledOverrides,omitempty"`
// +optional
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type ScaleUpTrigger struct {
GitHubEvent *GitHubEventScaleUpTriggerSpec `json:"githubEvent,omitempty"`
Amount int `json:"amount,omitempty"`
Duration metav1.Duration `json:"duration,omitempty"`
}
type GitHubEventScaleUpTriggerSpec struct {
CheckRun *CheckRunSpec `json:"checkRun,omitempty"`
PullRequest *PullRequestSpec `json:"pullRequest,omitempty"`
Push *PushSpec `json:"push,omitempty"`
WorkflowJob *WorkflowJobSpec `json:"workflowJob,omitempty"`
}
// https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
type CheckRunSpec struct {
// One of: created, rerequested, or completed
Types []string `json:"types,omitempty"`
Status string `json:"status,omitempty"`
// Names is a list of GitHub Actions glob patterns.
// Any check_run event whose name matches one of patterns in the list can trigger autoscaling.
// Note that the check_run name seems to equal the job name you've defined in your actions workflow yaml file,
// so it is very likely that you can use this to trigger scaling depending on the job.
Names []string `json:"names,omitempty"`
// Repositories is a list of GitHub repositories.
// Any check_run event whose repository matches one of repositories in the list can trigger autoscaling.
Repositories []string `json:"repositories,omitempty"`
}
// https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#workflow_job
type WorkflowJobSpec struct {
}
// https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
type PullRequestSpec struct {
Types []string `json:"types,omitempty"`
Branches []string `json:"branches,omitempty"`
}
// PushSpec is the condition for triggering scale-up on push event
// Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type PushSpec struct {
}
// CapacityReservation specifies the number of replicas temporarily added
// to the scale target until ExpirationTime.
type CapacityReservation struct {
Name string `json:"name,omitempty"`
ExpirationTime metav1.Time `json:"expirationTime,omitempty"`
Replicas int `json:"replicas,omitempty"`
// +optional
EffectiveTime metav1.Time `json:"effectiveTime,omitempty"`
}
type ScaleTargetRef struct {
// Kind is the type of resource being referenced
// +optional
// +kubebuilder:validation:Enum=RunnerDeployment;RunnerSet
Kind string `json:"kind,omitempty"`
// Name is the name of resource being referenced
Name string `json:"name,omitempty"`
}
type MetricSpec struct {
// Type is the type of metric to be used for autoscaling.
// It can be TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy.
Type string `json:"type,omitempty"`
// RepositoryNames is the list of repository names to be used for calculating the metric.
// For example, a repository name is the REPO part of `github.com/USER/REPO`.
// +optional
RepositoryNames []string `json:"repositoryNames,omitempty"`
// ScaleUpThreshold is the percentage of busy runners above which runners are scaled up.
// +optional
ScaleUpThreshold string `json:"scaleUpThreshold,omitempty"`
// ScaleDownThreshold is the percentage of busy runners below which runners are scaled down.
// +optional
ScaleDownThreshold string `json:"scaleDownThreshold,omitempty"`
// ScaleUpFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be added.
// +optional
ScaleUpFactor string `json:"scaleUpFactor,omitempty"`
// ScaleDownFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be removed.
// +optional
ScaleDownFactor string `json:"scaleDownFactor,omitempty"`
// ScaleUpAdjustment is the number of runners added on scale-up.
// You can only specify either ScaleUpFactor or ScaleUpAdjustment.
// +optional
ScaleUpAdjustment int `json:"scaleUpAdjustment,omitempty"`
// ScaleDownAdjustment is the number of runners removed on scale-down.
// You can only specify either ScaleDownFactor or ScaleDownAdjustment.
// +optional
ScaleDownAdjustment int `json:"scaleDownAdjustment,omitempty"`
}
// ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule.
// A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
type ScheduledOverride struct {
// StartTime is the time at which the first override starts.
StartTime metav1.Time `json:"startTime"`
// EndTime is the time at which the first override ends.
EndTime metav1.Time `json:"endTime"`
// MinReplicas is the number of runners while overriding.
// If omitted, it doesn't override minReplicas.
// +optional
// +nullable
// +kubebuilder:validation:Minimum=0
MinReplicas *int `json:"minReplicas,omitempty"`
// +optional
RecurrenceRule RecurrenceRule `json:"recurrenceRule,omitempty"`
}
type RecurrenceRule struct {
// Frequency is the name of a predefined interval of each recurrence.
// The valid values are "Daily", "Weekly", "Monthly", and "Yearly".
// If empty, the corresponding override happens only once.
// +optional
// +kubebuilder:validation:Enum=Daily;Weekly;Monthly;Yearly
Frequency string `json:"frequency,omitempty"`
// UntilTime is the time of the final recurrence.
// If empty, the schedule recurs forever.
// +optional
UntilTime metav1.Time `json:"untilTime,omitempty"`
}
type HorizontalRunnerAutoscalerStatus struct {
// ObservedGeneration is the most recent generation observed for the target. It corresponds to e.g.
// RunnerDeployment's generation, which is updated on mutation by the API Server.
// +optional
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas,omitempty"`
// +optional
// +nullable
LastSuccessfulScaleOutTime *metav1.Time `json:"lastSuccessfulScaleOutTime,omitempty"`
// +optional
CacheEntries []CacheEntry `json:"cacheEntries,omitempty"`
// ScheduledOverridesSummary is the summary of active and upcoming scheduled overrides to be shown in e.g. a column of a `kubectl get hra` output
// for observability.
// +optional
ScheduledOverridesSummary *string `json:"scheduledOverridesSummary,omitempty"`
}
const CacheEntryKeyDesiredReplicas = "desiredReplicas"
type CacheEntry struct {
Key string `json:"key,omitempty"`
Value int `json:"value,omitempty"`
ExpirationTime metav1.Time `json:"expirationTime,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=hra
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.minReplicas",name=Min,type=number
// +kubebuilder:printcolumn:JSONPath=".spec.maxReplicas",name=Max,type=number
// +kubebuilder:printcolumn:JSONPath=".status.desiredReplicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.scheduledOverridesSummary",name=Schedule,type=string
// HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler API
type HorizontalRunnerAutoscaler struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec HorizontalRunnerAutoscalerSpec `json:"spec,omitempty"`
Status HorizontalRunnerAutoscalerStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// HorizontalRunnerAutoscalerList contains a list of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []HorizontalRunnerAutoscaler `json:"items"`
}
func init() {
SchemeBuilder.Register(&HorizontalRunnerAutoscaler{}, &HorizontalRunnerAutoscalerList{})
}
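
A sketch of a HorizontalRunnerAutoscaler that scales a RunnerDeployment down to zero on a weekly schedule; the target name, replica bounds, and time window are illustrative assumptions:

package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func newWeekendOverride(start, end time.Time) *v1alpha1.HorizontalRunnerAutoscaler {
	minReplicas, maxReplicas, zero := 1, 10, 0
	return &v1alpha1.HorizontalRunnerAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "example-hra"},
		Spec: v1alpha1.HorizontalRunnerAutoscalerSpec{
			ScaleTargetRef: v1alpha1.ScaleTargetRef{Kind: "RunnerDeployment", Name: "example-deployment"},
			MinReplicas:    &minReplicas,
			MaxReplicas:    &maxReplicas,
			ScheduledOverrides: []v1alpha1.ScheduledOverride{{
				StartTime:      metav1.NewTime(start), // e.g. a Saturday 00:00
				EndTime:        metav1.NewTime(end),   // e.g. the following Monday 00:00
				MinReplicas:    &zero,                 // scale to zero inside the window
				RecurrenceRule: v1alpha1.RecurrenceRule{Frequency: "Weekly"},
			}},
		},
	}
}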

View File

@@ -1,406 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"errors"
"fmt"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/validation/field"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// RunnerSpec defines the desired state of Runner
type RunnerSpec struct {
RunnerConfig `json:",inline"`
RunnerPodSpec `json:",inline"`
}
type RunnerConfig struct {
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+$`
Enterprise string `json:"enterprise,omitempty"`
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+$`
Organization string `json:"organization,omitempty"`
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+/[^/]+$`
Repository string `json:"repository,omitempty"`
// +optional
Labels []string `json:"labels,omitempty"`
// +optional
Group string `json:"group,omitempty"`
// +optional
Ephemeral *bool `json:"ephemeral,omitempty"`
// +optional
Image string `json:"image"`
// +optional
WorkDir string `json:"workDir,omitempty"`
// +optional
DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
// +optional
DockerEnabled *bool `json:"dockerEnabled,omitempty"`
// +optional
DockerMTU *int64 `json:"dockerMTU,omitempty"`
// +optional
DockerRegistryMirror *string `json:"dockerRegistryMirror,omitempty"`
// +optional
VolumeSizeLimit *resource.Quantity `json:"volumeSizeLimit,omitempty"`
// +optional
VolumeStorageMedium *string `json:"volumeStorageMedium,omitempty"`
// +optional
ContainerMode string `json:"containerMode,omitempty"`
GitHubAPICredentialsFrom *GitHubAPICredentialsFrom `json:"githubAPICredentialsFrom,omitempty"`
}
type GitHubAPICredentialsFrom struct {
SecretRef SecretReference `json:"secretRef,omitempty"`
}
type SecretReference struct {
Name string `json:"name"`
}
// RunnerPodSpec defines the desired pod spec fields of the runner pod
type RunnerPodSpec struct {
// +optional
DockerdContainerResources corev1.ResourceRequirements `json:"dockerdContainerResources,omitempty"`
// +optional
DockerVolumeMounts []corev1.VolumeMount `json:"dockerVolumeMounts,omitempty"`
// +optional
DockerEnv []corev1.EnvVar `json:"dockerEnv,omitempty"`
// +optional
Containers []corev1.Container `json:"containers,omitempty"`
// +optional
ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
// +optional
Env []corev1.EnvVar `json:"env,omitempty"`
// +optional
EnvFrom []corev1.EnvFromSource `json:"envFrom,omitempty"`
// +optional
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
// +optional
VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
// +optional
Volumes []corev1.Volume `json:"volumes,omitempty"`
// +optional
EnableServiceLinks *bool `json:"enableServiceLinks,omitempty"`
// +optional
InitContainers []corev1.Container `json:"initContainers,omitempty"`
// +optional
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// +optional
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// +optional
AutomountServiceAccountToken *bool `json:"automountServiceAccountToken,omitempty"`
// +optional
SidecarContainers []corev1.Container `json:"sidecarContainers,omitempty"`
// +optional
SecurityContext *corev1.PodSecurityContext `json:"securityContext,omitempty"`
// +optional
ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
// +optional
Affinity *corev1.Affinity `json:"affinity,omitempty"`
// +optional
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
// +optional
PriorityClassName string `json:"priorityClassName,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
// +optional
EphemeralContainers []corev1.EphemeralContainer `json:"ephemeralContainers,omitempty"`
// +optional
HostAliases []corev1.HostAlias `json:"hostAliases,omitempty"`
// +optional
TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
// RuntimeClassName is the container runtime configuration that containers should run under.
// More info: https://kubernetes.io/docs/concepts/containers/runtime-class
// +optional
RuntimeClassName *string `json:"runtimeClassName,omitempty"`
// +optional
DnsPolicy corev1.DNSPolicy `json:"dnsPolicy,omitempty"`
// +optional
DnsConfig *corev1.PodDNSConfig `json:"dnsConfig,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
}
func (rs *RunnerSpec) Validate(rootPath *field.Path) field.ErrorList {
var (
errList field.ErrorList
err error
)
err = rs.validateRepository()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("repository"), rs.Repository, err.Error()))
}
err = rs.validateWorkVolumeClaimTemplate()
if err != nil {
errList = append(errList, field.Invalid(rootPath.Child("workVolumeClaimTemplate"), rs.WorkVolumeClaimTemplate, err.Error()))
}
return errList
}
// validateRepository ensures that exactly one runner scope is configured.
func (rs *RunnerSpec) validateRepository() error {
// Enterprise, Organization, and Repository are mutually exclusive.
foundCount := 0
if len(rs.Organization) > 0 {
foundCount += 1
}
if len(rs.Repository) > 0 {
foundCount += 1
}
if len(rs.Enterprise) > 0 {
foundCount += 1
}
if foundCount == 0 {
return errors.New("Spec needs enterprise, organization or repository")
}
if foundCount > 1 {
return errors.New("Spec cannot have more than one of enterprise, organization and repository defined")
}
return nil
}
func (rs *RunnerSpec) validateWorkVolumeClaimTemplate() error {
if rs.ContainerMode != "kubernetes" {
return nil
}
if rs.WorkVolumeClaimTemplate == nil {
return errors.New("Spec.ContainerMode: kubernetes must have workVolumeClaimTemplate field specified")
}
return rs.WorkVolumeClaimTemplate.validate()
}
// RunnerStatus defines the observed state of Runner
type RunnerStatus struct {
// Turns true only if the runner pod is ready.
// +optional
Ready bool `json:"ready"`
// +optional
Registration RunnerStatusRegistration `json:"registration"`
// +optional
Phase string `json:"phase,omitempty"`
// +optional
Reason string `json:"reason,omitempty"`
// +optional
Message string `json:"message,omitempty"`
// +optional
WorkflowStatus *WorkflowStatus `json:"workflow"`
// +optional
// +nullable
LastRegistrationCheckTime *metav1.Time `json:"lastRegistrationCheckTime,omitempty"`
}
// WorkflowStatus contains various information that is propagated
// from GitHub Actions workflow run environment variables to
// ease monitoring of the workflow run/job/steps that are triggered on the runner.
type WorkflowStatus struct {
// +optional
// Name is the name of the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_WORKFLOW defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Name string `json:"name,omitempty"`
// +optional
// Repository is the owner and repository name of the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_REPOSITORY defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Repository string `json:"repository,omitempty"`
// +optional
// RepositoryOwner is the repository owner's name for the workflow
// that is triggered within the runner.
// It corresponds to GITHUB_REPOSITORY_OWNER defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RepositoryOwner string `json:"repositoryOwner,omitempty"`
// +optional
// RunNumber is the unique number for the current workflow run
// that is triggered within the runner.
// It corresponds to GITHUB_RUN_NUMBER defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RunNumber string `json:"runNumber,omitempty"`
// +optional
// RunID is the unique number for the current workflow run
// that is triggered within the runner.
// It corresponds to GITHUB_RUN_ID defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
RunID string `json:"runID,omitempty"`
// +optional
// Job is the name of the current job
// that is triggered within the runner.
// It corresponds to GITHUB_JOB defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Job string `json:"job,omitempty"`
// +optional
// Action is the name of the current action or the step ID of the current step
// that is triggered within the runner.
// It corresponds to GITHUB_ACTION defined in
// https://docs.github.com/en/actions/learn-github-actions/environment-variables
Action string `json:"action,omitempty"`
}
// RunnerStatusRegistration contains runner registration status
type RunnerStatusRegistration struct {
Enterprise string `json:"enterprise,omitempty"`
Organization string `json:"organization,omitempty"`
Repository string `json:"repository,omitempty"`
Labels []string `json:"labels,omitempty"`
Token string `json:"token"`
ExpiresAt metav1.Time `json:"expiresAt"`
}
type WorkVolumeClaimTemplate struct {
StorageClassName string `json:"storageClassName"`
AccessModes []corev1.PersistentVolumeAccessMode `json:"accessModes"`
Resources corev1.ResourceRequirements `json:"resources"`
}
func (w *WorkVolumeClaimTemplate) validate() error {
if len(w.AccessModes) == 0 {
return errors.New("at least one access mode must be specified")
}
for _, accessMode := range w.AccessModes {
switch accessMode {
case corev1.ReadWriteOnce, corev1.ReadWriteMany:
default:
return fmt.Errorf("Access mode %v is not supported", accessMode)
}
}
return nil
}
func (w *WorkVolumeClaimTemplate) V1Volume() corev1.Volume {
return corev1.Volume{
Name: "work",
VolumeSource: corev1.VolumeSource{
Ephemeral: &corev1.EphemeralVolumeSource{
VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: w.AccessModes,
StorageClassName: &w.StorageClassName,
Resources: w.Resources,
},
},
},
},
}
}
func (w *WorkVolumeClaimTemplate) V1VolumeMount(mountPath string) corev1.VolumeMount {
return corev1.VolumeMount{
MountPath: mountPath,
Name: "work",
}
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.group",name=Group,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".status.phase",name=Status,type=string
// +kubebuilder:printcolumn:JSONPath=".status.message",name=Message,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflow.repository",name=WF Repo,type=string
// +kubebuilder:printcolumn:JSONPath=".status.workflow.runID",name=WF Run,type=string
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// Runner is the Schema for the runners API
type Runner struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerSpec `json:"spec,omitempty"`
Status RunnerStatus `json:"status,omitempty"`
}
func (r Runner) IsRegisterable() bool {
if r.Status.Registration.Repository != r.Spec.Repository {
return false
}
if r.Status.Registration.Token == "" {
return false
}
now := metav1.Now()
return !r.Status.Registration.ExpiresAt.Before(&now)
}
// +kubebuilder:object:root=true
// RunnerList contains a list of Runner
type RunnerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Runner `json:"items"`
}
func init() {
SchemeBuilder.Register(&Runner{}, &RunnerList{})
}
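
A sketch of how a WorkVolumeClaimTemplate converts into a pod volume and mount via the helpers above; the storage class, size, and mount path are assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func workVolume() (corev1.Volume, corev1.VolumeMount) {
	tmpl := &v1alpha1.WorkVolumeClaimTemplate{
		StorageClassName: "standard", // assumed storage class
		AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
			},
		},
	}
	// V1Volume yields an ephemeral PVC-backed volume named "work";
	// V1VolumeMount mounts it at the runner's working directory.
	return tmpl.V1Volume(), tmpl.V1VolumeMount("/runner/_work")
}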

View File

@@ -1,111 +0,0 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns = "TotalNumberOfQueuedAndInProgressWorkflowRuns"
AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy"
)
// RunnerDeploymentSpec defines the desired state of RunnerDeployment
type RunnerDeploymentSpec struct {
// +optional
// +nullable
Replicas *int `json:"replicas,omitempty"`
// EffectiveTime is the time the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA.
// The value is inherited by the RunnerReplicaSet(s) and used to prevent ephemeral runners from being unnecessarily recreated.
//
// +optional
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime"`
// +optional
// +nullable
Selector *metav1.LabelSelector `json:"selector"`
Template RunnerTemplate `json:"template"`
}
type RunnerDeploymentStatus struct {
// See K8s deployment controller code for reference
// https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/controller/deployment/sync.go#L487-L505
// AvailableReplicas is the total number of available runners which have been successfully registered to GitHub and still running.
// This corresponds to the sum of status.availableReplicas of all the runner replica sets.
// +optional
AvailableReplicas *int `json:"availableReplicas"`
// ReadyReplicas is the total number of available runners which have been successfully registered to GitHub and still running.
// This corresponds to the sum of status.readyReplicas of all the runner replica sets.
// +optional
ReadyReplicas *int `json:"readyReplicas"`
// UpdatedReplicas is the total number of available runners which have been successfully registered to GitHub and are still running.
// This corresponds to status.replicas of the runner replica set that has the desired template hash.
// +optional
UpdatedReplicas *int `json:"updatedReplicas"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas"`
// Replicas is the total number of replicas
// +optional
Replicas *int `json:"replicas"`
}
// +kubebuilder:object:root=true
// +kubebuilder:resource:shortName=rdeploy
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.group",name=Group,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.template.spec.labels",name=Labels,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.updatedReplicas",name=Up-To-Date,type=number
// +kubebuilder:printcolumn:JSONPath=".status.availableReplicas",name=Available,type=number
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// RunnerDeployment is the Schema for the runnerdeployments API
type RunnerDeployment struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerDeploymentSpec `json:"spec,omitempty"`
Status RunnerDeploymentStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// RunnerDeploymentList contains a list of RunnerDeployment
type RunnerDeploymentList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RunnerDeployment `json:"items"`
}
func init() {
SchemeBuilder.Register(&RunnerDeployment{}, &RunnerDeploymentList{})
}
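
A sketch of a minimal organization-scoped RunnerDeployment; it assumes RunnerTemplate exposes a Spec field as elsewhere in this API group, and the org name and replica count are illustrative:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func newRunnerDeployment() *v1alpha1.RunnerDeployment {
	replicas := 2
	return &v1alpha1.RunnerDeployment{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runnerdeploy"},
		Spec: v1alpha1.RunnerDeploymentSpec{
			Replicas: &replicas,
			Template: v1alpha1.RunnerTemplate{
				Spec: v1alpha1.RunnerSpec{
					RunnerConfig: v1alpha1.RunnerConfig{Organization: "myorg"}, // assumed org
				},
			},
		},
	}
}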

View File

@@ -1,102 +0,0 @@
/*
Copyright 2021 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
appsv1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// RunnerSetSpec defines the desired state of RunnerSet
type RunnerSetSpec struct {
RunnerConfig `json:",inline"`
// EffectiveTime is the time the upstream controller requested to sync Replicas.
// It is usually populated by the webhook-based autoscaler via HRA.
// It is used to prevent ephemeral runners from being unnecessarily recreated.
//
// +optional
// +nullable
EffectiveTime *metav1.Time `json:"effectiveTime,omitempty"`
// +optional
ServiceAccountName string `json:"serviceAccountName,omitempty"`
// +optional
WorkVolumeClaimTemplate *WorkVolumeClaimTemplate `json:"workVolumeClaimTemplate,omitempty"`
appsv1.StatefulSetSpec `json:",inline"`
}
type RunnerSetStatus struct {
// See K8s deployment controller code for reference
// https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/controller/deployment/sync.go#L487-L505
// CurrentReplicas is the total number of available runners which have been successfully registered to GitHub and are still running.
// This corresponds to the sum of status.availableReplicas of all the runner replica sets.
// +optional
CurrentReplicas *int `json:"availableReplicas"`
// ReadyReplicas is the total number of available runners which have been successfully registered to GitHub and still running.
// This corresponds to the sum of status.readyReplicas of all the runner replica sets.
// +optional
ReadyReplicas *int `json:"readyReplicas"`
// UpdatedReplicas is the total number of available runners which have been successfully registered to GitHub and are still running.
// This corresponds to status.replicas of the runner replica set that has the desired template hash.
// +optional
UpdatedReplicas *int `json:"updatedReplicas"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas"`
// Replicas is the total number of replicas
// +optional
Replicas *int `json:"replicas"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.replicas",name=Desired,type=number
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Current,type=number
// +kubebuilder:printcolumn:JSONPath=".status.updatedReplicas",name=Up-To-Date,type=number
// +kubebuilder:printcolumn:JSONPath=".status.availableReplicas",name=Available,type=number
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// RunnerSet is the Schema for the runnersets API
type RunnerSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RunnerSetSpec `json:"spec,omitempty"`
Status RunnerSetStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// RunnerSetList contains a list of RunnerSet
type RunnerSetList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RunnerSet `json:"items"`
}
func init() {
SchemeBuilder.Register(&RunnerSet{}, &RunnerSetList{})
}
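
A sketch of a minimal RunnerSet; because RunnerSetSpec inlines a StatefulSetSpec, the selector and pod template labels must agree. Names, labels, and counts are assumptions:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
)

func newRunnerSet() *v1alpha1.RunnerSet {
	replicas := int32(2)
	labels := map[string]string{"app": "example-runnerset"}
	return &v1alpha1.RunnerSet{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runnerset"},
		Spec: v1alpha1.RunnerSetSpec{
			RunnerConfig: v1alpha1.RunnerConfig{Organization: "myorg"}, // assumed org
			StatefulSetSpec: appsv1.StatefulSetSpec{
				Replicas: &replicas,
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: corev1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
				},
			},
		},
	}
}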

artifacthub-repo.yml Normal file
View File

@@ -0,0 +1,4 @@
repositoryID: 6e120248-b034-45e5-b16c-6015ecfa7c6c
owners:
- name: mumoshu
email: ykuoka@gmail.com

View File

@@ -1,4 +0,0 @@
package build
// This is overridden at build time using go build ldflags; "NA" is the fallback value.
var Version = "NA"

View File

@@ -1,9 +0,0 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/gha-runner-scale-set-controller
- charts/gha-runner-scale-set
skip-clean-up: true

View File

@@ -1,7 +0,0 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io
check-version-increment: false # Disable checking that the chart version has been bumped
charts:
- charts/actions-runner-controller


@@ -1,6 +0,0 @@
rules:
# One blank line is OK
empty-lines:
max-start: 1
max-end: 1
max: 1


@@ -1,3 +0,0 @@
#!/bin/bash
docker run --rm -it -w /repo -v "$(pwd)":/repo quay.io/helmpack/chart-testing ct lint --all --config charts/.ci/ct-config.yaml


@@ -1,15 +0,0 @@
#!/bin/bash
for chart in $(ls charts); do
helm template --values charts/$chart/ci/ci-values.yaml charts/$chart | kube-score score - \
--ignore-test pod-networkpolicy \
--ignore-test deployment-has-poddisruptionbudget \
--ignore-test deployment-has-host-podantiaffinity \
--ignore-test pod-probes \
--ignore-test container-image-tag \
--enable-optional-test container-security-context-privileged \
--enable-optional-test container-security-context-readonlyrootfilesystem \
--ignore-test container-security-context
done


@@ -21,5 +21,3 @@
.idea/
*.tmproj
.vscode/
# Docs
docs/


@@ -15,16 +15,9 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.23.2
version: 0.1.0
# Used as the default manager tag value when no tag property is provided in the values.yaml
appVersion: 0.27.3
home: https://github.com/actions/actions-runner-controller
sources:
- https://github.com/actions/actions-runner-controller
maintainers:
- name: actions-runner-controller
url: https://github.com/actions-runner-controller
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.11.2


@@ -1,151 +0,0 @@
## Docs
All additional docs are kept in the `docs/` folder; this README is solely for documenting the values.yaml keys and values.
## Values
**_The values are documented as of HEAD; to review the configuration options for your chart version, ensure you view this file at the relevant [tag](https://github.com/actions/actions-runner-controller/tags)_**
> _Default values are the defaults set in the charts `values.yaml`, some properties have default configurations in the code for when the property is omitted or invalid_
| Key | Description | Default |
|----------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of controller pods | 1 |
| `webhookPort` | Set the containerPort for the webhook Pod | 9443 |
| `syncPeriod` | Set the period in which the controller reconciles the desired runners count | 1m |
| `enableLeaderElection`                                    | Enable leader election for the controller manager                                                                                           | true                                                                                              |
| `leaderElectionId` | Set the election ID for the controller group | |
| `githubEnterpriseServerURL` | Set the URL for a self-hosted GitHub Enterprise Server | |
| `githubURL` | Override GitHub URL to be used for GitHub API calls | |
| `githubUploadURL` | Override GitHub Upload URL to be used for GitHub API calls | |
| `runnerGithubURL` | Override GitHub URL to be used by runners during registration | |
| `logLevel` | Set the log level of the controller container | |
| `logFormat` | Set the log format of the controller. Valid options are "text" and "json" | text |
| `additionalVolumes` | Set additional volumes to add to the manager container | |
| `additionalVolumeMounts` | Set additional volume mounts to add to the manager container | |
| `authSecret.create` | Deploy the controller auth secret | false |
| `authSecret.name` | Set the name of the auth secret | controller-manager |
| `authSecret.annotations` | Set annotations for the auth Secret | |
| `authSecret.github_app_id` | The ID of your GitHub App. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_installation_id` | The ID of your GitHub App installation. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_app_private_key` | The multiline string of your GitHub App's private key. **This can't be set at the same time as `authSecret.github_token`** | |
| `authSecret.github_token`                                 | Your GitHub personal access token (PAT). **This can't be set at the same time as the `authSecret.github_app_*` keys**                      |                                                                                                   |
| `authSecret.github_basicauth_username`                    | Username for GitHub basic auth, used instead of a PAT or GitHub App when GitHub is accessed through a proxy API                             |                                                                                                   |
| `authSecret.github_basicauth_password`                    | Password for GitHub basic auth, used instead of a PAT or GitHub App when GitHub is accessed through a proxy API                             |                                                                                                   |
| `dockerRegistryMirror` | The default Docker Registry Mirror used by runners. | |
| `hostNetwork` | The "hostNetwork" of the controller container | false |
| `image.repository` | The "repository/image" of the controller container | summerwind/actions-runner-controller |
| `image.tag` | The tag of the controller container | |
| `image.actionsRunnerRepositoryAndTag` | The "repository/image" of the actions runner container | summerwind/actions-runner:latest |
| `image.actionsRunnerImagePullSecrets` | Optional image pull secrets to be included in the runner pod's ImagePullSecrets | |
| `image.dindSidecarRepositoryAndTag` | The "repository/image" of the dind sidecar container | docker:dind |
| `image.pullPolicy` | The pull policy of the controller image | IfNotPresent |
| `metrics.serviceMonitor`                                  | Deploy a ServiceMonitor kind for use with prometheus-operator CRDs                                                                          | false                                                                                             |
| `metrics.serviceAnnotations` | Set annotations for the provisioned metrics service resource | |
| `metrics.port` | Set port of metrics service | 8443 |
| `metrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `metrics.proxy.image.repository`                          | The "repository/image" of the kube-rbac-proxy container                                                                                     | quay.io/brancz/kube-rbac-proxy                                                                    |
| `metrics.proxy.image.tag`                                 | The tag of the kube-rbac-proxy image to use when pulling the container                                                                      | v0.10.0                                                                                           |
| `metrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
| `imagePullSecrets` | Specifies the secret to be used when pulling the controller pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `serviceAccount.annotations` | Set annotations to the service account | |
| `serviceAccount.create` | Deploy the controller pod under a service account | true |
| `podAnnotations` | Set annotations for the controller pod | |
| `podLabels` | Set labels for the controller pod | |
| `serviceAccount.name` | Set the name of the service account | |
| `securityContext` | Set the security context for each container in the controller pod | |
| `podSecurityContext`                                      | Set the security context for the controller pod                                                                                             |                                                                                                   |
| `service.annotations` | Set annotations for the provisioned webhook service resource | |
| `service.port` | Set controller service ports | |
| `service.type` | Set controller service type | |
| `topologySpreadConstraints` | Set the controller pod topologySpreadConstraints | |
| `nodeSelector` | Set the controller pod nodeSelector | |
| `resources` | Set the controller pod resources | |
| `affinity` | Set the controller pod affinity rules | |
| `podDisruptionBudget.enabled` | Enables a PDB to ensure HA of controller pods | false |
| `podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `tolerations` | Set the controller pod tolerations | |
| `env` | Set environment variables for the controller container | |
| `priorityClassName` | Set the controller pod priorityClassName | |
| `scope.watchNamespace` | Tells the controller and the github webhook server which namespace to watch if `scope.singleNamespace` is true | `Release.Namespace` (the default namespace of the helm chart). |
| `scope.singleNamespace` | Limit the controller to watch a single namespace | false |
| `certManagerEnabled` | Enable cert-manager. If disabled you must set admissionWebHooks.caBundle and create TLS secrets manually | true |
| `runner.statusUpdateHook.enabled`                         | Use custom RBAC for runners (role, role binding and service account); this enables reporting runner statuses                                | false                                                                                             |
| `admissionWebHooks.caBundle` | Base64-encoded PEM bundle containing the CA that signed the webhook's serving certificate | |
| `githubWebhookServer.logLevel` | Set the log level of the githubWebhookServer container | |
| `githubWebhookServer.logFormat` | Set the log format of the githubWebhookServer controller. Valid options are "text" and "json" | text |
| `githubWebhookServer.replicaCount` | Set the number of webhook server pods | 1 |
| `githubWebhookServer.useRunnerGroupsVisibility`           | Enable support for runner groups with custom visibility; you must also set `githubWebhookServer.secret.enabled` to use this feature.       | false                                                                                             |
| `githubWebhookServer.enabled` | Deploy the webhook server pod | false |
| `githubWebhookServer.queueLimit` | Set the queue size limit in the githubWebhookServer | |
| `githubWebhookServer.secret.enabled`                      | Passes the webhook secret to the github-webhook-server                                                                                      | false                                                                                             |
| `githubWebhookServer.secret.create`                       | Deploy the webhook secret                                                                                                                   | false                                                                                             |
| `githubWebhookServer.secret.name`                         | Set the name of the webhook secret                                                                                                          | github-webhook-server                                                                             |
| `githubWebhookServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `githubWebhookServer.imagePullSecrets` | Specifies the secret to be used when pulling the githubWebhookServer pod containers | |
| `githubWebhookServer.nameOverride` | Override the resource name prefix | |
| `githubWebhookServer.fullnameOverride` | Override the full resource names | |
| `githubWebhookServer.serviceAccount.create` | Deploy the githubWebhookServer under a service account | true |
| `githubWebhookServer.serviceAccount.annotations` | Set annotations for the service account | |
| `githubWebhookServer.serviceAccount.name` | Set the service account name | |
| `githubWebhookServer.podAnnotations` | Set annotations for the githubWebhookServer pod | |
| `githubWebhookServer.podLabels` | Set labels for the githubWebhookServer pod | |
| `githubWebhookServer.podSecurityContext`                  | Set the security context for the githubWebhookServer pod                                                                                    |                                                                                                   |
| `githubWebhookServer.securityContext` | Set the security context for each container in the githubWebhookServer pod | |
| `githubWebhookServer.resources` | Set the githubWebhookServer pod resources | |
| `githubWebhookServer.topologySpreadConstraints` | Set the githubWebhookServer pod topologySpreadConstraints | |
| `githubWebhookServer.nodeSelector` | Set the githubWebhookServer pod nodeSelector | |
| `githubWebhookServer.tolerations` | Set the githubWebhookServer pod tolerations | |
| `githubWebhookServer.affinity` | Set the githubWebhookServer pod affinity rules | |
| `githubWebhookServer.priorityClassName` | Set the githubWebhookServer pod priorityClassName | |
| `githubWebhookServer.service.type` | Set githubWebhookServer service type | |
| `githubWebhookServer.service.ports`                       | Set githubWebhookServer service ports                                                                                                       | `[{"port":80, "targetPort":"http", "protocol":"TCP", "name":"http"}]`                             |
| `githubWebhookServer.ingress.enabled` | Deploy an ingress kind for the githubWebhookServer | false |
| `githubWebhookServer.ingress.annotations` | Set annotations for the ingress kind | |
| `githubWebhookServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `githubWebhookServer.ingress.tls` | Set tls configuration for ingress | |
| `githubWebhookServer.ingress.ingressClassName` | Set ingress class name | |
| `githubWebhookServer.podDisruptionBudget.enabled` | Enables a PDB to ensure HA of githubwebhook pods | false |
| `githubWebhookServer.podDisruptionBudget.minAvailable` | Minimum number of pods that must be available after eviction | |
| `githubWebhookServer.podDisruptionBudget.maxUnavailable` | Maximum number of pods that can be unavailable after eviction. Kubernetes 1.7+ required. | |
| `actionsMetricsServer.logLevel` | Set the log level of the actionsMetricsServer container | |
| `actionsMetricsServer.logFormat` | Set the log format of the actionsMetricsServer controller. Valid options are "text" and "json" | text |
| `actionsMetricsServer.enabled` | Deploy the actions metrics server pod | false |
| `actionsMetricsServer.secret.enabled`                     | Passes the webhook secret to the actions-metrics-server                                                                                     | false                                                                                             |
| `actionsMetricsServer.secret.create`                      | Deploy the webhook secret                                                                                                                   | false                                                                                             |
| `actionsMetricsServer.secret.name`                        | Set the name of the webhook secret                                                                                                          | github-webhook-server                                                                             |
| `actionsMetricsServer.secret.github_webhook_secret_token` | Set the webhook secret token value | |
| `actionsMetricsServer.imagePullSecrets` | Specifies the secret to be used when pulling the actionsMetricsServer pod containers | |
| `actionsMetricsServer.nameOverride` | Override the resource name prefix | |
| `actionsMetricsServer.fullnameOverride` | Override the full resource names | |
| `actionsMetricsServer.serviceAccount.create` | Deploy the actionsMetricsServer under a service account | true |
| `actionsMetricsServer.serviceAccount.annotations` | Set annotations for the service account | |
| `actionsMetricsServer.serviceAccount.name` | Set the service account name | |
| `actionsMetricsServer.podAnnotations` | Set annotations for the actionsMetricsServer pod | |
| `actionsMetricsServer.podLabels` | Set labels for the actionsMetricsServer pod | |
| `actionsMetricsServer.podSecurityContext`                 | Set the security context for the actionsMetricsServer pod                                                                                   |                                                                                                   |
| `actionsMetricsServer.securityContext` | Set the security context for each container in the actionsMetricsServer pod | |
| `actionsMetricsServer.resources` | Set the actionsMetricsServer pod resources | |
| `actionsMetricsServer.topologySpreadConstraints` | Set the actionsMetricsServer pod topologySpreadConstraints | |
| `actionsMetricsServer.nodeSelector` | Set the actionsMetricsServer pod nodeSelector | |
| `actionsMetricsServer.tolerations` | Set the actionsMetricsServer pod tolerations | |
| `actionsMetricsServer.affinity` | Set the actionsMetricsServer pod affinity rules | |
| `actionsMetricsServer.priorityClassName` | Set the actionsMetricsServer pod priorityClassName | |
| `actionsMetricsServer.service.type` | Set actionsMetricsServer service type | |
| `actionsMetricsServer.service.ports`                      | Set actionsMetricsServer service ports                                                                                                      | `[{"port":80, "targetPort":"http", "protocol":"TCP", "name":"http"}]`                             |
| `actionsMetricsServer.ingress.enabled` | Deploy an ingress kind for the actionsMetricsServer | false |
| `actionsMetricsServer.ingress.annotations` | Set annotations for the ingress kind | |
| `actionsMetricsServer.ingress.hosts` | Set hosts configuration for ingress | `[{"host": "chart-example.local", "paths": []}]` |
| `actionsMetricsServer.ingress.tls` | Set tls configuration for ingress | |
| `actionsMetricsServer.ingress.ingressClassName` | Set ingress class name | |
| `actionsMetrics.serviceMonitor`                           | Deploy a ServiceMonitor kind for use with prometheus-operator CRDs                                                                          | false                                                                                             |
| `actionsMetrics.serviceAnnotations` | Set annotations for the provisioned actions metrics service resource | |
| `actionsMetrics.port` | Set port of actions metrics service | 8443 |
| `actionsMetrics.proxy.enabled` | Deploy kube-rbac-proxy container in controller pod | true |
| `actionsMetrics.proxy.image.repository`                   | The "repository/image" of the kube-rbac-proxy container                                                                                     | quay.io/brancz/kube-rbac-proxy                                                                    |
| `actionsMetrics.proxy.image.tag`                          | The tag of the kube-rbac-proxy image to use when pulling the container                                                                      | v0.10.0                                                                                           |
| `actionsMetrics.serviceMonitorLabels` | Set labels to apply to ServiceMonitor resources | |
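
As a concrete illustration of how these keys compose, a values override enabling PAT auth plus the webhook server might look like the sketch below. This is a hedged example: every literal value (token placeholders, hostname) is illustrative, not a chart default.

```yaml
authSecret:
  create: true
  github_token: "<your PAT>"            # placeholder; mutually exclusive with the github_app_* keys
githubWebhookServer:
  enabled: true
  secret:
    enabled: true
    create: true
    github_webhook_secret_token: "<random shared secret>"  # placeholder
  ingress:
    enabled: true
    hosts:
      - host: webhook.example.com       # placeholder hostname
        paths:
          - path: /
            pathType: Prefix
```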


@@ -1,30 +0,0 @@
# This file sets some opinionated values for kube-score to use
# when parsing the chart
image:
pullPolicy: Always
podSecurityContext:
fsGroup: 2000
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2000
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
authSecret:
create: false
# Set the following to true to create a dummy secret, allowing the manager pod to start
# This is only useful in CI
createDummySecret: true


@@ -1,256 +1,133 @@
apiVersion: apiextensions.k8s.io/v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: horizontalrunnerautoscalers.actions.summerwind.dev
spec:
additionalPrinterColumns:
- JSONPath: .spec.minReplicas
name: Min
type: number
- JSONPath: .spec.maxReplicas
name: Max
type: number
- JSONPath: .status.desiredReplicas
name: Desired
type: number
group: actions.summerwind.dev
names:
kind: HorizontalRunnerAutoscaler
listKind: HorizontalRunnerAutoscalerList
plural: horizontalrunnerautoscalers
shortNames:
- hra
singular: horizontalrunnerautoscaler
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.minReplicas
name: Min
type: number
- jsonPath: .spec.maxReplicas
name: Max
type: number
- jsonPath: .status.desiredReplicas
name: Desired
type: number
- jsonPath: .status.scheduledOverridesSummary
name: Schedule
subresources:
status: {}
validation:
openAPIV3Schema:
description: HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler API
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of
HorizontalRunnerAutoscaler
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
properties:
capacityReservations:
items:
description: CapacityReservation specifies the number of replicas temporarily added to the scale target until ExpirationTime.
properties:
effectiveTime:
format: date-time
type: string
expirationTime:
format: date-time
type: string
name:
type: string
replicas:
type: integer
type: object
type: array
githubAPICredentialsFrom:
properties:
secretRef:
properties:
name:
type: string
required:
- name
type: object
type: object
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment is allowed to scale
type: integer
metrics:
description: Metrics is the collection of various metric targets to calculate desired number of runners
items:
properties:
repositoryNames:
description: RepositoryNames is the list of repository names to be used for calculating the metric. For example, a repository name is the REPO part of `github.com/USER/REPO`.
items:
type: string
type: array
scaleDownAdjustment:
description: ScaleDownAdjustment is the number of runners removed on scale-down. You can only specify either ScaleDownFactor or ScaleDownAdjustment.
type: integer
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied to the current number of runners used to determine how many pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners less than which will trigger the hpa to scale the runners down.
type: string
scaleUpAdjustment:
description: ScaleUpAdjustment is the number of runners added on scale-up. You can only specify either ScaleUpFactor or ScaleUpAdjustment.
type: integer
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied to the current number of runners used to determine how many pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling. It can be TotalNumberOfQueuedAndInProgressWorkflowRuns or PercentageRunnersBusy.
type: string
type: object
type: array
minReplicas:
description: MinReplicas is the minimum number of replicas the deployment is allowed to scale
type: integer
scaleDownDelaySecondsAfterScaleOut:
description: ScaleDownDelaySecondsAfterScaleOut is the approximate delay before a scale down that follows a scale out, used to prevent flapping (down->up->down->... loops)
type: integer
scaleTargetRef:
description: ScaleTargetRef is the reference to the scaled resource, such as a RunnerDeployment
properties:
kind:
description: Kind is the type of resource being referenced
enum:
- RunnerDeployment
- RunnerSet
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment
is allowed to scale
type: integer
metrics:
description: Metrics is the collection of various metric targets to
calculate desired number of runners
items:
properties:
repositoryNames:
description: RepositoryNames is the list of repository names to
be used for calculating the metric. For example, a repository
name is the REPO part of `github.com/USER/REPO`.
items:
type: string
name:
description: Name is the name of resource being referenced
type: string
type: object
scaleUpTriggers:
description: "ScaleUpTriggers is an experimental feature to increase the desired replicas by 1 on each webhook request received by the webhookBasedAutoscaler. \n This feature requires you to also enable and deploy the webhookBasedAutoscaler onto your cluster. \n Note that the added runners remain until at least the next sync period, and they may or may not be used by GitHub Actions depending on the timing. They are intended to be used to gain \"resource slack\" immediately after you receive a webhook from GitHub, so that you can loosely expect MinReplicas runners to always be available."
items:
properties:
amount:
type: integer
duration:
type: string
githubEvent:
properties:
checkRun:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
properties:
names:
description: Names is a list of GitHub Actions glob patterns. Any check_run event whose name matches one of the patterns in the list can trigger autoscaling. Note that the check_run name seems to equal the job name you've defined in your Actions workflow YAML file, so you can likely use this to trigger scaling per job.
items:
type: string
type: array
repositories:
description: Repositories is a list of GitHub repositories. Any check_run event whose repository matches one of repositories in the list can trigger autoscaling.
items:
type: string
type: array
status:
type: string
types:
description: 'One of: created, rerequested, or completed'
items:
type: string
type: array
type: object
pullRequest:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
properties:
branches:
items:
type: string
type: array
types:
items:
type: string
type: array
type: object
push:
description: PushSpec is the condition for triggering scale-up on push event Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type: object
workflowJob:
description: https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#workflow_job
type: object
type: object
type: object
type: array
scheduledOverrides:
description: ScheduledOverrides is the list of ScheduledOverride. It can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. The earlier a scheduled override is, the higher it is prioritized.
items:
description: ScheduledOverride can be used to override a few fields of HorizontalRunnerAutoscalerSpec on schedule. A schedule can optionally be recurring, so that the corresponding override happens every day, week, month, or year.
properties:
endTime:
description: EndTime is the time at which the first override ends.
format: date-time
type: string
minReplicas:
description: MinReplicas is the number of runners while overriding. If omitted, it doesn't override minReplicas.
minimum: 0
nullable: true
type: integer
recurrenceRule:
properties:
frequency:
description: Frequency is the name of a predefined interval of each recurrence. The valid values are "Daily", "Weekly", "Monthly", and "Yearly". If empty, the corresponding override happens only once.
enum:
- Daily
- Weekly
- Monthly
- Yearly
type: string
untilTime:
description: UntilTime is the time of the final recurrence. If empty, the schedule recurs forever.
format: date-time
type: string
type: object
startTime:
description: StartTime is the time at which the first override starts.
format: date-time
type: string
required:
- endTime
- startTime
type: object
type: array
type: object
status:
type: array
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners
less than which will trigger the hpa to scale the runners down.
type: string
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners
greater than which will trigger the hpa to scale runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling.
The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
type: string
type: object
type: array
minReplicas:
description: MinReplicas is the minimum number of replicas the deployment
is allowed to scale
type: integer
scaleDownDelaySecondsAfterScaleOut:
description: ScaleDownDelaySecondsAfterScaleOut is the approximate delay before a scale
down that follows a scale out, used to prevent flapping (down->up->down->...
loops)
type: integer
scaleTargetRef:
description: ScaleTargetRef is the reference to the scaled resource, such as
a RunnerDeployment
properties:
cacheEntries:
items:
properties:
expirationTime:
format: date-time
type: string
key:
type: string
value:
type: integer
type: object
type: array
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated, and latest pods to be set for the primary RunnerSet. This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
type: integer
lastSuccessfulScaleOutTime:
format: date-time
nullable: true
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed for the target. It corresponds to e.g. RunnerDeployment's generation, which is updated on mutation by the API Server.
format: int64
type: integer
scheduledOverridesSummary:
description: ScheduledOverridesSummary is the summary of active and upcoming scheduled overrides to be shown in e.g. a column of a `kubectl get hra` output for observability.
name:
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}
preserveUnknownFields: false
status:
properties:
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated,
and latest pods to be set for the primary RunnerSet. This doesn't include
outdated pods while upgrading the deployment and replacing the runnerset.
type: integer
lastSuccessfulScaleOutTime:
format: date-time
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed
for the target. It corresponds to e.g. RunnerDeployment's generation,
which is updated on mutation by the API Server.
format: int64
type: integer
type: object
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ""
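
To make the schema above concrete, a HorizontalRunnerAutoscaler using the PercentageRunnersBusy metric and a recurring scheduled override might look like the following sketch. All names, times, and thresholds are illustrative placeholders:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra
spec:
  scaleTargetRef:
    kind: RunnerDeployment
    name: example-runnerdeploy
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: PercentageRunnersBusy
      scaleUpThreshold: "0.75"    # scale up when more than 75% of runners are busy
      scaleDownThreshold: "0.25"  # scale down when fewer than 25% are busy
      scaleUpAdjustment: 2
      scaleDownAdjustment: 1
  scheduledOverrides:
    # Scale to zero over a weekend, recurring weekly.
    - startTime: "2024-01-06T00:00:00Z"
      endTime: "2024-01-08T00:00:00Z"
      recurrenceRule:
        frequency: Weekly
      minReplicas: 0
```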


@@ -1,46 +0,0 @@
## Upgrading
This project makes extensive use of CRDs to provide much of its functionality. Helm unfortunately does not support [managing](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/) CRDs by design:
_The full breakdown as to how they came to this decision and why they have taken the approach they have for dealing with CRDs can be found in [Helm Improvement Proposal 11](https://github.com/helm/community/blob/main/hips/hip-0011.md)_
```
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much
community discussion due to the danger for unintentional data loss. Furthermore, there is currently no community
consensus around how to handle CRDs and their lifecycle. As this evolves, Helm will add support for those use cases.
```
Helm will do an initial install of CRDs but it will not touch them afterwards (update or delete).
Additionally, because the project leverages CRDs so extensively, you **MUST** run the controller app container with its matching CRDs, i.e. always redeploy your CRDs when changing the app version.
Due to the above, you can't simply run `helm upgrade` to release the latest version of the chart. The best-practice steps are recorded below:
## Steps
1. Upgrade the CRDs. This isn't optional: the CRDs you are using must be those that correspond to the version of the controller you are installing.
```shell
# REMEMBER TO UPDATE CHART_VERSION TO THE RELEVANT CHART VERSION!
CHART_VERSION=0.18.0
curl -L https://github.com/actions/actions-runner-controller/releases/download/actions-runner-controller-${CHART_VERSION}/actions-runner-controller-${CHART_VERSION}.tgz | tar zxv --strip 1 actions-runner-controller/crds
kubectl replace -f crds/
```
Note that if you're going to create prometheus-operator `ServiceMonitor` resources via the chart, you'll need to deploy the prometheus-operator-related CRDs as well.
2. Upgrade the Helm release
```shell
# helm repo [command]
helm repo update
# helm upgrade [RELEASE] [CHART] [flags]
helm upgrade actions-runner-controller \
actions-runner-controller/actions-runner-controller \
--install \
--namespace actions-runner-system \
--version ${CHART_VERSION}
```


@@ -1,8 +1,8 @@
1. Get the application URL by running these commands:
{{- if .Values.githubWebhookServer.ingress.enabled }}
{{- range $host := .Values.githubWebhookServer.ingress.hosts }}
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.githubWebhookServer.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}


@@ -1,60 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller-actions-metrics-server.name" -}}
{{- default .Chart.Name .Values.actionsMetricsServer.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.instance" -}}
{{- printf "%s-%s" .Release.Name "actions-metrics-server" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller-actions-metrics-server.fullname" -}}
{{- if .Values.actionsMetricsServer.fullnameOverride }}
{{- .Values.actionsMetricsServer.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.actionsMetricsServer.nameOverride }}
{{- $instance := include "actions-runner-controller-actions-metrics-server.instance" . }}
{{- if contains $name $instance }}
{{- $instance | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s-%s" .Release.Name $name "actions-metrics-server" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller-actions-metrics-server.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller-actions-metrics-server.name" . }}
app.kubernetes.io/instance: {{ include "actions-runner-controller-actions-metrics-server.instance" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller-actions-metrics-server.serviceAccountName" -}}
{{- if .Values.actionsMetricsServer.serviceAccount.create }}
{{- default (include "actions-runner-controller-actions-metrics-server.fullname" .) .Values.actionsMetricsServer.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.actionsMetricsServer.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.secretName" -}}
{{- default (include "actions-runner-controller-actions-metrics-server.fullname" .) .Values.actionsMetricsServer.secret.name }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.roleName" -}}
{{- include "actions-runner-controller-actions-metrics-server.fullname" . }}
{{- end }}
{{- define "actions-runner-controller-actions-metrics-server.serviceMonitorName" -}}
{{- include "actions-runner-controller-actions-metrics-server.fullname" . | trunc 47 }}-service-monitor
{{- end }}


@@ -1,64 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller-github-webhook-server.name" -}}
{{- default .Chart.Name .Values.githubWebhookServer.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.instance" -}}
{{- printf "%s-%s" .Release.Name "github-webhook-server" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller-github-webhook-server.fullname" -}}
{{- if .Values.githubWebhookServer.fullnameOverride }}
{{- .Values.githubWebhookServer.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.githubWebhookServer.nameOverride }}
{{- $instance := include "actions-runner-controller-github-webhook-server.instance" . }}
{{- if contains $name $instance }}
{{- $instance | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s-%s" .Release.Name $name "github-webhook-server" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller-github-webhook-server.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller-github-webhook-server.name" . }}
app.kubernetes.io/instance: {{ include "actions-runner-controller-github-webhook-server.instance" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller-github-webhook-server.serviceAccountName" -}}
{{- if .Values.githubWebhookServer.serviceAccount.create }}
{{- default (include "actions-runner-controller-github-webhook-server.fullname" .) .Values.githubWebhookServer.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.githubWebhookServer.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.secretName" -}}
{{- default (include "actions-runner-controller-github-webhook-server.fullname" .) .Values.githubWebhookServer.secret.name }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.roleName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.serviceMonitorName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . | trunc 47 }}-service-monitor
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.pdbName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . | trunc 59 }}-pdb
{{- end }}
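
For context, named templates like these are consumed elsewhere in the chart with `include`. A rough sketch of a consumer (not the chart's actual manifest; shown only to illustrate how the helpers resolve names and labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  # Resolves via fullnameOverride, or Release.Name + chart name + suffix.
  name: {{ include "actions-runner-controller-github-webhook-server.fullname" . }}
  labels:
    {{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
spec:
  selector:
    {{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
```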


@@ -64,14 +64,6 @@ Create the name of the service account to use
{{- end }}
{{- end }}
{{- define "actions-runner-controller.secretName" -}}
{{- default (include "actions-runner-controller.fullname" .) .Values.authSecret.name -}}
{{- end }}
{{- define "actions-runner-controller.githubWebhookServerSecretName" -}}
{{- default (include "actions-runner-controller.fullname" .) .Values.githubWebhookServer.secret.name -}}
{{- end }}
{{- define "actions-runner-controller.leaderElectionRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-leader-election
{{- end }}
@@ -93,15 +85,11 @@ Create the name of the service account to use
{{- end }}
{{- define "actions-runner-controller.webhookServiceName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 55 }}-webhook
{{- include "actions-runner-controller.fullname" . }}-webhook
{{- end }}
{{- define "actions-runner-controller.metricsServiceName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 47 }}-metrics-service
{{- end }}
{{- define "actions-runner-controller.serviceMonitorName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 47 }}-service-monitor
{{- define "actions-runner-controller.authProxyServiceName" -}}
{{- include "actions-runner-controller.fullname" . }}-controller-manager-metrics-service
{{- end }}
{{- define "actions-runner-controller.selfsignedIssuerName" -}}
@@ -111,7 +99,3 @@ Create the name of the service account to use
{{- define "actions-runner-controller.servingCertName" -}}
{{- include "actions-runner-controller.fullname" . }}-serving-cert
{{- end }}
{{- define "actions-runner-controller.pdbName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 59 }}-pdb
{{- end }}


@@ -1,162 +0,0 @@
{{- if .Values.actionsMetricsServer.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.actionsMetricsServer.replicaCount }}
selector:
matchLabels:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.actionsMetricsServer.podAnnotations }}
annotations:
kubectl.kubernetes.io/default-container: "actions-metrics-server"
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 8 }}
{{- with .Values.actionsMetricsServer.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.actionsMetricsServer.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller-actions-metrics-server.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.actionsMetricsServer.podSecurityContext | nindent 8 }}
{{- with .Values.actionsMetricsServer.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- args:
{{- $metricsHost := .Values.metrics.proxy.enabled | ternary "127.0.0.1" "0.0.0.0" }}
{{- $metricsPort := .Values.metrics.proxy.enabled | ternary "8080" .Values.metrics.port }}
- "--metrics-addr={{ $metricsHost }}:{{ $metricsPort }}"
{{- if .Values.actionsMetricsServer.logLevel }}
- "--log-level={{ .Values.actionsMetricsServer.logLevel }}"
{{- end }}
{{- if .Values.runnerGithubURL }}
- "--runner-github-url={{ .Values.runnerGithubURL }}"
{{- end }}
{{- if .Values.actionsMetricsServer.logFormat }}
- "--log-format={{ .Values.actionsMetricsServer.logFormat }}"
{{- end }}
command:
- "/actions-metrics-server"
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
secretKeyRef:
key: github_webhook_secret_token
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
{{- if .Values.githubEnterpriseServerURL }}
- name: GITHUB_ENTERPRISE_URL
value: {{ .Values.githubEnterpriseServerURL }}
{{- end }}
{{- if .Values.githubURL }}
- name: GITHUB_URL
value: {{ .Values.githubURL }}
{{- end }}
{{- if .Values.githubUploadURL }}
- name: GITHUB_UPLOAD_URL
value: {{ .Values.githubUploadURL }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.enabled }}
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
key: github_token
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
- name: GITHUB_APP_PRIVATE_KEY
valueFrom:
secretKeyRef:
key: github_app_private_key
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
optional: true
{{- if .Values.authSecret.github_basicauth_username }}
- name: GITHUB_BASICAUTH_USERNAME
value: {{ .Values.authSecret.github_basicauth_username }}
{{- end }}
- name: GITHUB_BASICAUTH_PASSWORD
valueFrom:
secretKeyRef:
key: github_basicauth_password
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
{{- end }}
{{- range $key, $val := .Values.actionsMetricsServer.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}"
name: actions-metrics-server
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8000
name: http
protocol: TCP
{{- if not .Values.metrics.proxy.enabled }}
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
protocol: TCP
{{- end }}
resources:
{{- toYaml .Values.actionsMetricsServer.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.actionsMetricsServer.securityContext | nindent 12 }}
{{- if .Values.metrics.proxy.enabled }}
- args:
- "--secure-listen-address=0.0.0.0:{{ .Values.metrics.port }}"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: "{{ .Values.metrics.proxy.image.repository }}:{{ .Values.metrics.proxy.image.tag }}"
name: kube-rbac-proxy
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.metrics.port }}
name: metrics-port
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
terminationGracePeriodSeconds: 10
{{- with .Values.actionsMetricsServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.actionsMetricsServer.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.actionsMetricsServer.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.actionsMetricsServer.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
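
Note the ternary near the top of the container args: when `metrics.proxy.enabled` is true, the manager binds its metrics endpoint to 127.0.0.1:8080 and the kube-rbac-proxy sidecar exposes it on `metrics.port`; when false, the manager listens on 0.0.0.0 directly. A hedged values sketch for the direct (no sidecar) mode:

```yaml
# Illustrative override only; per the table above, proxy.enabled defaults to true.
metrics:
  port: 8443
  proxy:
    enabled: false
```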


@@ -1,47 +0,0 @@
{{- if .Values.actionsMetricsServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-actions-metrics-server.fullname" . -}}
{{- $svcPort := (index .Values.actionsMetricsServer.service.ports 0).port -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.actionsMetricsServer.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.actionsMetricsServer.ingress.tls }}
tls:
{{- range .Values.actionsMetricsServer.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
{{- with .Values.actionsMetricsServer.ingress.ingressClassName }}
ingressClassName: {{ . }}
{{- end }}
rules:
{{- range .Values.actionsMetricsServer.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- if .extraPaths }}
{{- toYaml .extraPaths | nindent 10 }}
{{- end }}
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,28 +0,0 @@
{{- if .Values.actionsMetricsServer.enabled }}
{{- if .Values.actionsMetricsServer.secret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.actionsMetricsServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.actionsMetricsServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_id }}
github_app_id: {{ .Values.actionsMetricsServer.secret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_installation_id }}
github_app_installation_id: {{ .Values.actionsMetricsServer.secret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_app_private_key }}
github_app_private_key: {{ .Values.actionsMetricsServer.secret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.actionsMetricsServer.secret.github_token }}
github_token: {{ .Values.actionsMetricsServer.secret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}
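
A minimal values sketch that makes this Secret template render (the token is a placeholder; whether you also set the `github_app_*` or `github_token` keys depends on your auth mode):

```yaml
actionsMetricsServer:
  enabled: true
  secret:
    enabled: true
    create: true
    github_webhook_secret_token: "<shared webhook secret>"  # placeholder
```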


@@ -1,26 +0,0 @@
{{- if .Values.actionsMetricsServer.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- if .Values.actionsMetricsServer.service.annotations }}
annotations:
{{ toYaml .Values.actionsMetricsServer.service.annotations | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.actionsMetricsServer.service.type }}
ports:
{{ range $_, $port := .Values.actionsMetricsServer.service.ports -}}
- {{ $port | toYaml | nindent 6 }}
{{- end }}
{{- if .Values.metrics.serviceMonitor }}
- name: metrics-port
port: {{ .Values.metrics.port }}
targetPort: metrics-port
{{- end }}
selector:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -1,15 +0,0 @@
{{- if .Values.actionsMetricsServer.enabled -}}
{{- if .Values.actionsMetricsServer.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller-actions-metrics-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.actionsMetricsServer.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,25 +0,0 @@
{{- if and .Values.actionsMetricsServer.enabled .Values.actionsMetrics.serviceMonitor }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.actionsMetricsServer.serviceMonitorLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "actions-runner-controller-actions-metrics-server.serviceMonitorName" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics
port: metrics-port
{{- if .Values.actionsMetrics.proxy.enabled }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
scheme: https
tlsConfig:
insecureSkipVerify: true
{{- end }}
selector:
matchLabels:
{{- include "actions-runner-controller-actions-metrics-server.selectorLabels" . | nindent 6 }}
{{- end }}


@@ -1,4 +1,3 @@
{{- if .Values.metrics.proxy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@@ -12,4 +11,3 @@ rules:
resources:
- subjectaccessreviews
verbs: ["create"]
{{- end }}


@@ -1,4 +1,3 @@
{{- if .Values.metrics.proxy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@@ -11,4 +10,3 @@ subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

Some files were not shown because too many files have changed in this diff.