Compare commits

..

47 Commits

Author SHA1 Message Date
Yusuke Kuoka
faaca10fba Rename Runner.Spec.dockerWithinRunnerContainer to docker"d"WithinRunnerContainer (#134)
* Rename Runner.Spec.dockerWithinRunnerContainer to dockerdWithinRunnerContainer

Ref https://github.com/summerwind/actions-runner-controller/pull/126#issuecomment-712501790
2020-10-21 21:32:40 +09:00
Juho Saarinen
d16dfac0f8 Restart if pod ends up succeeded (#136)
Fixes #132
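A minimal sketch of the idea behind the fix, assuming a recent controller-runtime reconciler; the helper name and wiring here are illustrative, not this PR's actual code:

```
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// restartIfSucceeded deletes a runner pod that ran to completion so the
// controller recreates it, instead of leaving it parked in phase Succeeded.
func restartIfSucceeded(ctx context.Context, c client.Client, pod *corev1.Pod) (ctrl.Result, error) {
	if pod.Status.Phase != corev1.PodSucceeded {
		return ctrl.Result{}, nil
	}
	if err := c.Delete(ctx, pod); err != nil {
		return ctrl.Result{}, err
	}
	// Requeue so the next reconciliation creates a fresh runner pod.
	return ctrl.Result{Requeue: true}, nil
}
```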
2020-10-21 21:32:26 +09:00
Juho Saarinen
af483d83da Possibility to define resources for DIND container (#125)
Ref #119
2020-10-21 10:26:19 +09:00
Juho Saarinen
92920926fe Configurable "runner and DinD in a single container" (#126) 2020-10-20 08:48:28 +09:00
Brendan Galloway
7d0bfb77e3 Inject Env Vars into Runner defined Container Spec (#127)
The runner token is now injected into the `runner` container defined within Runner.Spec.Containers[]
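A minimal sketch of what that injection amounts to; the helper and the `RUNNER_TOKEN` variable name are assumptions for illustration, not necessarily this PR's exact code:

```
package controllers

import corev1 "k8s.io/api/core/v1"

// injectRunnerToken appends the registration token to the user-defined
// "runner" container, leaving any other containers in the spec untouched.
// RUNNER_TOKEN is assumed here as the variable the runner entrypoint reads.
func injectRunnerToken(containers []corev1.Container, token string) {
	for i := range containers {
		if containers[i].Name == "runner" {
			containers[i].Env = append(containers[i].Env,
				corev1.EnvVar{Name: "RUNNER_TOKEN", Value: token})
		}
	}
}
```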
2020-10-20 08:43:53 +09:00
Brendan Galloway
c4074130e8 Patch additional protocol instances during manifest generation (#129)
Fixes #128
2020-10-19 09:57:53 +09:00
Yusuke Kuoka
be2e61f209 Add "alternatives" section to README (#124)
Grow together :)
2020-10-15 08:40:02 +09:00
Juho Saarinen
da818a898a Manifests valid for K8s 1.18 and 1.19 (#123)
Fixes #113
Fixes #116
2020-10-15 08:39:46 +09:00
Jesse Newland
2d250d5e06 Fix release build (#122) 2020-10-14 08:43:14 +09:00
Jesse Newland
231cde1531 Update build-runner workflow to be compatible with forks, fix image push (#117)
Partly reverts and enhances #115

This is a follow-up to #115 that replaces the hardcoded `summerwind` portion of the image name with `${{ github.repository_owner }}` to enable contributors to test the image-pushing behavior, and fixes image building by conditionally passing `--push` to the build step based on the event that triggered the workflow.

After setting the `DOCKER_ACCESS_TOKEN` Secret on my fork of this repository, I was able to use this updated workflow to [build and push](https://github.com/urcomputeringpal/actions-runner-controller/runs/1242793758?check_suite_focus=true) a [set of images](https://hub.docker.com/r/urcomputeringpal/actions-runner/tags) and confirm their functionality. I imagine this will be useful to future contributors who wish to help with the chore of keeping up with https://github.com/actions/runner/releases.
2020-10-13 21:14:36 +09:00
Hayden Fuss
c986c5553d Fixing Docker Build and Push for Runner Image (#115)
A new image tag for the runner stopped being published on master merges due to changes in #86.

This fixes that in the following ways:
- Tests the GH workflow on PRs w/o pushing the images
- Runner and Docker version are moved from Makefile to GH Actions workflow and are passed in as build args
- The GHA workflow runs on PRs when the workflow file itself changes (i.e. a version bump) or the runner Docker source changes (excluding the Makefile, since that's just for local dev)
- Images are pushed on push (i.e. a merge)
2020-10-09 09:16:24 +09:00
Harry Gogonis
f12bb76fd1 Update runner to v2.273.5 (#111) 2020-10-08 09:02:01 +09:00
Dominic LoBue
a63860029a Prefer autoscaling based on jobs rather than workflows if available (#114)
Adds the ability to autoscale on jobs in addition to workflows. We fall back to using workflow metrics if job details are not present.

Resolves #89
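A simplified sketch of the fallback logic described above; the names and shapes are illustrative, not this PR's exact code:

```
package controllers

// demandForRun returns how many runners a single workflow run calls for.
// When job-level details are available we count its queued and in-progress
// jobs; otherwise we fall back to counting the run itself as one unit.
func demandForRun(jobStatuses []string, haveJobDetails bool) int {
	if !haveJobDetails {
		return 1
	}
	demand := 0
	for _, s := range jobStatuses {
		switch s {
		case "queued", "in_progress":
			demand++
		}
	}
	return demand
}
```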
2020-10-08 09:00:44 +09:00
Yusuke Kuoka
1bc6809c1b Merge pull request #110 from summerwind/ensure-controller-gen-code-manifests-sync
Ensure controller-gen is up-to-date and the code and the manifests are in-sync
2020-10-06 09:44:25 +09:00
Yusuke Kuoka
2e7b77321d Verify manifests on pull request build 2020-10-06 09:30:25 +09:00
Yusuke Kuoka
1e466ad3df Ensure controller-gen is up-to-date and the code and the manifests are in-sync
Follow-up for #95, which added the /finalizers subresource permission, and #103, which upgraded controller-gen from 0.2.4 to 0.3.0
2020-10-06 09:23:03 +09:00
MIℂHΛΞL FѲRИΛRѲ
a309eb1687 Initial multi-arch image commit (#86)
Adding multi-arch image support for `arm64` and `amd64`. This uses Docker's new `buildx` feature; to enable further architectures, more work will be required to update the `runner/Dockerfile` to pull architecture-specific releases.

The Makefile targets really should only be used for local testing and not for release; additional work to appropriately tag the release images may need to be added, but for now I've not added that logic.

Fixes: #86 

Signed-off-by: Michael Fornaro <20387402+xUnholy@users.noreply.github.com>
2020-10-05 09:26:46 +09:00
Tomoaki Nakagawa
e8a7733ee7 Change api version of cert manager (#94)
* change apiVersion of cert-manager

* change apiVersion in kustomization.yaml
2020-10-05 09:13:10 +09:00
Hayden Fuss
729f5fde81 Allowing access to finalizers for all managed resources (#95) 2020-10-05 09:12:01 +09:00
Helder Moreira
7a2fa7fbce runner-controller: do not delete runner if it is busy (#103)
Currently, after refreshing the token, the controller re-creates the runner with the new token. This results in jobs being interrupted. This PR makes sure the pod is not restarted if it is busy.

Closes #74
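The gist of the change, as a hedged sketch; the `Runner` shape and helper are illustrative stand-ins for the controller's actual types:

```
package controllers

import "fmt"

// Runner is a trimmed-down stand-in for a runner registered with GitHub.
type Runner struct {
	Name string
	Busy bool
}

// okToRecreatePod reports whether a runner pod may be deleted to pick up a
// refreshed registration token: a runner that is mid-job must be left alone.
func okToRecreatePod(registered []Runner, name string) (bool, error) {
	for _, r := range registered {
		if r.Name == name {
			return !r.Busy, nil
		}
	}
	return false, fmt.Errorf("runner %q is not (yet) registered", name)
}
```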
2020-10-05 09:06:37 +09:00
Dominic LoBue
7b5e62e266 Add gh workflow to run tests on PRs (#108) 2020-10-05 09:01:44 +09:00
John Wiebalk
acb1700b7c Fix a couple typos in readme (#107) 2020-10-05 08:59:34 +09:00
Yury Tsarev
b79ea980b8 Use self update ready entrypoint (#99)
* Use self update ready entrypoint

* Add --once support for runsvc.sh

For testing, run `cd runner; NAME=$DOCKER_USER/actions-runner TAG=dev make docker-build docker-push` and
`kubectl apply -f release/actions-runner-controller.yaml`,
then update the runner image (not the controller image) by setting e.g. `Runner.Spec.Image` to `$DOCKER_USER/actions-runner:$TAG`.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2020-10-05 08:58:20 +09:00
Moto Ishizawa
b1ba5bf0e8 Merge pull request #101 from summerwind/runner-v2.273.4
Update runner to v2.273.4
2020-09-20 13:26:24 +09:00
Moto Ishizawa
7a25a8962b Update runner to v2.273.4 2020-09-20 13:24:55 +09:00
Moto Ishizawa
9e61a78c62 Merge pull request #93 from summerwind/runner-v2.273.1
Update runner to v2.273.1
2020-09-09 06:46:42 +09:00
Moto Ishizawa
0179abfee5 Update runner to v2.273.1 2020-09-09 06:45:23 +09:00
Moto Ishizawa
c7b560b8cb Merge pull request #88 from summerwind/runner-v2.273.0
Update runner to v2.273.0
2020-08-20 20:21:24 +09:00
Moto Ishizawa
0cc499d77b Update runner to v2.273.0 2020-08-20 20:19:57 +09:00
Moto Ishizawa
35caf436d4 Merge pull request #82 from summerwind/add-hra-crd-to-kustomization
Do include currently missing HRA CRD in the released manifests
2020-08-05 21:35:57 +09:00
Yusuke Kuoka
a136714723 Do include currently missing HRA CRD in the released manifests
The standard installation procedure explained in https://github.com/summerwind/actions-runner-controller#installation has been broken since v0.7.0. This is because I missed adding the CRD to the kustomization.yaml that is used for kustomize-based deployments and for generating the released manifests. This fixes that.
2020-08-05 08:38:49 +09:00
Moto Ishizawa
fde8df608b Merge pull request #81 from summerwind/add-scaling-down-integration-test
Add scaling-down scenario to integration test
2020-08-04 19:22:17 +09:00
Yusuke Kuoka
4733edc20d Add scaling-down scenario to integration test 2020-08-02 16:10:01 +09:00
Moto Ishizawa
3818e584ec Merge pull request #76 from summerwind/fix-crash-on-startup
Fix crash on startup after the HRDA addition
2020-08-02 13:10:56 +09:00
Yusuke Kuoka
50487bbb54 Fix the HRA controller name 2020-08-02 10:38:15 +09:00
Yusuke Kuoka
e2164f9946 Fix integration test bugs and do verify scaling out 2020-08-02 10:34:58 +09:00
Moto Ishizawa
bdc1279e9e Merge pull request #80 from summerwind/runner-v2.272.0
Update runner to v2.272.0
2020-08-01 17:53:37 +09:00
Moto Ishizawa
3223480bc0 Update docker to v19.03.12 2020-08-01 17:28:40 +09:00
Moto Ishizawa
e642632a50 Update runner to v2.272.0 2020-08-01 17:28:16 +09:00
Yusuke Kuoka
3c3077a11c Fix crash on startup after the HRDA addition
This is a follow-up for #66.

The reconciler for the new HorizontalRunnerDeploymentAutoscaler had a terrible flaw that caused the controller to fail on launch with an error like:

```
indexer conflict: map[field:.metadata.controller:{}]
```

This fixes that, while adding `integration_test.go` to verify it's actually fixed and to prevent regression in the future.
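For context, a minimal sketch of the failure mode, assuming a recent controller-runtime where `IndexField` takes `(ctx, object, field, extractor)`; the type and key here are illustrative:

```
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const ownerKey = ".metadata.controller"

func setupIndexes(ctx context.Context, mgr ctrl.Manager) error {
	// The first registration of this (type, key) pair succeeds.
	return mgr.GetFieldIndexer().IndexField(ctx, &corev1.Pod{}, ownerKey,
		func(obj client.Object) []string {
			owner := metav1.GetControllerOf(obj)
			if owner == nil {
				return nil
			}
			return []string{owner.Name}
		})
	// A second IndexField call for the same type and key, e.g. from another
	// reconciler's SetupWithManager, fails at startup with
	// "indexer conflict: map[field:.metadata.controller:{}]"; each
	// (type, key) pair must be registered exactly once.
}
```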
2020-07-29 21:20:46 +09:00
Moto Ishizawa
e10637ce35 Merge pull request #66 from summerwind/org-runner-autoscale
feat: Organizational RunnerDeployment Autoscaling
2020-07-28 19:17:18 +09:00
Yusuke Kuoka
ae30648985 feat: Use HorizontalRunnerAutoscaler for autoscaling 2020-07-27 20:33:44 +09:00
Moto Ishizawa
be0850a582 Merge pull request #73 from summerwind/runner-v2.267.1
Update runner to v2.267.1
2020-07-13 21:54:51 +09:00
Moto Ishizawa
a995597111 Update runner to v2.267.1 2020-07-13 21:52:30 +09:00
Moto Ishizawa
ba8f61141b Merge pull request #71 from dvdliao/respect-imagepullpolicy
add config to respect image pull policy
2020-07-13 21:49:58 +09:00
David Liao
c0914743b0 add config to respect image pull policy 2020-07-08 23:53:52 -07:00
Yusuke Kuoka
eca6917c6a feat: Organizational RunnerDeployment Autoscaling
Enhances #57 to add support for organizational runners.

As GitHub Actions does not have an appropriate API for this, this is the spec you need:

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: myrunners
spec:
  minReplicas: 1
  maxReplicas: 3
  autoscaling:
    metrics:
    - type: TotalNumberOfQueuedAndProgressingWorkflowRuns
      repositories:
      # Assumes that you have `github.com/myorg/myrepo1` repo
      - myrepo1
      - myrepo2
  template:
    spec:
      organization: myorg
```

It works by collecting "in_progress" and "queued" workflow runs for the repositories `myrepo1` and `myrepo2` to autoscale the number of replicas, assuming you have this organizational runner deployment only for those two repositories.

For example, if `myrepo1` had 1 `in_progress` and 2 `queued` workflow runs, and `myrepo2` had 4 `in_progress` and 8 `queued` workflow runs at the time of running the reconciliation loop on the runner deployment, it will scale replicas to 1 + 2 + 4 + 8 = 15.

Perhaps we'd do better to add a kind of "ratio" setting so that you can configure the controller to create e.g. 2x as many runners as demanded. But that's another story.

Ref #10
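The arithmetic above, as a small self-contained sketch; the clamping to the deployment's min/max bounds is implied by the spec, and the function is illustrative rather than the controller's actual code (with the example numbers the raw demand is 15, which the bounds would then clamp):

```
package controllers

// desiredReplicas sums queued and in-progress workflow runs across the
// configured repositories and clamps the result to the allowed range.
func desiredReplicas(perRepoDemand []int, minReplicas, maxReplicas int) int {
	total := 0
	for _, n := range perRepoDemand {
		total += n
	}
	if total < minReplicas {
		return minReplicas
	}
	if total > maxReplicas {
		return maxReplicas
	}
	return total
}
```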
2020-07-03 09:12:47 +09:00
45 changed files with 3835 additions and 17032 deletions

View File

@@ -1,22 +1,78 @@
 on:
+  pull_request:
+    branches:
+    - '**'
+    paths:
+    - 'runner/**'
+    - .github/workflows/build-runner.yml
   push:
     branches:
     - master
     paths:
-    - 'runner/**'
+    - runner/patched/*
+    - runner/Dockerfile
+    - runner/entrypoint.sh
+    - .github/workflows/build-runner.yml
 name: Runner
 jobs:
   build:
     runs-on: ubuntu-latest
-    name: Build runner
+    name: Build
+    env:
+      RUNNER_VERSION: 2.273.5
+      DOCKER_VERSION: 19.03.12
+      DOCKERHUB_USERNAME: ${{ github.repository_owner }}
     steps:
     - name: Checkout
       uses: actions/checkout@v2
-    - name: Build container image
-      run: make docker-build
-      working-directory: runner
-    - name: Docker Login
-      run: docker login -u summerwind -p ${{ secrets.DOCKER_ACCESS_TOKEN }}
-    - name: Push container image
-      run: make docker-push
-      working-directory: runner
+    - name: Set up Docker Buildx
+      id: buildx
+      uses: crazy-max/ghaction-docker-buildx@v1
+      with:
+        buildx-version: latest
+    - name: Build Container Image
+      working-directory: runner
+      if: ${{ github.event_name == 'pull_request' }}
+      run: |
+        docker buildx build \
+          --build-arg RUNNER_VERSION=${RUNNER_VERSION} \
+          --build-arg DOCKER_VERSION=${DOCKER_VERSION} \
+          --platform linux/amd64,linux/arm64 \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner:v${RUNNER_VERSION} \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner:latest \
+          -f Dockerfile .
+        docker buildx build \
+          --build-arg RUNNER_VERSION=${RUNNER_VERSION} \
+          --build-arg DOCKER_VERSION=${DOCKER_VERSION} \
+          --platform linux/amd64,linux/arm64 \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner-dind:v${RUNNER_VERSION} \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner-dind:latest \
+          -f Dockerfile.dindrunner .
+    - name: Login to GitHub Docker Registry
+      run: echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
+      if: ${{ github.event_name == 'push' }}
+      env:
+        DOCKERHUB_USERNAME: ${{ github.repository_owner }}
+        DOCKERHUB_PASSWORD: ${{ secrets.DOCKER_ACCESS_TOKEN }}
+    - name: Build and Push Container Image
+      working-directory: runner
+      if: ${{ github.event_name == 'push' }}
+      run: |
+        docker buildx build \
+          --build-arg RUNNER_VERSION=${RUNNER_VERSION} \
+          --build-arg DOCKER_VERSION=${DOCKER_VERSION} \
+          --platform linux/amd64,linux/arm64 \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner:v${RUNNER_VERSION} \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner:latest \
+          -f Dockerfile . --push
+        docker buildx build \
+          --build-arg RUNNER_VERSION=${RUNNER_VERSION} \
+          --build-arg DOCKER_VERSION=${DOCKER_VERSION} \
+          --platform linux/amd64,linux/arm64 \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner-dind:v${RUNNER_VERSION} \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner-dind:latest \
+          -f Dockerfile.dindrunner . --push

View File

@@ -7,27 +7,45 @@ jobs:
     runs-on: ubuntu-latest
     name: Release
     steps:
     - name: Checkout
       uses: actions/checkout@v2
     - name: Install tools
       run: |
         curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
         tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
         sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
         curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash
         sudo mv kustomize /usr/local/bin
         curl -L -O https://github.com/tcnksm/ghr/releases/download/v0.13.0/ghr_v0.13.0_linux_amd64.tar.gz
         tar zxvf ghr_v0.13.0_linux_amd64.tar.gz
         sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
     - name: Set version
       run: echo "::set-env name=VERSION::$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')"
     - name: Upload artifacts
       env:
         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
       run: make github-release
-    - name: Build container image
-      run: make docker-build
-    - name: Docker Login
-      run: docker login -u summerwind -p ${{ secrets.DOCKER_ACCESS_TOKEN }}
-    - name: Push container image
-      run: make docker-push
+    - name: Set up Docker Buildx
+      id: buildx
+      uses: crazy-max/ghaction-docker-buildx@v1
+      with:
+        buildx-version: latest
+    - name: Login to GitHub Docker Registry
+      run: echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
+      env:
+        DOCKERHUB_USERNAME: ${{ github.repository_owner }}
+        DOCKERHUB_PASSWORD: ${{ secrets.DOCKER_ACCESS_TOKEN }}
+    - name: Build Container Image
+      env:
+        DOCKERHUB_USERNAME: ${{ github.repository_owner }}
+      run: |
+        docker buildx build \
+          --platform linux/amd64,linux/arm64 \
+          --tag ${DOCKERHUB_USERNAME}/actions-runner-controller:${{ env.VERSION }} \
+          -f Dockerfile . --push

View File

@@ -1,15 +1,16 @@
+name: CI
 on:
-  push:
+  pull_request:
     branches:
     - master
     paths-ignore:
     - 'runner/**'
+    - '.github/**'
 jobs:
-  build:
+  test:
     runs-on: ubuntu-latest
-    name: Build
+    name: Test
     steps:
     - name: Checkout
       uses: actions/checkout@v2
@@ -20,9 +21,7 @@ jobs:
         sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
     - name: Run tests
       run: make test
-    - name: Build container image
-      run: make docker-build
-    - name: Docker Login
-      run: docker login -u summerwind -p ${{ secrets.DOCKER_ACCESS_TOKEN }}
-    - name: Push container image
-      run: make docker-push
+    - name: Verify manifests are up-to-date
+      run: |
+        make manifests
+        git diff --exit-code

View File

@@ -1,28 +1,37 @@
 # Build the manager binary
 FROM golang:1.13 as builder
+ARG TARGETPLATFORM
 WORKDIR /workspace
+ENV GO111MODULE=on \
+    CGO_ENABLED=0
 # Copy the Go Modules manifests
-COPY go.mod go.mod
-COPY go.sum go.sum
+COPY go.mod go.sum ./
 # cache deps before building and copying source so that we don't need to re-download as much
 # and so that source changes don't invalidate our downloaded layer
 RUN go mod download
 # Copy the go source
-COPY main.go main.go
-COPY api/ api/
-COPY controllers/ controllers/
-COPY github/ github/
+COPY . .
 # Build
-RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go
+RUN export GOOS=$(echo ${TARGETPLATFORM} | cut -d / -f1) && \
+    export GOARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) && \
+    GOARM=$(echo ${TARGETPLATFORM} | cut -d / -f3 | cut -c2-) && \
+    go build -a -o manager main.go
 # Use distroless as minimal base image to package the manager binary
 # Refer to https://github.com/GoogleContainerTools/distroless for more details
 FROM gcr.io/distroless/static:nonroot
 WORKDIR /
 COPY --from=builder /workspace/manager .
 USER nonroot:nonroot
 ENTRYPOINT ["/manager"]

View File

@@ -1,5 +1,8 @@
 NAME ?= summerwind/actions-runner-controller
 VERSION ?= latest
+# From https://github.com/VictoriaMetrics/operator/pull/44
+YAML_DROP=$(YQ) delete --inplace
+YAML_DROP_PREFIX=spec.validation.openAPIV3Schema.properties.spec.properties
 
 # Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
 CRD_OPTIONS ?= "crd:trivialVersions=true"
@@ -11,6 +14,23 @@ else
 GOBIN=$(shell go env GOBIN)
 endif
 
+# default list of platforms for which multiarch image is built
+ifeq (${PLATFORMS}, )
+export PLATFORMS="linux/amd64,linux/arm64"
+endif
+
+# if IMG_RESULT is unspecified, by default the image will be pushed to registry
+ifeq (${IMG_RESULT}, load)
+export PUSH_ARG="--load"
+# if load is specified, image will be built only for the build machine architecture.
+export PLATFORMS="local"
+else ifeq (${IMG_RESULT}, cache)
+# if cache is specified, image will only be available in the build cache, it won't be pushed or loaded
+# therefore no PUSH_ARG will be specified
+else
+export PUSH_ARG="--push"
+endif
+
 all: manager
 
 # Run tests
@@ -39,7 +59,9 @@ deploy: manifests
 	kustomize build config/default | kubectl apply -f -
 
 # Generate manifests e.g. CRD, RBAC etc.
-manifests: controller-gen
+manifests: manifests-118 fix118
+
+manifests-118: controller-gen
 	$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
 
 # Run go fmt against code
@@ -50,6 +72,22 @@ fmt:
 vet:
 	go vet ./...
 
+# workaround for CRD issue with k8s 1.18 & controller-gen
+# ref: https://github.com/kubernetes/kubernetes/issues/91395
+fix118: yq
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).containers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).initContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).sidecarContainers.items.properties
+	$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).ephemeralContainers.items.properties
+
 # Generate code
 generate: controller-gen
 	$(CONTROLLER_GEN) object:headerFile=./hack/boilerplate.go.txt paths="./..."
@@ -62,6 +100,18 @@ docker-build: test
 docker-push:
 	docker push ${NAME}:${VERSION}
 
+docker-buildx:
+	export DOCKER_CLI_EXPERIMENTAL=enabled
+	@if ! docker buildx ls | grep -q container-builder; then\
+		docker buildx create --platform ${PLATFORMS} --name container-builder --use;\
+	fi
+	docker buildx build --platform ${PLATFORMS} \
+		--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
+		--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
+		-t "${NAME}:${VERSION}" \
+		-f Dockerfile \
+		. ${PUSH_ARG}
+
 # Generate the release manifest file
 release: manifests
 	cd config/manager && kustomize edit set image controller=${NAME}:${VERSION}
@@ -76,15 +126,34 @@ github-release: release
 # download controller-gen if necessary
 controller-gen:
 ifeq (, $(shell which controller-gen))
+ifeq (, $(wildcard $(GOBIN)/controller-gen))
 	@{ \
 	set -e ;\
 	CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
 	cd $$CONTROLLER_GEN_TMP_DIR ;\
 	go mod init tmp ;\
-	go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.2.4 ;\
+	go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.3.0 ;\
 	rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
 	}
+endif
 CONTROLLER_GEN=$(GOBIN)/controller-gen
 else
 CONTROLLER_GEN=$(shell which controller-gen)
 endif
+
+# find or download yq
+# download yq if necessary
+# Use always go-version to get consistent line wraps etc.
+yq:
+ifeq (, $(wildcard $(GOBIN)/yq))
+	echo "Downloading yq"
+	@{ \
+	set -e ;\
+	YQ_TMP_DIR=$$(mktemp -d) ;\
+	cd $$YQ_TMP_DIR ;\
+	go mod init tmp ;\
+	go get github.com/mikefarah/yq/v3@3.4.0 ;\
+	rm -rf $$YQ_TMP_DIR ;\
+	}
+endif
+YQ=$(GOBIN)/yq

View File

@@ -22,7 +22,7 @@ $ kubectl apply -f https://github.com/summerwind/actions-runner-controller/relea
 ## Setting up authentication with GitHub API
 
-There are two ways for actions-runner-controller to authenticate with the the GitHub API:
+There are two ways for actions-runner-controller to authenticate with the GitHub API:
 
 1. Using GitHub App.
 2. Using Personal Access Token.
@@ -135,7 +135,7 @@ The runner you created has been registered to your repository.
 
 <img width="756" alt="Actions tab in your repository settings" src="https://user-images.githubusercontent.com/230145/73618667-8cbf9700-466c-11ea-80b6-c67e6d3f70e7.png">
 
-Now your can use your self-hosted runner. See the [official documentation](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-self-hosted-runners-in-a-workflow) on how to run a job with it.
+Now you can use your self-hosted runner. See the [official documentation](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-self-hosted-runners-in-a-workflow) on how to run a job with it.
 
 ### Organization Runners
@@ -191,7 +191,7 @@ example-runnerdeploy2475ht2qbr mumoshu/actions-runner-controller-ci Running
 
 #### Autoscaling
 
-`RunnerDeployment` can scale number of runners between `minReplicas` and `maxReplicas` fields, depending on pending workflow runs.
+`RunnerDeployment` can scale the number of runners between `minReplicas` and `maxReplicas` fields, depending on pending workflow runs.
 
 In the below example, `actions-runner` checks for pending workflow runs for each sync period, and scale to e.g. 3 if there're 3 pending jobs at sync time.
@@ -199,13 +199,25 @@ In the below example, `actions-runner` checks for pending workflow runs for each
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: RunnerDeployment
 metadata:
-  name: summerwind-actions-runner-controller
+  name: example-runner-deployment
 spec:
-  minReplicas: 1
-  maxReplicas: 3
   template:
     spec:
       repository: summerwind/actions-runner-controller
+---
+apiVersion: actions.summerwind.dev/v1alpha1
+kind: HorizontalRunnerAutoscaler
+metadata:
+  name: example-runner-deployment-autoscaler
+spec:
+  scaleTargetRef:
+    name: example-runner-deployment
+  minReplicas: 1
+  maxReplicas: 3
+  metrics:
+  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
+    repositoryNames:
+    - summerwind/actions-runner-controller
 ```
 
 Please also note that the sync period is set to 10 minutes by default and it's configurable via `--sync-period` flag.
@@ -217,15 +229,48 @@ By default, it doesn't scale down until the grace period of 10 minutes passes af
 apiVersion: actions.summerwind.dev/v1alpha1
 kind: RunnerDeployment
 metadata:
-  name: summerwind-actions-runner-controller
+  name: example-runner-deployment
 spec:
-  minReplicas: 1
-  maxReplicas: 3
-  scaleDownDelaySecondsAfterScaleUp: 1m
   template:
     spec:
      repository: summerwind/actions-runner-controller
+---
+apiVersion: actions.summerwind.dev/v1alpha1
+kind: HorizontalRunnerAutoscaler
+metadata:
+  name: example-runner-deployment-autoscaler
+spec:
+  scaleTargetRef:
+    name: example-runner-deployment
+  minReplicas: 1
+  maxReplicas: 3
+  scaleDownDelaySecondsAfterScaleOut: 60
+  metrics:
+  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
+    repositoryNames:
+    - summerwind/actions-runner-controller
 ```
 
+## Runner with DinD
+
+When using default runner, runner pod starts up 2 containers: runner and DinD (Docker-in-Docker). This might create issues if there's `LimitRange` set to namespace.
+
+```yaml
+# dindrunnerdeployment.yaml
+apiVersion: actions.summerwind.dev/v1alpha1
+kind: RunnerDeployment
+metadata:
+  name: example-dindrunnerdeploy
+spec:
+  replicas: 2
+  template:
+    spec:
+      image: summerwind/actions-runner-dind
+      dockerdWithinRunnerContainer: true
+      repository: mumoshu/actions-runner-controller-ci
+      env: []
+```
+
+This also helps with resources, as you don't need to give resources separately to docker and runner.
+
 ## Additional tweaks
@@ -250,8 +295,8 @@ spec:
     operator: Exists
 
   repository: mumoshu/actions-runner-controller-ci
-  ImagePullPolicy: Always
   image: custom-image/actions-runner:latest
+  imagePullPolicy: Always
   resources:
     limits:
       cpu: "4.0"
@@ -259,6 +304,17 @@ spec:
     requests:
       cpu: "2.0"
      memory: "4Gi"
+  # If set to true, runner pod container only 1 container that's expected to be able to run docker, too.
+  # image summerwind/actions-runner-dind or custom one should be used with true -value
+  dockerdWithinRunnerContainer: false
+  # Valid if dockerdWithinRunnerContainer is not true
+  dockerdContainerResources:
+    limits:
+      cpu: "4.0"
+      memory: "8Gi"
+    requests:
+      cpu: "2.0"
+      memory: "4Gi"
   sidecarContainers:
   - name: mysql
     image: mysql:5.7
@@ -304,9 +360,9 @@ jobs:
     runs-on: custom-runner
 ```
 
-Note that if you specify `self-hosted` in your worlflow, then this will run your job on _any_ self-hosted runner, regardless of the labels that they have.
+Note that if you specify `self-hosted` in your workflow, then this will run your job on _any_ self-hosted runner, regardless of the labels that they have.
 
-## Softeware installed in the runner image
+## Software installed in the runner image
 
 The GitHub hosted runners include a large amount of pre-installed software packages. For Ubuntu 18.04, this list can be found at https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md
@@ -317,7 +373,7 @@ The container image is based on Ubuntu 18.04, but it does not contain all of the
 * docker
 * build-essentials
 
 The virtual environments from GitHub contain a lot more software packages (different versions of Java, Node.js, Golang, .NET, etc) which are not provided in the runner image. Most of these have dedicated setup actions which allow the tools to be installed on-demand in a workflow, for example: `actions/setup-java` or `actions/setup-node`
 
 If there is a need to include packages in the runner image for which there is no setup action, then this can be achieved by building a custom container image for the runner. The easiest way is to start with the `summerwind/actions-runner` image and installing the extra dependencies directly in the docker image:
@@ -340,3 +396,15 @@ spec:
       repository: summerwind/actions-runner-controller
       image: YOUR_CUSTOM_DOCKER_IMAGE
 ```
+
+# Alternatives
+
+The following is a list of alternative solutions that may better fit you depending on your use-case:
+
+- https://github.com/evryfs/github-actions-runner-operator/
+
+Although the situation can change over time, as of writing this sentence, the benefits of using `actions-runner-controller` over the alternatives are:
+
+- `actions-runner-controller` has the ability to autoscale runners based on number of pending/progressing jobs (#99)
+- `actions-runner-controller` is able to gracefully stop runners (#103)
+- `actions-runner-controller` has ARM support

View File

@@ -0,0 +1,102 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// HorizontalRunnerAutoscalerSpec defines the desired state of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerSpec struct {
// ScaleTargetRef sis the reference to scaled resource like RunnerDeployment
ScaleTargetRef ScaleTargetRef `json:"scaleTargetRef,omitempty"`
// MinReplicas is the minimum number of replicas the deployment is allowed to scale
// +optional
MinReplicas *int `json:"minReplicas,omitempty"`
// MinReplicas is the maximum number of replicas the deployment is allowed to scale
// +optional
MaxReplicas *int `json:"maxReplicas,omitempty"`
// ScaleDownDelaySecondsAfterScaleUp is the approximate delay for a scale down followed by a scale up
// Used to prevent flapping (down->up->down->... loop)
// +optional
ScaleDownDelaySecondsAfterScaleUp *int `json:"scaleDownDelaySecondsAfterScaleOut,omitempty"`
// Metrics is the collection of various metric targets to calculate desired number of runners
// +optional
Metrics []MetricSpec `json:"metrics,omitempty"`
}
type ScaleTargetRef struct {
Name string `json:"name,omitempty"`
}
type MetricSpec struct {
// Type is the type of metric to be used for autoscaling.
// The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
Type string `json:"type,omitempty"`
// RepositoryNames is the list of repository names to be used for calculating the metric.
// For example, a repository name is the REPO part of `github.com/USER/REPO`.
// +optional
RepositoryNames []string `json:"repositoryNames,omitempty"`
}
type HorizontalRunnerAutoscalerStatus struct {
// ObservedGeneration is the most recent generation observed for the target. It corresponds to e.g.
// RunnerDeployment's generation, which is updated on mutation by the API Server.
// +optional
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// DesiredReplicas is the total number of desired, non-terminated and latest pods to be set for the primary RunnerSet
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional
DesiredReplicas *int `json:"desiredReplicas,omitempty"`
// +optional
LastSuccessfulScaleOutTime *metav1.Time `json:"lastSuccessfulScaleOutTime,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.minReplicas",name=Min,type=number
// +kubebuilder:printcolumn:JSONPath=".spec.maxReplicas",name=Max,type=number
// +kubebuilder:printcolumn:JSONPath=".status.desiredReplicas",name=Desired,type=number
// HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler API
type HorizontalRunnerAutoscaler struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec HorizontalRunnerAutoscalerSpec `json:"spec,omitempty"`
Status HorizontalRunnerAutoscalerStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// HorizontalRunnerAutoscalerList contains a list of HorizontalRunnerAutoscaler
type HorizontalRunnerAutoscalerList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []HorizontalRunnerAutoscaler `json:"items"`
}
func init() {
SchemeBuilder.Register(&HorizontalRunnerAutoscaler{}, &HorizontalRunnerAutoscalerList{})
}

View File

@@ -39,6 +39,8 @@ type RunnerSpec struct {
 	// +optional
 	Containers []corev1.Container `json:"containers,omitempty"`
 	// +optional
+	DockerdContainerResources corev1.ResourceRequirements `json:"dockerdContainerResources,omitempty"`
+	// +optional
 	Resources corev1.ResourceRequirements `json:"resources,omitempty"`
 	// +optional
 	VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
@@ -48,6 +50,8 @@ type RunnerSpec struct {
 	// +optional
 	Image string `json:"image"`
 	// +optional
+	ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
+	// +optional
 	Env []corev1.EnvVar `json:"env,omitempty"`
 	// +optional
@@ -75,6 +79,8 @@ type RunnerSpec struct {
 	EphemeralContainers []corev1.EphemeralContainer `json:"ephemeralContainers,omitempty"`
 	// +optional
 	TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
+	// +optional
+	DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
 }
 
 // ValidateRepository validates repository field.

View File

@@ -20,24 +20,15 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
) )
const (
AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns = "TotalNumberOfQueuedAndInProgressWorkflowRuns"
)
// RunnerReplicaSetSpec defines the desired state of RunnerDeployment // RunnerReplicaSetSpec defines the desired state of RunnerDeployment
type RunnerDeploymentSpec struct { type RunnerDeploymentSpec struct {
// +optional // +optional
Replicas *int `json:"replicas,omitempty"` Replicas *int `json:"replicas,omitempty"`
// MinReplicas is the minimum number of replicas the deployment is allowed to scale
// +optional
MinReplicas *int `json:"minReplicas,omitempty"`
// MinReplicas is the maximum number of replicas the deployment is allowed to scale
// +optional
MaxReplicas *int `json:"maxReplicas,omitempty"`
// ScaleDownDelaySecondsAfterScaleUp is the approximate delay for a scale down followed by a scale up
// Used to prevent flapping (down->up->down->... loop)
// +optional
ScaleDownDelaySecondsAfterScaleUp *int `json:"scaleDownDelaySecondsAfterScaleOut,omitempty"`
Template RunnerTemplate `json:"template"` Template RunnerTemplate `json:"template"`
} }
@@ -49,9 +40,6 @@ type RunnerDeploymentStatus struct {
// This doesn't include outdated pods while upgrading the deployment and replacing the runnerset. // This doesn't include outdated pods while upgrading the deployment and replacing the runnerset.
// +optional // +optional
Replicas *int `json:"desiredReplicas,omitempty"` Replicas *int `json:"desiredReplicas,omitempty"`
// +optional
LastSuccessfulScaleOutTime *metav1.Time `json:"lastSuccessfulScaleOutTime,omitempty"`
} }
// +kubebuilder:object:root=true // +kubebuilder:object:root=true

View File

@@ -25,6 +25,147 @@ import (
"k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime"
) )
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HorizontalRunnerAutoscaler) DeepCopyInto(out *HorizontalRunnerAutoscaler) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscaler.
func (in *HorizontalRunnerAutoscaler) DeepCopy() *HorizontalRunnerAutoscaler {
if in == nil {
return nil
}
out := new(HorizontalRunnerAutoscaler)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *HorizontalRunnerAutoscaler) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HorizontalRunnerAutoscalerList) DeepCopyInto(out *HorizontalRunnerAutoscalerList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]HorizontalRunnerAutoscaler, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerList.
func (in *HorizontalRunnerAutoscalerList) DeepCopy() *HorizontalRunnerAutoscalerList {
if in == nil {
return nil
}
out := new(HorizontalRunnerAutoscalerList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *HorizontalRunnerAutoscalerList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HorizontalRunnerAutoscalerSpec) DeepCopyInto(out *HorizontalRunnerAutoscalerSpec) {
*out = *in
out.ScaleTargetRef = in.ScaleTargetRef
if in.MinReplicas != nil {
in, out := &in.MinReplicas, &out.MinReplicas
*out = new(int)
**out = **in
}
if in.MaxReplicas != nil {
in, out := &in.MaxReplicas, &out.MaxReplicas
*out = new(int)
**out = **in
}
if in.ScaleDownDelaySecondsAfterScaleUp != nil {
in, out := &in.ScaleDownDelaySecondsAfterScaleUp, &out.ScaleDownDelaySecondsAfterScaleUp
*out = new(int)
**out = **in
}
if in.Metrics != nil {
in, out := &in.Metrics, &out.Metrics
*out = make([]MetricSpec, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerSpec.
func (in *HorizontalRunnerAutoscalerSpec) DeepCopy() *HorizontalRunnerAutoscalerSpec {
if in == nil {
return nil
}
out := new(HorizontalRunnerAutoscalerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HorizontalRunnerAutoscalerStatus) DeepCopyInto(out *HorizontalRunnerAutoscalerStatus) {
*out = *in
if in.DesiredReplicas != nil {
in, out := &in.DesiredReplicas, &out.DesiredReplicas
*out = new(int)
**out = **in
}
if in.LastSuccessfulScaleOutTime != nil {
in, out := &in.LastSuccessfulScaleOutTime, &out.LastSuccessfulScaleOutTime
*out = (*in).DeepCopy()
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerStatus.
func (in *HorizontalRunnerAutoscalerStatus) DeepCopy() *HorizontalRunnerAutoscalerStatus {
if in == nil {
return nil
}
out := new(HorizontalRunnerAutoscalerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *MetricSpec) DeepCopyInto(out *MetricSpec) {
*out = *in
if in.RepositoryNames != nil {
in, out := &in.RepositoryNames, &out.RepositoryNames
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MetricSpec.
func (in *MetricSpec) DeepCopy() *MetricSpec {
if in == nil {
return nil
}
out := new(MetricSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Runner) DeepCopyInto(out *Runner) { func (in *Runner) DeepCopyInto(out *Runner) {
*out = *in *out = *in
@@ -119,21 +260,6 @@ func (in *RunnerDeploymentSpec) DeepCopyInto(out *RunnerDeploymentSpec) {
*out = new(int) *out = new(int)
**out = **in **out = **in
} }
if in.MinReplicas != nil {
in, out := &in.MinReplicas, &out.MinReplicas
*out = new(int)
**out = **in
}
if in.MaxReplicas != nil {
in, out := &in.MaxReplicas, &out.MaxReplicas
*out = new(int)
**out = **in
}
if in.ScaleDownDelaySecondsAfterScaleUp != nil {
in, out := &in.ScaleDownDelaySecondsAfterScaleUp, &out.ScaleDownDelaySecondsAfterScaleUp
*out = new(int)
**out = **in
}
in.Template.DeepCopyInto(&out.Template) in.Template.DeepCopyInto(&out.Template)
} }
@@ -155,10 +281,6 @@ func (in *RunnerDeploymentStatus) DeepCopyInto(out *RunnerDeploymentStatus) {
*out = new(int) *out = new(int)
**out = **in **out = **in
} }
if in.LastSuccessfulScaleOutTime != nil {
in, out := &in.LastSuccessfulScaleOutTime, &out.LastSuccessfulScaleOutTime
*out = (*in).DeepCopy()
}
} }
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerDeploymentStatus. // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerDeploymentStatus.
@@ -313,6 +435,7 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
(*in)[i].DeepCopyInto(&(*out)[i]) (*in)[i].DeepCopyInto(&(*out)[i])
} }
} }
in.DockerdContainerResources.DeepCopyInto(&out.DockerdContainerResources)
in.Resources.DeepCopyInto(&out.Resources) in.Resources.DeepCopyInto(&out.Resources)
if in.VolumeMounts != nil { if in.VolumeMounts != nil {
in, out := &in.VolumeMounts, &out.VolumeMounts in, out := &in.VolumeMounts, &out.VolumeMounts
@@ -402,6 +525,11 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
*out = new(int64) *out = new(int64)
**out = **in **out = **in
} }
if in.DockerdWithinRunnerContainer != nil {
in, out := &in.DockerdWithinRunnerContainer, &out.DockerdWithinRunnerContainer
*out = new(bool)
**out = **in
}
} }
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSpec. // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSpec.
@@ -467,3 +595,18 @@ func (in *RunnerTemplate) DeepCopy() *RunnerTemplate {
in.DeepCopyInto(out) in.DeepCopyInto(out)
return out return out
} }
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleTargetRef) DeepCopyInto(out *ScaleTargetRef) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaleTargetRef.
func (in *ScaleTargetRef) DeepCopy() *ScaleTargetRef {
if in == nil {
return nil
}
out := new(ScaleTargetRef)
in.DeepCopyInto(out)
return out
}

View File

@@ -1,7 +1,7 @@
 # The following manifests contain a self-signed issuer CR and a certificate CR.
 # More document can be found at https://docs.cert-manager.io
 # WARNING: Targets CertManager 0.11 check https://docs.cert-manager.io/en/latest/tasks/upgrading/index.html for breaking changes
-apiVersion: cert-manager.io/v1alpha2
+apiVersion: cert-manager.io/v1
 kind: Issuer
 metadata:
   name: selfsigned-issuer
@@ -9,7 +9,7 @@ metadata:
 spec:
   selfSigned: {}
 ---
-apiVersion: cert-manager.io/v1alpha2
+apiVersion: cert-manager.io/v1
 kind: Certificate
 metadata:
   name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml

View File

@@ -0,0 +1,118 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: horizontalrunnerautoscalers.actions.summerwind.dev
spec:
additionalPrinterColumns:
- JSONPath: .spec.minReplicas
name: Min
type: number
- JSONPath: .spec.maxReplicas
name: Max
type: number
- JSONPath: .status.desiredReplicas
name: Desired
type: number
group: actions.summerwind.dev
names:
kind: HorizontalRunnerAutoscaler
listKind: HorizontalRunnerAutoscalerList
plural: horizontalrunnerautoscalers
singular: horizontalrunnerautoscaler
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of
HorizontalRunnerAutoscaler
properties:
maxReplicas:
description: MinReplicas is the maximum number of replicas the deployment
is allowed to scale
type: integer
metrics:
description: Metrics is the collection of various metric targets to
calculate desired number of runners
items:
properties:
repositoryNames:
description: RepositoryNames is the list of repository names to
be used for calculating the metric. For example, a repository
name is the REPO part of `github.com/USER/REPO`.
items:
type: string
type: array
type:
description: Type is the type of metric to be used for autoscaling.
The only supported Type is TotalNumberOfQueuedAndInProgressWorkflowRuns
type: string
type: object
type: array
minReplicas:
description: MinReplicas is the minimum number of replicas the deployment
is allowed to scale
type: integer
scaleDownDelaySecondsAfterScaleOut:
description: ScaleDownDelaySecondsAfterScaleUp is the approximate delay
for a scale down followed by a scale up Used to prevent flapping (down->up->down->...
loop)
type: integer
scaleTargetRef:
description: ScaleTargetRef sis the reference to scaled resource like
RunnerDeployment
properties:
name:
type: string
type: object
type: object
status:
properties:
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated
and latest pods to be set for the primary RunnerSet This doesn't include
outdated pods while upgrading the deployment and replacing the runnerset.
type: integer
lastSuccessfulScaleOutTime:
format: date-time
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed
for the target. It corresponds to e.g. RunnerDeployment's generation,
which is updated on mutation by the API Server.
format: int64
type: integer
type: object
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -5,6 +5,7 @@ resources:
 - bases/actions.summerwind.dev_runners.yaml
 - bases/actions.summerwind.dev_runnerreplicasets.yaml
 - bases/actions.summerwind.dev_runnerdeployments.yaml
+- bases/actions.summerwind.dev_horizontalrunnerautoscalers.yaml
 # +kubebuilder:scaffold:crdkustomizeresource
 
 patchesStrategicMerge:
View File

@@ -50,7 +50,7 @@ vars:
   objref:
     kind: Certificate
     group: cert-manager.io
-    version: v1alpha2
+    version: v1
     name: serving-cert # this name should match the one in certificate.yaml
   fieldref:
     fieldpath: metadata.namespace
@@ -58,7 +58,7 @@ vars:
   objref:
     kind: Certificate
     group: cert-manager.io
-    version: v1alpha2
+    version: v1
     name: serving-cert # this name should match the one in certificate.yaml
 - name: SERVICE_NAMESPACE # namespace of the service
   objref:

View File

@@ -6,6 +6,38 @@ metadata:
   creationTimestamp: null
   name: manager-role
 rules:
+- apiGroups:
+  - actions.summerwind.dev
+  resources:
+  - horizontalrunnerautoscalers
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
+  - watch
+- apiGroups:
+  - actions.summerwind.dev
+  resources:
+  - horizontalrunnerautoscalers/finalizers
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
+  - watch
+- apiGroups:
+  - actions.summerwind.dev
+  resources:
+  - horizontalrunnerautoscalers/status
+  verbs:
+  - get
+  - patch
+  - update
 - apiGroups:
   - actions.summerwind.dev
   resources:
@@ -18,6 +50,18 @@ rules:
   - patch
   - update
   - watch
+- apiGroups:
+  - actions.summerwind.dev
+  resources:
+  - runnerdeployments/finalizers
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
+  - watch
 - apiGroups:
   - actions.summerwind.dev
   resources:
@@ -38,6 +82,18 @@ rules:
   - patch
   - update
   - watch
+- apiGroups:
+  - actions.summerwind.dev
+  resources:
+  - runnerreplicasets/finalizers
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
+  - watch
 - apiGroups:
   - actions.summerwind.dev
   resources:
@@ -58,6 +114,18 @@ rules:
   - patch
   - update
   - watch
+- apiGroups:
+  - actions.summerwind.dev
+  resources:
+  - runners/finalizers
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
+  - watch
 - apiGroups:
   - actions.summerwind.dev
   resources:
@@ -85,3 +153,15 @@ rules:
   - patch
   - update
   - watch
+- apiGroups:
+  - ""
+  resources:
+  - pods/finalizers
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - patch
+  - update
+  - watch

View File

@@ -2,66 +2,111 @@ package controllers
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
"strings" "strings"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
) )
type NotSupported struct { func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
} if hra.Spec.MinReplicas == nil {
return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing minReplicas", hra.Namespace, hra.Name)
var _ error = NotSupported{} } else if hra.Spec.MaxReplicas == nil {
return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing maxReplicas", hra.Namespace, hra.Name)
func (e NotSupported) Error() string {
return "Autoscaling is currently supported only when spec.repository is set"
}
func (r *RunnerDeploymentReconciler) determineDesiredReplicas(rd v1alpha1.RunnerDeployment) (*int, error) {
if rd.Spec.Replicas != nil {
return nil, fmt.Errorf("bug: determineDesiredReplicas should not be called for deplomeny with specific replicas")
} else if rd.Spec.MinReplicas == nil {
return nil, fmt.Errorf("runnerdeployment %s/%s is missing minReplicas", rd.Namespace, rd.Name)
} else if rd.Spec.MaxReplicas == nil {
return nil, fmt.Errorf("runnerdeployment %s/%s is missing maxReplicas", rd.Namespace, rd.Name)
} }
var replicas int var repos [][]string
repoID := rd.Spec.Template.Spec.Repository repoID := rd.Spec.Template.Spec.Repository
if repoID == "" { if repoID == "" {
return nil, NotSupported{} orgName := rd.Spec.Template.Spec.Organization
} if orgName == "" {
return nil, fmt.Errorf("asserting runner deployment spec to detect bug: spec.template.organization should not be empty on this code path")
}
repo := strings.Split(repoID, "/") metrics := hra.Spec.Metrics
user, repoName := repo[0], repo[1]
list, _, err := r.GitHubClient.Actions.ListRepositoryWorkflowRuns(context.TODO(), user, repoName, nil) if len(metrics) == 0 {
if err != nil { return nil, fmt.Errorf("validating autoscaling metrics: one or more metrics is required")
return nil, err } else if tpe := metrics[0].Type; tpe != v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns {
return nil, fmt.Errorf("validting autoscaling metrics: unsupported metric type %q: only supported value is %s", tpe, v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns)
} else if len(metrics[0].RepositoryNames) == 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment")
}
for _, repoName := range metrics[0].RepositoryNames {
repos = append(repos, []string{orgName, repoName})
}
} else {
repo := strings.Split(repoID, "/")
repos = append(repos, repo)
} }
var total, inProgress, queued, completed, unknown int var total, inProgress, queued, completed, unknown int
type callback func()
for _, r := range list.WorkflowRuns { listWorkflowJobs := func(user string, repoName string, runID int64, fallback_cb callback) {
total++ if runID == 0 {
fallback_cb()
// In May 2020, there are only 3 statuses. return
// Follow the below links for more details: }
// - https://developer.github.com/v3/actions/workflow-runs/#list-repository-workflow-runs jobs, _, err := r.GitHubClient.Actions.ListWorkflowJobs(context.TODO(), user, repoName, runID, nil)
// - https://developer.github.com/v3/checks/runs/#create-a-check-run if err != nil {
switch r.GetStatus() { r.Log.Error(err, "Error listing workflow jobs")
case "completed": fallback_cb()
completed++ } else if len(jobs.Jobs) == 0 {
case "in_progress": fallback_cb()
inProgress++ } else {
case "queued": for _, job := range jobs.Jobs {
queued++ switch job.GetStatus() {
default: case "completed":
unknown++ // We add a case for `completed` so it is not counted in `unknown`.
// We do not increment the completed counter here because
// that counter only refers to workflow runs; we skip
// listing jobs for completed runs to keep the number of
// API calls to a minimum.
case "in_progress":
inProgress++
case "queued":
queued++
default:
unknown++
}
}
} }
} }
minReplicas := *rd.Spec.MinReplicas for _, repo := range repos {
maxReplicas := *rd.Spec.MaxReplicas user, repoName := repo[0], repo[1]
list, _, err := r.GitHubClient.Actions.ListRepositoryWorkflowRuns(context.TODO(), user, repoName, nil)
if err != nil {
return nil, err
}
for _, run := range list.WorkflowRuns {
total++
// In May 2020, there are only 3 statuses.
// Follow the below links for more details:
// - https://developer.github.com/v3/actions/workflow-runs/#list-repository-workflow-runs
// - https://developer.github.com/v3/checks/runs/#create-a-check-run
switch run.GetStatus() {
case "completed":
completed++
case "in_progress":
listWorkflowJobs(user, repoName, run.GetID(), func() { inProgress++ })
case "queued":
listWorkflowJobs(user, repoName, run.GetID(), func() { queued++ })
default:
unknown++
}
}
}
minReplicas := *hra.Spec.MinReplicas
maxReplicas := *hra.Spec.MaxReplicas
necessaryReplicas := queued + inProgress necessaryReplicas := queued + inProgress
var desiredReplicas int var desiredReplicas int
@@ -75,7 +120,7 @@ func (r *RunnerDeploymentReconciler) determineDesiredReplicas(rd v1alpha1.Runner
} }
rd.Status.Replicas = &desiredReplicas rd.Status.Replicas = &desiredReplicas
replicas = desiredReplicas replicas := desiredReplicas
r.Log.V(1).Info( r.Log.V(1).Info(
"Calculated desired replicas", "Calculated desired replicas",


@@ -2,16 +2,17 @@ package controllers
import ( import (
"fmt" "fmt"
"net/http/httptest"
"net/url"
"testing"
"github.com/summerwind/actions-runner-controller/api/v1alpha1" "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/github" "github.com/summerwind/actions-runner-controller/github"
"github.com/summerwind/actions-runner-controller/github/fake" "github.com/summerwind/actions-runner-controller/github/fake"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme" clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"net/http/httptest"
"net/url"
"sigs.k8s.io/controller-runtime/pkg/log/zap" "sigs.k8s.io/controller-runtime/pkg/log/zap"
"testing"
) )
func newGithubClient(server *httptest.Server) *github.Client { func newGithubClient(server *httptest.Server) *github.Client {
@@ -44,9 +45,11 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
sReplicas *int sReplicas *int
sTime *metav1.Time sTime *metav1.Time
workflowRuns string workflowRuns string
workflowJobs map[int]string
want int want int
err string err string
}{ }{
// Legacy functionality
// 3 demanded, max at 3 // 3 demanded, max at 3
{ {
repo: "test/valid", repo: "test/valid",
@@ -115,23 +118,27 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
}, },
// fixed at 3 // fixed at 3
{ {
repo: "test/valid", repo: "test/valid",
fixed: intPtr(3),
want: 3,
},
// org runner, fixed at 3
{
org: "test",
fixed: intPtr(3),
want: 3,
},
// org runner, 1 demanded, min at 1
{
org: "test",
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`, fixed: intPtr(3),
err: "Autoscaling is currently supported only when spec.repository is set", workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
},
// Job-level autoscaling
// 5 requested from 3 workflows
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowJobs: map[int]string{
1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`,
2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`,
3: `{"jobs": [{"status": "in_progress"}, {"status":"queued"}]}`,
},
want: 5,
}, },
} }
@@ -147,11 +154,11 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
_ = v1alpha1.AddToScheme(scheme) _ = v1alpha1.AddToScheme(scheme)
t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) { t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) {
server := fake.NewServer(fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns)) server := fake.NewServer(fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns), fake.WithListWorkflowJobsResponse(200, tc.workflowJobs))
defer server.Close() defer server.Close()
client := newGithubClient(server) client := newGithubClient(server)
r := &RunnerDeploymentReconciler{ h := &HorizontalRunnerAutoscalerReconciler{
Log: log, Log: log,
GitHubClient: client, GitHubClient: client,
Scheme: scheme, Scheme: scheme,
@@ -159,23 +166,34 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
rd := v1alpha1.RunnerDeployment{ rd := v1alpha1.RunnerDeployment{
TypeMeta: metav1.TypeMeta{}, TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: "testrd",
},
Spec: v1alpha1.RunnerDeploymentSpec{ Spec: v1alpha1.RunnerDeploymentSpec{
Template: v1alpha1.RunnerTemplate{ Template: v1alpha1.RunnerTemplate{
Spec: v1alpha1.RunnerSpec{ Spec: v1alpha1.RunnerSpec{
Repository: tc.repo, Repository: tc.repo,
}, },
}, },
Replicas: tc.fixed, Replicas: tc.fixed,
},
Status: v1alpha1.RunnerDeploymentStatus{
Replicas: tc.sReplicas,
},
}
hra := v1alpha1.HorizontalRunnerAutoscaler{
Spec: v1alpha1.HorizontalRunnerAutoscalerSpec{
MaxReplicas: tc.max, MaxReplicas: tc.max,
MinReplicas: tc.min, MinReplicas: tc.min,
}, },
Status: v1alpha1.RunnerDeploymentStatus{ Status: v1alpha1.HorizontalRunnerAutoscalerStatus{
Replicas: tc.sReplicas, DesiredReplicas: tc.sReplicas,
LastSuccessfulScaleOutTime: tc.sTime, LastSuccessfulScaleOutTime: tc.sTime,
}, },
} }
rs, err := r.newRunnerReplicaSetWithAutoscaling(rd) got, err := h.computeReplicas(rd, hra)
if err != nil { if err != nil {
if tc.err == "" { if tc.err == "" {
t.Fatalf("unexpected error: expected none, got %v", err) t.Fatalf("unexpected error: expected none, got %v", err)
@@ -185,8 +203,6 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
return return
} }
got := rs.Spec.Replicas
if got == nil { if got == nil {
t.Fatalf("unexpected value of rs.Spec.Replicas: nil") t.Fatalf("unexpected value of rs.Spec.Replicas: nil")
} }
@@ -197,3 +213,222 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
}) })
} }
} }
func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
intPtr := func(v int) *int {
return &v
}
metav1Now := metav1.Now()
testcases := []struct {
repos []string
org string
fixed *int
max *int
min *int
sReplicas *int
sTime *metav1.Time
workflowRuns string
workflowJobs map[int]string
want int
err string
}{
// 3 demanded, max at 3
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
},
// 2 demanded, max at 3, currently 3, delay scaling down due to grace period
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
sReplicas: intPtr(3),
sTime: &metav1Now,
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
},
// 3 demanded, max at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(2),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
},
// 2 demanded, min at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
},
// 1 demanded, min at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 2,
},
// 1 demanded, min at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
},
// 1 demanded, min at 1
{
org: "test",
repos: []string{"valid"},
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 1,
},
// 1 demanded, min at 1
{
org: "test",
repos: []string{"valid"},
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 1,
},
// spec.replicas fixed at 1, but 3 demanded and capped at max 3
{
org: "test",
repos: []string{"valid"},
fixed: intPtr(1),
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
},
// org runner, spec.replicas fixed at 1, but 3 demanded and capped at max 3
{
org: "test",
repos: []string{"valid"},
fixed: intPtr(1),
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
},
// org runner, 1 demanded, min at 1, no repos
{
org: "test",
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
err: "validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment",
},
// Job-level autoscaling
// 5 requested from 3 workflows
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowJobs: map[int]string{
1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`,
2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`,
3: `{"jobs": [{"status": "in_progress"}, {"status":"queued"}]}`,
},
want: 5,
},
}
for i := range testcases {
tc := testcases[i]
log := zap.New(func(o *zap.Options) {
o.Development = true
})
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
_ = v1alpha1.AddToScheme(scheme)
t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) {
server := fake.NewServer(fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns), fake.WithListWorkflowJobsResponse(200, tc.workflowJobs))
defer server.Close()
client := newGithubClient(server)
h := &HorizontalRunnerAutoscalerReconciler{
Log: log,
Scheme: scheme,
GitHubClient: client,
}
rd := v1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: "testrd",
},
Spec: v1alpha1.RunnerDeploymentSpec{
Template: v1alpha1.RunnerTemplate{
Spec: v1alpha1.RunnerSpec{
Organization: tc.org,
},
},
Replicas: tc.fixed,
},
Status: v1alpha1.RunnerDeploymentStatus{
Replicas: tc.sReplicas,
},
}
hra := v1alpha1.HorizontalRunnerAutoscaler{
Spec: v1alpha1.HorizontalRunnerAutoscalerSpec{
ScaleTargetRef: v1alpha1.ScaleTargetRef{
Name: "testrd",
},
MaxReplicas: tc.max,
MinReplicas: tc.min,
Metrics: []v1alpha1.MetricSpec{
{
Type: v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns,
RepositoryNames: tc.repos,
},
},
},
Status: v1alpha1.HorizontalRunnerAutoscalerStatus{
DesiredReplicas: tc.sReplicas,
LastSuccessfulScaleOutTime: tc.sTime,
},
}
got, err := h.computeReplicas(rd, hra)
if err != nil {
if tc.err == "" {
t.Fatalf("unexpected error: expected none, got %v", err)
} else if err.Error() != tc.err {
t.Fatalf("unexpected error: expected %v, got %v", tc.err, err)
}
return
}
if got == nil {
t.Fatalf("unexpected value of rs.Spec.Replicas: nil, wanted %v", tc.want)
}
if *got != tc.want {
t.Errorf("%d: incorrect desired replicas: want %d, got %d", i, tc.want, *got)
}
})
}
}
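
To make the job-level fixture above concrete, here is where want: 5 comes from; only the three non-completed runs are inspected for jobs:

// run 1: two queued jobs             -> queued += 2
// run 2: one in_progress job         -> inProgress += 1 (its completed job is ignored)
// run 3: one in_progress, one queued -> inProgress += 1, queued += 1
// necessaryReplicas = queued + inProgress = 3 + 2 = 5, within min=2..max=10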


@@ -0,0 +1,168 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"context"
"time"
"github.com/summerwind/actions-runner-controller/github"
"k8s.io/apimachinery/pkg/types"
"github.com/go-logr/logr"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
)
const (
DefaultScaleDownDelay = 10 * time.Minute
)
// HorizontalRunnerAutoscalerReconciler reconciles a HorizontalRunnerAutoscaler object
type HorizontalRunnerAutoscalerReconciler struct {
client.Client
GitHubClient *github.Client
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=horizontalrunnerautoscalers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=horizontalrunnerautoscalers/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=horizontalrunnerautoscalers/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
ctx := context.Background()
log := r.Log.WithValues("horizontalrunnerautoscaler", req.NamespacedName)
var hra v1alpha1.HorizontalRunnerAutoscaler
if err := r.Get(ctx, req.NamespacedName, &hra); err != nil {
return ctrl.Result{}, client.IgnoreNotFound(err)
}
if !hra.ObjectMeta.DeletionTimestamp.IsZero() {
return ctrl.Result{}, nil
}
var rd v1alpha1.RunnerDeployment
if err := r.Get(ctx, types.NamespacedName{
Namespace: req.Namespace,
Name: hra.Spec.ScaleTargetRef.Name,
}, &rd); err != nil {
return ctrl.Result{}, client.IgnoreNotFound(err)
}
if !rd.ObjectMeta.DeletionTimestamp.IsZero() {
return ctrl.Result{}, nil
}
replicas, err := r.computeReplicas(rd, hra)
if err != nil {
r.Recorder.Event(&hra, corev1.EventTypeNormal, "RunnerAutoscalingFailure", err.Error())
log.Error(err, "Could not compute replicas")
return ctrl.Result{}, err
}
const defaultReplicas = 1
currentDesiredReplicas := getIntOrDefault(rd.Spec.Replicas, defaultReplicas)
newDesiredReplicas := getIntOrDefault(replicas, defaultReplicas)
// Please add more conditions under which we can update the newest runnerreplicaset in place without disruption
if currentDesiredReplicas != newDesiredReplicas {
copy := rd.DeepCopy()
copy.Spec.Replicas = &newDesiredReplicas
if err := r.Client.Update(ctx, copy); err != nil {
log.Error(err, "Failed to update runnerderployment resource")
return ctrl.Result{}, err
}
return ctrl.Result{}, err
}
if hra.Status.DesiredReplicas == nil || *hra.Status.DesiredReplicas != *replicas {
updated := hra.DeepCopy()
if (hra.Status.DesiredReplicas == nil && *replicas > 1) ||
(hra.Status.DesiredReplicas != nil && *replicas > *hra.Status.DesiredReplicas) {
updated.Status.LastSuccessfulScaleOutTime = &metav1.Time{Time: time.Now()}
}
updated.Status.DesiredReplicas = replicas
if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update horizontalrunnerautoscaler status")
return ctrl.Result{}, err
}
}
return ctrl.Result{}, nil
}
func (r *HorizontalRunnerAutoscalerReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.Recorder = mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller")
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.HorizontalRunnerAutoscaler{}).
Complete(r)
}
func (r *HorizontalRunnerAutoscalerReconciler) computeReplicas(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
var computedReplicas *int
replicas, err := r.determineDesiredReplicas(rd, hra)
if err != nil {
return nil, err
}
var scaleDownDelay time.Duration
if hra.Spec.ScaleDownDelaySecondsAfterScaleUp != nil {
scaleDownDelay = time.Duration(*hra.Spec.ScaleDownDelaySecondsAfterScaleUp) * time.Second
} else {
scaleDownDelay = DefaultScaleDownDelay
}
now := time.Now()
if hra.Status.DesiredReplicas == nil ||
*hra.Status.DesiredReplicas < *replicas ||
hra.Status.LastSuccessfulScaleOutTime == nil ||
hra.Status.LastSuccessfulScaleOutTime.Add(scaleDownDelay).Before(now) {
computedReplicas = replicas
} else {
computedReplicas = hra.Status.DesiredReplicas
}
return computedReplicas, nil
}
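
A quick walk-through of the grace period above, with hypothetical timestamps: if the computed replicas would drop from 3 to 1 only five minutes after the last scale-out, the previous desired count is kept:

last := time.Now().Add(-5 * time.Minute) // hypothetical LastSuccessfulScaleOutTime
if last.Add(DefaultScaleDownDelay).Before(time.Now()) {
	// more than 10 minutes have passed: the scale-down to 1 is allowed
} else {
	// still within the delay: keep the previous desired replicas (3)
}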


@@ -0,0 +1,314 @@
package controllers
import (
"context"
"time"
"github.com/summerwind/actions-runner-controller/github/fake"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
)
type testEnvironment struct {
Namespace *corev1.Namespace
Responses *fake.FixedResponses
}
var (
workflowRunsFor3Replicas = `{"total_count": 5, "workflow_runs":[{"status":"queued"}, {"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`
workflowRunsFor1Replicas = `{"total_count": 5, "workflow_runs":[{"status":"queued"}, {"status":"completed"}, {"status":"completed"}, {"status":"completed"}, {"status":"completed"}]}"`
)
// SetupIntegrationTest will set up a testing environment.
// This includes:
// * creating a Namespace to be used during the test
// * starting all the reconcilers
// * stopping all the reconcilers after the test ends
// Call this function at the start of each of your tests.
func SetupIntegrationTest(ctx context.Context) *testEnvironment {
var stopCh chan struct{}
ns := &corev1.Namespace{}
responses := &fake.FixedResponses{}
responses.ListRepositoryWorkflowRuns = &fake.Handler{
Status: 200,
Body: workflowRunsFor3Replicas,
}
fakeGithubServer := fake.NewServer(fake.WithFixedResponses(responses))
BeforeEach(func() {
stopCh = make(chan struct{})
*ns = corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{Name: "testns-" + randStringRunes(5)},
}
err := k8sClient.Create(ctx, ns)
Expect(err).NotTo(HaveOccurred(), "failed to create test namespace")
mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
Expect(err).NotTo(HaveOccurred(), "failed to create manager")
runnersList = fake.NewRunnersList()
server = runnersList.GetServer()
ghClient := newGithubClient(server)
replicasetController := &RunnerReplicaSetReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: ghClient,
}
err = replicasetController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
deploymentsController := &RunnerDeploymentReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerdeployment-controller"),
}
err = deploymentsController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
client := newGithubClient(fakeGithubServer)
autoscalerController := &HorizontalRunnerAutoscalerReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
GitHubClient: client,
Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
}
err = autoscalerController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
go func() {
defer GinkgoRecover()
err := mgr.Start(stopCh)
Expect(err).NotTo(HaveOccurred(), "failed to start manager")
}()
})
AfterEach(func() {
close(stopCh)
fakeGithubServer.Close()
err := k8sClient.Delete(ctx, ns)
Expect(err).NotTo(HaveOccurred(), "failed to delete test namespace")
})
return &testEnvironment{Namespace: ns, Responses: responses}
}
var _ = Context("Inside of a new namespace", func() {
ctx := context.TODO()
env := SetupIntegrationTest(ctx)
ns := env.Namespace
responses := env.Responses
Describe("when no existing resources exist", func() {
It("should create and scale runners", func() {
name := "example-runnerdeploy"
{
rs := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Template: actionsv1alpha1.RunnerTemplate{
Spec: actionsv1alpha1.RunnerSpec{
Repository: "test/valid",
Image: "bar",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
err := k8sClient.Create(ctx, rs)
Expect(err).NotTo(HaveOccurred(), "failed to create test RunnerDeployment resource")
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
return len(runnerSets.Items)
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
if len(runnerSets.Items) == 0 {
logf.Log.Info("No runnerreplicasets exist yet")
return -1
}
return *runnerSets.Items[0].Spec.Replicas
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
}
{
// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
// Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
Eventually(func() error {
var rd actionsv1alpha1.RunnerDeployment
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &rd)
Expect(err).NotTo(HaveOccurred(), "failed to get test RunnerDeployment resource")
rd.Spec.Replicas = intPtr(2)
return k8sClient.Update(ctx, &rd)
},
time.Second*1, time.Millisecond*500).Should(BeNil())
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
return len(runnerSets.Items)
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
return *runnerSets.Items[0].Spec.Replicas
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
}
// Scale-up to 3 replicas
{
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
Name: name,
},
MinReplicas: intPtr(1),
MaxReplicas: intPtr(3),
ScaleDownDelaySecondsAfterScaleUp: nil,
Metrics: nil,
},
}
err := k8sClient.Create(ctx, hra)
Expect(err).NotTo(HaveOccurred(), "failed to create test HorizontalRunnerAutoscaler resource")
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
return len(runnerSets.Items)
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
if len(runnerSets.Items) == 0 {
logf.Log.Info("No runnerreplicasets exist yet")
return -1
}
return *runnerSets.Items[0].Spec.Replicas
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(3))
}
// Scale-down to 1 replica
{
responses.ListRepositoryWorkflowRuns.Body = workflowRunsFor1Replicas
var hra actionsv1alpha1.HorizontalRunnerAutoscaler
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &hra)
Expect(err).NotTo(HaveOccurred(), "failed to get test HorizontalRunnerAutoscaler resource")
hra.Annotations = map[string]string{
"force-update": "1",
}
err = k8sClient.Update(ctx, &hra)
Expect(err).NotTo(HaveOccurred(), "failed to get test HorizontalRunnerAutoscaler resource")
Eventually(
func() int {
var runnerSets actionsv1alpha1.RunnerReplicaSetList
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
if len(runnerSets.Items) == 0 {
logf.Log.Info("No runnerreplicasets exist yet")
return -1
}
return *runnerSets.Items[0].Spec.Replicas
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
}
})
})
})


@@ -53,8 +53,10 @@ type RunnerReconciler struct {
} }
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/status,verbs=get;update;patch // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch // +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) { func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
@@ -165,7 +167,11 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
r.Recorder.Event(&runner, corev1.EventTypeNormal, "PodCreated", fmt.Sprintf("Created pod '%s'", newPod.Name)) r.Recorder.Event(&runner, corev1.EventTypeNormal, "PodCreated", fmt.Sprintf("Created pod '%s'", newPod.Name))
log.Info("Created runner pod", "repository", runner.Spec.Repository) log.Info("Created runner pod", "repository", runner.Spec.Repository)
} else { } else {
if runner.Status.Phase != string(pod.Status.Phase) { // If the pod has ended up Succeeded, we need to restart it
// This happens e.g. when dind runs within the runner container and the run completes
restart := pod.Status.Phase == corev1.PodSucceeded
if !restart && runner.Status.Phase != string(pod.Status.Phase) {
updated := runner.DeepCopy() updated := runner.DeepCopy()
updated.Status.Phase = string(pod.Status.Phase) updated.Status.Phase = string(pod.Status.Phase)
updated.Status.Reason = pod.Status.Reason updated.Status.Reason = pod.Status.Reason
@@ -183,8 +189,6 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, err return ctrl.Result{}, err
} }
restart := false
if pod.Status.Phase == corev1.PodRunning { if pod.Status.Phase == corev1.PodRunning {
for _, status := range pod.Status.ContainerStatuses { for _, status := range pod.Status.ContainerStatuses {
if status.Name != containerName { if status.Name != containerName {
@@ -203,12 +207,16 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, err return ctrl.Result{}, err
} }
if pod.Spec.Containers[0].Image != newPod.Spec.Containers[0].Image { runnerBusy, err := r.isRunnerBusy(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
restart = true if err != nil {
} log.Error(err, "Failed to check if runner is busy")
if !reflect.DeepEqual(pod.Spec.Containers[0].Env, newPod.Spec.Containers[0].Env) { return ctrl.Result{}, nil
}
if !runnerBusy && (!reflect.DeepEqual(pod.Spec.Containers[0].Env, newPod.Spec.Containers[0].Env) || pod.Spec.Containers[0].Image != newPod.Spec.Containers[0].Image) {
restart = true restart = true
} }
if !restart { if !restart {
return ctrl.Result{}, err return ctrl.Result{}, err
} }
@@ -225,6 +233,21 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, nil return ctrl.Result{}, nil
} }
func (r *RunnerReconciler) isRunnerBusy(ctx context.Context, org, repo, name string) (bool, error) {
runners, err := r.GitHubClient.ListRunners(ctx, org, repo)
if err != nil {
return false, err
}
for _, runner := range runners {
if runner.GetName() == name {
return runner.GetBusy(), nil
}
}
return false, fmt.Errorf("runner not found")
}
func (r *RunnerReconciler) unregisterRunner(ctx context.Context, org, repo, name string) (bool, error) { func (r *RunnerReconciler) unregisterRunner(ctx context.Context, org, repo, name string) (bool, error) {
runners, err := r.GitHubClient.ListRunners(ctx, org, repo) runners, err := r.GitHubClient.ListRunners(ctx, org, repo)
if err != nil { if err != nil {
@@ -234,6 +257,9 @@ func (r *RunnerReconciler) unregisterRunner(ctx context.Context, org, repo, name
id := int64(0) id := int64(0)
for _, runner := range runners { for _, runner := range runners {
if runner.GetName() == name { if runner.GetName() == name {
if runner.GetBusy() {
return false, fmt.Errorf("runner is busy")
}
id = runner.GetID() id = runner.GetID()
break break
} }
@@ -252,8 +278,8 @@ func (r *RunnerReconciler) unregisterRunner(ctx context.Context, org, repo, name
func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) { func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
var ( var (
privileged bool = true privileged bool = true
group int64 = 0 dockerdInRunner bool = runner.Spec.DockerdWithinRunnerContainer != nil && *runner.Spec.DockerdWithinRunnerContainer
) )
runnerImage := runner.Spec.Image runnerImage := runner.Spec.Image
@@ -261,6 +287,11 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
runnerImage = r.RunnerImage runnerImage = r.RunnerImage
} }
runnerImagePullPolicy := runner.Spec.ImagePullPolicy
if runnerImagePullPolicy == "" {
runnerImagePullPolicy = corev1.PullAlways
}
env := []corev1.EnvVar{ env := []corev1.EnvVar{
{ {
Name: "RUNNER_NAME", Name: "RUNNER_NAME",
@@ -282,6 +313,10 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
Name: "RUNNER_TOKEN", Name: "RUNNER_TOKEN",
Value: runner.Status.Registration.Token, Value: runner.Status.Registration.Token,
}, },
{
Name: "DOCKERD_IN_RUNNER",
Value: fmt.Sprintf("%v", dockerdInRunner),
},
} }
env = append(env, runner.Spec.Env...) env = append(env, runner.Spec.Env...)
@@ -298,61 +333,71 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
{ {
Name: containerName, Name: containerName,
Image: runnerImage, Image: runnerImage,
ImagePullPolicy: "Always", ImagePullPolicy: runnerImagePullPolicy,
Env: env, Env: env,
EnvFrom: runner.Spec.EnvFrom, EnvFrom: runner.Spec.EnvFrom,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
},
SecurityContext: &corev1.SecurityContext{ SecurityContext: &corev1.SecurityContext{
RunAsGroup: &group, // Runner need to run privileged if it contains DinD
Privileged: runner.Spec.DockerdWithinRunnerContainer,
}, },
Resources: runner.Spec.Resources, Resources: runner.Spec.Resources,
}, },
{
Name: "docker",
Image: r.DockerImage,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
},
},
Volumes: []corev1.Volume{
{
Name: "work",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: "docker",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
}, },
}, },
} }
if !dockerdInRunner {
pod.Spec.Volumes = []corev1.Volume{
{
Name: "work",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: "docker",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
}
pod.Spec.Containers[0].VolumeMounts = []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
}
pod.Spec.Containers = append(pod.Spec.Containers, corev1.Container{
Name: "docker",
Image: r.DockerImage,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
})
}
if len(runner.Spec.Containers) != 0 { if len(runner.Spec.Containers) != 0 {
pod.Spec.Containers = runner.Spec.Containers pod.Spec.Containers = runner.Spec.Containers
for i := 0; i < len(pod.Spec.Containers); i++ {
if pod.Spec.Containers[i].Name == containerName {
pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, env...)
}
}
} }
if len(runner.Spec.VolumeMounts) != 0 { if len(runner.Spec.VolumeMounts) != 0 {
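
As a minimal sketch of opting into the new single-container mode from a Runner spec (field names as introduced in this PR; the repository and pull-policy values are illustrative):

dockerdInRunner := true
runner := v1alpha1.Runner{
	Spec: v1alpha1.RunnerSpec{
		Repository:                   "test/valid",            // illustrative repository
		ImagePullPolicy:              corev1.PullIfNotPresent, // overrides the PullAlways default above
		DockerdWithinRunnerContainer: &dockerdInRunner,        // no docker sidecar; dockerd runs in the runner container
	},
}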


@@ -23,7 +23,6 @@ import (
"sort" "sort"
"time" "time"
"github.com/summerwind/actions-runner-controller/github"
"k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/types"
"github.com/davecgh/go-spew/spew" "github.com/davecgh/go-spew/spew"
@@ -49,13 +48,13 @@ const (
// RunnerDeploymentReconciler reconciles a Runner object // RunnerDeploymentReconciler reconciles a Runner object
type RunnerDeploymentReconciler struct { type RunnerDeploymentReconciler struct {
client.Client client.Client
GitHubClient *github.Client Log logr.Logger
Log logr.Logger Recorder record.EventRecorder
Recorder record.EventRecorder Scheme *runtime.Scheme
Scheme *runtime.Scheme
} }
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments/status,verbs=get;update;patch // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets/status,verbs=get;update;patch // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets/status,verbs=get;update;patch
@@ -97,11 +96,9 @@ func (r *RunnerDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
oldSets = myRunnerReplicaSets[1:] oldSets = myRunnerReplicaSets[1:]
} }
desiredRS, err := r.newRunnerReplicaSetWithAutoscaling(rd) desiredRS, err := r.newRunnerReplicaSet(rd)
if err != nil { if err != nil {
if _, ok := err.(NotSupported); ok { r.Recorder.Event(&rd, corev1.EventTypeNormal, "RunnerAutoscalingFailure", err.Error())
r.Recorder.Event(&rd, corev1.EventTypeNormal, "RunnerReplicaSetAutoScaleNotSupported", err.Error())
}
log.Error(err, "Could not create runnerreplicaset") log.Error(err, "Could not create runnerreplicaset")
@@ -195,12 +192,6 @@ func (r *RunnerDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
updated := rd.DeepCopy() updated := rd.DeepCopy()
updated.Status.Replicas = desiredRS.Spec.Replicas updated.Status.Replicas = desiredRS.Spec.Replicas
if (rd.Status.Replicas == nil && *desiredRS.Spec.Replicas > 1) ||
(rd.Status.Replicas != nil && *desiredRS.Spec.Replicas > *rd.Status.Replicas) {
updated.Status.LastSuccessfulScaleOutTime = &metav1.Time{Time: time.Now()}
}
if err := r.Status().Update(ctx, updated); err != nil { if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update runnerdeployment status") log.Error(err, "Failed to update runnerdeployment status")
@@ -265,7 +256,7 @@ func CloneAndAddLabel(labels map[string]string, labelKey, labelValue string) map
return newLabels return newLabels
} }
func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeployment, computedReplicas *int) (*v1alpha1.RunnerReplicaSet, error) { func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeployment) (*v1alpha1.RunnerReplicaSet, error) {
newRSTemplate := *rd.Spec.Template.DeepCopy() newRSTemplate := *rd.Spec.Template.DeepCopy()
templateHash := ComputeHash(&newRSTemplate) templateHash := ComputeHash(&newRSTemplate)
// Add template hash label to selector. // Add template hash label to selector.
@@ -286,10 +277,6 @@ func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeplo
}, },
} }
if computedReplicas != nil {
rs.Spec.Replicas = computedReplicas
}
if err := ctrl.SetControllerReference(&rd, &rs, r.Scheme); err != nil { if err := ctrl.SetControllerReference(&rd, &rs, r.Scheme); err != nil {
return &rs, err return &rs, err
} }
@@ -321,36 +308,3 @@ func (r *RunnerDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
Owns(&v1alpha1.RunnerReplicaSet{}). Owns(&v1alpha1.RunnerReplicaSet{}).
Complete(r) Complete(r)
} }
func (r *RunnerDeploymentReconciler) newRunnerReplicaSetWithAutoscaling(rd v1alpha1.RunnerDeployment) (*v1alpha1.RunnerReplicaSet, error) {
var computedReplicas *int
if rd.Spec.Replicas == nil {
replicas, err := r.determineDesiredReplicas(rd)
if err != nil {
return nil, err
}
var scaleDownDelay time.Duration
if rd.Spec.ScaleDownDelaySecondsAfterScaleUp != nil {
scaleDownDelay = time.Duration(*rd.Spec.ScaleDownDelaySecondsAfterScaleUp) * time.Second
} else {
scaleDownDelay = 10 * time.Minute
}
now := time.Now()
if rd.Status.Replicas == nil ||
*rd.Status.Replicas < *replicas ||
rd.Status.LastSuccessfulScaleOutTime == nil ||
rd.Status.LastSuccessfulScaleOutTime.Add(scaleDownDelay).Before(now) {
computedReplicas = replicas
} else {
computedReplicas = rd.Status.Replicas
}
}
return r.newRunnerReplicaSet(rd, computedReplicas)
}


@@ -31,17 +31,20 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/summerwind/actions-runner-controller/api/v1alpha1" "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/github"
) )
// RunnerReplicaSetReconciler reconciles a Runner object // RunnerReplicaSetReconciler reconciles a Runner object
type RunnerReplicaSetReconciler struct { type RunnerReplicaSetReconciler struct {
client.Client client.Client
Log logr.Logger Log logr.Logger
Recorder record.EventRecorder Recorder record.EventRecorder
Scheme *runtime.Scheme Scheme *runtime.Scheme
GitHubClient *github.Client
} }
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets/status,verbs=get;update;patch // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/status,verbs=get;update;patch // +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runners/status,verbs=get;update;patch
@@ -96,8 +99,25 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
if available > desired { if available > desired {
n := available - desired n := available - desired
// get runners that are currently not busy
var notBusy []v1alpha1.Runner
for _, runner := range myRunners {
busy, err := r.isRunnerBusy(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
log.Error(err, "Failed to check if runner is busy")
return ctrl.Result{}, err
}
if !busy {
notBusy = append(notBusy, runner)
}
}
if len(notBusy) < n {
n = len(notBusy)
}
for i := 0; i < n; i++ { for i := 0; i < n; i++ {
if err := r.Client.Delete(ctx, &myRunners[i]); err != nil { if err := r.Client.Delete(ctx, &notBusy[i]); err != nil {
log.Error(err, "Failed to delete runner resource") log.Error(err, "Failed to delete runner resource")
return ctrl.Result{}, err return ctrl.Result{}, err
@@ -166,3 +186,19 @@ func (r *RunnerReplicaSetReconciler) SetupWithManager(mgr ctrl.Manager) error {
Owns(&v1alpha1.Runner{}). Owns(&v1alpha1.Runner{}).
Complete(r) Complete(r)
} }
func (r *RunnerReplicaSetReconciler) isRunnerBusy(ctx context.Context, org, repo, name string) (bool, error) {
runners, err := r.GitHubClient.ListRunners(ctx, org, repo)
r.Log.Info("runners", "github", runners)
if err != nil {
return false, err
}
for _, runner := range runners {
if runner.GetName() == name {
return runner.GetBusy(), nil
}
}
return false, fmt.Errorf("runner not found")
}
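
In other words, scale-in is capped by idleness. A short worked example under assumed numbers: with available = 5 and desired = 2, the reconciler wants to delete n = 3 runners, but if only two of them are reported not busy, n is capped at 2 and the busy runner survives until a later reconciliation finds it idle.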


@@ -3,11 +3,14 @@ package controllers
import ( import (
"context" "context"
"math/rand" "math/rand"
"net/http/httptest"
"time" "time"
"github.com/google/go-github/v32/github"
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes/scheme" "k8s.io/client-go/kubernetes/scheme"
"k8s.io/utils/pointer"
ctrl "sigs.k8s.io/controller-runtime" ctrl "sigs.k8s.io/controller-runtime"
logf "sigs.k8s.io/controller-runtime/pkg/log" logf "sigs.k8s.io/controller-runtime/pkg/log"
@@ -17,6 +20,12 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client" "sigs.k8s.io/controller-runtime/pkg/client"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1" actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/github/fake"
)
var (
runnersList *fake.RunnersList
server *httptest.Server
) )
// SetupTest will set up a testing environment. // SetupTest will set up a testing environment.
@@ -41,11 +50,16 @@ func SetupTest(ctx context.Context) *corev1.Namespace {
mgr, err := ctrl.NewManager(cfg, ctrl.Options{}) mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
Expect(err).NotTo(HaveOccurred(), "failed to create manager") Expect(err).NotTo(HaveOccurred(), "failed to create manager")
runnersList = fake.NewRunnersList()
server = runnersList.GetServer()
ghClient := newGithubClient(server)
controller := &RunnerReplicaSetReconciler{ controller := &RunnerReplicaSetReconciler{
Client: mgr.GetClient(), Client: mgr.GetClient(),
Scheme: scheme.Scheme, Scheme: scheme.Scheme,
Log: logf.Log, Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"), Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: ghClient,
} }
err = controller.SetupWithManager(mgr) err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller") Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -61,6 +75,7 @@ func SetupTest(ctx context.Context) *corev1.Namespace {
AfterEach(func() { AfterEach(func() {
close(stopCh) close(stopCh)
server.Close()
err := k8sClient.Delete(ctx, ns) err := k8sClient.Delete(ctx, ns)
Expect(err).NotTo(HaveOccurred(), "failed to delete test namespace") Expect(err).NotTo(HaveOccurred(), "failed to delete test namespace")
}) })
@@ -124,6 +139,16 @@ var _ = Context("Inside of a new namespace", func() {
logf.Log.Error(err, "list runners") logf.Log.Error(err, "list runners")
} }
for i, runner := range runners.Items {
runnersList.Add(&github.Runner{
ID: pointer.Int64Ptr(int64(i) + 1),
Name: pointer.StringPtr(runner.Name),
OS: pointer.StringPtr("linux"),
Status: pointer.StringPtr("online"),
Busy: pointer.BoolPtr(false),
})
}
return len(runners.Items) return len(runners.Items)
}, },
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1)) time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
@@ -155,6 +180,16 @@ var _ = Context("Inside of a new namespace", func() {
logf.Log.Error(err, "list runners") logf.Log.Error(err, "list runners")
} }
for i, runner := range runners.Items {
runnersList.Add(&github.Runner{
ID: pointer.Int64Ptr(int64(i) + 1),
Name: pointer.StringPtr(runner.Name),
OS: pointer.StringPtr("linux"),
Status: pointer.StringPtr("online"),
Busy: pointer.BoolPtr(false),
})
}
return len(runners.Items) return len(runners.Items)
}, },
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2)) time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
@@ -186,6 +221,16 @@ var _ = Context("Inside of a new namespace", func() {
logf.Log.Error(err, "list runners") logf.Log.Error(err, "list runners")
} }
for i, runner := range runners.Items {
runnersList.Add(&github.Runner{
ID: pointer.Int64Ptr(int64(i) + 1),
Name: pointer.StringPtr(runner.Name),
OS: pointer.StringPtr("linux"),
Status: pointer.StringPtr("online"),
Busy: pointer.BoolPtr(false),
})
}
return len(runners.Items) return len(runners.Items)
}, },
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(0)) time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(0))


@@ -4,7 +4,10 @@ import (
"fmt" "fmt"
"net/http" "net/http"
"net/http/httptest" "net/http/httptest"
"strconv"
"strings"
"time" "time"
"unicode"
) )
const ( const (
@@ -14,118 +17,144 @@ const (
{ {
"total_count": 2, "total_count": 2,
"runners": [ "runners": [
{"id": 1, "name": "test1", "os": "linux", "status": "online"}, {"id": 1, "name": "test1", "os": "linux", "status": "online", "busy": false},
{"id": 2, "name": "test2", "os": "linux", "status": "offline"} {"id": 2, "name": "test2", "os": "linux", "status": "offline", "busy": false}
] ]
} }
` `
) )
type handler struct { type Handler struct {
Status int Status int
Body string Body string
} }
func (h *handler) ServeHTTP(w http.ResponseWriter, req *http.Request) { func (h *Handler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(h.Status) w.WriteHeader(h.Status)
fmt.Fprintf(w, h.Body) fmt.Fprintf(w, h.Body)
} }
type MapHandler struct {
Status int
Bodies map[int]string
}
func (h *MapHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// Parse out int key from URL path
key, err := strconv.Atoi(strings.TrimFunc(req.URL.Path, func(r rune) bool { return !unicode.IsNumber(r) }))
if err != nil {
w.WriteHeader(400)
} else if body := h.Bodies[key]; len(body) == 0 {
w.WriteHeader(404)
} else {
w.WriteHeader(h.Status)
fmt.Fprintf(w, body)
}
}
type ServerConfig struct {
*FixedResponses
}
// NewServer creates a fake server for running unit tests // NewServer creates a fake server for running unit tests
func NewServer(opts ...Option) *httptest.Server { func NewServer(opts ...Option) *httptest.Server {
var responses FixedResponses config := ServerConfig{
FixedResponses: &FixedResponses{},
for _, o := range opts {
o(&responses)
} }
routes := map[string]handler{ for _, o := range opts {
o(&config)
}
routes := map[string]http.Handler{
// For CreateRegistrationToken // For CreateRegistrationToken
"/repos/test/valid/actions/runners/registration-token": handler{ "/repos/test/valid/actions/runners/registration-token": &Handler{
Status: http.StatusCreated, Status: http.StatusCreated,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)), Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
}, },
"/repos/test/invalid/actions/runners/registration-token": handler{ "/repos/test/invalid/actions/runners/registration-token": &Handler{
Status: http.StatusOK, Status: http.StatusOK,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)), Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
}, },
"/repos/test/error/actions/runners/registration-token": handler{ "/repos/test/error/actions/runners/registration-token": &Handler{
Status: http.StatusBadRequest, Status: http.StatusBadRequest,
Body: "", Body: "",
}, },
"/orgs/test/actions/runners/registration-token": handler{ "/orgs/test/actions/runners/registration-token": &Handler{
Status: http.StatusCreated, Status: http.StatusCreated,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)), Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
}, },
"/orgs/invalid/actions/runners/registration-token": handler{ "/orgs/invalid/actions/runners/registration-token": &Handler{
Status: http.StatusOK, Status: http.StatusOK,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)), Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
}, },
"/orgs/error/actions/runners/registration-token": handler{ "/orgs/error/actions/runners/registration-token": &Handler{
Status: http.StatusBadRequest, Status: http.StatusBadRequest,
Body: "", Body: "",
}, },
// For ListRunners // For ListRunners
"/repos/test/valid/actions/runners": handler{ "/repos/test/valid/actions/runners": &Handler{
Status: http.StatusOK, Status: http.StatusOK,
Body: RunnersListBody, Body: RunnersListBody,
}, },
"/repos/test/invalid/actions/runners": handler{ "/repos/test/invalid/actions/runners": &Handler{
Status: http.StatusNoContent, Status: http.StatusNoContent,
Body: "", Body: "",
}, },
"/repos/test/error/actions/runners": handler{ "/repos/test/error/actions/runners": &Handler{
Status: http.StatusBadRequest, Status: http.StatusBadRequest,
Body: "", Body: "",
}, },
"/orgs/test/actions/runners": handler{ "/orgs/test/actions/runners": &Handler{
Status: http.StatusOK, Status: http.StatusOK,
Body: RunnersListBody, Body: RunnersListBody,
}, },
"/orgs/invalid/actions/runners": handler{ "/orgs/invalid/actions/runners": &Handler{
Status: http.StatusNoContent, Status: http.StatusNoContent,
Body: "", Body: "",
}, },
"/orgs/error/actions/runners": handler{ "/orgs/error/actions/runners": &Handler{
Status: http.StatusBadRequest, Status: http.StatusBadRequest,
Body: "", Body: "",
}, },
// For RemoveRunner // For RemoveRunner
"/repos/test/valid/actions/runners/1": handler{ "/repos/test/valid/actions/runners/1": &Handler{
Status: http.StatusNoContent, Status: http.StatusNoContent,
Body: "", Body: "",
}, },
"/repos/test/invalid/actions/runners/1": handler{ "/repos/test/invalid/actions/runners/1": &Handler{
Status: http.StatusOK, Status: http.StatusOK,
Body: "", Body: "",
}, },
"/repos/test/error/actions/runners/1": handler{ "/repos/test/error/actions/runners/1": &Handler{
Status: http.StatusBadRequest, Status: http.StatusBadRequest,
Body: "", Body: "",
}, },
"/orgs/test/actions/runners/1": handler{ "/orgs/test/actions/runners/1": &Handler{
Status: http.StatusNoContent, Status: http.StatusNoContent,
Body: "", Body: "",
}, },
"/orgs/invalid/actions/runners/1": handler{ "/orgs/invalid/actions/runners/1": &Handler{
Status: http.StatusOK, Status: http.StatusOK,
Body: "", Body: "",
}, },
"/orgs/error/actions/runners/1": handler{ "/orgs/error/actions/runners/1": &Handler{
Status: http.StatusBadRequest, Status: http.StatusBadRequest,
Body: "", Body: "",
}, },
// For auto-scaling based on the number of queued(pending) workflow runs // For auto-scaling based on the number of queued(pending) workflow runs
"/repos/test/valid/actions/runs": responses.listRepositoryWorkflowRuns.handler(), "/repos/test/valid/actions/runs": config.FixedResponses.ListRepositoryWorkflowRuns,
// For auto-scaling based on the number of queued(pending) workflow jobs
"/repos/test/valid/actions/runs/": config.FixedResponses.ListWorkflowJobs,
} }
mux := http.NewServeMux() mux := http.NewServeMux()
for path, handler := range routes { for path, handler := range routes {
h := handler mux.Handle(path, handler)
mux.Handle(path, &h)
} }
return httptest.NewServer(mux) return httptest.NewServer(mux)
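
A sketch of how MapHandler pairs request paths with bodies under the trailing-slash route above (the run ID is the one used by the test fixtures): trimming non-digit runes from both ends of the path leaves the run ID, which then indexes Bodies:

// GET /repos/test/valid/actions/runs/1/jobs
//   -> TrimFunc strips "/repos/test/valid/actions/runs/" and "/jobs", leaving "1"
//   -> strconv.Atoi("1") selects Bodies[1]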


@@ -1,28 +1,32 @@
 package fake
 
 type FixedResponses struct {
-	listRepositoryWorkflowRuns FixedResponse
+	ListRepositoryWorkflowRuns *Handler
+	ListWorkflowJobs           *MapHandler
 }
 
-type FixedResponse struct {
-	Status int
-	Body   string
-}
-
-func (r FixedResponse) handler() handler {
-	return handler{
-		Status: r.Status,
-		Body:   r.Body,
-	}
-}
-
-type Option func(responses *FixedResponses)
+type Option func(*ServerConfig)
 
 func WithListRepositoryWorkflowRunsResponse(status int, body string) Option {
-	return func(r *FixedResponses) {
-		r.listRepositoryWorkflowRuns = FixedResponse{
+	return func(c *ServerConfig) {
+		c.FixedResponses.ListRepositoryWorkflowRuns = &Handler{
 			Status: status,
 			Body:   body,
 		}
 	}
 }
+
+func WithListWorkflowJobsResponse(status int, bodies map[int]string) Option {
+	return func(c *ServerConfig) {
+		c.FixedResponses.ListWorkflowJobs = &MapHandler{
+			Status: status,
+			Bodies: bodies,
+		}
+	}
+}
+
+func WithFixedResponses(responses *FixedResponses) Option {
+	return func(c *ServerConfig) {
+		c.FixedResponses = responses
+	}
+}
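These are functional options: each one mutates a ServerConfig before the server starts. A hypothetical composition follows; the NewServer call is assumed, and the int key of the ListWorkflowJobs map is presumed to be a workflow run ID rather than confirmed by this diff:

```go
// Illustration only; not part of this compare view.
package github_test

import (
	"net/http"
	"net/http/httptest"

	"github.com/summerwind/actions-runner-controller/github/fake"
)

// newFakeAPI wires both fixed responses into one fake server.
func newFakeAPI(runsJSON string, jobsByRunID map[int]string) *httptest.Server {
	return fake.NewServer( // assumed constructor
		fake.WithListRepositoryWorkflowRunsResponse(http.StatusOK, runsJSON),
		// Presumably keyed by workflow run ID, so job listings can
		// differ per run.
		fake.WithListWorkflowJobsResponse(http.StatusOK, jobsByRunID),
	)
}
```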

github/fake/runners.go (new file)

@@ -0,0 +1,74 @@
package fake

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"strconv"

	"github.com/google/go-github/v32/github"
	"github.com/gorilla/mux"
)

type RunnersList struct {
	runners []*github.Runner
}

func NewRunnersList() *RunnersList {
	return &RunnersList{
		runners: make([]*github.Runner, 0),
	}
}

func (r *RunnersList) Add(runner *github.Runner) {
	if !exists(r.runners, runner) {
		r.runners = append(r.runners, runner)
	}
}

func (r *RunnersList) GetServer() *httptest.Server {
	router := mux.NewRouter()
	router.Handle("/repos/{owner}/{repo}/actions/runners", r.handleList())
	router.Handle("/repos/{owner}/{repo}/actions/runners/{id}", r.handleRemove())
	router.Handle("/orgs/{org}/actions/runners", r.handleList())
	router.Handle("/orgs/{org}/actions/runners/{id}", r.handleRemove())
	return httptest.NewServer(router)
}

func (r *RunnersList) handleList() http.HandlerFunc {
	return func(w http.ResponseWriter, res *http.Request) {
		j, err := json.Marshal(github.Runners{
			TotalCount: len(r.runners),
			Runners:    r.runners,
		})
		if err != nil {
			panic(err)
		}
		w.WriteHeader(http.StatusOK)
		w.Write(j)
	}
}

func (r *RunnersList) handleRemove() http.HandlerFunc {
	return func(w http.ResponseWriter, res *http.Request) {
		vars := mux.Vars(res)
		for i, runner := range r.runners {
			if runner.ID != nil && vars["id"] == strconv.FormatInt(*runner.ID, 10) {
				r.runners = append(r.runners[:i], r.runners[i+1:]...)
			}
		}
		w.WriteHeader(http.StatusOK)
	}
}

func exists(runners []*github.Runner, runner *github.Runner) bool {
	for _, r := range runners {
		if *r.Name == *runner.Name {
			return true
		}
	}
	return false
}
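Unlike the fixed-response handlers above, RunnersList is stateful: Add deduplicates by name via exists, and the remove route actually mutates the list. A short usage sketch using only the API shown above (github.Int64/github.String are go-github's pointer helpers):

```go
// Usage sketch for the stateful fake; not part of this compare view.
package fake_test

import (
	"github.com/google/go-github/v32/github"
	"github.com/summerwind/actions-runner-controller/github/fake"
)

func ExampleRunnersList() {
	list := fake.NewRunnersList()
	list.Add(&github.Runner{
		ID:     github.Int64(1),
		Name:   github.String("runner-1"),
		Status: github.String("online"),
	})

	srv := list.GetServer()
	defer srv.Close()

	// GET {srv.URL}/repos/foo/bar/actions/runners    -> lists runner-1
	// any {srv.URL}/repos/foo/bar/actions/runners/1  -> removes runner-1
}
```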


@@ -9,7 +9,7 @@ import (
"time" "time"
"github.com/bradleyfalzon/ghinstallation" "github.com/bradleyfalzon/ghinstallation"
"github.com/google/go-github/v31/github" "github.com/google/go-github/v32/github"
"golang.org/x/oauth2" "golang.org/x/oauth2"
) )
@@ -85,7 +85,7 @@ func (c *Client) GetRegistrationToken(ctx context.Context, org, repo, name strin
return rt, nil return rt, nil
} }
// RemoveRunner removes a runner with specified runner ID from repocitory. // RemoveRunner removes a runner with specified runner ID from repository.
func (c *Client) RemoveRunner(ctx context.Context, org, repo string, runnerID int64) error { func (c *Client) RemoveRunner(ctx context.Context, org, repo string, runnerID int64) error {
owner, repo, err := getOwnerAndRepo(org, repo) owner, repo, err := getOwnerAndRepo(org, repo)
@@ -121,7 +121,7 @@ func (c *Client) ListRunners(ctx context.Context, org, repo string) ([]*github.R
list, res, err := c.listRunners(ctx, owner, repo, &opts) list, res, err := c.listRunners(ctx, owner, repo, &opts)
if err != nil { if err != nil {
return runners, fmt.Errorf("failed to remove runner: %v", err) return runners, fmt.Errorf("failed to list runners: %v", err)
} }
runners = append(runners, list.Runners...) runners = append(runners, list.Runners...)


@@ -10,7 +10,7 @@ import (
"net/url" "net/url"
"reflect" "reflect"
"github.com/google/go-github/v31/github" "github.com/google/go-github/v32/github"
"github.com/google/go-querystring/query" "github.com/google/go-querystring/query"
) )


@@ -7,7 +7,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/google/go-github/v31/github" "github.com/google/go-github/v32/github"
"github.com/summerwind/actions-runner-controller/github/fake" "github.com/summerwind/actions-runner-controller/github/fake"
) )
@@ -41,9 +41,9 @@ func TestGetRegistrationToken(t *testing.T) {
token string token string
err bool err bool
}{ }{
{org: "test", repo: "valid", token: fake.RegistrationToken, err: false}, {org: "", repo: "test/valid", token: fake.RegistrationToken, err: false},
{org: "test", repo: "invalid", token: "", err: true}, {org: "", repo: "test/invalid", token: "", err: true},
{org: "test", repo: "error", token: "", err: true}, {org: "", repo: "test/error", token: "", err: true},
{org: "test", repo: "", token: fake.RegistrationToken, err: false}, {org: "test", repo: "", token: fake.RegistrationToken, err: false},
{org: "invalid", repo: "", token: "", err: true}, {org: "invalid", repo: "", token: "", err: true},
{org: "error", repo: "", token: "", err: true}, {org: "error", repo: "", token: "", err: true},
@@ -68,9 +68,9 @@ func TestListRunners(t *testing.T) {
length int length int
err bool err bool
}{ }{
{org: "test", repo: "valid", length: 2, err: false}, {org: "", repo: "test/valid", length: 2, err: false},
{org: "test", repo: "invalid", length: 0, err: true}, {org: "", repo: "test/invalid", length: 0, err: true},
{org: "test", repo: "error", length: 0, err: true}, {org: "", repo: "test/error", length: 0, err: true},
{org: "test", repo: "", length: 2, err: false}, {org: "test", repo: "", length: 2, err: false},
{org: "invalid", repo: "", length: 0, err: true}, {org: "invalid", repo: "", length: 0, err: true},
{org: "error", repo: "", length: 0, err: true}, {org: "error", repo: "", length: 0, err: true},
@@ -94,9 +94,9 @@ func TestRemoveRunner(t *testing.T) {
repo string repo string
err bool err bool
}{ }{
{org: "test", repo: "valid", err: false}, {org: "", repo: "test/valid", err: false},
{org: "test", repo: "invalid", err: true}, {org: "", repo: "test/invalid", err: true},
{org: "test", repo: "error", err: true}, {org: "", repo: "test/error", err: true},
{org: "test", repo: "", err: false}, {org: "test", repo: "", err: false},
{org: "invalid", repo: "", err: true}, {org: "invalid", repo: "", err: true},
{org: "error", repo: "", err: true}, {org: "error", repo: "", err: true},

go.mod

@@ -6,8 +6,9 @@ require (
 	github.com/bradleyfalzon/ghinstallation v1.1.1
 	github.com/davecgh/go-spew v1.1.1
 	github.com/go-logr/logr v0.1.0
-	github.com/google/go-github/v31 v31.0.0
+	github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04
 	github.com/google/go-querystring v1.0.0
+	github.com/gorilla/mux v1.8.0
 	github.com/onsi/ginkgo v1.8.0
 	github.com/onsi/gomega v1.5.0
 	github.com/stretchr/testify v1.4.0 // indirect

@@ -15,6 +16,6 @@ require (
 	k8s.io/api v0.0.0-20190918155943-95b840bb6a1f
 	k8s.io/apimachinery v0.0.0-20190913080033-27d36303b655
 	k8s.io/client-go v0.0.0-20190918160344-1fbdaa4c8d90
-	k8s.io/klog v0.4.0
+	k8s.io/utils v0.0.0-20190801114015-581e00157fb1
 	sigs.k8s.io/controller-runtime v0.4.0
 )

go.sum

@@ -116,11 +116,10 @@ github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
 github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
 github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
 github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-github v17.0.0+incompatible h1:N0LgJ1j65A7kfXrZnUDaYCs/Sf4rEjNlfyDHW9dolSY=
 github.com/google/go-github/v29 v29.0.2 h1:opYN6Wc7DOz7Ku3Oh4l7prmkOMwEcQxpFtxdU8N8Pts=
 github.com/google/go-github/v29 v29.0.2/go.mod h1:CHKiKKPHJ0REzfwc14QMklvtHwCveD0PxlMjLlzAM5E=
-github.com/google/go-github/v31 v31.0.0 h1:JJUxlP9lFK+ziXKimTCprajMApV1ecWD4NB6CCb0plo=
-github.com/google/go-github/v31 v31.0.0/go.mod h1:NQPZol8/1sMoWYGN2yaALIBytu17gAWfhbweiEed3pM=
+github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04 h1:wEYk2h/GwOhImcVjiTIceP88WxVbXw2F+ARYUQMEsfg=
+github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04/go.mod h1:rIEpZD9CTDQwDK9GDrtMTycQNA4JU3qBsCizh3q2WCI=
 github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk=
 github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
 github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=

@@ -136,6 +135,8 @@ github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsC
 github.com/googleapis/gnostic v0.3.1 h1:WeAefnSUHlBb0iJKwxFDZdbfGwkd7xRNuV+IpXMJhYk=
 github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU=
 github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
+github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
+github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
 github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
 github.com/gregjones/httpcache v0.0.0-20170728041850-787624de3eb7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
 github.com/grpc-ecosystem/go-grpc-middleware v0.0.0-20190222133341-cfaf5686ec79/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=

main.go

@@ -158,9 +158,10 @@ func main() {
 	}
 
 	runnerSetReconciler := &controllers.RunnerReplicaSetReconciler{
 		Client: mgr.GetClient(),
 		Log:    ctrl.Log.WithName("controllers").WithName("RunnerReplicaSet"),
 		Scheme: mgr.GetScheme(),
+		GitHubClient: ghClient,
 	}
 
 	if err = runnerSetReconciler.SetupWithManager(mgr); err != nil {

@@ -169,10 +170,9 @@ func main() {
 	}
 
 	runnerDeploymentReconciler := &controllers.RunnerDeploymentReconciler{
 		Client: mgr.GetClient(),
 		Log:    ctrl.Log.WithName("controllers").WithName("RunnerDeployment"),
 		Scheme: mgr.GetScheme(),
-		GitHubClient: ghClient,
 	}
 
 	if err = runnerDeploymentReconciler.SetupWithManager(mgr); err != nil {

@@ -180,6 +180,18 @@ func main() {
 		os.Exit(1)
 	}
 
+	horizontalRunnerAutoscaler := &controllers.HorizontalRunnerAutoscalerReconciler{
+		Client:       mgr.GetClient(),
+		Log:          ctrl.Log.WithName("controllers").WithName("HorizontalRunnerAutoscaler"),
+		Scheme:       mgr.GetScheme(),
+		GitHubClient: ghClient,
+	}
+
+	if err = horizontalRunnerAutoscaler.SetupWithManager(mgr); err != nil {
+		setupLog.Error(err, "unable to create controller", "controller", "HorizontalRunnerAutoscaler")
+		os.Exit(1)
+	}
+
 	if err = (&actionsv1alpha1.Runner{}).SetupWebhookWithManager(mgr); err != nil {
 		setupLog.Error(err, "unable to create webhook", "webhook", "Runner")
 		os.Exit(1)
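The new HorizontalRunnerAutoscaler reconciler is registered exactly like the existing ones. Its SetupWithManager body is not part of this compare view; it presumably follows the standard controller-runtime pattern, roughly:

```go
// Typical controller-runtime registration, shown for orientation only;
// the actual SetupWithManager is not visible in this diff.
func (r *HorizontalRunnerAutoscalerReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&actionsv1alpha1.HorizontalRunnerAutoscaler{}).
		Complete(r)
}
```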


@@ -1,7 +1,8 @@
 FROM ubuntu:18.04
 
-ARG RUNNER_VERSION
-ARG DOCKER_VERSION
+ARG TARGETPLATFORM
+ARG RUNNER_VERSION=2.272.0
+ARG DOCKER_VERSION=19.03.12
 
 ENV DEBIAN_FRONTEND=noninteractive
 
 RUN apt update -y \

@@ -9,52 +10,62 @@ RUN apt update -y \
   && add-apt-repository -y ppa:git-core/ppa \
   && apt update -y \
   && apt install -y --no-install-recommends \
      build-essential \
      curl \
      ca-certificates \
      dnsutils \
      ftp \
      git \
      iproute2 \
      iputils-ping \
      jq \
      libunwind8 \
      locales \
      netcat \
      openssh-client \
      parallel \
      rsync \
      shellcheck \
      sudo \
      telnet \
      time \
      tzdata \
      unzip \
      upx \
      wget \
      zip \
      zstd \
   && rm -rf /var/lib/apt/lists/*
 
-RUN curl -L -o docker.tgz https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz \
+RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
+  && curl -L -o /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_${ARCH} \
+  && chmod +x /usr/local/bin/dumb-init
+
+# Docker download supports arm64 as aarch64 & amd64 as x86_64
+RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
+  && if [ "$ARCH" = "arm64" ]; then export ARCH=aarch64 ; fi \
+  && if [ "$ARCH" = "amd64" ]; then export ARCH=x86_64 ; fi \
+  && curl -L -o docker.tgz https://download.docker.com/linux/static/stable/${ARCH}/docker-${DOCKER_VERSION}.tgz \
   && tar zxvf docker.tgz \
   && install -o root -g root -m 755 docker/docker /usr/local/bin/docker \
   && rm -rf docker docker.tgz \
-  && curl -L -o /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_amd64 \
-  && chmod +x /usr/local/bin/dumb-init \
   && adduser --disabled-password --gecos "" --uid 1000 runner \
   && usermod -aG sudo runner \
   && echo "%sudo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers
 
-RUN mkdir -p /runner \
+# Runner download supports amd64 as x64
+RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
+  && if [ "$ARCH" = "amd64" ]; then export ARCH=x64 ; fi \
+  && mkdir -p /runner \
   && cd /runner \
-  && curl -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
+  && curl -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${ARCH}-${RUNNER_VERSION}.tar.gz \
   && tar xzf ./runner.tar.gz \
   && rm runner.tar.gz \
   && ./bin/installdependencies.sh \
   && rm -rf /var/lib/apt/lists/*
 
 COPY entrypoint.sh /runner
+COPY patched /runner/patched
 
 USER runner:runner
 
 ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]


@@ -0,0 +1,99 @@
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# Dev + DinD dependencies
RUN apt update \
&& apt install -y software-properties-common \
&& add-apt-repository -y ppa:git-core/ppa \
&& apt install -y \
build-essential \
curl \
ca-certificates \
dnsutils \
ftp \
git \
iproute2 \
iptables \
iputils-ping \
jq \
libunwind8 \
locales \
netcat \
openssh-client \
parallel \
rsync \
shellcheck \
sudo \
supervisor \
telnet \
time \
tzdata \
unzip \
upx \
wget \
zip \
zstd \
&& rm -rf /var/lib/apt/list/*
# Runner user
RUN adduser --disabled-password --gecos "" --uid 1000 runner \
&& groupadd docker \
&& usermod -aG sudo runner \
&& usermod -aG docker runner \
&& echo "%sudo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers
ARG TARGETPLATFORM
ARG RUNNER_VERSION=2.272.0
ARG DOCKER_CHANNEL=stable
ARG DOCKER_VERSION=19.03.13
ARG DEBUG=false
# Docker installation
RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& if [ "$ARCH" = "arm64" ]; then export ARCH=aarch64 ; fi \
&& if [ "$ARCH" = "amd64" ]; then export ARCH=x86_64 ; fi \
&& if ! curl -L -o docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/${ARCH}/docker-${DOCKER_VERSION}.tgz"; then \
echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${ARCH}'"; \
exit 1; \
fi; \
tar --extract \
--file docker.tgz \
--strip-components 1 \
--directory /usr/local/bin/ \
; \
rm docker.tgz; \
dockerd --version; \
docker --version
# Runner download supports amd64 as x64
RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& if [ "$ARCH" = "amd64" ]; then export ARCH=x64 ; fi \
&& mkdir -p /runner \
&& cd /runner \
&& curl -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${ARCH}-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./runner.tar.gz \
&& rm runner.tar.gz \
&& ./bin/installdependencies.sh \
&& rm -rf /var/lib/apt/lists/*
COPY modprobe startup.sh /usr/local/bin/
COPY supervisor/ /etc/supervisor/conf.d/
COPY logger.sh /opt/bash-utils/logger.sh
COPY entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/startup.sh /usr/local/bin/entrypoint.sh /usr/local/bin/modprobe
RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
&& curl -L -o /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_${ARCH} \
&& chmod +x /usr/local/bin/dumb-init
VOLUME /var/lib/docker
COPY patched /runner/patched
# No group definition, as that makes it harder to run docker.
USER runner
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["startup.sh"]


@@ -1,11 +1,52 @@
 NAME ?= summerwind/actions-runner
+DIND_RUNNER_NAME ?= ${NAME}-dind
+TAG ?= latest
-RUNNER_VERSION ?= 2.267.0
-DOCKER_VERSION ?= 19.03.8
+RUNNER_VERSION ?= 2.273.5
+DOCKER_VERSION ?= 19.03.12
+
+# default list of platforms for which multiarch image is built
+ifeq (${PLATFORMS}, )
+	export PLATFORMS="linux/amd64,linux/arm64"
+endif
+
+# if IMG_RESULT is unspecified, by default the image will be pushed to registry
+ifeq (${IMG_RESULT}, load)
+	export PUSH_ARG="--load"
+	# if load is specified, image will be built only for the build machine architecture.
+	export PLATFORMS="local"
+else ifeq (${IMG_RESULT}, cache)
+	# if cache is specified, image will only be available in the build cache, it won't be pushed or loaded
+	# therefore no PUSH_ARG will be specified
+else
+	export PUSH_ARG="--push"
+endif
 
 docker-build:
-	docker build --build-arg RUNNER_VERSION=${RUNNER_VERSION} --build-arg DOCKER_VERSION=${DOCKER_VERSION} -t ${NAME}:latest -t ${NAME}:v${RUNNER_VERSION} .
+	docker build --build-arg RUNNER_VERSION=${RUNNER_VERSION} --build-arg DOCKER_VERSION=${DOCKER_VERSION} -t ${NAME}:${TAG} -t ${NAME}:v${RUNNER_VERSION} .
+	docker build --build-arg RUNNER_VERSION=${RUNNER_VERSION} --build-arg DOCKER_VERSION=${DOCKER_VERSION} -t ${DIND_RUNNER_NAME}:${TAG} -t ${DIND_RUNNER_NAME}:v${RUNNER_VERSION} -f Dockerfile.dindrunner .
 
 docker-push:
-	docker push ${NAME}:latest
+	docker push ${NAME}:${TAG}
 	docker push ${NAME}:v${RUNNER_VERSION}
+	docker push ${DIND_RUNNER_NAME}:${TAG}
+	docker push ${DIND_RUNNER_NAME}:v${RUNNER_VERSION}
+
+docker-buildx:
+	export DOCKER_CLI_EXPERIMENTAL=enabled
+	@if ! docker buildx ls | grep -q container-builder; then\
+		docker buildx create --platform ${PLATFORMS} --name container-builder --use;\
+	fi
+	docker buildx build --platform ${PLATFORMS} \
+		--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
+		--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
+		-t "${NAME}:latest" \
+		-f Dockerfile \
+		. ${PUSH_ARG}
+	docker buildx build --platform ${PLATFORMS} \
+		--build-arg RUNNER_VERSION=${RUNNER_VERSION} \
+		--build-arg DOCKER_VERSION=${DOCKER_VERSION} \
+		-t "${DIND_RUNNER_NAME}:latest" \
+		-f Dockerfile.dindrunner \
+		. ${PUSH_ARG}


@@ -28,5 +28,11 @@ fi
 cd /runner
 ./config.sh --unattended --replace --name "${RUNNER_NAME}" --url "https://github.com/${ATTACH}" --token "${RUNNER_TOKEN}" ${LABEL_ARG}
 
+for f in runsvc.sh RunnerService.js; do
+  diff {bin,patched}/${f} || :
+  sudo mv bin/${f}{,.bak}
+  sudo mv {patched,bin}/${f}
+done
+
 unset RUNNER_NAME RUNNER_REPO RUNNER_TOKEN
 
-exec ./run.sh --once
+exec ./bin/runsvc.sh --once

runner/logger.sh (new file)

@@ -0,0 +1,24 @@
#!/bin/sh
# Logger from this post http://www.cubicrace.com/2016/03/log-tracing-mechnism-for-shell-scripts.html
function INFO(){
local function_name="${FUNCNAME[1]}"
local msg="$1"
timeAndDate=`date`
echo "[$timeAndDate] [INFO] [${0}] $msg"
}
function DEBUG(){
local function_name="${FUNCNAME[1]}"
local msg="$1"
timeAndDate=`date`
echo "[$timeAndDate] [DEBUG] [${0}] $msg"
}
function ERROR(){
local function_name="${FUNCNAME[1]}"
local msg="$1"
timeAndDate=`date`
echo "[$timeAndDate] [ERROR] $msg"
}

runner/modprobe (new file)

@@ -0,0 +1,20 @@
#!/bin/sh
set -eu
# "modprobe" without modprobe
# https://twitter.com/lucabruno/status/902934379835662336
# this isn't 100% fool-proof, but it'll have a much higher success rate than simply using the "real" modprobe
# Docker often uses "modprobe -va foo bar baz"
# so we ignore modules that start with "-"
for module; do
if [ "${module#-}" = "$module" ]; then
ip link show "$module" || true
lsmod | grep "$module" || true
fi
done
# remove /usr/local/... from PATH so we can exec the real modprobe as a last resort
export PATH='/usr/sbin:/usr/bin:/sbin:/bin'
exec modprobe "$@"

runner/patched/RunnerService.js (new executable file)

@@ -0,0 +1,91 @@
#!/usr/bin/env node
// Copyright (c) GitHub. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
var childProcess = require("child_process");
var path = require("path")
var supported = ['linux', 'darwin']
if (supported.indexOf(process.platform) == -1) {
console.log('Unsupported platform: ' + process.platform);
console.log('Supported platforms are: ' + supported.toString());
process.exit(1);
}
var stopping = false;
var listener = null;
var runService = function() {
var listenerExePath = path.join(__dirname, '../bin/Runner.Listener');
var interactive = process.argv[2] === "interactive";
if(!stopping) {
try {
if (interactive) {
console.log('Starting Runner listener interactively');
listener = childProcess.spawn(listenerExePath, ['run'].concat(process.argv.slice(3)), { env: process.env });
} else {
console.log('Starting Runner listener with startup type: service');
listener = childProcess.spawn(listenerExePath, ['run', '--startuptype', 'service'].concat(process.argv.slice(2)), { env: process.env });
}
console.log('Started listener process');
listener.stdout.on('data', (data) => {
process.stdout.write(data.toString('utf8'));
});
listener.stderr.on('data', (data) => {
process.stdout.write(data.toString('utf8'));
});
listener.on('close', (code) => {
console.log(`Runner listener exited with error code ${code}`);
if (code === 0) {
console.log('Runner listener exit with 0 return code, stop the service, no retry needed.');
stopping = true;
} else if (code === 1) {
console.log('Runner listener exit with terminated error, stop the service, no retry needed.');
stopping = true;
} else if (code === 2) {
console.log('Runner listener exit with retryable error, re-launch runner in 5 seconds.');
} else if (code === 3) {
console.log('Runner listener exit because of updating, re-launch runner in 5 seconds.');
} else {
console.log('Runner listener exit with undefined return code, re-launch runner in 5 seconds.');
}
if(!stopping) {
setTimeout(runService, 5000);
}
});
} catch(ex) {
console.log(ex);
}
}
}
runService();
console.log('Started running service');
var gracefulShutdown = function(code) {
console.log('Shutting down runner listener');
stopping = true;
if (listener) {
console.log('Sending SIGINT to runner listener to stop');
listener.kill('SIGINT');
// TODO wait for 30 seconds and send a SIGKILL
}
}
process.on('SIGINT', () => {
gracefulShutdown(0);
});
process.on('SIGTERM', () => {
gracefulShutdown(0);
});

runner/patched/runsvc.sh (new executable file)

@@ -0,0 +1,20 @@
#!/bin/bash
# convert SIGTERM signal to SIGINT
# for more info on how to propagate SIGTERM to a child process see: http://veithen.github.io/2014/11/16/sigterm-propagation.html
trap 'kill -INT $PID' TERM INT
if [ -f ".path" ]; then
# configure
export PATH=`cat .path`
echo ".path=${PATH}"
fi
# insert anything to setup env when running as a service
# run the host process which keep the listener alive
./externals/node12/bin/node ./bin/RunnerService.js $* &
PID=$!
wait $PID
trap - TERM INT
wait $PID

runner/startup.sh (new file)

@@ -0,0 +1,37 @@
#!/bin/bash
source /opt/bash-utils/logger.sh
function wait_for_process () {
local max_time_wait=30
local process_name="$1"
local waited_sec=0
while ! pgrep "$process_name" >/dev/null && ((waited_sec < max_time_wait)); do
INFO "Process $process_name is not running yet. Retrying in 1 seconds"
INFO "Waited $waited_sec seconds of $max_time_wait seconds"
sleep 1
((waited_sec=waited_sec+1))
if ((waited_sec >= max_time_wait)); then
return 1
fi
done
return 0
}
INFO "Starting supervisor"
sudo /usr/bin/supervisord -n >> /dev/null 2>&1 &
INFO "Waiting for processes to be running"
processes=(dockerd)
for process in "${processes[@]}"; do
wait_for_process "$process"
if [ $? -ne 0 ]; then
ERROR "$process is not running after max time"
exit 1
else
INFO "$process is running"
fi
done
# Wait processes to be running
entrypoint.sh


@@ -0,0 +1,6 @@
[program:dockerd]
command=/usr/local/bin/dockerd
autostart=true
autorestart=true
stderr_logfile=/var/log/dockerd.err.log
stdout_logfile=/var/log/dockerd.out.log