Commit Graph

13 Commits

Yusuke Kuoka
e00b3b9714 Make development cycle faster (#508)
Improves Makefile, acceptance/deploy.sh, acceptance/testdata/runnerdeploy.yaml, and the documentation to help developers and contributors.
2021-05-03 13:03:17 +09:00
Yusuke Kuoka
0901456320 Update README with more detailed test instructions (#503)
- You can now use `make acceptance/run` to run only a specific acceptance test case
- Add a note for Ubuntu 20.04 users about snap-provided docker
- Add instructions for running Ginkgo tests
- Extract acceptance/load from acceptance/kind
- Make `acceptance/pull` no longer depend on `docker-build`, so that you can run `make docker-build acceptance/load` for a faster image reload
2021-05-02 16:31:07 +09:00
Yusuke Kuoka
dbd7b486d2 feat: Support for scaling from/to zero (#465)
This is an attempt to support scaling from/to zero.

The basic idea is that we create a one-off "registration-only" runner pod when a RunnerReplicaSet is scaled to zero, so that there is one "offline" runner, which lets GitHub Actions queue jobs instead of discarding them.

GitHub Actions seems to immediately throw away a new job when there are no runners at all. Generally, having runners in any status, `busy`, `idle`, or `offline`, prevents GitHub Actions from failing jobs. But retaining `busy` or `idle` runners means that we need to keep runner pods running, which conflicts with our desire to scale to/from zero, hence we retain `offline` runners.

In this change, I enhanced the runnerreplicaset controller to create a registration-only runner at the very beginning of its reconciliation logic, but only when the runnerreplicaset is scaled to zero. The runner controller creates the registration-only runner pod, waits for it to become "offline", and then removes the runner pod. The runner on GitHub stays `offline` until the runner resource on K8s is deleted. As we remove the registration-only runner pod as soon as it registers, this doesn't block cluster-autoscaler.

Related to #447
2021-05-02 16:11:36 +09:00
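
For illustration, a minimal sketch of how scaling to zero might be configured, assuming the `HorizontalRunnerAutoscaler` resource and field names used by this project; all names below are hypothetical.

```yaml
# Illustrative sketch only; adjust apiVersion/fields to the release you run.
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runnerdeploy-autoscaler   # hypothetical name
spec:
  scaleTargetRef:
    name: example-runnerdeploy            # the RunnerDeployment to scale
  minReplicas: 0                          # allow scaling all the way down to zero
  maxReplicas: 3
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - myorg/myrepo                        # hypothetical repository
```
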
ToMe25
ba175148c8 Locally build runner image instead of pulling it (#473)
* Fix acceptance helm test not using newly built controller image

* Locally build runner image instead of pulling it

* Revert runner controller image pull policy to always

and add a line to the test deployment to use IfNotPresent

* Change runner repository from summerwind/action-runner to the owner of actions-runner-controller.

Also fix some Makefile formatting.

* Undo renaming acceptance/pull to docker-pull

* Some env var cleanup

Rename USERNAME to DOCKER_USER (it is still used for GitHub too)
Add RUNNER_NAME var (defaults to $DOCKER_USER/actions-runner)
Add TEST_REPO (defaults to $DOCKER_USER/actions-runner-controller)
2021-05-01 15:10:57 +09:00
ToMe25
a612b38f9b Cache docker images in acceptance test (#463)
* Cache docker images locally

Cache the dind, runner, and kube-rbac-proxy docker images on the host and copy them onto the kind node instead of downloading them to the node directly.

* Also cache cert-manager docker images
2021-04-18 09:44:59 +09:00
ToMe25
c26fb5ad5f Make acceptance use local docker image (#448)
Load the local docker image into the kind cluster instead of pushing it to Docker Hub and pulling it from there.
2021-04-17 17:13:47 +09:00
callum-tait-pbx
f2680b2f2d Bumping runner to Ubuntu 20.04 (#438)
Images for `actions-runner:v${VERSION}` and `actions-runner:latest` tags are upgraded to Ubuntu 20.04.

If you would prefer not to have Ubuntu upgraded in the runner image in the future, migrate to the new tags suffixed with `-ubuntu-20.04`, like `actions-runner:v${VERSION}-ubuntu-20.04`.

We also keep publishing the existing Ubuntu 18.04 images under new `actions-runner:v${VERSION}-ubuntu-18.04` tags. Please use them if it turns out that you have workflows dependent on Ubuntu 18.04.

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-04-17 17:02:03 +09:00
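
As a sketch, pinning the runner OS via the tags above might look like this in a RunnerDeployment; the `summerwind/actions-runner` image prefix and the resource layout are assumptions, and `${VERSION}` stands in for a concrete release.

```yaml
# Illustrative sketch: pin the image tag so future base-OS bumps of the
# default tags do not affect existing workflows.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy                 # hypothetical name
spec:
  template:
    spec:
      repository: myorg/myrepo               # hypothetical repository
      image: "summerwind/actions-runner:v${VERSION}-ubuntu-18.04"  # stay on 18.04
```
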
Yusuke Kuoka
156e2c1987 Fix MTU configuration for dockerd (#421)
Resolves #393
2021-03-31 09:29:21 +09:00
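
A hedged sketch of what the MTU configuration might look like on the user side, assuming a `dockerMTU` field on the runner spec; the field and all names here are assumptions rather than something stated in the commit message.

```yaml
# Illustrative sketch: set the MTU used by the dockerd sidecar to match
# the cluster network.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy       # hypothetical name
spec:
  template:
    spec:
      repository: myorg/myrepo     # hypothetical repository
      dockerMTU: 1400              # hypothetical value; match your network's MTU
```
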
Yusuke Kuoka
dfffd3fb62 feat: EKS IAM Roles for Service Accounts for Runner Pods (#226)
One of the pod recreation conditions has been modified to use a hash of the runner spec, so that the controller does not keep recreating pods that are mutated by admission webhooks. This naturally allows us, for example, to use IRSA on EKS, which requires its admission webhook to mutate the runner pod with additional IRSA-related volumes, volume mounts, and env vars.

Resolves #200
2020-12-08 17:56:06 +09:00
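
For example, wiring an IRSA-annotated service account into runner pods might look roughly like the sketch below. The `eks.amazonaws.com/role-arn` annotation is the standard EKS one; the `serviceAccountName` field and every name/ARN here are assumptions for illustration.

```yaml
# Illustrative sketch: an IRSA-enabled service account referenced by runner pods.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: runner-irsa                 # hypothetical name
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/my-runner-role"  # hypothetical role
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy        # hypothetical name
spec:
  template:
    spec:
      repository: myorg/myrepo      # hypothetical repository
      serviceAccountName: runner-irsa
```
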
Yusuke Kuoka
1658f51fcb Make Runner{Deployment,ReplicaSet} replicas actually optional (#186)
If omitted, it properly defaults to 1.

Fixes #64
2020-11-14 22:06:33 +09:00
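
In other words, a RunnerDeployment sketched like the following, with `replicas` omitted, now implicitly gets one replica; the names are hypothetical.

```yaml
# replicas is intentionally omitted; the controller defaults it to 1.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy     # hypothetical name
spec:
  template:
    spec:
      repository: myorg/myrepo   # hypothetical repository
```
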
Yusuke Kuoka
b63879f59f Ensure the chart is passing acceptance tests
2020-11-14 21:58:16 +09:00
Yusuke Kuoka
6a4c29d30e Set ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm to run acceptance tests with chart
2020-11-14 20:31:37 +09:00
Yusuke Kuoka
bbfe03f02b Add acceptance test (#168)
To make it easy to verify that the controller works before submitting/merging PRs and releasing a new version of the controller.
2020-11-14 20:07:14 +09:00