Compare commits


13 Commits

Author SHA1 Message Date
Yusuke Kuoka
8d3a83b07a Add CheckRun.Names scale-up trigger configuration (#390)
This allows you to trigger autoscaling based on check_run names (i.e. Actions job names). If you want to use a different scale amount for a specific job, or want to scale only on a specific job, try this.
2021-03-14 10:21:42 +09:00
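For orientation, a minimal `HorizontalRunnerAutoscaler` sketch using the new `checkRun.names` field could look like the following; the `build-*` pattern, scale amount, and reservation duration are illustrative placeholders:

```yaml
kind: HorizontalRunnerAutoscaler
spec:
  scaleTargetRef:
    name: myrunners
  scaleUpTriggers:
  - githubEvent:
      checkRun:
        types: ["created"]
        status: "queued"
        # Only check_run events whose name matches one of these
        # GitHub Actions glob patterns trigger this scale-up.
        names: ["build-*"]
    amount: 1          # illustrative scale amount
    duration: "5m"     # illustrative capacity reservation duration
```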
callum-tait-pbx
a6270b44d5 docs: fix typos and add PR link (#379)
* docs: fix typos and add PR link

* docs: changes based on feedback

* docs: fixing numbers in list

* docs: grammar

* docs: better wording
2021-03-12 08:52:34 +09:00
Brandon Kimbrough
2273b198a1 Add ability to set the MTU size of the docker in docker container (#385)
* adding ability to set docker in docker MTU size

* safeguards to only set MTU env var if it is set
2021-03-12 08:44:49 +09:00
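As a sketch of where the new field lives (the repository and MTU value are placeholders), `dockerMTU` sits on the runner spec alongside the other docker options:

```yaml
kind: RunnerDeployment
spec:
  template:
    spec:
      repository: example/myrepo   # placeholder repository
      # Applied to the dind sidecar (or to dockerd inside the runner
      # container when dockerdWithinRunnerContainer is true).
      dockerMTU: 1400              # placeholder MTU value
```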
Yusuke Kuoka
3d62e73f8c Fix PercentageRunnersBusy scaling not working (#386)
PercentageRunnersBusy seems to have regressed since #355 because RunnerDeployment.Spec.Selector is empty by default and the HRA controller was using that empty selector to query runners, which somehow returned 0 runners. This fixes that by using the newly added automatic `runner-deployment-name` label as the default runner label and selector, which avoids querying with an empty selector.

Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-795200205
2021-03-11 20:16:36 +09:00
Yusuke Kuoka
f5c639ae28 Make webhook-based autoscaler github event logs more operator-friendly (#384)
Adds fields like `pullRequest.base.ref` and `checkRun.status` that are useful for verifying the autoscaling behaviour without browsing GitHub.
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-794175312
2021-03-10 09:40:44 +09:00
Yusuke Kuoka
81016154c0 GITHUB_APP_PRIVATE_KEY can now be the content of the key (#383)
Resolves #382
2021-03-10 09:37:15 +09:00
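A hedged sketch of what this change enables for the controller deployment; the secret name and key below are assumptions, not taken from this commit:

```yaml
env:
# Previously this had to be a path to a mounted key file; it can now also be
# the PEM content of the key itself, e.g. sourced directly from a secret.
- name: GITHUB_APP_PRIVATE_KEY
  valueFrom:
    secretKeyRef:
      name: controller-manager      # assumed secret name
      key: github_app_private_key   # assumed key name
```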
Yusuke Kuoka
728829be7b Fix panic on scaling organizational runners (#381)
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793287133
2021-03-09 15:03:47 +09:00
Yusuke Kuoka
c0b8f9d483 Merge pull request #380 from summerwind/ns-flag
Use --watch-namespace flag to restrict the namespace to watch
2021-03-09 15:03:32 +09:00
Yusuke Kuoka
ced1c2321a Fix chart-testing failing due to conflict between authSecret and dummySecret 2021-03-09 14:54:55 +09:00
Yusuke Kuoka
1b8a656051 Use --watch-namespace flag to restrict the namespace to watch
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793172995
2021-03-09 09:46:21 +09:00
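With the chart changes below, restricting the controller to a single namespace becomes a matter of Helm values along these lines (the namespace value is illustrative):

```yaml
scope:
  # If true, the controller only watches custom resources in one namespace
  singleNamespace: true
  # Leave empty to default to the namespace the controller is deployed to
  watchNamespace: "actions-runners"   # illustrative namespace
```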
Rob Whitby
1753fa3530 handle GET requests in webhook hra (#378) 2021-03-09 08:46:27 +09:00
Johannes Nicolai
8c0f3dfc79 Set runner group for runners with enterprise scope (#376)
* so far, runner group parameter is only set for runners with org scope
* now set group for enterprise runners as well
* removed null check for org scope as either org or enterprise will be set
2021-03-08 09:18:23 +09:00
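For context, a hedged sketch of a runner spec this change affects; the values are placeholders and the `enterprise` field name is assumed from the runner spec rather than shown in this diff:

```yaml
kind: RunnerDeployment
spec:
  template:
    spec:
      enterprise: my-enterprise   # assumed enterprise slug field
      # The group is now passed to the registration script (--runnergroup)
      # for enterprise-scoped runners as well, not just org-scoped ones.
      group: Default              # placeholder runner group
```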
Rob Whitby
dbda292f54 fix typo in examples (#373) 2021-03-08 09:18:10 +09:00
31 changed files with 598 additions and 52 deletions

View File

@@ -302,12 +302,12 @@ The scale out performance is controlled via the manager containers startup `--sy
**Benefits of this metric**
1. Supports named repositories allowing you to restrict the runner to a specified set of repositories server side.
2. Scales quickly (within the bounds of the syncPeriod) as it will spin up the number of runners based on the depth of the workflow queue
2. Scales the runner count based on the actual queue depth of the jobs, meaning scaling is closer to 1:1 between runners and queued jobs.
3. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels).
**Drawbacks of this metric**
1. Repositories must be named within the scaling metric; maintaining a list of repositories may not be viable in larger environments or self-serve environments.
2. May not scale quick enough for some users needs
2. May not scale quickly enough for some users' needs. This metric is pull based, so the queue depth is polled as configured by the sync period; as a result, scaling performance is bound by this sync period, meaning there is a lag before scaling activity.
3. Relatively large amounts of API requests are required to maintain this metric; you may run into API rate limiting issues depending on the size of your environment and how aggressive your sync period configuration is
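As a hedged minimal sketch of this metric (replica bounds and repository name are placeholders):

```yaml
kind: HorizontalRunnerAutoscaler
spec:
  scaleTargetRef:
    name: myrunners
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - myorg/myrepo   # placeholder; this metric requires named repositories
```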
@@ -355,14 +355,14 @@ The `HorizontalRunnerAutoscaler` will pole GitHub based on the configuration syn
**Helm Config :** `syncPeriod`
**Benefits of this metric**
1. Allows for multiple controllers to be deployed as each controller deployed is responsible for scaling their own runner pods on a per namespace basis.
2. Supports named repositories server side the same as the `TotalNumberOfQueuedAndInProgressWorkflowRuns` metric [#313](https://github.com/summerwind/actions-runner-controller/pull/313)
3. Supports github organisation wide scaling without maintaining an explicit list of repositories, this is especially useful for those that are working at a larger scale. [#223](https://github.com/summerwind/actions-runner-controller/pull/223)
4. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels)
5. Supports scaling runner count on both a percentage increase / descrease basis as well as on a fixed runner count basis [#223](https://github.com/summerwind/actions-runner-controller/pull/223) [#315](https://github.com/summerwind/actions-runner-controller/pull/315)
1. Supports named repositories server side the same as the `TotalNumberOfQueuedAndInProgressWorkflowRuns` metric [#313](https://github.com/summerwind/actions-runner-controller/pull/313)
2. Supports GitHub organisation wide scaling without maintaining an explicit list of repositories; this is especially useful for those working at a larger scale. [#223](https://github.com/summerwind/actions-runner-controller/pull/223)
3. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels)
4. Supports scaling desired runner count on both a percentage increase / decrease basis as well as on a fixed increase / decrease count basis [#223](https://github.com/summerwind/actions-runner-controller/pull/223) [#315](https://github.com/summerwind/actions-runner-controller/pull/315)
**Drawbacks of this metric**
1. May not scale quick enough for some users needs as we are scaling up and down based on indicative information rather than a direct count of the workflow queue depth
1. May not scale quickly enough for some users' needs. This metric is pull based, so the number of busy runners is polled as configured by the sync period; as a result, scaling performance is bound by this sync period, meaning there is a lag before scaling activity.
2. We are scaling up and down based on indicative information rather than a count of the actual number of queued jobs, so the desired runner count is likely to under-provision or over-provision runners relative to the actual job queue depth; this may or may not be a problem for you.
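As a hedged minimal sketch of this metric (the threshold and factor field names and values are assumptions for illustration):

```yaml
kind: HorizontalRunnerAutoscaler
spec:
  scaleTargetRef:
    name: myrunners
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'    # scale up when more than 75% of runners are busy
    scaleDownThreshold: '0.25'
    scaleUpFactor: '2'          # multiply the desired count when scaling up
    scaleDownFactor: '0.5'
```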
Examples of each scaling type implemented with a `RunnerDeployment` backed by a `HorizontalRunnerAutoscaler`:
@@ -433,7 +433,7 @@ kind: HorizontalRunnerAutoscaler
spec:
scaleTargetRef:
name: myrunners
scaleUpTrigggers:
scaleUpTriggers:
- githubEvent:
checkRun:
types: ["created"]
@@ -492,7 +492,7 @@ kind: HorizontalRunnerAutoscaler
spec:
scaleTargetRef:
name: myrunners
scaleUpTrigggers:
scaleUpTriggers:
- githubEvent:
checkRun:
types: ["created"]
@@ -514,7 +514,7 @@ kind: HorizontalRunnerAutoscaler
spec:
scaleTargetRef:
name: myrunners
scaleUpTrigggers:
scaleUpTriggers:
- githubEvent:
pullRequest:
types: ["synchronize"]

View File

@@ -72,6 +72,12 @@ type GitHubEventScaleUpTriggerSpec struct {
type CheckRunSpec struct {
Types []string `json:"types,omitempty"`
Status string `json:"status,omitempty"`
// Names is a list of GitHub Actions glob patterns.
// Any check_run event whose name matches one of the patterns in the list can trigger autoscaling.
// Note that the check_run name seems to equal the job name you've defined in your actions workflow yaml file.
// So it is very likely that you can utilize this to trigger autoscaling depending on the job.
Names []string `json:"names,omitempty"`
}
// https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request

View File

@@ -92,6 +92,8 @@ type RunnerSpec struct {
DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
// +optional
DockerEnabled *bool `json:"dockerEnabled,omitempty"`
// +optional
DockerMTU *int64 `json:"dockerMTU,omitempty"`
}
// ValidateRepository validates repository field.

View File

@@ -66,6 +66,11 @@ func (in *CheckRunSpec) DeepCopyInto(out *CheckRunSpec) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Names != nil {
in, out := &in.Names, &out.Names
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CheckRunSpec.
@@ -689,6 +694,11 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
*out = new(bool)
**out = **in
}
if in.DockerMTU != nil {
in, out := &in.DockerMTU, &out.DockerMTU
*out = new(int64)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSpec.

View File

@@ -15,7 +15,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.0
version: 0.9.0
home: https://github.com/summerwind/actions-runner-controller

View File

@@ -22,6 +22,9 @@ resources:
cpu: 100m
memory: 128Mi
authSecret:
create: false
# Set the following to true to create a dummy secret, allowing the manager pod to start
# This is only useful in CI
createDummySecret: true

View File

@@ -148,6 +148,17 @@ spec:
checkRun:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
properties:
names:
description: Names is a list of GitHub Actions glob patterns.
Any check_run event whose name matches one of the patterns
in the list can trigger autoscaling. Note that the check_run
name seems to equal the job name you've defined in
your actions workflow yaml file. So it is very likely
that you can utilize this to trigger autoscaling depending
on the job.
items:
type: string
type: array
status:
type: string
types:

View File

@@ -433,6 +433,9 @@ spec:
type: array
dockerEnabled:
type: boolean
dockerMTU:
format: int64
type: integer
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:

View File

@@ -433,6 +433,9 @@ spec:
type: array
dockerEnabled:
type: boolean
dockerMTU:
format: int64
type: integer
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:

View File

@@ -398,6 +398,9 @@ spec:
type: array
dockerEnabled:
type: boolean
dockerMTU:
format: int64
type: integer
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:

View File

@@ -35,6 +35,9 @@ spec:
- "--enable-leader-election"
- "--sync-period={{ .Values.syncPeriod }}"
- "--docker-image={{ .Values.image.dindSidecarRepositoryAndTag }}"
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
command:
- "/manager"
env:

View File

@@ -100,6 +100,13 @@ env:
# https_proxy: "proxy.com:8080"
# no_proxy: ""
scope:
# If true, the controller will only watch custom resources in a single namespace
singleNamespace: false
# If `scope.singleNamespace=true`, the controller will only watch custom resources in this namespace
# The default value is "", which means the namespace of the controller
watchNamespace: ""
githubWebhookServer:
enabled: false
labels: {}

View File

@@ -110,7 +110,7 @@ func main() {
Recorder: nil,
Scheme: mgr.GetScheme(),
SecretKeyBytes: []byte(webhookSecretToken),
WatchNamespace: watchNamespace,
Namespace: watchNamespace,
}
if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil {

View File

@@ -148,6 +148,17 @@ spec:
checkRun:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
properties:
names:
description: Names is a list of GitHub Actions glob patterns.
Any check_run event whose name matches one of the patterns
in the list can trigger autoscaling. Note that the check_run
name seems to equal the job name you've defined in
your actions workflow yaml file. So it is very likely
that you can utilize this to trigger autoscaling depending
on the job.
items:
type: string
type: array
status:
type: string
types:

View File

@@ -433,6 +433,9 @@ spec:
type: array
dockerEnabled:
type: boolean
dockerMTU:
format: int64
type: integer
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:

View File

@@ -433,6 +433,9 @@ spec:
type: array
dockerEnabled:
type: boolean
dockerMTU:
format: int64
type: integer
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:

View File

@@ -398,6 +398,9 @@ spec:
type: array
dockerEnabled:
type: boolean
dockerMTU:
format: int64
type: integer
dockerdContainerResources:
description: ResourceRequirements describes the compute resource requirements.
properties:

View File

@@ -91,6 +91,13 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByQueuedAndInPro
return nil, fmt.Errorf("asserting runner deployment spec to detect bug: spec.template.organization should not be empty on this code path")
}
// In case it's an organizational runners deployment without any scaling metrics defined,
// we assume that the desired replicas should always be `minReplicas + capacityReservedThroughWebhook`.
// See https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793372693
if len(metrics) == 0 {
return hra.Spec.MinReplicas, nil
}
if len(metrics[0].RepositoryNames) == 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment")
}
@@ -259,17 +266,26 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunn
scaleDownFactor = sdf
}
selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
// return the list of runners in namespace. Horizontal Runner Autoscaler should only be responsible for scaling resources in its own ns.
var runnerList v1alpha1.RunnerList
var opts []client.ListOption
opts = append(opts, client.InNamespace(rd.Namespace))
selector, err := metav1.LabelSelectorAsSelector(getSelector(&rd))
if err != nil {
return nil, err
}
// return the list of runners in namespace. Horizontal Runner Autoscaler should only be responsible for scaling resources in its own ns.
var runnerList v1alpha1.RunnerList
opts = append(opts, client.MatchingLabelsSelector{Selector: selector})
r.Log.V(2).Info("Finding runners with selector", "ns", rd.Namespace)
if err := r.List(
ctx,
&runnerList,
client.InNamespace(rd.Namespace),
client.MatchingLabelsSelector{Selector: selector},
opts...,
); err != nil {
if !kerrors.IsNotFound(err) {
return nil, err

View File

@@ -53,11 +53,11 @@ type HorizontalRunnerAutoscalerGitHubWebhook struct {
// the administrator is generated and specified in GitHub Web UI.
SecretKeyBytes []byte
// WatchNamespace is the namespace to watch for HorizontalRunnerAutoscaler's to be
// Namespace is the namespace to watch for HorizontalRunnerAutoscaler's to be
// scaled on Webhook.
// Set to empty for letting it watch for all namespaces.
WatchNamespace string
Name string
Namespace string
Name string
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Reconcile(request reconcile.Request) (reconcile.Result, error) {
@@ -95,6 +95,12 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
}
}()
// respond ok to GET / e.g. for health check
if r.Method == http.MethodGet {
fmt.Fprintln(w, "webhook server is running")
return
}
var payload []byte
if len(autoscaler.SecretKeyBytes) > 0 {
@@ -153,6 +159,13 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
e.Repo.Owner.GetType(),
autoscaler.MatchPullRequestEvent(e),
)
if pullRequest := e.PullRequest; pullRequest != nil {
log = log.WithValues(
"pullRequest.base.ref", e.PullRequest.Base.GetRef(),
"action", e.GetAction(),
)
}
case *gogithub.CheckRunEvent:
target, err = autoscaler.getScaleUpTarget(
context.TODO(),
@@ -162,6 +175,13 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
e.Repo.Owner.GetType(),
autoscaler.MatchCheckRunEvent(e),
)
if checkRun := e.GetCheckRun(); checkRun != nil {
log = log.WithValues(
"checkRun.status", checkRun.GetStatus(),
"action", e.GetAction(),
)
}
case *gogithub.PingEvent:
ok = true
@@ -189,9 +209,11 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
}
if target == nil {
msg := "no horizontalrunnerautoscaler to scale for this github event"
log.Info(
"Scale target not found. If this is unexpected, ensure that there is exactly one repository-wide or organizational runner deployment that matches this webhook event",
)
log.Info(msg, "eventType", webhookType)
msg := "no horizontalrunnerautoscaler to scale for this github event"
ok = true
@@ -224,7 +246,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx context.Context, value string) ([]v1alpha1.HorizontalRunnerAutoscaler, error) {
ns := autoscaler.WatchNamespace
ns := autoscaler.Namespace
var defaultListOpts []client.ListOption
@@ -238,8 +260,8 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx con
opts := append([]client.ListOption{}, defaultListOpts...)
opts = append(opts, client.MatchingFields{scaleTargetKey: value})
if autoscaler.WatchNamespace != "" {
opts = append(opts, client.InNamespace(autoscaler.WatchNamespace))
if autoscaler.Namespace != "" {
opts = append(opts, client.InNamespace(autoscaler.Namespace))
}
var hraList v1alpha1.HorizontalRunnerAutoscalerList
@@ -326,7 +348,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleTarget(ctx co
autoscaler.Log.Info(
"Found too many scale targets: "+
"It must be exactly one to avoid ambiguity. "+
"Either set WatchNamespace for the webhook-based autoscaler to let it only find HRAs in the namespace, "+
"Either set Namespace for the webhook-based autoscaler to let it only find HRAs in the namespace, "+
"or update Repository or Organization fields in your RunnerDeployment resources to fix the ambiguity.",
"scaleTargets", strings.Join(scaleTargetIDs, ","))
@@ -359,10 +381,6 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleUpTarget(ctx
return target, nil
}
log.Info(
"Scale target not found. If this is unexpected, ensure that there is exactly one repository-wide or organizational runner deployment that matches this webhook event",
)
return nil, nil
}

View File

@@ -3,6 +3,7 @@ package controllers
import (
"github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/pkg/actionsglob"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchCheckRunEvent(event *github.CheckRunEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
@@ -27,6 +28,16 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchCheckRunEvent(ev
return false
}
if checkRun := event.CheckRun; checkRun != nil && len(cr.Names) > 0 {
for _, pat := range cr.Names {
if r := actionsglob.Match(pat, checkRun.GetName()); r {
return true
}
}
return false
}
return true
}
}

View File

@@ -113,6 +113,19 @@ func TestWebhookPing(t *testing.T) {
)
}
func TestGetRequest(t *testing.T) {
hra := HorizontalRunnerAutoscalerGitHubWebhook{}
request, _ := http.NewRequest(http.MethodGet, "/", nil)
recorder := httptest.ResponseRecorder{}
hra.Handle(&recorder, request)
response := recorder.Result()
if response.StatusCode != http.StatusOK {
t.Errorf("want %d, got %d", http.StatusOK, response.StatusCode)
}
}
func TestGetValidCapacityReservations(t *testing.T) {
now := time.Now()

View File

@@ -131,12 +131,12 @@ func SetupIntegrationTest(ctx context.Context) *testEnvironment {
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
autoscalerWebhook := &HorizontalRunnerAutoscalerGitHubWebhook{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
Name: controllerName("horizontalrunnerautoscalergithubwebhook"),
WatchNamespace: ns.Name,
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
Name: controllerName("horizontalrunnerautoscalergithubwebhook"),
Namespace: ns.Name,
}
err = autoscalerWebhook.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup autoscaler webhook")
@@ -174,6 +174,93 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
Describe("when no existing resources exist", func() {
It("should create and scale organizational runners without any scaling metrics on pull_request event", func() {
name := "example-runnerdeploy"
{
rd := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Organization: "test",
Image: "bar",
Group: "baz",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
ExpectCreate(ctx, rd, "test RunnerDeployment")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
}
// Scale-up to 2 replicas
{
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
Name: name,
},
MinReplicas: intPtr(2),
MaxReplicas: intPtr(5),
ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
Metrics: nil,
ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
{
GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
PullRequest: &actionsv1alpha1.PullRequestSpec{
Types: []string{"created"},
Branches: []string{"main"},
},
},
Amount: 1,
Duration: metav1.Duration{Duration: time.Minute},
},
},
},
}
ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2)
ExpectHRAStatusCacheEntryLengthEventuallyEquals(ctx, ns.Name, name, 1)
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(2, "count of fake runners after HRA creation")
}
// Scale-up to 3 replicas on second pull_request create webhook event
{
env.SendOrgPullRequestEvent("test", "valid", "main", "created")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3, "runners after second webhook event")
}
})
It("should create and scale organization's repository runners on pull_request event", func() {
name := "example-runnerdeploy"

View File

@@ -530,6 +530,15 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
},
}
if mtu := runner.Spec.DockerMTU; mtu != nil && dockerdInRunner {
pod.Spec.Containers[0].Env = append(pod.Spec.Containers[0].Env, []corev1.EnvVar{
{
Name: "MTU",
Value: fmt.Sprintf("%d", *runner.Spec.DockerMTU),
},
}...)
}
if !dockerdInRunner && dockerEnabled {
runnerVolumeName := "runner"
runnerVolumeMountPath := "/runner"
@@ -612,6 +621,15 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
Resources: runner.Spec.DockerdContainerResources,
})
if mtu := runner.Spec.DockerMTU; mtu != nil {
pod.Spec.Containers[1].Env = append(pod.Spec.Containers[1].Env, []corev1.EnvVar{
{
Name: "DOCKERD_ROOTLESS_ROOTLESSKIT_MTU",
Value: fmt.Sprintf("%d", *runner.Spec.DockerMTU),
},
}...)
}
}
if len(runner.Spec.Containers) != 0 {

View File

@@ -41,7 +41,8 @@ import (
)
const (
LabelKeyRunnerTemplateHash = "runner-template-hash"
LabelKeyRunnerTemplateHash = "runner-template-hash"
LabelKeyRunnerDeploymentName = "runner-deployment-name"
runnerSetOwnerKey = ".metadata.controller"
)
@@ -326,23 +327,32 @@ func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeplo
return newRunnerReplicaSet(&rd, r.CommonRunnerLabels, r.Scheme)
}
func getSelector(rd *v1alpha1.RunnerDeployment) *metav1.LabelSelector {
selector := rd.Spec.Selector
if selector == nil {
selector = &metav1.LabelSelector{MatchLabels: map[string]string{LabelKeyRunnerDeploymentName: rd.Name}}
}
return selector
}
func newRunnerReplicaSet(rd *v1alpha1.RunnerDeployment, commonRunnerLabels []string, scheme *runtime.Scheme) (*v1alpha1.RunnerReplicaSet, error) {
newRSTemplate := *rd.Spec.Template.DeepCopy()
templateHash := ComputeHash(&newRSTemplate)
// Add template hash label to selector.
labels := CloneAndAddLabel(rd.Spec.Template.Labels, LabelKeyRunnerTemplateHash, templateHash)
for _, l := range commonRunnerLabels {
newRSTemplate.Spec.Labels = append(newRSTemplate.Spec.Labels, l)
}
newRSTemplate.Labels = labels
templateHash := ComputeHash(&newRSTemplate)
// Add template hash label to selector.
newRSTemplate.ObjectMeta.Labels = CloneAndAddLabel(newRSTemplate.ObjectMeta.Labels, LabelKeyRunnerTemplateHash, templateHash)
// This label selector is used by default when rd.Spec.Selector is empty.
newRSTemplate.ObjectMeta.Labels = CloneAndAddLabel(newRSTemplate.ObjectMeta.Labels, LabelKeyRunnerDeploymentName, rd.Name)
selector := getSelector(rd)
selector := rd.Spec.Selector
if selector == nil {
selector = &metav1.LabelSelector{MatchLabels: labels}
}
newRSSelector := CloneSelectorAndAddLabel(selector, LabelKeyRunnerTemplateHash, templateHash)
rs := v1alpha1.RunnerReplicaSet{
@@ -350,7 +360,7 @@ func newRunnerReplicaSet(rd *v1alpha1.RunnerDeployment, commonRunnerLabels []str
ObjectMeta: metav1.ObjectMeta{
GenerateName: rd.ObjectMeta.Name + "-",
Namespace: rd.ObjectMeta.Namespace,
Labels: labels,
Labels: newRSTemplate.ObjectMeta.Labels,
},
Spec: v1alpha1.RunnerReplicaSetSpec{
Replicas: rd.Spec.Replicas,

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"net/url"
"os"
"strings"
"sync"
"time"
@@ -39,10 +40,20 @@ func (c *Config) NewClient() (*Client, error) {
if len(c.Token) > 0 {
transport = oauth2.NewClient(context.Background(), oauth2.StaticTokenSource(&oauth2.Token{AccessToken: c.Token})).Transport
} else {
tr, err := ghinstallation.NewKeyFromFile(http.DefaultTransport, c.AppID, c.AppInstallationID, c.AppPrivateKey)
if err != nil {
return nil, fmt.Errorf("authentication failed: %v", err)
var tr *ghinstallation.Transport
if _, err := os.Stat(c.AppPrivateKey); err == nil {
tr, err = ghinstallation.NewKeyFromFile(http.DefaultTransport, c.AppID, c.AppInstallationID, c.AppPrivateKey)
if err != nil {
return nil, fmt.Errorf("authentication failed: using private key at %s: %v", c.AppPrivateKey, err)
}
} else {
tr, err = ghinstallation.New(http.DefaultTransport, c.AppID, c.AppInstallationID, []byte(c.AppPrivateKey))
if err != nil {
return nil, fmt.Errorf("authentication failed: using private key of size %d (%s...): %v", len(c.AppPrivateKey), strings.Split(c.AppPrivateKey, "\n")[0], err)
}
}
if len(c.EnterpriseURL) > 0 {
githubAPIURL, err := getEnterpriseApiUrl(c.EnterpriseURL)
if err != nil {

View File

@@ -63,6 +63,7 @@ func main() {
runnerImage string
dockerImage string
namespace string
commonRunnerLabels commaSeparatedStringSlice
)
@@ -84,6 +85,7 @@ func main() {
flag.StringVar(&c.AppPrivateKey, "github-app-private-key", c.AppPrivateKey, "The path of a private key file to authenticate as a GitHub App")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set to a lower value like 10 minute, because this corresponds to the minimum time to react on demand change")
flag.Var(&commonRunnerLabels, "common-runner-labels", "Runner labels in the K1=V1,K2=V2,... format that are inherited all the runners created by the controller. See https://github.com/summerwind/actions-runner-controller/issues/321 for more information")
flag.StringVar(&namespace, "watch-namespace", "", "The namespace to watch for custom resources. Set to empty for letting it watch for all namespaces.")
flag.Parse()
logger := zap.New(func(o *zap.Options) {
@@ -104,6 +106,7 @@ func main() {
LeaderElection: enableLeaderElection,
Port: 9443,
SyncPeriod: &syncPeriod,
Namespace: namespace,
})
if err != nil {
setupLog.Error(err, "unable to start manager")

View File

@@ -0,0 +1,8 @@
This package is an implementation of glob that is intended to simulate the behaviour of
https://github.com/actions/toolkit/tree/master/packages/glob in many cases.
This isn't a complete reimplementation of the referenced nodejs package.
Differences:
- This package doesn't implement `**`

View File

@@ -0,0 +1,78 @@
package actionsglob
import (
"fmt"
"strings"
)
func Match(pat string, s string) bool {
if len(pat) == 0 {
panic(fmt.Sprintf("unexpected length of pattern: %d", len(pat)))
}
var inverse bool
if pat[0] == '!' {
pat = pat[1:]
inverse = true
}
tokens := strings.SplitAfter(pat, "*")
var wildcardInHead bool
for i := 0; i < len(tokens); i++ {
p := tokens[i]
if p == "" {
s = ""
break
}
if p == "*" {
if i == len(tokens)-1 {
s = ""
break
}
wildcardInHead = true
continue
}
wildcardInTail := p[len(p)-1] == '*'
if wildcardInTail {
p = p[:len(p)-1]
}
subs := strings.SplitN(s, p, 2)
if len(subs) == 0 {
break
}
if subs[0] != "" {
if !wildcardInHead {
break
}
}
if subs[1] != "" {
if !wildcardInTail {
break
}
}
s = subs[1]
wildcardInHead = false
}
r := s == ""
if inverse {
r = !r
}
return r
}

View File

@@ -0,0 +1,198 @@
package actionsglob
import (
"testing"
)
func TestMatch(t *testing.T) {
type testcase struct {
Pattern, Target string
Want bool
}
run := func(t *testing.T, tc testcase) {
t.Helper()
got := Match(tc.Pattern, tc.Target)
if got != tc.Want {
t.Errorf("%s against %s: want %v, got %v", tc.Pattern, tc.Target, tc.Want, got)
}
}
t.Run("foo == foo", func(t *testing.T) {
run(t, testcase{
Pattern: "foo",
Target: "foo",
Want: true,
})
})
t.Run("!foo == foo", func(t *testing.T) {
run(t, testcase{
Pattern: "!foo",
Target: "foo",
Want: false,
})
})
t.Run("foo == foo1", func(t *testing.T) {
run(t, testcase{
Pattern: "foo",
Target: "foo1",
Want: false,
})
})
t.Run("!foo == foo1", func(t *testing.T) {
run(t, testcase{
Pattern: "!foo",
Target: "foo1",
Want: true,
})
})
t.Run("*foo == foo", func(t *testing.T) {
run(t, testcase{
Pattern: "*foo",
Target: "foo",
Want: true,
})
})
t.Run("!*foo == foo", func(t *testing.T) {
run(t, testcase{
Pattern: "!*foo",
Target: "foo",
Want: false,
})
})
t.Run("*foo == 1foo", func(t *testing.T) {
run(t, testcase{
Pattern: "*foo",
Target: "1foo",
Want: true,
})
})
t.Run("!*foo == 1foo", func(t *testing.T) {
run(t, testcase{
Pattern: "!*foo",
Target: "1foo",
Want: false,
})
})
t.Run("*foo == foo1", func(t *testing.T) {
run(t, testcase{
Pattern: "*foo",
Target: "foo1",
Want: false,
})
})
t.Run("!*foo == foo1", func(t *testing.T) {
run(t, testcase{
Pattern: "!*foo",
Target: "foo1",
Want: true,
})
})
t.Run("*foo* == foo1", func(t *testing.T) {
run(t, testcase{
Pattern: "*foo*",
Target: "foo1",
Want: true,
})
})
t.Run("!*foo* == foo1", func(t *testing.T) {
run(t, testcase{
Pattern: "!*foo*",
Target: "foo1",
Want: false,
})
})
t.Run("*foo == foobar", func(t *testing.T) {
run(t, testcase{
Pattern: "*foo",
Target: "foobar",
Want: false,
})
})
t.Run("!*foo == foobar", func(t *testing.T) {
run(t, testcase{
Pattern: "!*foo",
Target: "foobar",
Want: true,
})
})
t.Run("*foo* == foobar", func(t *testing.T) {
run(t, testcase{
Pattern: "*foo*",
Target: "foobar",
Want: true,
})
})
t.Run("!*foo* == foobar", func(t *testing.T) {
run(t, testcase{
Pattern: "!*foo*",
Target: "foobar",
Want: false,
})
})
t.Run("foo* == foo", func(t *testing.T) {
run(t, testcase{
Pattern: "foo*",
Target: "foo",
Want: true,
})
})
t.Run("!foo* == foo", func(t *testing.T) {
run(t, testcase{
Pattern: "!foo*",
Target: "foo",
Want: false,
})
})
t.Run("foo* == foobar", func(t *testing.T) {
run(t, testcase{
Pattern: "foo*",
Target: "foobar",
Want: true,
})
})
t.Run("!foo* == foobar", func(t *testing.T) {
run(t, testcase{
Pattern: "!foo*",
Target: "foobar",
Want: false,
})
})
t.Run("foo (* == foo ( 1 / 2 )", func(t *testing.T) {
run(t, testcase{
Pattern: "foo (*",
Target: "foo ( 1 / 2 )",
Want: true,
})
})
t.Run("!foo (* == foo ( 1 / 2 )", func(t *testing.T) {
run(t, testcase{
Pattern: "!foo (*",
Target: "foo ( 1 / 2 )",
Want: false,
})
})
}

View File

@@ -42,7 +42,7 @@ if [ -z "${RUNNER_TOKEN}" ]; then
exit 1
fi
if [ -z "${RUNNER_REPO}" ] && [ -n "${RUNNER_ORG}" ] && [ -n "${RUNNER_GROUP}" ];then
if [ -z "${RUNNER_REPO}" ] && [ -n "${RUNNER_GROUP}" ];then
RUNNER_GROUP_ARG="--runnergroup ${RUNNER_GROUP}"
fi

View File

@@ -33,5 +33,9 @@ for process in "${processes[@]}"; do
fi
done
if [ -n "${MTU}" ]; then
ifconfig docker0 mtu ${MTU} up
fi
# Wait processes to be running
entrypoint.sh