Mirror of https://github.com/actions/actions-runner-controller.git (synced 2025-12-10 11:41:27 +00:00)

Compare commits: actions-ru... to actions-ru... (20 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 3f23501b8e | |
| | 5530030c67 | |
| | 8d3a83b07a | |
| | a6270b44d5 | |
| | 2273b198a1 | |
| | 3d62e73f8c | |
| | f5c639ae28 | |
| | 81016154c0 | |
| | 728829be7b | |
| | c0b8f9d483 | |
| | ced1c2321a | |
| | 1b8a656051 | |
| | 1753fa3530 | |
| | 8c0f3dfc79 | |
| | dbda292f54 | |
| | 550a864198 | |
| | 4fa5315311 | |
| | 11e58fcc41 | |
| | f220fefe92 | |
| | 56b4598d1d | |
README.md (24 lines changed)
```diff
@@ -21,7 +21,7 @@ ToC:
   - [Runner with DinD](#runner-with-dind)
   - [Additional tweaks](#additional-tweaks)
   - [Runner labels](#runner-labels)
-  - [Runer groups](#runner-groups)
+  - [Runner groups](#runner-groups)
   - [Using EKS IAM role for service accounts](#using-eks-iam-role-for-service-accounts)
   - [Software installed in the runner image](#software-installed-in-the-runner-image)
   - [Common errors](#common-errors)
```
```diff
@@ -302,12 +302,12 @@ The scale out performance is controlled via the manager containers startup `--sy
 
 **Benefits of this metric**
 1. Supports named repositories allowing you to restrict the runner to a specified set of repositories server side.
-2. Scales quickly (within the bounds of the syncPeriod) as it will spin up the number of runners based on the depth of the workflow queue
+2. Scales the runner count based on the actual queue depth of the jobs meaning a more 1:1 scaling of runners to queued jobs.
 3. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels).
 
 **Drawbacks of this metric**
 1. Repositories must be named within the scaling metric, maintaining a list of repositories may not be viable in larger environments or self-serve environments.
-2. May not scale quick enough for some users needs
+2. May not scale quick enough for some users needs. This metric is pull based and so the queue depth is polled as configured by the sync period, as a result scaling performance is bound by this sync period meaning there is a lag to scaling activity.
 3. Relatively large amounts of API requests required to maintain this metric, you may run in API rate limiting issues depending on the size of your environment and how aggressive your sync period configuration is
```
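The queue-depth metric described above amounts to requesting one runner per outstanding job. A minimal, self-contained sketch of that arithmetic follows; the `WorkflowRun` type and the `desiredReplicas` helper are illustrative stand-ins, not the controller's actual code, which also clamps to `minReplicas`/`maxReplicas` and handles API errors.

```go
package main

import "fmt"

// WorkflowRun is a stand-in for the GitHub API object; only Status matters here.
type WorkflowRun struct{ Status string }

// desiredReplicas counts queued and in-progress runs across the named
// repositories, so one runner is requested per outstanding job.
func desiredReplicas(runsByRepo map[string][]WorkflowRun) int {
	total := 0
	for _, runs := range runsByRepo {
		for _, run := range runs {
			if run.Status == "queued" || run.Status == "in_progress" {
				total++
			}
		}
	}
	return total
}

func main() {
	runs := map[string][]WorkflowRun{
		"org/repo-a": {{Status: "queued"}, {Status: "in_progress"}},
		"org/repo-b": {{Status: "completed"}, {Status: "queued"}},
	}
	fmt.Println(desiredReplicas(runs)) // 3
}
```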
```diff
@@ -355,14 +355,14 @@ The `HorizontalRunnerAutoscaler` will pole GitHub based on the configuration sync
 **Helm Config :** `syncPeriod`
 
 **Benefits of this metric**
-1. Allows for multiple controllers to be deployed as each controller deployed is responsible for scaling their own runner pods on a per namespace basis.
-2. Supports named repositories server side the same as the `TotalNumberOfQueuedAndInProgressWorkflowRuns` metric [#313](https://github.com/summerwind/actions-runner-controller/pull/313)
-3. Supports github organisation wide scaling without maintaining an explicit list of repositories, this is especially useful for those that are working at a larger scale. [#223](https://github.com/summerwind/actions-runner-controller/pull/223)
-4. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels)
-5. Supports scaling runner count on both a percentage increase / descrease basis as well as on a fixed runner count basis [#223](https://github.com/summerwind/actions-runner-controller/pull/223) [#315](https://github.com/summerwind/actions-runner-controller/pull/315)
+1. Supports named repositories server side the same as the `TotalNumberOfQueuedAndInProgressWorkflowRuns` metric [#313](https://github.com/summerwind/actions-runner-controller/pull/313)
+2. Supports GitHub organisation wide scaling without maintaining an explicit list of repositories, this is especially useful for those that are working at a larger scale. [#223](https://github.com/summerwind/actions-runner-controller/pull/223)
+3. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels)
+4. Supports scaling desired runner count on both a percentage increase / decrease basis as well as on a fixed increase / decrease count basis [#223](https://github.com/summerwind/actions-runner-controller/pull/223) [#315](https://github.com/summerwind/actions-runner-controller/pull/315)
 
 **Drawbacks of this metric**
-1. May not scale quick enough for some users needs as we are scaling up and down based on indicative information rather than a direct count of the workflow queue depth
+1. May not scale quick enough for some users needs. This metric is pull based and so the number of busy runners are polled as configured by the sync period, as a result scaling performance is bound by this sync period meaning there is a lag to scaling activity.
+2. We are scaling up and down based on indicative information rather than a count of the actual number of queued jobs and so the desired runner count is likely to under provision new runners or overprovision them relative to actual job queue depth, this may or may not be a problem for you.
 
 Examples of each scaling type implemented with a `RunnerDeployment` backed by a `HorizontalRunnerAutoscaler`:
```
```diff
@@ -433,7 +433,7 @@ kind: HorizontalRunnerAutoscaler
 spec:
   scaleTargetRef:
     name: myrunners
-  scaleUpTrigggers:
+  scaleUpTriggers:
   - githubEvent:
       checkRun:
         types: ["created"]
```
```diff
@@ -492,7 +492,7 @@ kind: HorizontalRunnerAutoscaler
 spec:
   scaleTargetRef:
     name: myrunners
-  scaleUpTrigggers:
+  scaleUpTriggers:
   - githubEvent:
       checkRun:
         types: ["created"]
```
```diff
@@ -514,7 +514,7 @@ kind: HorizontalRunnerAutoscaler
 spec:
   scaleTargetRef:
     name: myrunners
-  scaleUpTrigggers:
+  scaleUpTriggers:
   - githubEvent:
       pullRequest:
         types: ["synchronize"]
```
```diff
@@ -72,6 +72,12 @@ type GitHubEventScaleUpTriggerSpec struct {
 type CheckRunSpec struct {
 	Types  []string `json:"types,omitempty"`
 	Status string   `json:"status,omitempty"`
+
+	// Names is a list of GitHub Actions glob patterns.
+	// Any check_run event whose name matches one of patterns in the list can trigger autoscaling.
+	// Note that check_run name seem to equal to the job name you've defined in your actions workflow yaml file.
+	// So it is very likely that you can utilize this to trigger depending on the job.
+	Names []string `json:"names,omitempty"`
 }
 
 // https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
```
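For readers wiring this up, the `Names` field added above might be used from a `ScaleUpTrigger` like the one below. This is a hypothetical example written in the same Go-struct style as the controller's tests; the `build-*` pattern and the `"queued"` status value are made up for illustration and not taken from this page.

```go
package main

import (
	"time"

	"github.com/summerwind/actions-runner-controller/api/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleTrigger scales up by one runner for check_run events whose name
// matches "build-*", i.e. roughly the matching job names in workflow YAML.
func exampleTrigger() v1alpha1.ScaleUpTrigger {
	return v1alpha1.ScaleUpTrigger{
		GitHubEvent: &v1alpha1.GitHubEventScaleUpTriggerSpec{
			CheckRun: &v1alpha1.CheckRunSpec{
				Types:  []string{"created"},
				Status: "queued",
				Names:  []string{"build-*"},
			},
		},
		Amount:   1,
		Duration: metav1.Duration{Duration: 5 * time.Minute},
	}
}

func main() { _ = exampleTrigger() }
```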
```diff
@@ -92,6 +92,8 @@ type RunnerSpec struct {
 	DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
 	// +optional
 	DockerEnabled *bool `json:"dockerEnabled,omitempty"`
+	// +optional
+	DockerMTU *int64 `json:"dockerMTU,omitempty"`
 }
 
 // ValidateRepository validates repository field.
```
```diff
@@ -123,6 +125,9 @@ type RunnerStatus struct {
 	Phase   string `json:"phase"`
 	Reason  string `json:"reason"`
 	Message string `json:"message"`
+
+	//+optional
+	LastRegistrationCheckTime *metav1.Time `json:"lastRegistrationCheckTime"`
 }
 
 // RunnerStatusRegistration contains runner registration status
```
```diff
@@ -25,13 +25,16 @@ const (
 	AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy"
 )
 
-// RunnerReplicaSetSpec defines the desired state of RunnerDeployment
+// RunnerDeploymentSpec defines the desired state of RunnerDeployment
 type RunnerDeploymentSpec struct {
 	// +optional
 	// +nullable
 	Replicas *int `json:"replicas,omitempty"`
 
-	Template RunnerTemplate `json:"template"`
+	// +optional
+	// +nullable
+	Selector *metav1.LabelSelector `json:"selector"`
+	Template RunnerTemplate `json:"template"`
 }
 
 type RunnerDeploymentStatus struct {
```
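The autoscaler changes further down this page call a `getSelector` helper when listing runners for the new `Selector` field. That helper's body is not shown here; a plausible sketch, assuming the repository's `v1alpha1` package and that a nil `spec.selector` falls back to the pod template's labels (which would keep pre-selector RunnerDeployments working unchanged), is:

```go
package sketch

import (
	"github.com/summerwind/actions-runner-controller/api/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// getSelector is a sketch only; the real helper may differ. A nil selector
// is assumed to default to matching the runner pod template's labels.
func getSelector(rd *v1alpha1.RunnerDeployment) *metav1.LabelSelector {
	if rd.Spec.Selector != nil {
		return rd.Spec.Selector
	}
	return &metav1.LabelSelector{MatchLabels: rd.Spec.Template.ObjectMeta.Labels}
}
```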
```diff
@@ -26,7 +26,10 @@ type RunnerReplicaSetSpec struct {
 	// +nullable
 	Replicas *int `json:"replicas,omitempty"`
 
-	Template RunnerTemplate `json:"template"`
+	// +optional
+	// +nullable
+	Selector *metav1.LabelSelector `json:"selector"`
+	Template RunnerTemplate `json:"template"`
 }
 
 type RunnerReplicaSetStatus struct {
```
```diff
@@ -22,6 +22,7 @@ package v1alpha1
 
 import (
 	"k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
 )
 
```
```diff
@@ -65,6 +66,11 @@ func (in *CheckRunSpec) DeepCopyInto(out *CheckRunSpec) {
 		*out = make([]string, len(*in))
 		copy(*out, *in)
 	}
+	if in.Names != nil {
+		in, out := &in.Names, &out.Names
+		*out = make([]string, len(*in))
+		copy(*out, *in)
+	}
 }
 
 // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CheckRunSpec.
```
```diff
@@ -403,6 +409,11 @@ func (in *RunnerDeploymentSpec) DeepCopyInto(out *RunnerDeploymentSpec) {
 		*out = new(int)
 		**out = **in
 	}
+	if in.Selector != nil {
+		in, out := &in.Selector, &out.Selector
+		*out = new(metav1.LabelSelector)
+		(*in).DeepCopyInto(*out)
+	}
 	in.Template.DeepCopyInto(&out.Template)
 }
 
```
```diff
@@ -535,6 +546,11 @@ func (in *RunnerReplicaSetSpec) DeepCopyInto(out *RunnerReplicaSetSpec) {
 		*out = new(int)
 		**out = **in
 	}
+	if in.Selector != nil {
+		in, out := &in.Selector, &out.Selector
+		*out = new(metav1.LabelSelector)
+		(*in).DeepCopyInto(*out)
+	}
 	in.Template.DeepCopyInto(&out.Template)
 }
 
```
```diff
@@ -678,6 +694,11 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
 		*out = new(bool)
 		**out = **in
 	}
+	if in.DockerMTU != nil {
+		in, out := &in.DockerMTU, &out.DockerMTU
+		*out = new(int64)
+		**out = **in
+	}
 }
 
 // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSpec.
```
```diff
@@ -694,6 +715,10 @@ func (in *RunnerSpec) DeepCopy() *RunnerSpec {
 func (in *RunnerStatus) DeepCopyInto(out *RunnerStatus) {
 	*out = *in
 	in.Registration.DeepCopyInto(&out.Registration)
+	if in.LastRegistrationCheckTime != nil {
+		in, out := &in.LastRegistrationCheckTime, &out.LastRegistrationCheckTime
+		*out = (*in).DeepCopy()
+	}
 }
 
 // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerStatus.
```
```diff
@@ -15,7 +15,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.6.1
+version: 0.10.0
 
 home: https://github.com/summerwind/actions-runner-controller
 
```
```diff
@@ -22,6 +22,9 @@ resources:
     cpu: 100m
     memory: 128Mi
 
 authSecret:
   create: false
+  # Set the following to true to create a dummy secret, allowing the manager pod to start
+  # This is only useful in CI
+  createDummySecret: true
```
```diff
@@ -148,6 +148,17 @@ spec:
                 checkRun:
                   description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
                   properties:
+                    names:
+                      description: Names is a list of GitHub Actions glob patterns.
+                        Any check_run event whose name matches one of patterns
+                        in the list can trigger autoscaling. Note that check_run
+                        name seem to equal to the job name you've defined in
+                        your actions workflow yaml file. So it is very likely
+                        that you can utilize this to trigger depending on the
+                        job.
+                      items:
+                        type: string
+                      type: array
                     status:
                       type: string
                     types:
```
```diff
@@ -38,11 +38,42 @@ spec:
       metadata:
         type: object
       spec:
-        description: RunnerReplicaSetSpec defines the desired state of RunnerDeployment
+        description: RunnerDeploymentSpec defines the desired state of RunnerDeployment
        properties:
           replicas:
             nullable: true
             type: integer
+          selector:
+            description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
+            nullable: true
+            properties:
+              matchExpressions:
+                description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
+                items:
+                  description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+                  properties:
+                    key:
+                      description: key is the label key that the selector applies to.
+                      type: string
+                    operator:
+                      description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+                      type: string
+                    values:
+                      description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+                      items:
+                        type: string
+                      type: array
+                  required:
+                  - key
+                  - operator
+                  type: object
+                type: array
+              matchLabels:
+                additionalProperties:
+                  type: string
+                description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+                type: object
+            type: object
           template:
             properties:
               metadata:
```
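The `selector` schema above is the standard Kubernetes `LabelSelector`: `matchLabels` and `matchExpressions` are ANDed together. A small runnable illustration using the apimachinery helpers (the labels are invented for the example):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	sel := &metav1.LabelSelector{
		MatchLabels: map[string]string{"foo": "bar"},
		MatchExpressions: []metav1.LabelSelectorRequirement{
			{Key: "tier", Operator: metav1.LabelSelectorOpIn, Values: []string{"ci"}},
		},
	}

	// LabelSelectorAsSelector compiles the declarative selector into a
	// labels.Selector whose Matches evaluates both clauses ANDed.
	s, err := metav1.LabelSelectorAsSelector(sel)
	if err != nil {
		panic(err)
	}

	fmt.Println(s.Matches(labels.Set{"foo": "bar", "tier": "ci"}))  // true
	fmt.Println(s.Matches(labels.Set{"foo": "bar", "tier": "dev"})) // false
}
```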
```diff
@@ -402,6 +433,9 @@ spec:
                 type: array
               dockerEnabled:
                 type: boolean
+              dockerMTU:
+                format: int64
+                type: integer
               dockerdContainerResources:
                 description: ResourceRequirements describes the compute resource requirements.
                 properties:
```
```diff
@@ -43,6 +43,37 @@ spec:
           replicas:
             nullable: true
             type: integer
+          selector:
+            description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
+            nullable: true
+            properties:
+              matchExpressions:
+                description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
+                items:
+                  description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+                  properties:
+                    key:
+                      description: key is the label key that the selector applies to.
+                      type: string
+                    operator:
+                      description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+                      type: string
+                    values:
+                      description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+                      items:
+                        type: string
+                      type: array
+                  required:
+                  - key
+                  - operator
+                  type: object
+                type: array
+              matchLabels:
+                additionalProperties:
+                  type: string
+                description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+                type: object
+            type: object
           template:
             properties:
               metadata:
```
```diff
@@ -402,6 +433,9 @@ spec:
                 type: array
               dockerEnabled:
                 type: boolean
+              dockerMTU:
+                format: int64
+                type: integer
               dockerdContainerResources:
                 description: ResourceRequirements describes the compute resource requirements.
                 properties:
```
```diff
@@ -398,6 +398,9 @@ spec:
             type: array
           dockerEnabled:
             type: boolean
+          dockerMTU:
+            format: int64
+            type: integer
           dockerdContainerResources:
             description: ResourceRequirements describes the compute resource requirements.
             properties:
```
```diff
@@ -1538,6 +1541,9 @@ spec:
         status:
           description: RunnerStatus defines the observed state of Runner
           properties:
+            lastRegistrationCheckTime:
+              format: date-time
+              type: string
             message:
               type: string
             phase:
```
```diff
@@ -35,6 +35,9 @@ spec:
         - "--enable-leader-election"
         - "--sync-period={{ .Values.syncPeriod }}"
         - "--docker-image={{ .Values.image.dindSidecarRepositoryAndTag }}"
+        {{- if .Values.scope.singleNamespace }}
+        - "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
+        {{- end }}
         command:
         - "/manager"
         env:
```
```diff
@@ -100,6 +100,13 @@ env:
   # https_proxy: "proxy.com:8080"
   # no_proxy: ""
 
+scope:
+  # If true, the controller will only watch custom resources in a single namespace
+  singleNamespace: false
+  # If `scope.singleNamespace=true`, the controller will only watch custom resources in this namespace
+  # The default value is "", which means the namespace of the controller
+  watchNamespace: ""
+
 githubWebhookServer:
   enabled: false
   labels: {}
```
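On the controller side, a `--watch-namespace` flag of this kind is typically fed into the controller-runtime manager, which then scopes every informer to that namespace. A sketch of that wiring, assuming the `manager.Options.Namespace` field of controller-runtime versions from that era (the flag name comes from the chart above; the surrounding `main` is not shown on this page):

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// watchNamespace would be parsed from --watch-namespace;
	// an empty string keeps the manager watching every namespace.
	watchNamespace := ""

	_, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Namespace: watchNamespace,
	})
	if err != nil {
		os.Exit(1)
	}
}
```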
```diff
@@ -110,7 +110,7 @@ func main() {
 		Recorder:       nil,
 		Scheme:         mgr.GetScheme(),
 		SecretKeyBytes: []byte(webhookSecretToken),
-		WatchNamespace: watchNamespace,
+		Namespace:      watchNamespace,
 	}
 
 	if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil {
```
```diff
@@ -148,6 +148,17 @@ spec:
                 checkRun:
                   description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
                   properties:
+                    names:
+                      description: Names is a list of GitHub Actions glob patterns.
+                        Any check_run event whose name matches one of patterns
+                        in the list can trigger autoscaling. Note that check_run
+                        name seem to equal to the job name you've defined in
+                        your actions workflow yaml file. So it is very likely
+                        that you can utilize this to trigger depending on the
+                        job.
+                      items:
+                        type: string
+                      type: array
                     status:
                       type: string
                     types:
```
```diff
@@ -38,11 +38,42 @@ spec:
       metadata:
         type: object
       spec:
-        description: RunnerReplicaSetSpec defines the desired state of RunnerDeployment
+        description: RunnerDeploymentSpec defines the desired state of RunnerDeployment
         properties:
           replicas:
             nullable: true
             type: integer
+          selector:
+            description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
+            nullable: true
+            properties:
+              matchExpressions:
+                description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
+                items:
+                  description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+                  properties:
+                    key:
+                      description: key is the label key that the selector applies to.
+                      type: string
+                    operator:
+                      description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+                      type: string
+                    values:
+                      description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+                      items:
+                        type: string
+                      type: array
+                  required:
+                  - key
+                  - operator
+                  type: object
+                type: array
+              matchLabels:
+                additionalProperties:
+                  type: string
+                description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+                type: object
+            type: object
           template:
             properties:
               metadata:
```
```diff
@@ -402,6 +433,9 @@ spec:
                 type: array
               dockerEnabled:
                 type: boolean
+              dockerMTU:
+                format: int64
+                type: integer
               dockerdContainerResources:
                 description: ResourceRequirements describes the compute resource requirements.
                 properties:
```
```diff
@@ -43,6 +43,37 @@ spec:
           replicas:
             nullable: true
             type: integer
+          selector:
+            description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
+            nullable: true
+            properties:
+              matchExpressions:
+                description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
+                items:
+                  description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+                  properties:
+                    key:
+                      description: key is the label key that the selector applies to.
+                      type: string
+                    operator:
+                      description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+                      type: string
+                    values:
+                      description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+                      items:
+                        type: string
+                      type: array
+                  required:
+                  - key
+                  - operator
+                  type: object
+                type: array
+              matchLabels:
+                additionalProperties:
+                  type: string
+                description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+                type: object
+            type: object
           template:
             properties:
               metadata:
```
```diff
@@ -402,6 +433,9 @@ spec:
                 type: array
               dockerEnabled:
                 type: boolean
+              dockerMTU:
+                format: int64
+                type: integer
               dockerdContainerResources:
                 description: ResourceRequirements describes the compute resource requirements.
                 properties:
```
```diff
@@ -398,6 +398,9 @@ spec:
             type: array
           dockerEnabled:
             type: boolean
+          dockerMTU:
+            format: int64
+            type: integer
           dockerdContainerResources:
             description: ResourceRequirements describes the compute resource requirements.
             properties:
```
```diff
@@ -1538,6 +1541,9 @@ spec:
         status:
           description: RunnerStatus defines the observed state of Runner
           properties:
+            lastRegistrationCheckTime:
+              format: date-time
+              type: string
             message:
               type: string
             phase:
```
```diff
@@ -10,6 +10,8 @@ import (
 	"time"
 
 	"github.com/summerwind/actions-runner-controller/api/v1alpha1"
+	kerrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 )
 
```
```diff
@@ -69,7 +71,13 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
 	}
 
 	metrics := hra.Spec.Metrics
-	if len(metrics) == 0 || metrics[0].Type == v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns {
+	if len(metrics) == 0 {
+		if len(hra.Spec.ScaleUpTriggers) == 0 {
+			return r.calculateReplicasByQueuedAndInProgressWorkflowRuns(rd, hra)
+		}
+
+		return hra.Spec.MinReplicas, nil
+	} else if metrics[0].Type == v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns {
 		return r.calculateReplicasByQueuedAndInProgressWorkflowRuns(rd, hra)
 	} else if metrics[0].Type == v1alpha1.AutoscalingMetricTypePercentageRunnersBusy {
 		return r.calculateReplicasByPercentageRunnersBusy(rd, hra)
```
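In plain terms, the new branching means: no metrics and no webhook triggers keeps the legacy queue-depth default; no metrics but webhook triggers present pins the base count to `minReplicas` (webhook capacity reservations are added elsewhere); otherwise the first metric decides. A condensed restatement with simplified stand-in types, not the controller's exact code:

```go
package sketch

// scalePolicy names which calculation would apply for a given HRA shape.
func scalePolicy(metricTypes []string, hasScaleUpTriggers bool) string {
	if len(metricTypes) == 0 {
		if !hasScaleUpTriggers {
			return "queued+in_progress workflow runs" // legacy default
		}
		return "minReplicas (+ webhook capacity reservations)"
	}
	switch metricTypes[0] {
	case "TotalNumberOfQueuedAndInProgressWorkflowRuns":
		return "queued+in_progress workflow runs"
	case "PercentageRunnersBusy":
		return "percentage of busy runners"
	default:
		return "unsupported metric type"
	}
}
```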
```diff
@@ -89,6 +97,13 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByQueuedAndInPro
 		return nil, fmt.Errorf("asserting runner deployment spec to detect bug: spec.template.organization should not be empty on this code path")
 	}
 
+	// In case it's an organizational runners deployment without any scaling metrics defined,
+	// we assume that the desired replicas should always be `minReplicas + capacityReservedThroughWebhook`.
+	// See https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793372693
+	if len(metrics) == 0 {
+		return hra.Spec.MinReplicas, nil
+	}
+
 	if len(metrics[0].RepositoryNames) == 0 {
 		return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment")
 	}
```
```diff
@@ -259,9 +274,30 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunn
 
 	// return the list of runners in namespace. Horizontal Runner Autoscaler should only be responsible for scaling resources in its own ns.
 	var runnerList v1alpha1.RunnerList
-	if err := r.List(ctx, &runnerList, client.InNamespace(rd.Namespace)); err != nil {
+
+	var opts []client.ListOption
+
+	opts = append(opts, client.InNamespace(rd.Namespace))
+
+	selector, err := metav1.LabelSelectorAsSelector(getSelector(&rd))
+	if err != nil {
+		return nil, err
+	}
+
+	opts = append(opts, client.MatchingLabelsSelector{Selector: selector})
+
+	r.Log.V(2).Info("Finding runners with selector", "ns", rd.Namespace)
+
+	if err := r.List(
+		ctx,
+		&runnerList,
+		opts...,
+	); err != nil {
 		if !kerrors.IsNotFound(err) {
 			return nil, err
 		}
 	}
 
 	runnerMap := make(map[string]struct{})
 	for _, items := range runnerList.Items {
 		runnerMap[items.Name] = struct{}{}
```
```diff
@@ -282,27 +318,46 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunn
 	if err != nil {
 		return nil, err
 	}
-	numRunners := len(runnerList.Items)
-	numRunnersBusy := 0
+
+	var desiredReplicasBefore int
+
+	if v := rd.Spec.Replicas; v == nil {
+		desiredReplicasBefore = 1
+	} else {
+		desiredReplicasBefore = *v
+	}
+
+	var (
+		numRunners           int
+		numRunnersRegistered int
+		numRunnersBusy       int
+	)
+
+	numRunners = len(runnerList.Items)
+
 	for _, runner := range runners {
-		if _, ok := runnerMap[*runner.Name]; ok && runner.GetBusy() {
-			numRunnersBusy++
+		if _, ok := runnerMap[*runner.Name]; ok {
+			numRunnersRegistered++
+
+			if runner.GetBusy() {
+				numRunnersBusy++
+			}
 		}
 	}
 
 	var desiredReplicas int
-	fractionBusy := float64(numRunnersBusy) / float64(numRunners)
+	fractionBusy := float64(numRunnersBusy) / float64(desiredReplicasBefore)
 	if fractionBusy >= scaleUpThreshold {
 		if scaleUpAdjustment > 0 {
-			desiredReplicas = numRunners + scaleUpAdjustment
+			desiredReplicas = desiredReplicasBefore + scaleUpAdjustment
 		} else {
-			desiredReplicas = int(math.Ceil(float64(numRunners) * scaleUpFactor))
+			desiredReplicas = int(math.Ceil(float64(desiredReplicasBefore) * scaleUpFactor))
 		}
 	} else if fractionBusy < scaleDownThreshold {
 		if scaleDownAdjustment > 0 {
-			desiredReplicas = numRunners - scaleDownAdjustment
+			desiredReplicas = desiredReplicasBefore - scaleDownAdjustment
 		} else {
-			desiredReplicas = int(float64(numRunners) * scaleDownFactor)
+			desiredReplicas = int(float64(desiredReplicasBefore) * scaleDownFactor)
 		}
 	} else {
 		desiredReplicas = *rd.Spec.Replicas
```
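The change above bases the busy fraction and the scale factors on the previously desired replica count rather than the observed pod count, which can transiently double while a RunnerReplicaSet is being replaced. A runnable worked example of the arithmetic; the threshold and factor values (0.8, 0.3, 1.3, 0.7) are assumed for illustration and would really come from the HorizontalRunnerAutoscaler spec:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	desiredBefore := 10 // rd.Spec.Replicas from the last reconciliation
	busy := 9           // registered runners reported busy by GitHub

	fractionBusy := float64(busy) / float64(desiredBefore) // 0.9

	var desired int
	switch {
	case fractionBusy >= 0.8: // scaleUpThreshold
		desired = int(math.Ceil(float64(desiredBefore) * 1.3)) // scaleUpFactor
	case fractionBusy < 0.3: // scaleDownThreshold
		desired = int(float64(desiredBefore) * 0.7) // scaleDownFactor
	default:
		desired = desiredBefore
	}

	fmt.Println(desired) // 13 (before min/max clamping)
}
```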
```diff
@@ -314,13 +369,19 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunn
 		desiredReplicas = maxReplicas
 	}
 
+	// NOTES for operators:
+	//
+	// - num_runners can be as twice as large as replicas_desired_before while
+	//   the runnerdeployment controller is replacing RunnerReplicaSet for runner update.
+
 	r.Log.V(1).Info(
 		"Calculated desired replicas",
-		"computed_replicas_desired", desiredReplicas,
-		"spec_replicas_min", minReplicas,
-		"spec_replicas_max", maxReplicas,
-		"current_replicas", rd.Spec.Replicas,
+		"replicas_min", minReplicas,
+		"replicas_max", maxReplicas,
+		"replicas_desired_before", desiredReplicasBefore,
+		"replicas_desired", desiredReplicas,
 		"num_runners", numRunners,
+		"num_runners_registered", numRunnersRegistered,
 		"num_runners_busy", numRunnersBusy,
 		"namespace", hra.Namespace,
 		"runner_deployment", rd.Name,
```
```diff
@@ -443,7 +443,17 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
 				Name: "testrd",
 			},
 			Spec: v1alpha1.RunnerDeploymentSpec{
+				Selector: &metav1.LabelSelector{
+					MatchLabels: map[string]string{
+						"foo": "bar",
+					},
+				},
 				Template: v1alpha1.RunnerTemplate{
+					ObjectMeta: metav1.ObjectMeta{
+						Labels: map[string]string{
+							"foo": "bar",
+						},
+					},
 					Spec: v1alpha1.RunnerSpec{
 						Organization: tc.org,
 					},
```
```diff
@@ -53,11 +53,11 @@ type HorizontalRunnerAutoscalerGitHubWebhook struct {
 	// the administrator is generated and specified in GitHub Web UI.
 	SecretKeyBytes []byte
 
-	// WatchNamespace is the namespace to watch for HorizontalRunnerAutoscaler's to be
+	// Namespace is the namespace to watch for HorizontalRunnerAutoscaler's to be
 	// scaled on Webhook.
 	// Set to empty for letting it watch for all namespaces.
-	WatchNamespace string
-	Name           string
+	Namespace string
+	Name      string
 }
 
 func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Reconcile(request reconcile.Request) (reconcile.Result, error) {
```
```diff
@@ -95,6 +95,12 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
 		}
 	}()
 
+	// respond ok to GET / e.g. for health check
+	if r.Method == http.MethodGet {
+		fmt.Fprintln(w, "webhook server is running")
+		return
+	}
+
 	var payload []byte
 
 	if len(autoscaler.SecretKeyBytes) > 0 {
```
```diff
@@ -153,6 +159,13 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
 			e.Repo.Owner.GetType(),
 			autoscaler.MatchPullRequestEvent(e),
 		)
+
+		if pullRequest := e.PullRequest; pullRequest != nil {
+			log = log.WithValues(
+				"pullRequest.base.ref", e.PullRequest.Base.GetRef(),
+				"action", e.GetAction(),
+			)
+		}
 	case *gogithub.CheckRunEvent:
 		target, err = autoscaler.getScaleUpTarget(
 			context.TODO(),
```
```diff
@@ -162,6 +175,13 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
 			e.Repo.Owner.GetType(),
 			autoscaler.MatchCheckRunEvent(e),
 		)
+
+		if checkRun := e.GetCheckRun(); checkRun != nil {
+			log = log.WithValues(
+				"checkRun.status", checkRun.GetStatus(),
+				"action", e.GetAction(),
+			)
+		}
 	case *gogithub.PingEvent:
 		ok = true
 
```
```diff
@@ -189,9 +209,11 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
 	}
 
 	if target == nil {
-		msg := "no horizontalrunnerautoscaler to scale for this github event"
-		log.Info(msg, "eventType", webhookType)
+		log.Info(
+			"Scale target not found. If this is unexpected, ensure that there is exactly one repository-wide or organizational runner deployment that matches this webhook event",
+		)
+
+		msg := "no horizontalrunnerautoscaler to scale for this github event"
 
 		ok = true
```
```diff
@@ -224,7 +246,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
 }
 
 func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx context.Context, value string) ([]v1alpha1.HorizontalRunnerAutoscaler, error) {
-	ns := autoscaler.WatchNamespace
+	ns := autoscaler.Namespace
 
 	var defaultListOpts []client.ListOption
 
```
```diff
@@ -238,8 +260,8 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx con
 		opts := append([]client.ListOption{}, defaultListOpts...)
 		opts = append(opts, client.MatchingFields{scaleTargetKey: value})
 
-		if autoscaler.WatchNamespace != "" {
-			opts = append(opts, client.InNamespace(autoscaler.WatchNamespace))
+		if autoscaler.Namespace != "" {
+			opts = append(opts, client.InNamespace(autoscaler.Namespace))
 		}
 
 		var hraList v1alpha1.HorizontalRunnerAutoscalerList
```
```diff
@@ -326,7 +348,7 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleTarget(ctx co
 		autoscaler.Log.Info(
 			"Found too many scale targets: "+
 				"It must be exactly one to avoid ambiguity. "+
-				"Either set WatchNamespace for the webhook-based autoscaler to let it only find HRAs in the namespace, "+
+				"Either set Namespace for the webhook-based autoscaler to let it only find HRAs in the namespace, "+
 				"or update Repository or Organization fields in your RunnerDeployment resources to fix the ambiguity.",
 			"scaleTargets", strings.Join(scaleTargetIDs, ","))
 
```
```diff
@@ -359,10 +381,6 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleUpTarget(ctx
 		return target, nil
 	}
 
-	log.Info(
-		"Scale target not found. If this is unexpected, ensure that there is exactly one repository-wide or organizational runner deployment that matches this webhook event",
-	)
-
 	return nil, nil
 }
 
```
```diff
@@ -3,6 +3,7 @@ package controllers
 import (
 	"github.com/google/go-github/v33/github"
 	"github.com/summerwind/actions-runner-controller/api/v1alpha1"
+	"github.com/summerwind/actions-runner-controller/pkg/actionsglob"
 )
 
 func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchCheckRunEvent(event *github.CheckRunEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
```
```diff
@@ -27,6 +28,16 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchCheckRunEvent(ev
 			return false
 		}
 
+		if checkRun := event.CheckRun; checkRun != nil && len(cr.Names) > 0 {
+			for _, pat := range cr.Names {
+				if r := actionsglob.Match(pat, checkRun.GetName()); r {
+					return true
+				}
+			}
+
+			return false
+		}
+
 		return true
 	}
 }
```
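The `actionsglob.Match` call above is the only new matching primitive; its implementation is not shown on this page. A hedged sketch of the kind of matching it plausibly performs — here only a trailing `*` wildcard is handled, while the real package may support richer GitHub Actions glob syntax:

```go
package main

import (
	"fmt"
	"strings"
)

// match returns true when name satisfies pattern; a pattern ending in '*'
// is treated as a prefix match, anything else as an exact match.
func match(pattern, name string) bool {
	if suffix := strings.TrimSuffix(pattern, "*"); suffix != pattern {
		return strings.HasPrefix(name, suffix)
	}
	return pattern == name
}

func main() {
	fmt.Println(match("build-*", "build-linux")) // true
	fmt.Println(match("test", "build-linux"))    // false
}
```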
```diff
@@ -113,6 +113,19 @@ func TestWebhookPing(t *testing.T) {
 	)
 }
 
+func TestGetRequest(t *testing.T) {
+	hra := HorizontalRunnerAutoscalerGitHubWebhook{}
+	request, _ := http.NewRequest(http.MethodGet, "/", nil)
+	recorder := httptest.ResponseRecorder{}
+
+	hra.Handle(&recorder, request)
+	response := recorder.Result()
+
+	if response.StatusCode != http.StatusOK {
+		t.Errorf("want %d, got %d", http.StatusOK, response.StatusCode)
+	}
+}
+
 func TestGetValidCapacityReservations(t *testing.T) {
 	now := time.Now()
 
```
```diff
@@ -3,13 +3,14 @@ package controllers
 import (
 	"context"
 	"fmt"
-	"github.com/google/go-github/v33/github"
-	github2 "github.com/summerwind/actions-runner-controller/github"
-	"k8s.io/apimachinery/pkg/runtime"
 	"net/http"
 	"net/http/httptest"
 	"time"
 
+	"github.com/google/go-github/v33/github"
+	github2 "github.com/summerwind/actions-runner-controller/github"
+	"k8s.io/apimachinery/pkg/runtime"
+
 	"github.com/summerwind/actions-runner-controller/github/fake"
 
 	corev1 "k8s.io/api/core/v1"
```
```diff
@@ -130,12 +131,12 @@ func SetupIntegrationTest(ctx context.Context) *testEnvironment {
 	Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
 
 	autoscalerWebhook := &HorizontalRunnerAutoscalerGitHubWebhook{
-		Client:         mgr.GetClient(),
-		Scheme:         scheme.Scheme,
-		Log:            logf.Log,
-		Recorder:       mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
-		Name:           controllerName("horizontalrunnerautoscalergithubwebhook"),
-		WatchNamespace: ns.Name,
+		Client:    mgr.GetClient(),
+		Scheme:    scheme.Scheme,
+		Log:       logf.Log,
+		Recorder:  mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
+		Name:      controllerName("horizontalrunnerautoscalergithubwebhook"),
+		Namespace: ns.Name,
 	}
 	err = autoscalerWebhook.SetupWithManager(mgr)
 	Expect(err).NotTo(HaveOccurred(), "failed to setup autoscaler webhook")
```
```diff
@@ -173,6 +174,93 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 
 	Describe("when no existing resources exist", func() {
 
+		It("should create and scale organizational runners without any scaling metrics on pull_request event", func() {
+			name := "example-runnerdeploy"
+
+			{
+				rd := &actionsv1alpha1.RunnerDeployment{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.RunnerDeploymentSpec{
+						Replicas: intPtr(1),
+						Selector: &metav1.LabelSelector{
+							MatchLabels: map[string]string{
+								"foo": "bar",
+							},
+						},
+						Template: actionsv1alpha1.RunnerTemplate{
+							ObjectMeta: metav1.ObjectMeta{
+								Labels: map[string]string{
+									"foo": "bar",
+								},
+							},
+							Spec: actionsv1alpha1.RunnerSpec{
+								Organization: "test",
+								Image:        "bar",
+								Group:        "baz",
+								Env: []corev1.EnvVar{
+									{Name: "FOO", Value: "FOOVALUE"},
+								},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, rd, "test RunnerDeployment")
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
+			}
+
+			// Scale-up to 2 replicas
+			{
+				hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
+						ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
+							Name: name,
+						},
+						MinReplicas:                       intPtr(2),
+						MaxReplicas:                       intPtr(5),
+						ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
+						Metrics:                           nil,
+						ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
+							{
+								GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
+									PullRequest: &actionsv1alpha1.PullRequestSpec{
+										Types:    []string{"created"},
+										Branches: []string{"main"},
+									},
+								},
+								Amount:   1,
+								Duration: metav1.Duration{Duration: time.Minute},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
+
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2)
+				ExpectHRAStatusCacheEntryLengthEventuallyEquals(ctx, ns.Name, name, 1)
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(2, "count of fake runners after HRA creation")
+			}
+
+			// Scale-up to 3 replicas on second pull_request create webhook event
+			{
+				env.SendOrgPullRequestEvent("test", "valid", "main", "created")
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3, "runners after second webhook event")
+			}
+		})
+
 		It("should create and scale organization's repository runners on pull_request event", func() {
 			name := "example-runnerdeploy"
 
```
```diff
@@ -184,7 +272,17 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 				},
 				Spec: actionsv1alpha1.RunnerDeploymentSpec{
 					Replicas: intPtr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"foo": "bar",
+						},
+					},
 					Template: actionsv1alpha1.RunnerTemplate{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"foo": "bar",
+							},
+						},
 						Spec: actionsv1alpha1.RunnerSpec{
 							Repository: "test/valid",
 							Image:      "bar",
```
```diff
@@ -224,7 +322,11 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 					MinReplicas:                       intPtr(1),
 					MaxReplicas:                       intPtr(3),
 					ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
-					Metrics:                           nil,
+					Metrics: []actionsv1alpha1.MetricSpec{
+						{
+							Type: actionsv1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns,
+						},
+					},
 					ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
 						{
 							GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
```
```diff
@@ -301,7 +403,17 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 				},
 				Spec: actionsv1alpha1.RunnerDeploymentSpec{
 					Replicas: intPtr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"foo": "bar",
+						},
+					},
 					Template: actionsv1alpha1.RunnerTemplate{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"foo": "bar",
+							},
+						},
 						Spec: actionsv1alpha1.RunnerSpec{
 							Repository: "test/valid",
 							Image:      "bar",
```
```diff
@@ -339,7 +451,11 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 					MinReplicas:                       intPtr(1),
 					MaxReplicas:                       intPtr(5),
 					ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
-					Metrics:                           nil,
+					Metrics: []actionsv1alpha1.MetricSpec{
+						{
+							Type: actionsv1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns,
+						},
+					},
 					ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
 						{
 							GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
```
```diff
@@ -385,6 +501,110 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 			env.ExpectRegisteredNumberCountEventuallyEquals(5, "count of fake list runners")
 		})
 
+		It("should create and scale organization's repository runners only on check_run event", func() {
+			name := "example-runnerdeploy"
+
+			{
+				rd := &actionsv1alpha1.RunnerDeployment{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.RunnerDeploymentSpec{
+						Replicas: intPtr(1),
+						Selector: &metav1.LabelSelector{
+							MatchLabels: map[string]string{
+								"foo": "bar",
+							},
+						},
+						Template: actionsv1alpha1.RunnerTemplate{
+							ObjectMeta: metav1.ObjectMeta{
+								Labels: map[string]string{
+									"foo": "bar",
+								},
+							},
+							Spec: actionsv1alpha1.RunnerSpec{
+								Repository: "test/valid",
+								Image:      "bar",
+								Group:      "baz",
+								Env: []corev1.EnvVar{
+									{Name: "FOO", Value: "FOOVALUE"},
+								},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, rd, "test RunnerDeployment")
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(1, "count of fake list runners")
+			}
+
+			// Scale-up to 3 replicas by the default TotalNumberOfQueuedAndInProgressWorkflowRuns-based scaling
+			// See workflowRunsFor3Replicas_queued and workflowRunsFor3Replicas_in_progress for GitHub List-Runners API responses
+			// used while testing.
+			{
+				hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
+						ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
+							Name: name,
+						},
+						MinReplicas:                       intPtr(1),
+						MaxReplicas:                       intPtr(5),
+						ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
+						ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
+							{
+								GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
+									CheckRun: &actionsv1alpha1.CheckRunSpec{
+										Types:  []string{"created"},
+										Status: "pending",
+									},
+								},
+								Amount:   1,
+								Duration: metav1.Duration{Duration: time.Minute},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
+
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(1, "count of fake list runners")
+			}
+
+			// Scale-up to 2 replicas on first check_run create webhook event
+			{
+				env.SendOrgCheckRunEvent("test", "valid", "pending", "created")
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1, "runner sets after webhook")
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2, "runners after first webhook event")
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(2, "count of fake list runners")
+			}
+
+			// Scale-up to 3 replicas on second check_run create webhook event
+			{
+				env.SendOrgCheckRunEvent("test", "valid", "pending", "created")
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3, "runners after second webhook event")
+			}
+
+			env.ExpectRegisteredNumberCountEventuallyEquals(3, "count of fake list runners")
+		})
+
 		It("should create and scale user's repository runners on pull_request event", func() {
 			name := "example-runnerdeploy"
 
```
```diff
@@ -396,7 +616,17 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 				},
 				Spec: actionsv1alpha1.RunnerDeploymentSpec{
 					Replicas: intPtr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"foo": "bar",
+						},
+					},
 					Template: actionsv1alpha1.RunnerTemplate{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"foo": "bar",
+							},
+						},
 						Spec: actionsv1alpha1.RunnerSpec{
 							Repository: "test/valid",
 							Image:      "bar",
```
```diff
@@ -436,7 +666,11 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 					MinReplicas:                       intPtr(1),
 					MaxReplicas:                       intPtr(3),
 					ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
-					Metrics:                           nil,
+					Metrics: []actionsv1alpha1.MetricSpec{
+						{
+							Type: actionsv1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns,
+						},
+					},
 					ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
 						{
 							GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
```
```diff
@@ -504,6 +738,99 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 			}
 		})
 
+		It("should create and scale user's repository runners only on pull_request event", func() {
+			name := "example-runnerdeploy"
+
+			{
+				rd := &actionsv1alpha1.RunnerDeployment{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.RunnerDeploymentSpec{
+						Replicas: intPtr(1),
+						Selector: &metav1.LabelSelector{
+							MatchLabels: map[string]string{
+								"foo": "bar",
+							},
+						},
+						Template: actionsv1alpha1.RunnerTemplate{
+							ObjectMeta: metav1.ObjectMeta{
+								Labels: map[string]string{
+									"foo": "bar",
+								},
+							},
+							Spec: actionsv1alpha1.RunnerSpec{
+								Repository: "test/valid",
+								Image:      "bar",
+								Group:      "baz",
+								Env: []corev1.EnvVar{
+									{Name: "FOO", Value: "FOOVALUE"},
+								},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, rd, "test RunnerDeployment")
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
+			}
+
+			{
+				hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
+						ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
+							Name: name,
+						},
+						MinReplicas:                       intPtr(1),
+						MaxReplicas:                       intPtr(3),
+						ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
+						ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
+							{
+								GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
+									PullRequest: &actionsv1alpha1.PullRequestSpec{
+										Types:    []string{"created"},
+										Branches: []string{"main"},
+									},
+								},
+								Amount:   1,
+								Duration: metav1.Duration{Duration: time.Minute},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
+
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(1, "count of fake runners after HRA creation")
+			}
+
+			// Scale-up to 2 replicas on first pull_request create webhook event
+			{
+				env.SendUserPullRequestEvent("test", "valid", "main", "created")
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1, "runner sets after webhook")
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2, "runners after first webhook event")
+				ExpectHRADesiredReplicasEquals(ctx, ns.Name, name, 2, "runner deployment desired replicas")
+			}
+
+			// Scale-up to 3 replicas on second pull_request create webhook event
+			{
+				env.SendUserPullRequestEvent("test", "valid", "main", "created")
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3, "runners after second webhook event")
+				ExpectHRADesiredReplicasEquals(ctx, ns.Name, name, 3, "runner deployment desired replicas")
+			}
+		})
+
 		It("should create and scale user's repository runners on check_run event", func() {
 			name := "example-runnerdeploy"
 
```
```diff
@@ -515,7 +842,17 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 				},
 				Spec: actionsv1alpha1.RunnerDeploymentSpec{
 					Replicas: intPtr(1),
+					Selector: &metav1.LabelSelector{
+						MatchLabels: map[string]string{
+							"foo": "bar",
+						},
+					},
 					Template: actionsv1alpha1.RunnerTemplate{
+						ObjectMeta: metav1.ObjectMeta{
+							Labels: map[string]string{
+								"foo": "bar",
+							},
+						},
 						Spec: actionsv1alpha1.RunnerSpec{
 							Repository: "test/valid",
 							Image:      "bar",
```
```diff
@@ -553,7 +890,11 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 					MinReplicas:                       intPtr(1),
 					MaxReplicas:                       intPtr(5),
 					ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
-					Metrics:                           nil,
+					Metrics: []actionsv1alpha1.MetricSpec{
+						{
+							Type: actionsv1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns,
+						},
+					},
 					ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
 						{
 							GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
```
```diff
@@ -599,6 +940,110 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
 			env.ExpectRegisteredNumberCountEventuallyEquals(5, "count of fake list runners")
 		})
 
+		It("should create and scale user's repository runners only on check_run event", func() {
+			name := "example-runnerdeploy"
+
+			{
+				rd := &actionsv1alpha1.RunnerDeployment{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.RunnerDeploymentSpec{
+						Replicas: intPtr(1),
+						Selector: &metav1.LabelSelector{
+							MatchLabels: map[string]string{
+								"foo": "bar",
+							},
+						},
+						Template: actionsv1alpha1.RunnerTemplate{
+							ObjectMeta: metav1.ObjectMeta{
+								Labels: map[string]string{
+									"foo": "bar",
+								},
+							},
+							Spec: actionsv1alpha1.RunnerSpec{
+								Repository: "test/valid",
+								Image:      "bar",
+								Group:      "baz",
+								Env: []corev1.EnvVar{
+									{Name: "FOO", Value: "FOOVALUE"},
+								},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, rd, "test RunnerDeployment")
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(1, "count of fake list runners")
+			}
+
+			// Scale-up to 3 replicas by the default TotalNumberOfQueuedAndInProgressWorkflowRuns-based scaling
+			// See workflowRunsFor3Replicas_queued and workflowRunsFor3Replicas_in_progress for GitHub List-Runners API responses
+			// used while testing.
+			{
+				hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      name,
+						Namespace: ns.Name,
+					},
+					Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
+						ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
+							Name: name,
+						},
+						MinReplicas:                       intPtr(1),
+						MaxReplicas:                       intPtr(5),
+						ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
+						ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
+							{
+								GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
+									CheckRun: &actionsv1alpha1.CheckRunSpec{
+										Types:  []string{"created"},
+										Status: "pending",
+									},
+								},
+								Amount:   1,
+								Duration: metav1.Duration{Duration: time.Minute},
+							},
+						},
+					},
+				}
+
+				ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
+
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(1, "count of fake list runners")
+			}
+
+			// Scale-up to 2 replicas on first check_run create webhook event
+			{
+				env.SendUserCheckRunEvent("test", "valid", "pending", "created")
+				ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1, "runner sets after webhook")
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2, "runners after first webhook event")
+			}
+
+			{
+				env.ExpectRegisteredNumberCountEventuallyEquals(2, "count of fake list runners")
+			}
+
+			// Scale-up to 3 replicas on second check_run create webhook event
+			{
+				env.SendUserCheckRunEvent("test", "valid", "pending", "created")
+				ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3, "runners after second webhook event")
+			}
+
+			env.ExpectRegisteredNumberCountEventuallyEquals(3, "count of fake list runners")
+		})
+
 	})
 })
```
@@ -22,6 +22,7 @@ import (
	"fmt"
	gogithub "github.com/google/go-github/v33/github"
	"github.com/summerwind/actions-runner-controller/hash"
	"k8s.io/apimachinery/pkg/util/wait"
	"strings"
	"time"

@@ -129,8 +130,8 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	newRunner := runner.DeepCopy()
	newRunner.ObjectMeta.Finalizers = finalizers

	if err := r.Update(ctx, newRunner); err != nil {
		log.Error(err, "Failed to update runner")
	if err := r.Patch(ctx, newRunner, client.MergeFrom(&runner)); err != nil {
		log.Error(err, "Failed to update runner for finalizer removal")
		return ctrl.Result{}, err
	}

@@ -159,31 +160,23 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	}

	if err := r.Create(ctx, &newPod); err != nil {
		if kerrors.IsAlreadyExists(err) {
			// Gracefully handle pod-already-exists errors due to informer cache delay.
			// Without this we got a few errors like the below on new runner pods:
			//   2021-03-16T00:23:10.116Z ERROR controller-runtime.controller Reconciler error {"controller": "runner-controller", "request": "default/example-runnerdeploy-b2g2g-j4mcp", "error": "pods \"example-runnerdeploy-b2g2g-j4mcp\" already exists"}
			log.Info("Runner pod already exists. Probably this pod has already been created in a previous reconciliation but the new pod is not yet cached.")

			return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
		}

		log.Error(err, "Failed to create pod resource")

		return ctrl.Result{}, err
	}

	r.Recorder.Event(&runner, corev1.EventTypeNormal, "PodCreated", fmt.Sprintf("Created pod '%s'", newPod.Name))
	log.Info("Created runner pod", "repository", runner.Spec.Repository)
} else {
	// If the pod has ended up succeeded, we need to restart it.
	// Happens e.g. when dind is in the runner and the run completes.
	restart := pod.Status.Phase == corev1.PodSucceeded

	if !restart && runner.Status.Phase != string(pod.Status.Phase) {
		updated := runner.DeepCopy()
		updated.Status.Phase = string(pod.Status.Phase)
		updated.Status.Reason = pod.Status.Reason
		updated.Status.Message = pod.Status.Message

		if err := r.Status().Update(ctx, updated); err != nil {
			log.Error(err, "Failed to update runner status")
			return ctrl.Result{}, err
		}

		return ctrl.Result{}, nil
	}

	if !pod.ObjectMeta.DeletionTimestamp.IsZero() {
		deletionTimeout := 1 * time.Minute
		currentTime := time.Now()
@@ -220,6 +213,10 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
		}
	}

	// If the pod has ended up succeeded, we need to restart it.
	// Happens e.g. when dind is in the runner and the run completes.
	restart := pod.Status.Phase == corev1.PodSucceeded

	if pod.Status.Phase == corev1.PodRunning {
		for _, status := range pod.Status.ContainerStatuses {
			if status.Name != containerName {
@@ -244,24 +241,45 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
		return ctrl.Result{}, err
	}

	var registrationRecheckDelay time.Duration

	// All checks done below only decide whether a restart is needed.
	// If a restart was already decided before, there is no need for the checks,
	// saving API calls and scary log messages.
	if !restart {
		registrationCheckInterval := time.Minute

		notRegistered := false
		// We want to call the ListRunners GitHub Actions API only once per runner per minute.
		// This if block, in conjunction with:
		//   return ctrl.Result{RequeueAfter: registrationRecheckDelay}, nil
		// achieves that.
		if lastCheckTime := runner.Status.LastRegistrationCheckTime; lastCheckTime != nil {
			nextCheckTime := lastCheckTime.Add(registrationCheckInterval)
			if nextCheckTime.After(time.Now()) {
				log.Info(
					fmt.Sprintf("Skipping registration check because it's deferred until %s", nextCheckTime),
				)

				// Note that we don't need to explicitly requeue on this reconciliation because
				// the requeue should have already been scheduled previously
				// (with `return ctrl.Result{RequeueAfter: registrationRecheckDelay}, nil` as noted above and coded below)
				return ctrl.Result{}, nil
			}
		}

		notFound := false
		offline := false

		runnerBusy, err := r.GitHubClient.IsRunnerBusy(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)

		currentTime := time.Now()

		if err != nil {
			var notFoundException *github.RunnerNotFound
			var offlineException *github.RunnerOffline
			if errors.As(err, &notFoundException) {
				log.V(1).Info("Failed to check if runner is busy. Either this runner has never been successfully registered to GitHub or it still needs more time.", "runnerName", runner.Name)

				notRegistered = true
				notFound = true
			} else if errors.As(err, &offlineException) {
				log.V(1).Info("GitHub runner appears to be offline, waiting for runner to get online ...", "runnerName", runner.Name)
				offline = true
			} else {
				var e *gogithub.RateLimitError
@@ -293,40 +311,91 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
		}

		registrationTimeout := 10 * time.Minute
		currentTime := time.Now()
		registrationDidTimeout := currentTime.Sub(pod.CreationTimestamp.Add(registrationTimeout)) > 0
		durationAfterRegistrationTimeout := currentTime.Sub(pod.CreationTimestamp.Add(registrationTimeout))
		registrationDidTimeout := durationAfterRegistrationTimeout > 0

		if notRegistered && registrationDidTimeout {
			log.Info(
				"Runner failed to register itself to GitHub in a timely manner. "+
					"Recreating the pod to see if it resolves the issue. "+
					"CAUTION: If you see this a lot, you should investigate the root cause. "+
					"See https://github.com/summerwind/actions-runner-controller/issues/288",
				"podCreationTimestamp", pod.CreationTimestamp,
				"currentTime", currentTime,
				"configuredRegistrationTimeout", registrationTimeout,
			)
		if notFound {
			if registrationDidTimeout {
				log.Info(
					"Runner failed to register itself to GitHub in a timely manner. "+
						"Recreating the pod to see if it resolves the issue. "+
						"CAUTION: If you see this a lot, you should investigate the root cause. "+
						"See https://github.com/summerwind/actions-runner-controller/issues/288",
					"podCreationTimestamp", pod.CreationTimestamp,
					"currentTime", currentTime,
					"configuredRegistrationTimeout", registrationTimeout,
				)

				restart = true
			restart = true
			} else {
				log.V(1).Info(
					"Runner pod exists but we failed to check if runner is busy. Apparently it still needs more time.",
					"runnerName", runner.Name,
				)
			}
		} else if offline {
			if registrationDidTimeout {
				log.Info(
					"Already existing GitHub runner still appears offline. "+
						"Recreating the pod to see if it resolves the issue. "+
						"CAUTION: If you see this a lot, you should investigate the root cause. ",
					"podCreationTimestamp", pod.CreationTimestamp,
					"currentTime", currentTime,
					"configuredRegistrationTimeout", registrationTimeout,
				)

				restart = true
			} else {
				log.V(1).Info(
					"Runner pod exists but the GitHub runner appears to be still offline. Waiting for runner to get online ...",
					"runnerName", runner.Name,
				)
			}
		}

		if offline && registrationDidTimeout {
			log.Info(
				"Already existing GitHub runner still appears offline. "+
					"Recreating the pod to see if it resolves the issue. "+
					"CAUTION: If you see this a lot, you should investigate the root cause. ",
				"podCreationTimestamp", pod.CreationTimestamp,
				"currentTime", currentTime,
				"configuredRegistrationTimeout", registrationTimeout,
			)

			restart = true
		if (notFound || offline) && !registrationDidTimeout {
			registrationRecheckDelay = registrationCheckInterval + wait.Jitter(10*time.Second, 0.1)
		}

	}

	// Don't do anything if there's no need to restart the runner
	if !restart {
		// This guard enables us to update runner.Status.Phase to `Running` only after
		// the runner is registered to GitHub.
		if registrationRecheckDelay > 0 {
			log.V(1).Info(fmt.Sprintf("Rechecking the runner registration in %s", registrationRecheckDelay))

			updated := runner.DeepCopy()
			updated.Status.LastRegistrationCheckTime = &metav1.Time{Time: time.Now()}

			if err := r.Status().Patch(ctx, updated, client.MergeFrom(&runner)); err != nil {
				log.Error(err, "Failed to update runner status")
				return ctrl.Result{}, err
			}

			return ctrl.Result{RequeueAfter: registrationRecheckDelay}, nil
		}

		if runner.Status.Phase != string(pod.Status.Phase) {
			if pod.Status.Phase == corev1.PodRunning {
				// Seeing this message, you can expect the runner to become `Running` soon.
				log.Info(
					"Runner appears to have registered and is running.",
					"podCreationTimestamp", pod.CreationTimestamp,
				)
			}

			updated := runner.DeepCopy()
			updated.Status.Phase = string(pod.Status.Phase)
			updated.Status.Reason = pod.Status.Reason
			updated.Status.Message = pod.Status.Message

			if err := r.Status().Patch(ctx, updated, client.MergeFrom(&runner)); err != nil {
				log.Error(err, "Failed to update runner status")
				return ctrl.Result{}, err
			}
		}

		return ctrl.Result{}, nil
	}

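The LastRegistrationCheckTime handling above is a common controller-runtime throttling idiom: record when the last expensive GitHub API check happened, skip the check while the interval has not elapsed, and rely on the RequeueAfter returned by the earlier reconciliation to revisit the object. A minimal, self-contained Go sketch of just that decision logic (illustrative names only, not code from this PR):

package main

import (
	"fmt"
	"time"
)

// recheckDecision mirrors the throttle pattern above: an expensive API check
// runs at most once per interval, and the caller requeues with the returned
// delay instead of polling on every reconciliation.
func recheckDecision(lastCheck *time.Time, interval time.Duration, now time.Time) (check bool, requeueAfter time.Duration) {
	if lastCheck != nil && lastCheck.Add(interval).After(now) {
		// Too early: skip the check. A requeue was already scheduled when
		// the previous check recorded its timestamp.
		return false, 0
	}
	// Check now, and requeue slightly after the interval so the next
	// reconciliation is allowed to check again.
	return true, interval + 10*time.Second
}

func main() {
	now := time.Now()

	recent := now.Add(-30 * time.Second)
	fmt.Println(recheckDecision(&recent, time.Minute, now)) // false 0s: checked 30s ago, defer

	stale := now.Add(-2 * time.Minute)
	fmt.Println(recheckDecision(&stale, time.Minute, now)) // true 1m10s: check now, requeue later
}
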
@@ -530,6 +599,15 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
		},
	}

	if mtu := runner.Spec.DockerMTU; mtu != nil && dockerdInRunner {
		pod.Spec.Containers[0].Env = append(pod.Spec.Containers[0].Env, []corev1.EnvVar{
			{
				Name:  "MTU",
				Value: fmt.Sprintf("%d", *runner.Spec.DockerMTU),
			},
		}...)
	}

	if !dockerdInRunner && dockerEnabled {
		runnerVolumeName := "runner"
		runnerVolumeMountPath := "/runner"
@@ -612,6 +690,15 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
			Resources: runner.Spec.DockerdContainerResources,
		})

		if mtu := runner.Spec.DockerMTU; mtu != nil {
			pod.Spec.Containers[1].Env = append(pod.Spec.Containers[1].Env, []corev1.EnvVar{
				{
					Name:  "DOCKERD_ROOTLESS_ROOTLESSKIT_MTU",
					Value: fmt.Sprintf("%d", *runner.Spec.DockerMTU),
				},
			}...)
		}

	}

	if len(runner.Spec.Containers) != 0 {

@@ -20,6 +20,7 @@ import (
	"context"
	"fmt"
	"hash/fnv"
	"reflect"
	"sort"
	"time"

@@ -40,7 +41,8 @@ import (
)

const (
	LabelKeyRunnerTemplateHash = "runner-template-hash"
	LabelKeyRunnerTemplateHash   = "runner-template-hash"
	LabelKeyRunnerDeploymentName = "runner-deployment-name"

	runnerSetOwnerKey = ".metadata.controller"
)
@@ -143,6 +145,28 @@ func (r *RunnerDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
		return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
	}

	if !reflect.DeepEqual(newestSet.Spec.Selector, desiredRS.Spec.Selector) {
		updateSet := newestSet.DeepCopy()
		updateSet.Spec = *desiredRS.Spec.DeepCopy()

		// A selector update doesn't trigger replicaset replacement,
		// but we still need to update the existing replicaset with it.
		// Otherwise the selector-based runner query will never work on replicasets created before controller v0.17.0.
		// See https://github.com/summerwind/actions-runner-controller/pull/355#discussion_r585379259
		if err := r.Client.Update(ctx, updateSet); err != nil {
			log.Error(err, "Failed to update runnerreplicaset resource")

			return ctrl.Result{}, err
		}

		// At this point, we are already sure that there's no need to create a new replicaset
		// as the runner template hash is not changed.
		//
		// But we still need to requeue for the (possibly rare) cases that there are still old replicasets that need
		// to be cleaned up.
		return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
	}

	const defaultReplicas = 1

	currentDesiredReplicas := getIntOrDefault(newestSet.Spec.Replicas, defaultReplicas)
@@ -170,7 +194,10 @@ func (r *RunnerDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
			Namespace: newestSet.Namespace,
			Name:      newestSet.Name,
		}).
		Info("Waiting for the newest runner replica set to be 100% available")
		Info("Waiting for the newest runner replica set to be 100% available",
			"ready", readyReplicas,
			"desired", currentDesiredReplicas,
		)

	return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}
@@ -258,32 +285,94 @@ func CloneAndAddLabel(labels map[string]string, labelKey, labelValue string) map
	return newLabels
}

func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeployment) (*v1alpha1.RunnerReplicaSet, error) {
	newRSTemplate := *rd.Spec.Template.DeepCopy()
	templateHash := ComputeHash(&newRSTemplate)
	// Add template hash label to selector.
	labels := CloneAndAddLabel(rd.Spec.Template.Labels, LabelKeyRunnerTemplateHash, templateHash)
// Clones the given selector and returns a new selector with the given key and value added.
// Returns the given selector, if labelKey is empty.
//
// Proudly copied from k8s.io/kubernetes/pkg/util/labels.CloneSelectorAndAddLabel
func CloneSelectorAndAddLabel(selector *metav1.LabelSelector, labelKey, labelValue string) *metav1.LabelSelector {
	if labelKey == "" {
		// Don't need to add a label.
		return selector
	}

	for _, l := range r.CommonRunnerLabels {
	// Clone.
	newSelector := new(metav1.LabelSelector)

	newSelector.MatchLabels = make(map[string]string)
	if selector.MatchLabels != nil {
		for key, val := range selector.MatchLabels {
			newSelector.MatchLabels[key] = val
		}
	}
	newSelector.MatchLabels[labelKey] = labelValue

	if selector.MatchExpressions != nil {
		newMExps := make([]metav1.LabelSelectorRequirement, len(selector.MatchExpressions))
		for i, me := range selector.MatchExpressions {
			newMExps[i].Key = me.Key
			newMExps[i].Operator = me.Operator
			if me.Values != nil {
				newMExps[i].Values = make([]string, len(me.Values))
				copy(newMExps[i].Values, me.Values)
			} else {
				newMExps[i].Values = nil
			}
		}
		newSelector.MatchExpressions = newMExps
	} else {
		newSelector.MatchExpressions = nil
	}

	return newSelector
}

func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeployment) (*v1alpha1.RunnerReplicaSet, error) {
	return newRunnerReplicaSet(&rd, r.CommonRunnerLabels, r.Scheme)
}

func getSelector(rd *v1alpha1.RunnerDeployment) *metav1.LabelSelector {
	selector := rd.Spec.Selector
	if selector == nil {
		selector = &metav1.LabelSelector{MatchLabels: map[string]string{LabelKeyRunnerDeploymentName: rd.Name}}
	}

	return selector
}

func newRunnerReplicaSet(rd *v1alpha1.RunnerDeployment, commonRunnerLabels []string, scheme *runtime.Scheme) (*v1alpha1.RunnerReplicaSet, error) {
	newRSTemplate := *rd.Spec.Template.DeepCopy()

	for _, l := range commonRunnerLabels {
		newRSTemplate.Spec.Labels = append(newRSTemplate.Spec.Labels, l)
	}

	newRSTemplate.Labels = labels
	templateHash := ComputeHash(&newRSTemplate)

	// Add template hash label to selector.
	newRSTemplate.ObjectMeta.Labels = CloneAndAddLabel(newRSTemplate.ObjectMeta.Labels, LabelKeyRunnerTemplateHash, templateHash)

	// This label selector is used by default when rd.Spec.Selector is empty.
	newRSTemplate.ObjectMeta.Labels = CloneAndAddLabel(newRSTemplate.ObjectMeta.Labels, LabelKeyRunnerDeploymentName, rd.Name)

	selector := getSelector(rd)

	newRSSelector := CloneSelectorAndAddLabel(selector, LabelKeyRunnerTemplateHash, templateHash)

	rs := v1alpha1.RunnerReplicaSet{
		TypeMeta: metav1.TypeMeta{},
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: rd.ObjectMeta.Name + "-",
			Namespace:    rd.ObjectMeta.Namespace,
			Labels:       labels,
			Labels:       newRSTemplate.ObjectMeta.Labels,
		},
		Spec: v1alpha1.RunnerReplicaSetSpec{
			Replicas: rd.Spec.Replicas,
			Selector: newRSSelector,
			Template: newRSTemplate,
		},
	}

	if err := ctrl.SetControllerReference(&rd, &rs, r.Scheme); err != nil {
	if err := ctrl.SetControllerReference(rd, &rs, scheme); err != nil {
		return &rs, err
	}

@@ -2,11 +2,13 @@ package controllers

import (
	"context"
	"github.com/google/go-cmp/cmp"
	"k8s.io/apimachinery/pkg/runtime"
	"fmt"
	"testing"
	"time"

	"github.com/google/go-cmp/cmp"
	"k8s.io/apimachinery/pkg/runtime"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes/scheme"
@@ -36,7 +38,17 @@ func TestNewRunnerReplicaSet(t *testing.T) {
			Name: "example",
		},
		Spec: actionsv1alpha1.RunnerDeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					"foo": "bar",
				},
			},
			Template: actionsv1alpha1.RunnerTemplate{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{
						"foo": "bar",
					},
				},
				Spec: actionsv1alpha1.RunnerSpec{
					Labels: []string{"project1"},
				},
@@ -49,10 +61,63 @@ func TestNewRunnerReplicaSet(t *testing.T) {
		t.Fatalf("%v", err)
	}

	want := []string{"project1", "dev"}
	if d := cmp.Diff(want, rs.Spec.Template.Spec.Labels); d != "" {
	if val, ok := rs.Labels["foo"]; ok {
		if val != "bar" {
			t.Errorf("foo label does not have bar but %v", val)
		}
	} else {
		t.Errorf("foo label does not exist")
	}

	hash1, ok := rs.Labels[LabelKeyRunnerTemplateHash]
	if !ok {
		t.Errorf("missing runner-template-hash label")
	}

	runnerLabel := []string{"project1", "dev"}
	if d := cmp.Diff(runnerLabel, rs.Spec.Template.Spec.Labels); d != "" {
		t.Errorf("%s", d)
	}

	rd2 := rd.DeepCopy()
	rd2.Spec.Template.Spec.Labels = []string{"project2"}

	rs2, err := r.newRunnerReplicaSet(*rd2)
	if err != nil {
		t.Fatalf("%v", err)
	}

	hash2, ok := rs2.Labels[LabelKeyRunnerTemplateHash]
	if !ok {
		t.Errorf("missing runner-template-hash label")
	}

	if hash1 == hash2 {
		t.Errorf(
			"runner replica sets from runner deployments with varying labels must have different template hash, but got %s and %s",
			hash1, hash2,
		)
	}

	rd3 := rd.DeepCopy()
	rd3.Spec.Template.Labels["foo"] = "baz"

	rs3, err := r.newRunnerReplicaSet(*rd3)
	if err != nil {
		t.Fatalf("%v", err)
	}

	hash3, ok := rs3.Labels[LabelKeyRunnerTemplateHash]
	if !ok {
		t.Errorf("missing runner-template-hash label")
	}

	if hash1 == hash3 {
		t.Errorf(
			"runner replica sets from runner deployments with varying meta labels must have different template hash, but got %s and %s",
			hash1, hash3,
		)
	}
}

// SetupDeploymentTest will set up a testing environment.
@@ -112,7 +177,113 @@ var _ = Context("Inside of a new namespace", func() {
	Describe("when no existing resources exist", func() {

		It("should create a new RunnerReplicaSet resource from the specified template, add another RunnerReplicaSet on template modification, and eventually remove old runnerreplicasets", func() {
			name := "example-runnerdeploy"
			name := "example-runnerdeploy-1"

			{
				rs := &actionsv1alpha1.RunnerDeployment{
					ObjectMeta: metav1.ObjectMeta{
						Name:      name,
						Namespace: ns.Name,
					},
					Spec: actionsv1alpha1.RunnerDeploymentSpec{
						Replicas: intPtr(1),
						Selector: &metav1.LabelSelector{
							MatchLabels: map[string]string{
								"foo": "bar",
							},
						},
						Template: actionsv1alpha1.RunnerTemplate{
							ObjectMeta: metav1.ObjectMeta{
								Labels: map[string]string{
									"foo": "bar",
								},
							},
							Spec: actionsv1alpha1.RunnerSpec{
								Repository: "foo/bar",
								Image:      "bar",
								Env: []corev1.EnvVar{
									{Name: "FOO", Value: "FOOVALUE"},
								},
							},
						},
					},
				}

				err := k8sClient.Create(ctx, rs)

				Expect(err).NotTo(HaveOccurred(), "failed to create test RunnerReplicaSet resource")

				runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}

				Eventually(
					func() (int, error) {
						selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
						if err != nil {
							return 0, err
						}
						err = k8sClient.List(
							ctx,
							&runnerSets,
							client.InNamespace(ns.Name),
							client.MatchingLabelsSelector{Selector: selector},
						)
						if err != nil {
							return 0, err
						}
						if len(runnerSets.Items) != 1 {
							return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
						}

						return *runnerSets.Items[0].Spec.Replicas, nil
					},
					time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
			}

			{
				// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
				// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
				//   Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
				var rd actionsv1alpha1.RunnerDeployment
				Eventually(func() error {
					err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &rd)
					if err != nil {
						return fmt.Errorf("failed to get test RunnerReplicaSet resource: %v\n", err)
					}
					rd.Spec.Replicas = intPtr(2)

					return k8sClient.Update(ctx, &rd)
				},
					time.Second*1, time.Millisecond*500).Should(BeNil())

				runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}

				Eventually(
					func() (int, error) {
						selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
						if err != nil {
							return 0, err
						}
						err = k8sClient.List(
							ctx,
							&runnerSets,
							client.InNamespace(ns.Name),
							client.MatchingLabelsSelector{Selector: selector},
						)
						if err != nil {
							return 0, err
						}
						if len(runnerSets.Items) != 1 {
							return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
						}

						return *runnerSets.Items[0].Spec.Replicas, nil
					},
					time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
			}
		})

		It("should create a new RunnerReplicaSet resource from the specified template without labels and selector, add another RunnerReplicaSet on template modification, and eventually remove old runnerreplicasets", func() {
			name := "example-runnerdeploy-2"

			{
				rs := &actionsv1alpha1.RunnerDeployment{
@@ -141,29 +312,25 @@ var _ = Context("Inside of a new namespace", func() {
	runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}

	Eventually(
		func() int {
			err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
		func() (int, error) {
			selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
			if err != nil {
				logf.Log.Error(err, "list runner sets")
				return 0, err
			}

			return len(runnerSets.Items)
		},
		time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))

	Eventually(
		func() int {
			err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
			err = k8sClient.List(
				ctx,
				&runnerSets,
				client.InNamespace(ns.Name),
				client.MatchingLabelsSelector{Selector: selector},
			)
			if err != nil {
				logf.Log.Error(err, "list runner sets")
				return 0, err
			}
			if len(runnerSets.Items) != 1 {
				return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
			}

			if len(runnerSets.Items) == 0 {
				logf.Log.Info("No runnerreplicasets exist yet")
				return -1
			}

			return *runnerSets.Items[0].Spec.Replicas
			return *runnerSets.Items[0].Spec.Replicas, nil
		},
		time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
	}
@@ -172,13 +339,12 @@ var _ = Context("Inside of a new namespace", func() {
	// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
	// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
	//   Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
	var rd actionsv1alpha1.RunnerDeployment
	Eventually(func() error {
		var rd actionsv1alpha1.RunnerDeployment

		err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &rd)

		Expect(err).NotTo(HaveOccurred(), "failed to get test RunnerReplicaSet resource")

		if err != nil {
			return fmt.Errorf("failed to get test RunnerReplicaSet resource: %v\n", err)
		}
		rd.Spec.Replicas = intPtr(2)

		return k8sClient.Update(ctx, &rd)
@@ -188,27 +354,126 @@ var _ = Context("Inside of a new namespace", func() {
	runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}

	Eventually(
		func() int {
			err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
		func() (int, error) {
			selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
			if err != nil {
				logf.Log.Error(err, "list runner sets")
				return 0, err
			}
			err = k8sClient.List(
				ctx,
				&runnerSets,
				client.InNamespace(ns.Name),
				client.MatchingLabelsSelector{Selector: selector},
			)
			if err != nil {
				return 0, err
			}
			if len(runnerSets.Items) != 1 {
				return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
			}

			return len(runnerSets.Items)
		},
		time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))

	Eventually(
		func() int {
			err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
			if err != nil {
				logf.Log.Error(err, "list runner sets")
			}

			return *runnerSets.Items[0].Spec.Replicas
			return *runnerSets.Items[0].Spec.Replicas, nil
		},
		time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
	}
})

It("should adopt RunnerReplicaSet created before 0.18.0 to have Spec.Selector", func() {
	name := "example-runnerdeploy-2"

	{
		rd := &actionsv1alpha1.RunnerDeployment{
			ObjectMeta: metav1.ObjectMeta{
				Name:      name,
				Namespace: ns.Name,
			},
			Spec: actionsv1alpha1.RunnerDeploymentSpec{
				Replicas: intPtr(1),
				Template: actionsv1alpha1.RunnerTemplate{
					Spec: actionsv1alpha1.RunnerSpec{
						Repository: "foo/bar",
						Image:      "bar",
						Env: []corev1.EnvVar{
							{Name: "FOO", Value: "FOOVALUE"},
						},
					},
				},
			},
		}

		createRDErr := k8sClient.Create(ctx, rd)
		Expect(createRDErr).NotTo(HaveOccurred(), "failed to create test RunnerReplicaSet resource")

		Eventually(
			func() (int, error) {
				runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}

				err := k8sClient.List(
					ctx,
					&runnerSets,
					client.InNamespace(ns.Name),
				)
				if err != nil {
					return 0, err
				}

				return len(runnerSets.Items), nil
			},
			time.Second*1, time.Millisecond*500).Should(BeEquivalentTo(1))

		var rs17 *actionsv1alpha1.RunnerReplicaSet

		Consistently(
			func() (*metav1.LabelSelector, error) {
				runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}

				err := k8sClient.List(
					ctx,
					&runnerSets,
					client.InNamespace(ns.Name),
				)
				if err != nil {
					return nil, err
				}
				if len(runnerSets.Items) != 1 {
					return nil, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
				}

				rs17 = &runnerSets.Items[0]

				return runnerSets.Items[0].Spec.Selector, nil
			},
			time.Second*1, time.Millisecond*500).Should(Not(BeNil()))

		// We simulate the old, pre-0.18.0 RunnerReplicaSet by updating it.
		// I've tried to use controllerutil.Set{Owner,Controller}Reference and k8sClient.Create(rs17)
		// but it didn't work due to the missing RD UID, where the UID is generated on the K8s API server on k8sClient.Create(rd)
		rs17.Spec.Selector = nil

		updateRSErr := k8sClient.Update(ctx, rs17)
		Expect(updateRSErr).NotTo(HaveOccurred())

		Eventually(
			func() (*metav1.LabelSelector, error) {
				runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}

				err := k8sClient.List(
					ctx,
					&runnerSets,
					client.InNamespace(ns.Name),
				)
				if err != nil {
					return nil, err
				}
				if len(runnerSets.Items) != 1 {
					return nil, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
				}

				return runnerSets.Items[0].Spec.Selector, nil
			},
			time.Second*1, time.Millisecond*500).Should(Not(BeNil()))
	}
})

})
})

@@ -68,8 +68,18 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
		return ctrl.Result{}, nil
	}

	selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
	if err != nil {
		return ctrl.Result{}, err
	}
	// Get the Runners managed by the target RunnerReplicaSet
	var allRunners v1alpha1.RunnerList
	if err := r.List(ctx, &allRunners, client.InNamespace(req.Namespace)); err != nil {
	if err := r.List(
		ctx,
		&allRunners,
		client.InNamespace(req.Namespace),
		client.MatchingLabelsSelector{Selector: selector},
	); err != nil {
		if !kerrors.IsNotFound(err) {
			return ctrl.Result{}, err
		}
@@ -77,9 +87,14 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e

	var myRunners []v1alpha1.Runner

	var available, ready int
	var (
		available int
		ready     int
	)

	for _, r := range allRunners.Items {
		// This guard is required to prevent a RunnerReplicaSet created by the controller v0.17.0 or earlier
		// from treating all the runners in the namespace as its children.
		if metav1.IsControlledBy(&r, &rs) {
			myRunners = append(myRunners, r)

@@ -106,7 +121,7 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e

	// get runners that are currently not busy
	var notBusy []v1alpha1.Runner
	for _, runner := range myRunners {
	for _, runner := range allRunners.Items {
		busy, err := r.GitHubClient.IsRunnerBusy(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
		if err != nil {
			notRegistered := false

@@ -115,7 +115,17 @@ var _ = Context("Inside of a new namespace", func() {
	},
	Spec: actionsv1alpha1.RunnerReplicaSetSpec{
		Replicas: intPtr(1),
		Selector: &metav1.LabelSelector{
			MatchLabels: map[string]string{
				"foo": "bar",
			},
		},
		Template: actionsv1alpha1.RunnerTemplate{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{
					"foo": "bar",
				},
			},
			Spec: actionsv1alpha1.RunnerSpec{
				Repository: "foo/bar",
				Image:      "bar",
@@ -135,9 +145,26 @@ var _ = Context("Inside of a new namespace", func() {

	Eventually(
		func() int {
			err := k8sClient.List(ctx, &runners, client.InNamespace(ns.Name))
			selector, err := metav1.LabelSelectorAsSelector(
				&metav1.LabelSelector{
					MatchLabels: map[string]string{
						"foo": "bar",
					},
				},
			)
			if err != nil {
				logf.Log.Error(err, "failed to create labelselector")
				return -1
			}
			err = k8sClient.List(
				ctx,
				&runners,
				client.InNamespace(ns.Name),
				client.MatchingLabelsSelector{Selector: selector},
			)
			if err != nil {
				logf.Log.Error(err, "list runners")
				return -1
			}

			for i, runner := range runners.Items {
@@ -176,7 +203,23 @@ var _ = Context("Inside of a new namespace", func() {

	Eventually(
		func() int {
			err := k8sClient.List(ctx, &runners, client.InNamespace(ns.Name))
			selector, err := metav1.LabelSelectorAsSelector(
				&metav1.LabelSelector{
					MatchLabels: map[string]string{
						"foo": "bar",
					},
				},
			)
			if err != nil {
				logf.Log.Error(err, "failed to create labelselector")
				return -1
			}
			err = k8sClient.List(
				ctx,
				&runners,
				client.InNamespace(ns.Name),
				client.MatchingLabelsSelector{Selector: selector},
			)
			if err != nil {
				logf.Log.Error(err, "list runners")
			}
@@ -220,6 +263,7 @@ var _ = Context("Inside of a new namespace", func() {
			err := k8sClient.List(ctx, &runners, client.InNamespace(ns.Name))
			if err != nil {
				logf.Log.Error(err, "list runners")
				return -1
			}

			for i, runner := range runners.Items {

@@ -5,6 +5,7 @@ import (
	"fmt"
	"net/http"
	"net/url"
	"os"
	"strings"
	"sync"
	"time"
@@ -39,10 +40,20 @@ func (c *Config) NewClient() (*Client, error) {
	if len(c.Token) > 0 {
		transport = oauth2.NewClient(context.Background(), oauth2.StaticTokenSource(&oauth2.Token{AccessToken: c.Token})).Transport
	} else {
		tr, err := ghinstallation.NewKeyFromFile(http.DefaultTransport, c.AppID, c.AppInstallationID, c.AppPrivateKey)
		if err != nil {
			return nil, fmt.Errorf("authentication failed: %v", err)
		var tr *ghinstallation.Transport

		if _, err := os.Stat(c.AppPrivateKey); err == nil {
			tr, err = ghinstallation.NewKeyFromFile(http.DefaultTransport, c.AppID, c.AppInstallationID, c.AppPrivateKey)
			if err != nil {
				return nil, fmt.Errorf("authentication failed: using private key at %s: %v", c.AppPrivateKey, err)
			}
		} else {
			tr, err = ghinstallation.New(http.DefaultTransport, c.AppID, c.AppInstallationID, []byte(c.AppPrivateKey))
			if err != nil {
				return nil, fmt.Errorf("authentication failed: using private key of size %d (%s...): %v", len(c.AppPrivateKey), strings.Split(c.AppPrivateKey, "\n")[0], err)
			}
		}

		if len(c.EnterpriseURL) > 0 {
			githubAPIURL, err := getEnterpriseApiUrl(c.EnterpriseURL)
			if err != nil {
@@ -147,7 +158,7 @@ func (c *Client) ListRunners(ctx context.Context, enterprise, org, repo string)

	var runners []*github.Runner

	opts := github.ListOptions{PerPage: 10}
	opts := github.ListOptions{PerPage: 100}
	for {
		list, res, err := c.listRunners(ctx, enterprise, owner, repo, &opts)

main.go
@@ -63,6 +63,7 @@ func main() {

		runnerImage string
		dockerImage string
		namespace   string

		commonRunnerLabels commaSeparatedStringSlice
	)
@@ -84,6 +85,7 @@ func main() {
	flag.StringVar(&c.AppPrivateKey, "github-app-private-key", c.AppPrivateKey, "The path of a private key file to authenticate as a GitHub App")
	flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set to a lower value like 10 minutes, because this corresponds to the minimum time to react to demand changes")
	flag.Var(&commonRunnerLabels, "common-runner-labels", "Runner labels in the K1=V1,K2=V2,... format that are inherited by all the runners created by the controller. See https://github.com/summerwind/actions-runner-controller/issues/321 for more information")
	flag.StringVar(&namespace, "watch-namespace", "", "The namespace to watch for custom resources. Set to empty to let it watch all namespaces.")
	flag.Parse()

	logger := zap.New(func(o *zap.Options) {
@@ -104,6 +106,7 @@ func main() {
		LeaderElection:     enableLeaderElection,
		Port:               9443,
		SyncPeriod:         &syncPeriod,
		Namespace:          namespace,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")

pkg/actionsglob/README.md (new file)
@@ -0,0 +1,8 @@
This package is an implementation of glob that is intended to simulate the behaviour of
https://github.com/actions/toolkit/tree/master/packages/glob in many cases.

This isn't a complete reimplementation of the referenced nodejs package.

Differences:

- This package doesn't implement `**`
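As a quick illustration of the rules above, here is a hedged usage sketch. It is not part of this PR; it assumes the import path introduced by this diff, and its expected results are taken from the match_test.go cases further down:

package main

import (
	"fmt"

	// Import path assumed from the new package location in this PR.
	"github.com/summerwind/actions-runner-controller/pkg/actionsglob"
)

func main() {
	// '*' matches any run of characters; a leading '!' negates the pattern.
	fmt.Println(actionsglob.Match("foo*", "foobar"))          // true
	fmt.Println(actionsglob.Match("*foo", "1foo"))            // true
	fmt.Println(actionsglob.Match("*foo*", "foobar"))         // true
	fmt.Println(actionsglob.Match("*foo", "foobar"))          // false: no trailing wildcard
	fmt.Println(actionsglob.Match("!foo", "foo"))             // false: negated exact match
	fmt.Println(actionsglob.Match("foo (*", "foo ( 1 / 2 )")) // true
}
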
pkg/actionsglob/actionsglob.go (new file)
@@ -0,0 +1,78 @@
package actionsglob

import (
	"fmt"
	"strings"
)

func Match(pat string, s string) bool {
	if len(pat) == 0 {
		panic(fmt.Sprintf("unexpected length of pattern: %d", len(pat)))
	}

	var inverse bool

	if pat[0] == '!' {
		pat = pat[1:]
		inverse = true
	}

	tokens := strings.SplitAfter(pat, "*")

	var wildcardInHead bool

	for i := 0; i < len(tokens); i++ {
		p := tokens[i]

		if p == "" {
			s = ""
			break
		}

		if p == "*" {
			if i == len(tokens)-1 {
				s = ""
				break
			}

			wildcardInHead = true

			continue
		}

		wildcardInTail := p[len(p)-1] == '*'
		if wildcardInTail {
			p = p[:len(p)-1]
		}

		subs := strings.SplitN(s, p, 2)

		if len(subs) == 0 {
			break
		}

		if subs[0] != "" {
			if !wildcardInHead {
				break
			}
		}

		if subs[1] != "" {
			if !wildcardInTail {
				break
			}
		}

		s = subs[1]

		wildcardInHead = false
	}

	r := s == ""

	if inverse {
		r = !r
	}

	return r
}
pkg/actionsglob/match_test.go (new file)
@@ -0,0 +1,198 @@
package actionsglob

import (
	"testing"
)

func TestMatch(t *testing.T) {
	type testcase struct {
		Pattern, Target string
		Want            bool
	}

	run := func(t *testing.T, tc testcase) {
		t.Helper()

		got := Match(tc.Pattern, tc.Target)

		if got != tc.Want {
			t.Errorf("%s against %s: want %v, got %v", tc.Pattern, tc.Target, tc.Want, got)
		}
	}

	t.Run("foo == foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "foo",
			Target:  "foo",
			Want:    true,
		})
	})

	t.Run("!foo == foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!foo",
			Target:  "foo",
			Want:    false,
		})
	})

	t.Run("foo == foo1", func(t *testing.T) {
		run(t, testcase{
			Pattern: "foo",
			Target:  "foo1",
			Want:    false,
		})
	})

	t.Run("!foo == foo1", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!foo",
			Target:  "foo1",
			Want:    true,
		})
	})

	t.Run("*foo == foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "*foo",
			Target:  "foo",
			Want:    true,
		})
	})

	t.Run("!*foo == foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!*foo",
			Target:  "foo",
			Want:    false,
		})
	})

	t.Run("*foo == 1foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "*foo",
			Target:  "1foo",
			Want:    true,
		})
	})

	t.Run("!*foo == 1foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!*foo",
			Target:  "1foo",
			Want:    false,
		})
	})

	t.Run("*foo == foo1", func(t *testing.T) {
		run(t, testcase{
			Pattern: "*foo",
			Target:  "foo1",
			Want:    false,
		})
	})

	t.Run("!*foo == foo1", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!*foo",
			Target:  "foo1",
			Want:    true,
		})
	})

	t.Run("*foo* == foo1", func(t *testing.T) {
		run(t, testcase{
			Pattern: "*foo*",
			Target:  "foo1",
			Want:    true,
		})
	})

	t.Run("!*foo* == foo1", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!*foo*",
			Target:  "foo1",
			Want:    false,
		})
	})

	t.Run("*foo == foobar", func(t *testing.T) {
		run(t, testcase{
			Pattern: "*foo",
			Target:  "foobar",
			Want:    false,
		})
	})

	t.Run("!*foo == foobar", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!*foo",
			Target:  "foobar",
			Want:    true,
		})
	})

	t.Run("*foo* == foobar", func(t *testing.T) {
		run(t, testcase{
			Pattern: "*foo*",
			Target:  "foobar",
			Want:    true,
		})
	})

	t.Run("!*foo* == foobar", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!*foo*",
			Target:  "foobar",
			Want:    false,
		})
	})

	t.Run("foo* == foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "foo*",
			Target:  "foo",
			Want:    true,
		})
	})

	t.Run("!foo* == foo", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!foo*",
			Target:  "foo",
			Want:    false,
		})
	})

	t.Run("foo* == foobar", func(t *testing.T) {
		run(t, testcase{
			Pattern: "foo*",
			Target:  "foobar",
			Want:    true,
		})
	})

	t.Run("!foo* == foobar", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!foo*",
			Target:  "foobar",
			Want:    false,
		})
	})

	t.Run("foo (* == foo ( 1 / 2 )", func(t *testing.T) {
		run(t, testcase{
			Pattern: "foo (*",
			Target:  "foo ( 1 / 2 )",
			Want:    true,
		})
	})

	t.Run("!foo (* == foo ( 1 / 2 )", func(t *testing.T) {
		run(t, testcase{
			Pattern: "!foo (*",
			Target:  "foo ( 1 / 2 )",
			Want:    false,
		})
	})
}
@@ -42,7 +42,7 @@ if [ -z "${RUNNER_TOKEN}" ]; then
  exit 1
fi

if [ -z "${RUNNER_REPO}" ] && [ -n "${RUNNER_ORG}" ] && [ -n "${RUNNER_GROUP}" ];then
if [ -z "${RUNNER_REPO}" ] && [ -n "${RUNNER_GROUP}" ];then
  RUNNER_GROUP_ARG="--runnergroup ${RUNNER_GROUP}"
fi

@@ -33,5 +33,9 @@ for process in "${processes[@]}"; do
  fi
done

if [ -n "${MTU}" ]; then
  ifconfig docker0 mtu ${MTU} up
fi

# Wait for processes to be running