Compare commits

...

15 Commits

Author SHA1 Message Date
Callum Tait
459beeafb9 docs: remove the nonsense 2022-03-27 14:15:42 +01:00
Rolf Ahrenberg
1b327a0721 refactor: use const envvars (#1251) 2022-03-27 12:14:56 +01:00
Jérôme Foray
1f8a23c129 fix(chart): add namespace selector to webhooks when in singleNamespace mode (#1237)
* fix(chart): add namespace selector to webhooks when in singleNamespace mode

* docs: expand multi controller setup

Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
2022-03-27 11:52:39 +01:00
Naka Masato
af8d8f7e1d Update runnerdeployment_webhook.go (#1271) 2022-03-25 09:24:13 +09:00
Yusuke Kuoka
e7ef21fdf9 Merge pull request #1264 from ekarlso/env-var-detection-fix
Use container name to detect runner container in Pod
2022-03-25 09:23:48 +09:00
Endre Karlson
ee7484ac91 Use container name to detect runner container in Pod 2022-03-23 12:39:58 +01:00
Yusuke Kuoka
debf53c640 Fix missing pip bin path (/home/runner/.local/bin) (#1263)
Fixes #1261
2022-03-23 10:28:12 +09:00
Callum Tait
2cb04ddde7 * feat: move to new run.sh container friendly file (#1244)
* fix: unit tests were very broken

Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
2022-03-22 19:02:51 +00:00
renovate[bot]
366f8927d8 chore(deps): update actions/cache action to v3 (#1252)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
2022-03-22 18:48:23 +00:00
Richard Fussenegger
532a2bb2a9 feat: remove registration-only runner logic from entrypoint (#1249)
Closes #1207
2022-03-22 18:33:14 +00:00
Callum Tait
f28cecffe9 docs: various minor changes (#1250)
* docs: various minor changes

* docs: format fixes
2022-03-20 16:05:03 +00:00
Renovate Bot
4cbbcd64ce chore(deps): update dependency actions/runner to v2.289.1 2022-03-18 22:36:38 +00:00
Richard Fussenegger
a68eede616 feat: copy dotfiles from asset to service dir (#1136)
* feat: copy dotfiles from asset to service dir

* Fixed `UNITTEST` Condition

* Load `/etc/environment`

See https://github.com/actions/runner/issues/1703 for context on this change.
2022-03-18 07:40:52 +00:00
Julien Tanay
c06a806d75 Add note about having 100+ replicas (#1103) 2022-03-16 21:03:05 +00:00
Callum Tait
857c1700ba docs: add repo update to upgrade notes (#1233) 2022-03-16 10:37:37 +00:00
33 changed files with 314 additions and 556 deletions

View File

@@ -15,7 +15,7 @@ on:
- '!**.md'
env:
RUNNER_VERSION: 2.288.1
RUNNER_VERSION: 2.289.1
DOCKER_VERSION: 20.10.12
DOCKERHUB_USERNAME: summerwind

View File

@@ -18,5 +18,4 @@ jobs:
uses: actions/checkout@v3
- name: Run unit tests for entrypoint.sh
run: |
cd test/entrypoint
bash entrypoint_unittest.sh
make acceptance/runner/entrypoint

View File

@@ -26,7 +26,7 @@ jobs:
with:
go-version: '^1.17.7'
- run: go version
- uses: actions/cache@v2
- uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}

View File

@@ -197,6 +197,9 @@ acceptance/deploy:
acceptance/tests:
acceptance/checks.sh
acceptance/runner/entrypoint:
cd test/entrypoint/ && bash test.sh
# We use -count=1 instead of `go clean -testcache`
# See https://terratest.gruntwork.io/docs/testing-best-practices/avoid-test-caching/
.PHONY: e2e

View File

@@ -226,14 +226,16 @@ By default the controller will look for runners in all namespaces, the watch nam
This feature is configured via the controller's `--watch-namespace` flag. When a namespace is provided via this flag, the controller will only monitor runners in that namespace.
If you plan on installing all instances of the controller stack into a single namespace you will need to make the names of the resources unique to each stack. In the case of Helm this can be done by giving each install a unique release name, or via the `fullnameOverride` properties.
You can deploy multiple controllers either in a single shared namespace, or in a unique namespace per controller.
Alternatively, you can install each controller stack into its own unique namespace (relative to other controller stacks in the cluster), avoiding the need to uniquely prefix resources.
If you plan on installing all instances of the controller stack into a single namespace, there are a few things you need to do for this to work.
If you go down the route of sharing the namespace while giving each install a unique Helm release name, you must also ensure the following values are configured correctly:
1. All resources per stack must have a unique name; in the case of Helm this can be done by giving each install a unique release name, or via the `fullnameOverride` properties.
2. `authSecret.name` needs to be unique per stack when each stack is tied to runners in different GitHub organizations and repositories AND you want your GitHub credentials to be narrowly scoped.
3. `leaderElectionId` needs to be unique per stack. If it is not, the controllers race for the leader election lock, resulting in only one stack working concurrently. Your controller will be stuck with a log message something like this: `attempting to acquire leader lease arc-controllers/actions-runner-controller...`
4. The MutatingWebhookConfiguration in each stack must include a namespace selector for that stack's corresponding runner namespace; this is already configured in the Helm chart.
- `authSecret.name` needs be unique per stack when each stack is tied to runners in different GitHub organizations and repositories AND you want your GitHub credentials to narrowly scoped.
- `leaderElectionId` needs to be unique per stack. If this is not unique to the stack the controller tries to race onto the leader election lock and resulting in only one stack working concurrently.
Alternatively, you can install each controller stack into a unique namespace (relative to other controller stacks in the cluster), avoiding these potential pitfalls.
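As a rough illustration of the points above (not taken from this diff), per-stack Helm values for two controller stacks sharing a single namespace might look like the sketch below. The value keys are the ones referenced above and in the chart's webhook template; the release names, secret names and the `runners-a` / `runners-b` namespaces are hypothetical.
```yaml
# values for the first stack (hypothetical release name: arc-stack-a)
fullnameOverride: arc-stack-a
leaderElectionId: arc-stack-a      # unique per stack so the controllers do not fight over the same lease
authSecret:
  name: arc-stack-a-github-auth    # unique per stack when each stack uses its own GitHub credentials
scope:
  singleNamespace: true
  watchNamespace: runners-a        # this stack only reconciles runners (and selects webhooks) for this namespace
---
# values for the second stack (hypothetical release name: arc-stack-b)
fullnameOverride: arc-stack-b
leaderElectionId: arc-stack-b
authSecret:
  name: arc-stack-b-github-auth
scope:
  singleNamespace: true
  watchNamespace: runners-b
```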
## Usage
@@ -453,6 +455,8 @@ Under the hood, `RunnerSet` relies on Kubernetes's `StatefulSet` and Mutating We
> Since the release of GitHub's [`workflow_job` webhook](https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#workflow_job), webhook driven scaling is the preferred way of autoscaling: it enables targeted scaling of your `RunnerDeployment` / `RunnerSet` because the payload includes the `runs-on` information needed to scale the appropriate runners for that workflow run. More broadly, webhook driven scaling is preferred because it is far quicker than pull driven scaling and is easy to set up.
> If you are using controller version < [v0.22.0](https://github.com/actions-runner-controller/actions-runner-controller/releases/tag/v0.22.0) and you are not using GHES, and so can't set your rate limit budget, it is recommended that you use 100 replicas or fewer to prevent being rate limited.
A `RunnerDeployment` or `RunnerSet` can scale the number of runners between the `minReplicas` and `maxReplicas` fields, driven either by pull based scaling metrics or by a webhook event (see the limitations section of [stateful runners](#stateful-runners) for caveats of this kind). Whether the autoscaling is driven from a webhook event or from pull based metrics, it is implemented by backing a `RunnerDeployment` or `RunnerSet` kind with a `HorizontalRunnerAutoscaler` kind.
**_Important!!! If you opt to configure autoscaling, ensure you remove the `replicas:` attribute in the `RunnerDeployment` / `RunnerSet` kinds that are configured for autoscaling [#206](https://github.com/actions-runner-controller/actions-runner-controller/issues/206#issuecomment-748601907)_**
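For illustration, a minimal sketch of the pairing described above: a `RunnerDeployment` with no `replicas:` attribute, backed by a `HorizontalRunnerAutoscaler` that targets it. The resource names and repository are hypothetical; scaling metrics or webhook triggers would be added as in the examples that follow.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment
spec:
  # no replicas: here - the HorizontalRunnerAutoscaler below owns the replica count
  template:
    spec:
      repository: myorg/myrepo
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runner-deployment-autoscaler
spec:
  scaleTargetRef:
    name: example-runner-deployment
  minReplicas: 1
  maxReplicas: 5
```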
@@ -556,7 +560,7 @@ metadata:
spec:
scaleTargetRef:
name: example-runner-deployment
# Uncomment the below in case the target is not RunnerDeployment but RunnerSet
# IMPORTANT: If your HRA is targeting a RunnerSet you must specify the kind in the scaleTargetRef; uncomment the line below
#kind: RunnerSet
minReplicas: 1
maxReplicas: 5
@@ -840,7 +844,7 @@ spec:
> This feature requires controller version => [v0.19.0](https://github.com/actions-runner-controller/actions-runner-controller/releases/tag/v0.19.0)
The regular `RunnerDeployment` `replicas:` attribute as well as the `HorizontalRunnerAutoscaler` `minReplicas:` attribute supports being set to 0.
The regular `RunnerDeployment` / `RunnerSet` `replicas:` attribute as well as the `HorizontalRunnerAutoscaler` `minReplicas:` attribute supports being set to 0.
The main use case for scaling from 0 is with the `HorizontalRunnerAutoscaler` kind. To scale from 0 whilst still being able to provision runners as jobs are queued, the `HorizontalRunnerAutoscaler` must be used with certain scaling configurations; only the configurations below support scaling from 0 whilst also being able to provision runners as jobs are queued:
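The supported configurations themselves are listed in the full README and are elided from this diff. As one hedged sketch, a webhook-driven autoscaler able to scale from 0 might look like the following, assuming the `scaleUpTriggers` / `workflowJob` fields used by the webhook scaling examples; the resource names are hypothetical.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runner-deployment-autoscaler
spec:
  scaleTargetRef:
    name: example-runner-deployment
  minReplicas: 0      # scale all the way down when no jobs are queued
  maxReplicas: 5
  scaleUpTriggers:
    - githubEvent:
        workflowJob: {}   # scale up on each queued workflow_job webhook event
      duration: "30m"     # keep the added capacity for this long before scaling back down
```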
@@ -1107,7 +1111,7 @@ spec:
You can configure your own custom volume mounts, for example to have the work/docker data in memory or on an NVME SSD for i/o intensive builds. Other custom volume mounts should be possible as well; see the [kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/)
** Ramdisk runner **
**RAM Disk Runner**<br />
Example of how to place the runner work dir, the docker sidecar and /tmp within the runner onto a ramdisk.
```yaml
kind: RunnerDeployment
@@ -1133,7 +1137,7 @@ spec:
ephemeral: true # recommended to not leak data between builds.
```
** NVME ssd runner **
**NVME SSD Runner**<br />
In this example we provide NVME-backed storage for the work dir, the docker sidecar and /tmp within the runner.
This is a working example on GKE, which provides the NVME disk at /mnt/disks/ssd0. We place the respective volumes in subdirectories there, and to be able to run multiple runners we use the pod name as a prefix for the subdirectories. Note that the disk will fill up over time and the space will not be freed until the node is removed.
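The full README example is elided from this diff. As a rough sketch of the layout described above (GKE exposing the local NVME SSD at `/mnt/disks/ssd0`, the pod name used as a subdirectory prefix via the standard Kubernetes `subPathExpr` mechanism), it could look like the following; the names are hypothetical and the docker sidecar mount is omitted for brevity.
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: nvme-runner-deployment
spec:
  template:
    spec:
      repository: myorg/myrepo
      env:
        # expose the pod name so each runner writes into its own subdirectory on the shared disk
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: scratch
          mountPath: /runner/_work
          subPathExpr: $(POD_NAME)-work
        - name: scratch
          mountPath: /tmp
          subPathExpr: $(POD_NAME)-tmp
      volumes:
        - name: scratch
          hostPath:
            path: /mnt/disks/ssd0   # where GKE exposes the local NVME SSD
```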

View File

@@ -26,7 +26,7 @@ import (
)
// log is for logging in this package.
var runenrDeploymentLog = logf.Log.WithName("runnerdeployment-resource")
var runnerDeploymentLog = logf.Log.WithName("runnerdeployment-resource")
func (r *RunnerDeployment) SetupWebhookWithManager(mgr ctrl.Manager) error {
return ctrl.NewWebhookManagedBy(mgr).
@@ -49,13 +49,13 @@ var _ webhook.Validator = &RunnerDeployment{}
// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerDeployment) ValidateCreate() error {
runenrDeploymentLog.Info("validate resource to be created", "name", r.Name)
runnerDeploymentLog.Info("validate resource to be created", "name", r.Name)
return r.Validate()
}
// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *RunnerDeployment) ValidateUpdate(old runtime.Object) error {
runenrDeploymentLog.Info("validate resource to be updated", "name", r.Name)
runnerDeploymentLog.Info("validate resource to be updated", "name", r.Name)
return r.Validate()
}

View File

@@ -32,6 +32,9 @@ kubectl replace -f crds/
2. Upgrade the Helm release
```shell
# helm repo [command]
helm repo update
# helm upgrade [RELEASE] [CHART] [flags]
helm upgrade actions-runner-controller \
actions-runner-controller/actions-runner-controller \

View File

@@ -12,6 +12,11 @@ metadata:
webhooks:
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ quote .Values.admissionWebHooks.caBundle }}
@@ -35,6 +40,11 @@ webhooks:
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
@@ -58,6 +68,11 @@ webhooks:
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
@@ -81,6 +96,11 @@ webhooks:
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
@@ -117,6 +137,11 @@ metadata:
webhooks:
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
@@ -140,6 +165,11 @@ webhooks:
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}
@@ -163,6 +193,11 @@ webhooks:
sideEffects: None
- admissionReviewVersions:
- v1beta1
{{- if .Values.scope.singleNamespace }}
namespaceSelector:
matchLabels:
name: {{ default .Release.Namespace .Values.scope.watchNamespace }}
{{- end }}
clientConfig:
{{- if .Values.admissionWebHooks.caBundle }}
caBundle: {{ .Values.admissionWebHooks.caBundle }}

View File

@@ -59,9 +59,9 @@ func (t *PodRunnerTokenInjector) Handle(ctx context.Context, req admission.Reque
return newEmptyResponse()
}
enterprise, okEnterprise := getEnv(runnerContainer, "RUNNER_ENTERPRISE")
repo, okRepo := getEnv(runnerContainer, "RUNNER_REPO")
org, okOrg := getEnv(runnerContainer, "RUNNER_ORG")
enterprise, okEnterprise := getEnv(runnerContainer, EnvVarEnterprise)
repo, okRepo := getEnv(runnerContainer, EnvVarRepo)
org, okOrg := getEnv(runnerContainer, EnvVarOrg)
if !okRepo || !okOrg || !okEnterprise {
return newEmptyResponse()
}

View File

@@ -18,6 +18,7 @@ package controllers
import (
"context"
"errors"
"fmt"
"time"
@@ -64,9 +65,19 @@ func (r *RunnerPodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return ctrl.Result{}, nil
}
var envvars []corev1.EnvVar
for _, container := range runnerPod.Spec.Containers {
if container.Name == "runner" {
envvars = container.Env
}
}
if len(envvars) == 0 {
return ctrl.Result{}, errors.New("Could not determine env vars for runner Pod")
}
var enterprise, org, repo string
envvars := runnerPod.Spec.Containers[0].Env
for _, e := range envvars {
switch e.Name {
case EnvVarEnterprise:

View File

@@ -111,12 +111,14 @@ RUN mkdir /opt/hostedtoolcache \
&& chmod g+rwx /opt/hostedtoolcache
COPY entrypoint.sh /
COPY --chown=runner:docker patched $RUNNER_ASSETS_DIR/patched
# Add the Python "User Script Directory" to the PATH
ENV PATH="${PATH}:${HOME}/.local/bin"
ENV ImageOS=ubuntu20
RUN echo "PATH=${PATH}" > /etc/environment \
&& echo "ImageOS=${ImageOS}" >> /etc/environment
USER runner
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]

View File

@@ -114,12 +114,13 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
VOLUME /var/lib/docker
COPY --chown=runner:docker patched $RUNNER_ASSETS_DIR/patched
# Add the Python "User Script Directory" to the PATH
ENV PATH="${PATH}:${HOME}/.local/bin"
ENV ImageOS=ubuntu20
RUN echo "PATH=${PATH}" > /etc/environment \
&& echo "ImageOS=${ImageOS}" >> /etc/environment
# No group definition, as that makes it harder to run docker.
USER runner

View File

@@ -1,5 +1,6 @@
#!/bin/bash
RUNNER_ASSETS_DIR=${RUNNER_ASSETS_DIR:-/runnertmp}
RUNNER_HOME=${RUNNER_HOME:-/runner}
LIGHTGREEN="\e[0;32m"
@@ -77,17 +78,21 @@ if [ ! -d "${RUNNER_HOME}" ]; then
fi
# if this is not a testing environment
if [ -z "${UNITTEST:-}" ]; then
sudo chown -R runner:docker ${RUNNER_HOME}
# use cp over mv to avoid issues when /runnertmp and {RUNNER_HOME} are on different devices
cp -r /runnertmp/* ${RUNNER_HOME}/
if [[ "${UNITTEST:-}" == '' ]]; then
sudo chown -R runner:docker "$RUNNER_HOME"
# enable dotglob so we can copy a ".env" file to load in env vars as part of the service startup if one is provided
# loading a .env from the root of the service is part of the actions/runner logic
shopt -s dotglob
# use cp instead of mv to avoid issues when src and dst are on different devices
cp -r "$RUNNER_ASSETS_DIR"/* "$RUNNER_HOME"/
shopt -u dotglob
fi
cd ${RUNNER_HOME}
# past that point, it's all relative paths from /runner
config_args=()
if [ "${RUNNER_FEATURE_FLAG_EPHEMERAL:-}" == "true" -a "${RUNNER_EPHEMERAL}" != "false" ]; then
if [ "${RUNNER_FEATURE_FLAG_EPHEMERAL:-}" == "true" -a "${RUNNER_EPHEMERAL}" == "true" ]; then
config_args+=(--ephemeral)
echo "Passing --ephemeral to config.sh to enable the ephemeral runner."
fi
@@ -145,29 +150,32 @@ cat .runner
# -H "Authorization: bearer ${GITHUB_TOKEN}"
# https://api.github.com/repos/USER/REPO/actions/runners/171
if [ -n "${RUNNER_REGISTRATION_ONLY}" ]; then
success "This runner is configured to be registration-only. Exiting without starting the runner service..."
exit 0
fi
if [ -z "${UNITTEST:-}" ]; then
mkdir ./externals
# Hack due to the DinD volumes
mv ./externalstmp/* ./externals/
for f in runsvc.sh RunnerService.js; do
diff {bin,patched}/${f} || :
sudo mv bin/${f}{,.bak}
sudo mv {patched,bin}/${f}
done
fi
args=()
if [ "${RUNNER_FEATURE_FLAG_EPHEMERAL:-}" != "true" -a "${RUNNER_EPHEMERAL}" != "false" ]; then
if [ "${RUNNER_FEATURE_FLAG_EPHEMERAL:-}" != "true" -a "${RUNNER_EPHEMERAL}" == "true" ]; then
args+=(--once)
echo "[WARNING] Passing --once is deprecated and will be removed as an option from the image and ARC at the release of 0.24.0."
echo "[WARNING] Upgrade to GHES => 3.3 to continue using actions-runner-controller. If you are using github.com ignore this warning."
fi
unset RUNNER_NAME RUNNER_REPO RUNNER_TOKEN
exec ./bin/runsvc.sh "${args[@]}"
# Unset entrypoint environment variables so they don't leak into the runner environment
unset RUNNER_NAME RUNNER_REPO RUNNER_TOKEN STARTUP_DELAY_IN_SECONDS DISABLE_WAIT_FOR_DOCKER
# Docker ignores PAM and thus never loads the system environment variables that
# are meant to be set in every environment of every user. We emulate the PAM
# behavior by reading the environment variables without interpreting them.
#
# https://github.com/actions-runner-controller/actions-runner-controller/issues/1135
# https://github.com/actions/runner/issues/1703
# /etc/environment may not exist when running unit tests depending on the platform being used
# (e.g. Mac OS) so we just skip the mapping entirely
if [ -z "${UNITTEST:-}" ]; then
mapfile -t env </etc/environment
fi
exec env -- "${env[@]}" ./run.sh "${args[@]}"

View File

@@ -1,91 +0,0 @@
#!/usr/bin/env node
// Copyright (c) GitHub. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
var childProcess = require("child_process");
var path = require("path")
var supported = ['linux', 'darwin']
if (supported.indexOf(process.platform) == -1) {
console.log('Unsupported platform: ' + process.platform);
console.log('Supported platforms are: ' + supported.toString());
process.exit(1);
}
var stopping = false;
var listener = null;
var runService = function() {
var listenerExePath = path.join(__dirname, '../bin/Runner.Listener');
var interactive = process.argv[2] === "interactive";
if(!stopping) {
try {
if (interactive) {
console.log('Starting Runner listener interactively');
listener = childProcess.spawn(listenerExePath, ['run'].concat(process.argv.slice(3)), { env: process.env });
} else {
console.log('Starting Runner listener with startup type: service');
listener = childProcess.spawn(listenerExePath, ['run', '--startuptype', 'service'].concat(process.argv.slice(2)), { env: process.env });
}
console.log('Started listener process');
listener.stdout.on('data', (data) => {
process.stdout.write(data.toString('utf8'));
});
listener.stderr.on('data', (data) => {
process.stdout.write(data.toString('utf8'));
});
listener.on('close', (code) => {
console.log(`Runner listener exited with error code ${code}`);
if (code === 0) {
console.log('Runner listener exit with 0 return code, stop the service, no retry needed.');
stopping = true;
} else if (code === 1) {
console.log('Runner listener exit with terminated error, stop the service, no retry needed.');
stopping = true;
} else if (code === 2) {
console.log('Runner listener exit with retryable error, re-launch runner in 5 seconds.');
} else if (code === 3) {
console.log('Runner listener exit because of updating, re-launch runner in 5 seconds.');
} else {
console.log('Runner listener exit with undefined return code, re-launch runner in 5 seconds.');
}
if(!stopping) {
setTimeout(runService, 5000);
}
});
} catch(ex) {
console.log(ex);
}
}
}
runService();
console.log('Started running service');
var gracefulShutdown = function(code) {
console.log('Shutting down runner listener');
stopping = true;
if (listener) {
console.log('Sending SIGINT to runner listener to stop');
listener.kill('SIGINT');
// TODO wait for 30 seconds and send a SIGKILL
}
}
process.on('SIGINT', () => {
gracefulShutdown(0);
});
process.on('SIGTERM', () => {
gracefulShutdown(0);
});

View File

@@ -1,20 +0,0 @@
#!/bin/bash
# convert SIGTERM signal to SIGINT
# for more info on how to propagate SIGTERM to a child process see: http://veithen.github.io/2014/11/16/sigterm-propagation.html
trap 'kill -INT $PID' TERM INT
if [ -f ".path" ]; then
# configure
export PATH=`cat .path`
echo ".path=${PATH}"
fi
# insert anything to setup env when running as a service
# run the host process which keep the listener alive
./externals/node12/bin/node ./bin/RunnerService.js $* &
PID=$!
wait $PID
trap - TERM INT
wait $PID

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
@@ -18,11 +18,15 @@ error(){
}
success "I'm configured normally"
touch .runner
echo "$*" > runner_config
# Condition for should_retry_configuring test
if [ -z "${FAIL_RUNNER_CONFIG_SETUP}" ]; then
touch .runner
fi
echo "$@" > runner_config
success "created a dummy config file"
success
# Adding a counter to see how many times we've gone through the configuration step
# adding a counter to see how many times we've gone through the configuration step
count=`cat counter 2>/dev/null|| echo "0"`
count=$((count + 1))
echo ${count} > counter

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -euo pipefail
@@ -20,12 +20,9 @@ error(){
exit 1
}
log "Dumping set runner arguments"
echo "$@" > runner_args
success "Pretending to run service..."
touch run_sh_ran
success "Success"
success ""
success "Running the service..."
# SHOULD NOT HAPPEN
# creating a file to show this script has run
touch runsvc_ran
success "...successful"
success ""

View File

@@ -1,29 +0,0 @@
#!/bin/bash
set -euo pipefail
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
exit 1
}
echo "$*" > runner_config
success "I'm pretending the configuration is not successful"
# increasing a counter to measure how many times we restarted
count=`cat counter 2>/dev/null|| echo "0"`
count=$((count + 1))
echo ${count} > counter

View File

@@ -1,12 +1,12 @@
#!/bin/bash
#!/usr/bin/env bash
# UNITTEST: retry config
# Will simulate a configuration failure and expects:
# - the configuration step to be run 10 times
# - the entrypoint script to exit with error code 2
# - the runsvc.sh script to never run.
# - the run.sh script to never run.
source ../logging.sh
source ../assets/logging.sh
entrypoint_log() {
while read I; do
@@ -14,17 +14,22 @@ entrypoint_log() {
done
}
log "Setting up the test"
log "Setting up test area"
export RUNNER_HOME=testarea
mkdir -p ${RUNNER_HOME}
log "Setting up the test config"
export UNITTEST=true
export RUNNER_HOME=localhome
export FAIL_RUNNER_CONFIG_SETUP=true
export RUNNER_NAME="example_runner_name"
export RUNNER_REPO="myorg/myrepo"
export RUNNER_TOKEN="xxxxxxxxxxxxx"
mkdir -p ${RUNNER_HOME}/bin
# add up the config.sh and runsvc.sh
ln -s ../config.sh ${RUNNER_HOME}/config.sh
ln -s ../../runsvc.sh ${RUNNER_HOME}/bin/runsvc.sh
# run.sh and config.sh get used by the runner's real entrypoint.sh and are part of actions/runner.
# We change symlink dummy versions so the entrypoint.sh can run allowing us to test the real entrypoint.sh
log "Symlink dummy config.sh and run.sh"
ln -s ../../assets/config.sh ${RUNNER_HOME}/config.sh
ln -s ../../assets/run.sh ${RUNNER_HOME}/run.sh
cleanup() {
rm -rf ${RUNNER_HOME}
@@ -33,41 +38,44 @@ cleanup() {
unset RUNNER_NAME
unset RUNNER_REPO
unset RUNNER_TOKEN
unset FAIL_RUNNER_CONFIG_SETUP
}
# Always run cleanup when test ends regardless of how it ends
trap cleanup SIGINT SIGTERM SIGQUIT EXIT
log "Running the entrypoint"
log ""
# Run the runner entrypoint script which as a final step runs this
# unit tests run.sh as it was symlinked
../../../runner/entrypoint.sh 2> >(entrypoint_log)
if [ "$?" != "2" ]; then
error "========================================="
error "Configuration should have thrown an error"
error "FAIL | Configuration should have thrown an error"
exit 1
fi
success "Entrypoint didn't complete successfully"
success ""
success "PASS | Entrypoint didn't complete successfully"
log "Checking the counter, should have 10 iterations"
count=`cat ${RUNNER_HOME}/counter || "notfound"`
if [ "${count}" != "10" ]; then
error "============================================="
error "The retry loop should have done 10 iterations"
error "FAIL | The retry loop should have done 10 iterations"
exit 1
fi
success "Retry loop went up to 10"
success
success "PASS | Retry loop went up to 10"
log "Checking that runsvc never ran"
if [ -f ${RUNNER_HOME}/runsvc_ran ]; then
log "Checking that run.sh never ran"
if [ -f ${RUNNER_HOME}/run_sh_ran ]; then
error "================================================================="
error "runsvc was invoked, entrypoint.sh should have failed before that."
error "FAIL | run.sh was invoked, entrypoint.sh should have failed before that."
exit 1
fi
success "runsvc.sh never ran"
success "PASS | run.sh never ran"
success
success "==========================="
success "Test completed successfully"

View File

@@ -1,29 +0,0 @@
#!/bin/bash
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
}
success "I'm configured normally"
touch .runner
echo "$*" > runner_config
success "created a dummy config file"
success
# adding a counter to see how many times we've gone through a configuration step
count=`cat counter 2>/dev/null|| echo "0"`
count=$((count + 1))
echo ${count} > counter

View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -euo pipefail
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
exit 1
}
success ""
success "Running the service..."
# test if --once is present as a parameter
echo "$*" | grep -q 'once' && error "Should not include --once in the parameters"
success "...successful"
touch runsvc_ran
success ""

View File

@@ -1,81 +0,0 @@
#!/bin/bash
# UNITTEST: should work as non ephemeral
# Will simulate a scenario where ephemeral=false. expects:
# - the configuration step to be run exactly once
# - the entrypoint script to exit with no error
# - the runsvc.sh script to run without the --once flag
source ../logging.sh
entrypoint_log() {
while read I; do
printf "\tentrypoint.sh: $I\n"
done
}
log "Setting up the test"
export UNITTEST=true
export RUNNER_HOME=localhome
export RUNNER_NAME="example_runner_name"
export RUNNER_REPO="myorg/myrepo"
export RUNNER_TOKEN="xxxxxxxxxxxxx"
export RUNNER_EPHEMERAL=true
export RUNNER_FEATURE_FLAG_EPHEMERAL=true
mkdir -p ${RUNNER_HOME}/bin
# add up the config.sh and runsvc.sh
ln -s ../config.sh ${RUNNER_HOME}/config.sh
ln -s ../../runsvc.sh ${RUNNER_HOME}/bin/runsvc.sh
cleanup() {
rm -rf ${RUNNER_HOME}
unset UNITTEST
unset RUNNERHOME
unset RUNNER_NAME
unset RUNNER_REPO
unset RUNNER_TOKEN
unset RUNNER_EPHEMERAL
unset RUNNER_FEATURE_FLAG_EPHEMERAL
}
trap cleanup SIGINT SIGTERM SIGQUIT EXIT
log "Running the entrypoint"
log ""
../../../runner/entrypoint.sh 2> >(entrypoint_log)
if [ "$?" != "0" ]; then
error "==========================================="
error "Entrypoint script did not exit successfully"
exit 1
fi
log "Testing if we went through the configuration step only once"
count=`cat ${RUNNER_HOME}/counter || echo "not_found"`
if [ ${count} != "1" ]; then
error "==============================================="
error "The configuration step was not run exactly once"
exit 1
fi
log "Testing if the configuration included the --ephemeral flag"
if ! grep -q -- '--ephemeral' ${RUNNER_HOME}/runner_config; then
error "==============================================="
error "The configuration did not include the --ephemeral flag"
exit 1
fi
success "The configuration ran ${count} time(s)"
log "Testing if runsvc ran"
if [ ! -f "${RUNNER_HOME}/runsvc_ran" ]; then
error "=============================="
error "The runner service has not run"
exit 1
fi
success "The service ran"
success ""
success "==========================="
success "Test completed successfully"

View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -euo pipefail
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
exit 1
}
success ""
success "Running the service..."
# test if --once is present as a parameter
echo "$*" | grep -q 'once' || error "Should include --once in the parameters"j
success "...successful"
touch runsvc_ran
success ""

View File

@@ -1,29 +0,0 @@
#!/bin/bash
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
}
success "I'm configured normally"
touch .runner
echo "$*" > runner_config
success "created a dummy config file"
success
# adding a counter to see how many times we've gone through a configuration step
count=`cat counter 2>/dev/null|| echo "0"`
count=$((count + 1))
echo ${count} > counter

View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -euo pipefail
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
exit 1
}
success ""
success "Running the service..."
# test if --once is present as a parameter
echo "$*" | grep -q 'once' && error "Should not include --once in the parameters"
success "...successful"
touch runsvc_ran
success ""

View File

@@ -1,12 +1,12 @@
#!/bin/bash
#!/usr/bin/env bash
# UNITTEST: should work as non ephemeral
# Will simulate a scenario where ephemeral=false. expects:
# - the configuration step to be run exactly once
# - the entrypoint script to exit with no error
# - the runsvc.sh script to run without the --once flag
# - the run.sh script to run without the --once flag
source ../logging.sh
source ../assets/logging.sh
entrypoint_log() {
while read I; do
@@ -14,18 +14,22 @@ entrypoint_log() {
done
}
log "Setting up test area"
export RUNNER_HOME=testarea
mkdir -p ${RUNNER_HOME}
log "Setting up the test"
export UNITTEST=true
export RUNNER_HOME=localhome
export RUNNER_NAME="example_runner_name"
export RUNNER_REPO="myorg/myrepo"
export RUNNER_TOKEN="xxxxxxxxxxxxx"
export RUNNER_EPHEMERAL=false
mkdir -p ${RUNNER_HOME}/bin
# add up the config.sh and runsvc.sh
ln -s ../config.sh ${RUNNER_HOME}/config.sh
ln -s ../../runsvc.sh ${RUNNER_HOME}/bin/runsvc.sh
# run.sh and config.sh get used by the runner's real entrypoint.sh and are part of actions/runner.
# We change symlink dummy versions so the entrypoint.sh can run allowing us to test the real entrypoint.sh
log "Symlink dummy config.sh and run.sh"
ln -s ../../assets/config.sh ${RUNNER_HOME}/config.sh
ln -s ../../assets/run.sh ${RUNNER_HOME}/run.sh
cleanup() {
rm -rf ${RUNNER_HOME}
@@ -37,16 +41,19 @@ cleanup() {
unset RUNNER_EPHEMERAL
}
# Always run cleanup when test ends regardless of how it ends
trap cleanup SIGINT SIGTERM SIGQUIT EXIT
log "Running the entrypoint"
log ""
# Run the runner entrypoint script which as a final step runs this
# unit tests run.sh as it was symlinked
../../../runner/entrypoint.sh 2> >(entrypoint_log)
if [ "$?" != "0" ]; then
error "==========================================="
error "Entrypoint script did not exit successfully"
error "FAIL | Entrypoint script did not exit successfully"
exit 1
fi
@@ -54,19 +61,19 @@ log "Testing if we went through the configuration step only once"
count=`cat ${RUNNER_HOME}/counter || echo "not_found"`
if [ ${count} != "1" ]; then
error "==============================================="
error "The configuration step was not run exactly once"
error "FAIL | The configuration step was not run exactly once"
exit 1
fi
success "The configuration ran ${count} time(s)"
success "PASS | The configuration ran ${count} time(s)"
log "Testing if runsvc ran"
if [ ! -f "${RUNNER_HOME}/runsvc_ran" ]; then
log "Testing if run.sh ran"
if [ ! -f "${RUNNER_HOME}/run_sh_ran" ]; then
error "=============================="
error "The runner service has not run"
error "FAIL | The runner service has not run"
exit 1
fi
success "The service ran"
success "PASS | run.sh ran"
success ""
success "==========================="
success "Test completed successfully"

View File

@@ -1,29 +0,0 @@
#!/bin/bash
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
}
success "I'm configured normally"
touch .runner
echo "$*" > runner_config
success "created a dummy config file"
success
# Adding a counter to see how many times we've gone through the configuration step
count=`cat counter 2>/dev/null|| echo "0"`
count=$((count + 1))
echo ${count} > counter

View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -euo pipefail
export LIGHTGREEN='\e[0;32m'
export LIGHTRED='\e[0;31m'
export WHITE='\e[0;97m'
export RESET='\e[0m'
log(){
printf "\t${WHITE}$@${RESET}\n" 2>&1
}
success(){
printf "\t${LIGHTGREEN}$@${RESET}\n" 2>&1
}
error(){
printf "\t${LIGHTRED}$@${RESET}\n" 2>&1
exit 1
}
success ""
success "Running the service..."
# test if --once is present as a parameter
echo "$*" | grep -q 'once' || error "Should include --once in the parameters"j
success "...successful"
touch runsvc_ran
success ""

View File

@@ -1,12 +1,12 @@
#!/bin/bash
#!/usr/bin/env bash
# UNITTEST: should work normally
# Will simulate a normal execution scenario. expects:
# - the configuration step to be run exactly once
# - the entrypoint script to exit with no error
# - the runsvc.sh script to run with the --once flag activated.
# - the run.sh script to run with the --once flag activated.
source ../logging.sh
source ../assets/logging.sh
entrypoint_log() {
while read I; do
@@ -14,17 +14,21 @@ entrypoint_log() {
done
}
log "Setting up test area"
export RUNNER_HOME=testarea
mkdir -p ${RUNNER_HOME}
log "Setting up the test"
export UNITTEST=true
export RUNNER_HOME=localhome
export RUNNER_NAME="example_runner_name"
export RUNNER_REPO="myorg/myrepo"
export RUNNER_TOKEN="xxxxxxxxxxxxx"
mkdir -p ${RUNNER_HOME}/bin
# add up the config.sh and runsvc.sh
ln -s ../config.sh ${RUNNER_HOME}/config.sh
ln -s ../../runsvc.sh ${RUNNER_HOME}/bin/runsvc.sh
# run.sh and config.sh get used by the runner's real entrypoint.sh and are part of actions/runner.
# We change symlink dummy versions so the entrypoint.sh can run allowing us to test the real entrypoint.sh
log "Symlink dummy config.sh and run.sh"
ln -s ../../assets/config.sh ${RUNNER_HOME}/config.sh
ln -s ../../assets/run.sh ${RUNNER_HOME}/run.sh
cleanup() {
rm -rf ${RUNNER_HOME}
@@ -35,11 +39,14 @@ cleanup() {
unset RUNNER_TOKEN
}
# Always run cleanup when test ends regardless of how it ends
trap cleanup SIGINT SIGTERM SIGQUIT EXIT
log "Running the entrypoint"
log ""
# Run the runner entrypoint script which as a final step runs this
# unit tests run.sh as it was symlinked
../../../runner/entrypoint.sh 2> >(entrypoint_log)
if [ "$?" != "0" ]; then
@@ -52,26 +59,29 @@ log "Testing if the configuration step was run only once"
count=`cat ${RUNNER_HOME}/counter || echo "not_found"`
if [ ${count} != "1" ]; then
error "==============================================="
error "The configuration step was not run exactly once"
error "FAIL | The configuration step was not run exactly once"
exit 1
fi
success "The configuration ran ${count} time(s)"
success "PASS | The configuration ran ${count} time(s)"
log "Testing if the configuration included the --ephemeral flag"
if grep -q -- '--ephemeral' ${RUNNER_HOME}/runner_config; then
error "==============================================="
error "The configuration should not include the --ephemeral flag"
error "FAIL | The configuration should not include the --ephemeral flag"
exit 1
fi
log "Testing if runsvc ran"
if [ ! -f "${RUNNER_HOME}/runsvc_ran" ]; then
success "PASS | The --ephemeral switch was included in the configuration"
log "Testing if run.sh ran"
if [ ! -f "${RUNNER_HOME}/run_sh_ran" ]; then
error "=============================="
error "The runner service has not run"
error "FAIL | The runner service has not run"
exit 1
fi
success "The service ran"
success "PASS | run.sh ran"
success ""
success "==========================="
success "Test completed successfully"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# UNITTEST: should work disable update
# Will simulate a scenario where disableupdate=true. expects:
@@ -6,7 +6,7 @@
# - the entrypoint script to exit with no error
# - the config.sh script to run with the --disableupdate flag set to 'true'.
source ../logging.sh
source ../assets/logging.sh
entrypoint_log() {
while read I; do
@@ -14,18 +14,22 @@ entrypoint_log() {
done
}
log "Setting up test area"
export RUNNER_HOME=testarea
mkdir -p ${RUNNER_HOME}
log "Setting up the test"
export UNITTEST=true
export RUNNER_HOME=localhome
export RUNNER_NAME="example_runner_name"
export RUNNER_REPO="myorg/myrepo"
export RUNNER_TOKEN="xxxxxxxxxxxxx"
export DISABLE_RUNNER_UPDATE="true"
mkdir -p ${RUNNER_HOME}/bin
# add up the config.sh and runsvc.sh
ln -s ../config.sh ${RUNNER_HOME}/config.sh
ln -s ../../runsvc.sh ${RUNNER_HOME}/bin/runsvc.sh
# run.sh and config.sh get used by the runner's real entrypoint.sh and are part of actions/runner.
# We change symlink dummy versions so the entrypoint.sh can run allowing us to test the real entrypoint.sh
log "Symlink dummy config.sh and run.sh"
ln -s ../../assets/config.sh ${RUNNER_HOME}/config.sh
ln -s ../../assets/run.sh ${RUNNER_HOME}/run.sh
cleanup() {
rm -rf ${RUNNER_HOME}
@@ -36,16 +40,19 @@ cleanup() {
unset RUNNER_TOKEN
}
# Always run cleanup when test ends regardless of how it ends
trap cleanup SIGINT SIGTERM SIGQUIT EXIT
log "Running the entrypoint"
log ""
# run.sh and config.sh get used by the runner's real entrypoint.sh and are part of actions/runner.
# We change symlink dummy versions so the entrypoint.sh can run allowing us to test the real entrypoint.sh
../../../runner/entrypoint.sh 2> >(entrypoint_log)
if [ "$?" != "0" ]; then
error "=========================="
error "Test completed with errors"
error "FAIL | Test completed with errors"
exit 1
fi
@@ -53,26 +60,28 @@ log "Testing if the configuration step was run only once"
count=`cat ${RUNNER_HOME}/counter || echo "not_found"`
if [ ${count} != "1" ]; then
error "==============================================="
error "The configuration step was not run exactly once"
error "FAIL | The configuration step was not run exactly once"
exit 1
fi
success "The configuration ran ${count} time(s)"
success "PASS | The configuration ran ${count} time(s)"
log "Testing if the configuration included the --disableupdate flag"
if ! grep -q -- '--disableupdate' ${RUNNER_HOME}/runner_config; then
error "==============================================="
error "The configuration should not include the --disableupdate flag"
error "FAIL | The configuration should not include the --disableupdate flag"
exit 1
fi
log "Testing if runsvc ran"
if [ ! -f "${RUNNER_HOME}/runsvc_ran" ]; then
success "PASS | The --disableupdate switch was included in the configuration"
log "Testing if run.sh ran"
if [ ! -f "${RUNNER_HOME}/run_sh_ran" ]; then
error "=============================="
error "The runner service has not run"
error "FAIL | The runner service has not run"
exit 1
fi
success "The service ran"
success "PASS | run.sh ran"
success ""
success "==========================="
success "Test completed successfully"

View File

@@ -0,0 +1,90 @@
#!/usr/bin/env bash
# UNITTEST: should work legacy once switch set
# Will simulate a scenario where RUNNER_FEATURE_FLAG_EPHEMERAL=false. expects:
# - the configuration step to be run exactly once
# - the entrypoint script to exit with no error
# - the run.sh script to run with the --once flag
source ../assets/logging.sh
entrypoint_log() {
while read I; do
printf "\tentrypoint.sh: $I\n"
done
}
log "Setting up test area"
export RUNNER_HOME=testarea
mkdir -p ${RUNNER_HOME}
log "Setting up the test"
export UNITTEST=true
export RUNNER_NAME="example_runner_name"
export RUNNER_REPO="myorg/myrepo"
export RUNNER_TOKEN="xxxxxxxxxxxxx"
export RUNNER_FEATURE_FLAG_EPHEMERAL="false"
export RUNNER_EPHEMERAL="true"
# run.sh and config.sh get used by the runner's real entrypoint.sh and are part of actions/runner.
# We change symlink dummy versions so the entrypoint.sh can run allowing us to test the real entrypoint.sh
log "Symlink dummy config.sh and run.sh"
ln -s ../../assets/config.sh ${RUNNER_HOME}/config.sh
ln -s ../../assets/run.sh ${RUNNER_HOME}/run.sh
cleanup() {
rm -rf ${RUNNER_HOME}
unset UNITTEST
unset RUNNERHOME
unset RUNNER_NAME
unset RUNNER_REPO
unset RUNNER_TOKEN
unset RUNNER_EPHEMERAL
unset RUNNER_FEATURE_FLAG_EPHEMERAL
}
# Always run cleanup when test ends regardless of how it ends
trap cleanup SIGINT SIGTERM SIGQUIT EXIT
log "Running the entrypoint"
log ""
# run.sh and config.sh get used by the runner's real entrypoint.sh and are part of actions/runner.
# We change symlink dummy versions so the entrypoint.sh can run allowing us to test the real entrypoint.sh
../../../runner/entrypoint.sh 2> >(entrypoint_log)
if [ "$?" != "0" ]; then
error "==========================================="
error "FAIL | Entrypoint script did not exit successfully"
exit 1
fi
log "Testing if we went through the configuration step only once"
count=`cat ${RUNNER_HOME}/counter || echo "not_found"`
if [ ${count} != "1" ]; then
error "==============================================="
error "FAIL | The configuration step was not run exactly once"
exit 1
fi
success "PASS | The configuration ran ${count} time(s)"
log "Testing if the configuration included the --once flag"
if ! grep -q -- '--once' ${RUNNER_HOME}/runner_args; then
error "==============================================="
error "FAIL | The configuration did not include the --once flag, config printed below:"
exit 1
fi
success "PASS | The --once argument was passed in"
log "Testing if run.sh ran"
if [ ! -f "${RUNNER_HOME}/run_sh_ran" ]; then
error "=============================="
error "FAIL | The runner service has not run"
exit 1
fi
success "PASS | run.sh ran"
success ""
success "==========================="
success "Test completed successfully"

View File

@@ -1,13 +1,12 @@
#!/bin/bash
#!/usr/bin/env bash
source logging.sh
source assets/logging.sh
for unittest in ./should*; do
log "**********************************"
log " UNIT TEST: ${unittest}"
log "**********************************"
log ""
cd ${unittest}
./test.sh
ret_code=$?