Mirror of https://github.com/actions/runner-container-hooks.git
Synced 2025-12-17 10:16:44 +00:00

Compare commits — 1 commit (v0.3.2...)

| Author | SHA1 | Date |
|---|---|---|
| nikola-jok | 4f9272a5ce | |
.gitattributes (vendored, 1 changed line)

```diff
@@ -1 +0,0 @@
-*.png filter=lfs diff=lfs merge=lfs -text
```
.github/workflows/build.yaml (vendored, 7 changed lines)

```diff
@@ -11,11 +11,6 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
-      - run: sed -i "s|{{PATHTOREPO}}|$(pwd)|" packages/k8s/tests/test-kind.yaml
-        name: Setup kind cluster yaml config
-      - uses: helm/kind-action@v1.2.0
-        with:
-          config: packages/k8s/tests/test-kind.yaml
       - run: npm install
         name: Install dependencies
       - run: npm run bootstrap
@@ -26,6 +21,6 @@ jobs:
       - name: Check linter
         run: |
           npm run lint
-          git diff --exit-code -- ':!packages/k8s/tests/test-kind.yaml'
+          git diff --exit-code
       - name: Run tests
         run: npm run test
```
.gitignore (vendored, 1 changed line)

```diff
@@ -2,4 +2,3 @@ node_modules/
 lib/
 dist/
 **/tests/_temp/**
-packages/k8s/tests/test-kind.yaml
```
```diff
@@ -1 +1 @@
-* @actions/actions-runtime @actions/runner-akvelon
+* @actions/actions-runtime
```
````diff
@@ -13,7 +13,7 @@ You'll need a runner compatible with hooks, a repository with container workflows to which you can register the runner and the hooks from this repository.
 - You'll need a runner compatible with hooks, a repository with container workflows to which you can register the runner and the hooks from this repository.
 - See [the runner contributing.md](../../github/CONTRIBUTING.MD) for how to get started with runner development.
 - Build your hook using `npm run build`
-- Enable the hooks by setting `ACTIONS_RUNNER_CONTAINER_HOOKS=./packages/{libraryname}/dist/index.js` file generated by [ncc](https://github.com/vercel/ncc)
+- Enable the hooks by setting `ACTIONS_RUNNER_CONTAINER_HOOK=./packages/{libraryname}/dist/index.js` file generated by [ncc](https://github.com/vercel/ncc)
 - Configure your self hosted runner against the a repository you have admin access
 - Run a workflow with a container job, for example
 ```
````
@@ -1,184 +0,0 @@ (file removed; its original Markdown content follows)

# ADR 0072: Using Ephemeral Containers

**Date:** 27 March 2023

**Status**: Rejected <!--Accepted|Rejected|Superceded|Deprecated-->

## Context

We are evaluating using Kubernetes [ephemeral containers](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/) as a drop-in replacement for creating pods for [jobs that run in containers](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) and [service containers](https://docs.github.com/en/actions/using-containerized-services/about-service-containers).

The main motivator behind using ephemeral containers is to eliminate the need for [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Persistent Volume implementations vary depending on the provider, and we want to avoid building a dependency on them in order to provide our end-users a consistent experience.

With ephemeral containers we could leverage [emptyDir volumes](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir), which fit our use case better and whose behaviour is consistent across providers.

However, it's important to acknowledge that ephemeral containers were not designed to handle workloads, but rather to provide a mechanism for inspecting running containers for debugging and troubleshooting purposes.

## Evaluation

The criteria we are using to evaluate whether ephemeral containers are fit for purpose are:

- Networking
- Storage
- Security
- Resource limits
- Logs
- Customizability

### Networking

Ephemeral containers share the networking namespace of the pod they are attached to. This means that ephemeral containers can access the same network interfaces as the pod and can communicate with other containers in the same pod. However, ephemeral containers cannot have ports configured, and as such the fields `ports`, `livenessProbe`, and `readinessProbe` are not available. [^1][^2]

In this scenario we have 3 containers in a pod:

- `runner`: the main container that runs the GitHub Actions job
- `debugger`: the first ephemeral container
- `debugger2`: the second ephemeral container

By sequentially opening ports on each of these containers and connecting to them, we can demonstrate that the communication flow between the runner and the debuggers is feasible.

<details>
<summary>1. Runner -> Debugger communication</summary>

(image stored with Git LFS, not shown)

</details>

<details>
<summary>2. Debugger -> Runner communication</summary>

(image stored with Git LFS, not shown)

</details>

<details>
<summary>3. Debugger2 -> Debugger communication</summary>

(image stored with Git LFS, not shown)

</details>

### Storage

An emptyDir volume can be successfully mounted (read/write) by the runner as well as the ephemeral containers. This means that ephemeral containers can share data with the runner and with other ephemeral containers.

<details>
<summary>Configuration</summary>

```yaml
# Extracted from the values.yaml for the gha-runner-scale-set helm chart
spec:
  containers:
    - name: runner
      image: ghcr.io/actions/actions-runner:latest
      command: ["/home/runner/run.sh"]
      volumeMounts:
        - mountPath: /workspace
          name: work-volume
  volumes:
    - name: work-volume
      emptyDir:
        sizeLimit: 1Gi
```

```bash
# The API call to the Kubernetes API used to create the ephemeral containers

POD_NAME="arc-runner-set-6sfwd-runner-k7qq6"
NAMESPACE="arc-runners"

curl -v "https://<IP>:<PORT>/api/v1/namespaces/$NAMESPACE/pods/$POD_NAME/ephemeralcontainers" \
  -X PATCH \
  -H 'Content-Type: application/strategic-merge-patch+json' \
  --cacert <PATH_TO_CACERT> \
  --cert <PATH_TO_CERT> \
  --key <PATH_TO_CLIENT_KEY> \
  -d '
{
  "spec": {
    "ephemeralContainers": [
      {
        "name": "debugger",
        "command": ["sh"],
        "image": "ghcr.io/actions/actions-runner:latest",
        "targetContainerName": "runner",
        "stdin": true,
        "tty": true,
        "volumeMounts": [{
          "mountPath": "/workspace",
          "name": "work-volume",
          "readOnly": false
        }]
      },
      {
        "name": "debugger2",
        "command": ["sh"],
        "image": "ghcr.io/actions/actions-runner:latest",
        "targetContainerName": "runner",
        "stdin": true,
        "tty": true,
        "volumeMounts": [{
          "mountPath": "/workspace",
          "name": "work-volume",
          "readOnly": false
        }]
      }
    ]
  }
}'
```

</details>

<details>
<summary>emptyDir volume mount</summary>

(image stored with Git LFS, not shown)

</details>

### Security

According to the [ephemeral containers API specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#ephemeralcontainer-v1-core), the `securityContext` field can be configured.

Ephemeral containers share the same network namespace as the pod they are attached to. This means that ephemeral containers can access the same network interfaces as the pod and can communicate with other containers in the same pod.

It is also possible for ephemeral containers to [share the process namespace](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/) with the other containers in the pod. This is disabled by default.

The above could have unpredictable security implications.
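Process-namespace sharing is opted into at the pod level rather than per ephemeral container. As a rough sketch (this fragment is illustrative, not taken from this repository's charts), the pod spec would look like:

```yaml
# Hypothetical pod spec fragment: with shareProcessNamespace enabled, an
# ephemeral debug container attached to this pod can see (and signal) the
# runner container's processes.
apiVersion: v1
kind: Pod
metadata:
  name: runner-pod
spec:
  shareProcessNamespace: true
  containers:
    - name: runner
      image: ghcr.io/actions/actions-runner:latest
```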
### Resource limits

Resources are not allowed for ephemeral containers; they use spare resources already allocated to the pod. [^1] This is a major drawback, as it means that ephemeral containers cannot be configured with resource limits.

There are no guaranteed resources for ad-hoc troubleshooting. If troubleshooting causes a pod to exceed its resource limit, it may be evicted. [^3]

### Logs

Since ephemeral containers can share volumes with the runner container, it's possible to write logs to the same volume and have them available to the runner container.
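A minimal local simulation of that log handoff (assumption: a local temp directory stands in for the shared emptyDir mount; the container boundary is elided):

```typescript
import * as fs from 'fs'
import * as os from 'os'
import * as path from 'path'

// Assumption: this temp directory stands in for the emptyDir volume that
// both the ephemeral container and the runner mount at /workspace.
const work = fs.mkdtempSync(path.join(os.tmpdir(), 'work-'))

// The ephemeral container appends to a log file on the shared volume...
fs.appendFileSync(path.join(work, 'debug.log'), 'step 1 ok\n')

// ...and the runner container reads it back from the same mount.
console.log(fs.readFileSync(path.join(work, 'debug.log'), 'utf8'))
```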
### Customizability

Ephemeral containers can run any image and tag provided; they can be customized to run any arbitrary job. However, it's important to note that the following are not feasible:

- Lifecycle is not allowed for ephemeral containers.
- Ephemeral containers will stop when their command exits, such as exiting a shell, and they will not be restarted. Unlike `kubectl exec`, processes in ephemeral containers will not receive an `EOF` if their connections are interrupted, so shells won't automatically exit on disconnect. There is no API support for killing or restarting an ephemeral container; the only way to exit the container is to send it an OS signal. [^4]
- Probes are not allowed for ephemeral containers.
- Ports are not allowed for ephemeral containers.

## Decision

While the evaluation shows that ephemeral containers can be used to run jobs in containers, it's important to acknowledge that they were not designed to handle workloads, but rather to provide a mechanism for inspecting running containers for debugging and troubleshooting purposes.

Given the limitations of ephemeral containers, we decided not to use them outside of their intended purpose.

## Consequences

Proposal rejected; no further action required. This document will be used as a reference for future discussions.

[^1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#ephemeralcontainer-v1-core
[^2]: https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
[^3]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/277-ephemeral-containers/README.md#notesconstraintscaveats
[^4]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/277-ephemeral-containers/README.md#ephemeral-container-lifecycle
Binary files (stored with Git LFS, not shown):

- docs/adrs/images/debugger-runner.png
- docs/adrs/images/debugger2-debugger.png
- docs/adrs/images/emptyDir_volume.png
- docs/adrs/images/runner-debugger.png
```diff
@@ -1,3 +0,0 @@
-#!/bin/bash
-
-echo "Hello World"
```
```diff
@@ -5,7 +5,7 @@
   "args": {
     "container": {
       "image": "node:14.16",
-      "workingDirectory": "/__w/repo/repo",
+      "workingDirectory": "/__w/thboop-test2/thboop-test2",
       "createOptions": "--cpus 1",
       "environmentVariables": {
         "NODE_ENV": "development"
@@ -24,37 +24,37 @@
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work",
       "targetVolumePath": "/__w",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/externals",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/externals",
       "targetVolumePath": "/__e",
       "readOnly": true
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp",
       "targetVolumePath": "/__w/_temp",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_actions",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_actions",
       "targetVolumePath": "/__w/_actions",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_tool",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_tool",
       "targetVolumePath": "/__w/_tool",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_home",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_home",
       "targetVolumePath": "/github/home",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
       "targetVolumePath": "/github/workflow",
       "readOnly": false
     }
@@ -73,8 +73,6 @@
   "contextName": "redis",
   "image": "redis",
   "createOptions": "--cpus 1",
-  "entrypoint": null,
-  "entryPointArgs": [],
   "environmentVariables": {},
   "userMountVolumes": [
     {
```
```diff
@@ -12,11 +12,11 @@
   "image": "node:14.16",
   "dockerfile": null,
   "entryPointArgs": [
-    "-e",
-    "example-script.sh"
+    "-c",
+    "echo \"hello world2\""
   ],
   "entryPoint": "bash",
-  "workingDirectory": "/__w/repo/repo",
+  "workingDirectory": "/__w/thboop-test2/thboop-test2",
   "createOptions": "--cpus 1",
   "environmentVariables": {
     "NODE_ENV": "development"
```
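The fixture swap above replaces invoking `bash` with `-e` plus a script file by `bash -c` with an inline command string. The two invocation styles can be demonstrated in isolation (a sketch, not tied to the fixtures; assumes `bash` is on the PATH):

```typescript
import { execSync } from 'child_process'
import * as fs from 'fs'
import * as os from 'os'
import * as path from 'path'

// `bash -c` treats the next argument as an inline command string.
const inline = execSync(`bash -c 'echo "hello world2"'`).toString().trim()
console.log(inline)

// `bash -e <script>` runs a script file with errexit enabled, so the
// shell exits at the first failing command instead of continuing.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'scripts-'))
const script = path.join(dir, 'example-script.sh')
fs.writeFileSync(script, 'false\necho "never reached"\n')

let status = 0
try {
  execSync(`bash -e ${script}`)
} catch (e) {
  status = (e as { status?: number }).status ?? 1
}
console.log(status) // non-zero: errexit aborted the script at `false`
```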
```diff
@@ -34,27 +34,27 @@
   ],
   "systemMountVolumes": [
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work",
       "targetVolumePath": "/__w",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/externals",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/externals",
       "targetVolumePath": "/__e",
       "readOnly": true
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp",
       "targetVolumePath": "/__w/_temp",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_actions",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_actions",
       "targetVolumePath": "/__w/_actions",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_tool",
+      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_tool",
       "targetVolumePath": "/__w/_tool",
       "readOnly": false
     },
@@ -64,7 +64,7 @@
     "readOnly": false
   },
   {
-    "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
+    "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
     "targetVolumePath": "/github/workflow",
     "readOnly": false
   }
```

```diff
@@ -10,8 +10,8 @@
   },
   "args": {
     "entryPointArgs": [
-      "-e",
-      "example-script.sh"
+      "-c",
+      "echo \"hello world\""
     ],
     "entryPoint": "bash",
     "environmentVariables": {
@@ -21,6 +21,6 @@
     "/foo/bar",
     "bar/foo"
   ],
-  "workingDirectory": "/__w/repo/repo"
+  "workingDirectory": "/__w/thboop-test2/thboop-test2"
   }
 }
```
package-lock.json (generated, 16 changed lines)

```diff
@@ -1,12 +1,12 @@
 {
   "name": "hooks",
-  "version": "0.3.2",
+  "version": "0.1.0",
   "lockfileVersion": 2,
   "requires": true,
   "packages": {
     "": {
       "name": "hooks",
-      "version": "0.3.2",
+      "version": "0.1.0",
       "license": "MIT",
       "devDependencies": {
         "@types/jest": "^27.5.1",
@@ -1800,9 +1800,9 @@
       "dev": true
     },
     "node_modules/json5": {
-      "version": "1.0.2",
+      "version": "1.0.1",
-      "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
+      "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
-      "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
+      "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
       "dev": true,
       "dependencies": {
         "minimist": "^1.2.0"
@@ -3926,9 +3926,9 @@
     "dev": true
     },
     "json5": {
-      "version": "1.0.2",
+      "version": "1.0.1",
-      "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
+      "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
-      "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
+      "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
       "dev": true,
       "requires": {
         "minimist": "^1.2.0"
```
```diff
@@ -1,13 +1,13 @@
 {
   "name": "hooks",
-  "version": "0.3.2",
+  "version": "0.1.0",
   "description": "Three projects are included - k8s: a kubernetes hook implementation that spins up pods dynamically to run a job - docker: A hook implementation of the runner's docker implementation - A hook lib, which contains shared typescript definitions and utilities that the other packages consume",
   "main": "",
   "directories": {
     "doc": "docs"
   },
   "scripts": {
-    "test": "npm run test --prefix packages/docker && npm run test --prefix packages/k8s",
+    "test": "npm run test --prefix packages/docker",
     "bootstrap": "npm install --prefix packages/hooklib && npm install --prefix packages/k8s && npm install --prefix packages/docker",
     "format": "prettier --write '**/*.ts'",
     "format-check": "prettier --check '**/*.ts'",
```
```diff
@@ -1 +1 @@
-jest.setTimeout(500000)
+jest.setTimeout(90000)
```
packages/docker/package-lock.json (generated, 64 changed lines)

```diff
@@ -9,7 +9,7 @@
   "version": "0.1.0",
   "license": "MIT",
   "dependencies": {
-    "@actions/core": "^1.9.1",
+    "@actions/core": "^1.6.0",
     "@actions/exec": "^1.1.1",
     "hooklib": "file:../hooklib",
     "uuid": "^8.3.2"
@@ -30,7 +30,7 @@
   "version": "0.1.0",
   "license": "MIT",
   "dependencies": {
-    "@actions/core": "^1.9.1"
+    "@actions/core": "^1.6.0"
   },
   "devDependencies": {
     "@types/node": "^17.0.23",
@@ -43,12 +43,11 @@
     }
   },
   "node_modules/@actions/core": {
-    "version": "1.9.1",
+    "version": "1.6.0",
-    "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz",
+    "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
-    "integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==",
+    "integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
     "dependencies": {
-      "@actions/http-client": "^2.0.1",
+      "@actions/http-client": "^1.0.11"
-      "uuid": "^8.3.2"
     }
   },
   "node_modules/@actions/exec": {
@@ -60,11 +59,11 @@
     }
   },
   "node_modules/@actions/http-client": {
-    "version": "2.0.1",
+    "version": "1.0.11",
-    "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz",
+    "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
-    "integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==",
+    "integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
     "dependencies": {
-      "tunnel": "^0.0.6"
+      "tunnel": "0.0.6"
     }
   },
   "node_modules/@actions/io": {
@@ -3779,9 +3778,9 @@
     "peer": true
   },
   "node_modules/json5": {
-    "version": "2.2.3",
+    "version": "2.2.1",
-    "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz",
+    "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.1.tgz",
-    "integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==",
+    "integrity": "sha512-1hqLFMSrGHRHxav9q9gNjJ5EXznIxGVO09xQRrwplcS8qs28pZ8s8hupZAmqDwZUmVZ2Qb2jnyPOWcDH8m8dlA==",
     "dev": true,
     "bin": {
       "json5": "lib/cli.js"
@@ -4903,9 +4902,9 @@
     }
   },
   "node_modules/tsconfig-paths/node_modules/json5": {
-    "version": "1.0.2",
+    "version": "1.0.1",
-    "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
+    "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
-    "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
+    "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
     "dev": true,
     "dependencies": {
       "minimist": "^1.2.0"
@@ -5280,12 +5279,11 @@
   },
   "dependencies": {
     "@actions/core": {
-      "version": "1.9.1",
+      "version": "1.6.0",
-      "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz",
+      "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
-      "integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==",
+      "integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
       "requires": {
-        "@actions/http-client": "^2.0.1",
+        "@actions/http-client": "^1.0.11"
-        "uuid": "^8.3.2"
       }
     },
     "@actions/exec": {
@@ -5297,11 +5295,11 @@
     }
   },
   "@actions/http-client": {
-    "version": "2.0.1",
+    "version": "1.0.11",
-    "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz",
+    "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
-    "integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==",
+    "integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
     "requires": {
-      "tunnel": "^0.0.6"
+      "tunnel": "0.0.6"
    }
   },
   "@actions/io": {
@@ -7378,7 +7376,7 @@
   "hooklib": {
     "version": "file:../hooklib",
     "requires": {
-      "@actions/core": "^1.9.1",
+      "@actions/core": "^1.6.0",
       "@types/node": "^17.0.23",
       "@typescript-eslint/parser": "^5.18.0",
       "@zeit/ncc": "^0.22.3",
@@ -8176,9 +8174,9 @@
   "peer": true
   },
   "json5": {
-    "version": "2.2.3",
+    "version": "2.2.1",
-    "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz",
+    "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.1.tgz",
-    "integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==",
+    "integrity": "sha512-1hqLFMSrGHRHxav9q9gNjJ5EXznIxGVO09xQRrwplcS8qs28pZ8s8hupZAmqDwZUmVZ2Qb2jnyPOWcDH8m8dlA==",
     "dev": true
   },
   "kleur": {
@@ -8985,9 +8983,9 @@
   },
   "dependencies": {
     "json5": {
-      "version": "1.0.2",
+      "version": "1.0.1",
-      "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
+      "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
-      "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
+      "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
      "dev": true,
      "requires": {
        "minimist": "^1.2.0"
```
```diff
@@ -10,7 +10,7 @@
   "author": "",
   "license": "MIT",
   "dependencies": {
-    "@actions/core": "^1.9.1",
+    "@actions/core": "^1.6.0",
     "@actions/exec": "^1.1.1",
     "hooklib": "file:../hooklib",
     "uuid": "^8.3.2"
```
```diff
@@ -2,11 +2,12 @@ import * as core from '@actions/core'
 import * as fs from 'fs'
 import {
   ContainerInfo,
-  Registry,
+  JobContainerInfo,
   RunContainerStepArgs,
-  ServiceContainerInfo
+  ServiceContainerInfo,
+  StepContainerInfo
 } from 'hooklib/lib'
-import * as path from 'path'
+import path from 'path'
 import { env } from 'process'
 import { v4 as uuidv4 } from 'uuid'
 import { runDockerCommand, RunDockerCommandOptions } from '../utils'
@@ -42,15 +43,19 @@ export async function createContainer(
   }

   if (args.environmentVariables) {
-    for (const [key] of Object.entries(args.environmentVariables)) {
+    for (const [key, value] of Object.entries(args.environmentVariables)) {
       dockerArgs.push('-e')
-      dockerArgs.push(key)
+      if (!value) {
+        dockerArgs.push(`"${key}"`)
+      } else {
+        dockerArgs.push(`"${key}=${value}"`)
+      }
     }
   }

   const mountVolumes = [
     ...(args.userMountVolumes || []),
-    ...(args.systemMountVolumes || [])
+    ...((args as JobContainerInfo | StepContainerInfo).systemMountVolumes || [])
   ]
   for (const mountVolume of mountVolumes) {
     dockerArgs.push(
```
@@ -69,9 +74,7 @@ export async function createContainer(
     }
   }

-  const id = (
-    await runDockerCommand(dockerArgs, { env: args.environmentVariables })
-  ).trim()
+  const id = (await runDockerCommand(dockerArgs)).trim()
   if (!id) {
     throw new Error('Could not read id from docker command')
   }
@@ -143,41 +146,17 @@ export async function containerBuild(
   args: RunContainerStepArgs,
   tag: string
 ): Promise<void> {
-  if (!args.dockerfile) {
-    throw new Error("Container build expects 'args.dockerfile' to be set")
-  }
+  const context = path.dirname(`${env.GITHUB_WORKSPACE}/${args.dockerfile}`)

   const dockerArgs: string[] = ['build']
   dockerArgs.push('-t', tag)
-  dockerArgs.push('-f', args.dockerfile)
-  dockerArgs.push(getBuildContext(args.dockerfile))
+  dockerArgs.push('-f', `${env.GITHUB_WORKSPACE}/${args.dockerfile}`)
+  dockerArgs.push(context)
+  // TODO: figure out build working directory
   await runDockerCommand(dockerArgs, {
-    workingDir: getWorkingDir(args.dockerfile)
+    workingDir: args['buildWorkingDirectory']
   })
 }

-function getBuildContext(dockerfilePath: string): string {
-  return path.dirname(dockerfilePath)
-}
-
-function getWorkingDir(dockerfilePath: string): string {
-  const workspace = env.GITHUB_WORKSPACE as string
-  let workingDir = workspace
-  if (!dockerfilePath?.includes(workspace)) {
-    // This is container action
-    const pathSplit = dockerfilePath.split('/')
-    const actionIndex = pathSplit?.findIndex(d => d === '_actions')
-    if (actionIndex) {
-      const actionSubdirectoryDepth = 3 // handle + repo + [branch | tag]
-      pathSplit.splice(actionIndex + actionSubdirectoryDepth + 1)
-      workingDir = pathSplit.join('/')
-    }
-  }
-
-  return workingDir
-}
-
 export async function containerLogs(id: string): Promise<void> {
   const dockerArgs: string[] = ['logs']
   dockerArgs.push('--details')
@@ -192,18 +171,6 @@ export async function containerNetworkRemove(network: string): Promise<void> {
   await runDockerCommand(dockerArgs)
 }

-export async function containerNetworkPrune(): Promise<void> {
-  const dockerArgs = [
-    'network',
-    'prune',
-    '--force',
-    '--filter',
-    `label=${getRunnerLabel()}`
-  ]
-
-  await runDockerCommand(dockerArgs)
-}
-
 export async function containerPrune(): Promise<void> {
   const dockerPSArgs: string[] = [
     'ps',
@@ -271,36 +238,22 @@ export async function healthCheck({
 export async function containerPorts(id: string): Promise<string[]> {
   const dockerArgs = ['port', id]
   const portMappings = (await runDockerCommand(dockerArgs)).trim()
-  return portMappings.split('\n').filter(p => !!p)
+  return portMappings.split('\n')
 }

-export async function getContainerEnvValue(
-  id: string,
-  name: string
-): Promise<string> {
-  const dockerArgs = [
-    'inspect',
-    `--format='{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "${name}"}}{{index (split $value "=") 1}}{{end}}{{end}}'`,
-    id
-  ]
-  const value = (await runDockerCommand(dockerArgs)).trim()
-  const lines = value.split('\n')
-  return lines.length ? lines[0].replace(/^'/, '').replace(/'$/, '') : ''
-}
-
-export async function registryLogin(registry?: Registry): Promise<string> {
-  if (!registry) {
+export async function registryLogin(args): Promise<string> {
+  if (!args.registry) {
     return ''
   }
   const credentials = {
-    username: registry.username,
-    password: registry.password
+    username: args.registry.username,
+    password: args.registry.password
   }

   const configLocation = `${env.RUNNER_TEMP}/.docker_${uuidv4()}`
   fs.mkdirSync(configLocation)
   try {
-    await dockerLogin(configLocation, registry.serverUrl, credentials)
+    await dockerLogin(configLocation, args.registry.serverUrl, credentials)
   } catch (error) {
     fs.rmdirSync(configLocation, { recursive: true })
     throw error
@@ -318,7 +271,7 @@ export async function registryLogout(configLocation: string): Promise<void> {
 async function dockerLogin(
   configLocation: string,
   registry: string,
-  credentials: { username?: string; password?: string }
+  credentials: { username: string; password: string }
 ): Promise<void> {
   const credentialsArgs =
     credentials.username && credentials.password
@@ -354,36 +307,30 @@ export async function containerExecStep(
 ): Promise<void> {
   const dockerArgs: string[] = ['exec', '-i']
   dockerArgs.push(`--workdir=${args.workingDirectory}`)
-  for (const [key] of Object.entries(args['environmentVariables'])) {
+  for (const [key, value] of Object.entries(args['environmentVariables'])) {
     dockerArgs.push('-e')
-    dockerArgs.push(key)
+    if (!value) {
+      dockerArgs.push(`"${key}"`)
+    } else {
+      dockerArgs.push(`"${key}=${value}"`)
+    }
   }

-  if (args.prependPath?.length) {
-    // TODO: remove compatibility with typeof prependPath === 'string' as we bump to next major version, the hooks will lose PrependPath compat with runners 2.293.0 and older
-    const prependPath =
-      typeof args.prependPath === 'string'
-        ? args.prependPath
-        : args.prependPath.join(':')
-
-    dockerArgs.push(
-      '-e',
-      `PATH=${prependPath}:${await getContainerEnvValue(containerId, 'PATH')}`
-    )
-  }
+  // Todo figure out prepend path and update it here
+  // (we need to pass path in as -e Path={fullpath}) where {fullpath is the prepend path added to the current containers path}

   dockerArgs.push(containerId)
   dockerArgs.push(args.entryPoint)
   for (const entryPointArg of args.entryPointArgs) {
     dockerArgs.push(entryPointArg)
   }
-  await runDockerCommand(dockerArgs, { env: args.environmentVariables })
+  await runDockerCommand(dockerArgs)
 }

 export async function containerRun(
   args: RunContainerStepArgs,
   name: string,
-  network?: string
+  network: string
 ): Promise<void> {
   if (!args.image) {
     throw new Error('expected image to be set')
@@ -393,15 +340,15 @@ export async function containerRun(
   dockerArgs.push('--name', name)
   dockerArgs.push(`--workdir=${args.workingDirectory}`)
   dockerArgs.push(`--label=${getRunnerLabel()}`)
-  if (network) {
-    dockerArgs.push(`--network=${network}`)
-  }
+  dockerArgs.push(`--network=${network}`)

   if (args.createOptions) {
     dockerArgs.push(...args.createOptions.split(' '))
   }
   if (args.environmentVariables) {
-    for (const [key] of Object.entries(args.environmentVariables)) {
+    for (const [key, value] of Object.entries(args.environmentVariables)) {
+      // Pass in this way to avoid printing secrets
+      env[key] = value ?? undefined
       dockerArgs.push('-e')
       dockerArgs.push(key)
     }
@@ -427,14 +374,11 @@ export async function containerRun(
   dockerArgs.push(args.image)
   if (args.entryPointArgs) {
     for (const entryPointArg of args.entryPointArgs) {
-      if (!entryPointArg) {
-        continue
-      }
       dockerArgs.push(entryPointArg)
     }
   }

-  await runDockerCommand(dockerArgs, { env: args.environmentVariables })
+  await runDockerCommand(dockerArgs)
 }

 export async function isContainerAlpine(containerId: string): Promise<boolean> {
@@ -1,9 +1,21 @@
 import {
-  containerNetworkPrune,
-  containerPrune
+  containerRemove,
+  containerNetworkRemove
 } from '../dockerCommands/container'

-export async function cleanupJob(): Promise<void> {
-  await containerPrune()
-  await containerNetworkPrune()
+// eslint-disable-next-line @typescript-eslint/no-unused-vars
+export async function cleanupJob(args, state, responseFile): Promise<void> {
+  const containerIds: string[] = []
+  if (state?.container) {
+    containerIds.push(state.container)
+  }
+  if (state?.services) {
+    containerIds.push(state.services)
+  }
+  if (containerIds.length > 0) {
+    await containerRemove(containerIds)
+  }
+  if (state.network) {
+    await containerNetworkRemove(state.network)
+  }
 }
@@ -48,7 +48,6 @@ export async function prepareJob(
   } finally {
     await registryLogout(configLocation)
   }
-
   containerMetadata = await createContainer(
     container,
     generateContainerName(container.image),
@@ -79,7 +78,6 @@ export async function prepareJob(
       generateContainerName(service.image),
       networkName
     )
-
     servicesMetadata.push(response)
     await containerStart(response.id)
   }
@@ -96,10 +94,7 @@ export async function prepareJob(
     )
   }

-  let isAlpine = false
-  if (containerMetadata?.id) {
-    isAlpine = await isContainerAlpine(containerMetadata.id)
-  }
+  const isAlpine = await isContainerAlpine(containerMetadata!.id)

   if (containerMetadata?.id) {
     containerMetadata.ports = await containerPorts(containerMetadata.id)
@@ -110,10 +105,7 @@ export async function prepareJob(
     }
   }

-  const healthChecks: Promise<void>[] = []
-  if (containerMetadata) {
-    healthChecks.push(healthCheck(containerMetadata))
-  }
+  const healthChecks: Promise<void>[] = [healthCheck(containerMetadata!)]
   for (const service of servicesMetadata) {
     healthChecks.push(healthCheck(service))
   }
@@ -141,6 +133,7 @@ function generateResponseFile(
   servicesMetadata?: ContainerMetadata[],
   isAlpine = false
 ): void {
+  // todo figure out if we are alpine
   const response = {
     state: { network: networkName },
     context: {},
@@ -193,15 +186,15 @@ function transformDockerPortsToContextPorts(
   meta: ContainerMetadata
 ): ContextPorts {
   // ex: '80/tcp -> 0.0.0.0:80'
-  const re = /^(\d+)(\/\w+)? -> (.*):(\d+)$/
+  const re = /^(\d+)\/(\w+)? -> (.*):(\d+)$/
   const contextPorts: ContextPorts = {}

-  if (meta.ports?.length) {
+  if (meta.ports) {
     for (const port of meta.ports) {
       const matches = port.match(re)
       if (!matches) {
         throw new Error(
-          'Container ports could not match the regex: "^(\\d+)(\\/\\w+)? -> (.*):(\\d+)$"'
+          'Container ports could not match the regex: "^(\\d+)\\/(\\w+)? -> (.*):(\\d+)$"'
         )
       }
       contextPorts[matches[1]] = matches[matches.length - 1]
@@ -1,12 +1,13 @@
-import { RunContainerStepArgs } from 'hooklib/lib'
-import { v4 as uuidv4 } from 'uuid'
 import {
   containerBuild,
-  containerPull,
-  containerRun,
   registryLogin,
-  registryLogout
+  registryLogout,
+  containerPull,
+  containerRun
 } from '../dockerCommands'
+import { v4 as uuidv4 } from 'uuid'
+import * as core from '@actions/core'
+import { RunContainerStepArgs } from 'hooklib/lib'
 import { getRunnerLabel } from '../dockerCommands/constants'

 export async function runContainerStep(
@@ -14,23 +15,23 @@ export async function runContainerStep(
   state
 ): Promise<void> {
   const tag = generateBuildTag() // for docker build
-  if (args.image) {
-    const configLocation = await registryLogin(args.registry)
-    try {
-      await containerPull(args.image, configLocation)
-    } finally {
-      await registryLogout(configLocation)
-    }
-  } else if (args.dockerfile) {
-    await containerBuild(args, tag)
-    args.image = tag
+  if (!args.image) {
+    core.error('expected an image')
   } else {
-    throw new Error(
-      'run container step should have image or dockerfile fields specified'
-    )
+    if (args.dockerfile) {
+      await containerBuild(args, tag)
+      args.image = tag
+    } else {
+      const configLocation = await registryLogin(args)
+      try {
+        await containerPull(args.image, configLocation)
+      } finally {
+        await registryLogout(configLocation)
+      }
+    }
   }
   // container will get pruned at the end of the job based on the label, no need to cleanup here
-  await containerRun(args, tag.split(':')[1], state?.network)
+  await containerRun(args, tag.split(':')[1], state.network)
 }

 function generateBuildTag(): string {
@@ -13,23 +13,22 @@ import {
   runContainerStep,
   runScriptStep
 } from './hooks'
-import { checkEnvironment } from './utils'

 async function run(): Promise<void> {
-  try {
-    checkEnvironment()
-    const input = await getInputFromStdin()
+  const input = await getInputFromStdin()

   const args = input['args']
   const command = input['command']
   const responseFile = input['responseFile']
   const state = input['state']

+  try {
     switch (command) {
       case Command.PrepareJob:
         await prepareJob(args as PrepareJobArgs, responseFile)
         return exit(0)
       case Command.CleanupJob:
-        await cleanupJob()
+        await cleanupJob(null, state, null)
         return exit(0)
       case Command.RunScriptStep:
         await runScriptStep(args as RunScriptStepArgs, state)
@@ -2,21 +2,18 @@
 /* eslint-disable @typescript-eslint/no-require-imports */
 /* eslint-disable import/no-commonjs */
 import * as core from '@actions/core'
-import { env } from 'process'
 // Import this way otherwise typescript has errors
 const exec = require('@actions/exec')

 export interface RunDockerCommandOptions {
   workingDir?: string
   input?: Buffer
-  env?: { [key: string]: string }
 }

 export async function runDockerCommand(
   args: string[],
   options?: RunDockerCommandOptions
 ): Promise<string> {
-  options = optionsWithDockerEnvs(options)
   const pipes = await exec.getExecOutput('docker', args, options)
   if (pipes.exitCode !== 0) {
     core.error(`Docker failed with exit code ${pipes.exitCode}`)
@@ -25,45 +22,6 @@ export async function runDockerCommand(
   return Promise.resolve(pipes.stdout)
 }

-export function optionsWithDockerEnvs(
-  options?: RunDockerCommandOptions
-): RunDockerCommandOptions | undefined {
-  // From https://docs.docker.com/engine/reference/commandline/cli/#environment-variables
-  const dockerCliEnvs = new Set([
-    'DOCKER_API_VERSION',
-    'DOCKER_CERT_PATH',
-    'DOCKER_CONFIG',
-    'DOCKER_CONTENT_TRUST_SERVER',
-    'DOCKER_CONTENT_TRUST',
-    'DOCKER_CONTEXT',
-    'DOCKER_DEFAULT_PLATFORM',
-    'DOCKER_HIDE_LEGACY_COMMANDS',
-    'DOCKER_HOST',
-    'DOCKER_STACK_ORCHESTRATOR',
-    'DOCKER_TLS_VERIFY',
-    'BUILDKIT_PROGRESS'
-  ])
-  const dockerEnvs = {}
-  for (const key in process.env) {
-    if (dockerCliEnvs.has(key)) {
-      dockerEnvs[key] = process.env[key]
-    }
-  }
-
-  const newOptions = {
-    workingDir: options?.workingDir,
-    input: options?.input,
-    env: options?.env || {}
-  }
-
-  // Set docker envs or overwrite provided ones
-  for (const [key, value] of Object.entries(dockerEnvs)) {
-    newOptions.env[key] = value as string
-  }
-
-  return newOptions
-}
-
 export function sanitize(val: string): string {
   if (!val || typeof val !== 'string') {
     return ''
@@ -84,12 +42,6 @@ export function sanitize(val: string): string {
   return newNameBuilder.join('')
 }

-export function checkEnvironment(): void {
-  if (!env.GITHUB_WORKSPACE) {
-    throw new Error('GITHUB_WORKSPACE is not set')
-  }
-}
-
 // isAlpha accepts single character and checks if
 // that character is [a-zA-Z]
 function isAlpha(val: string): boolean {
@@ -1,33 +1,62 @@
-import { PrepareJobArgs } from 'hooklib/lib'
-import { cleanupJob, prepareJob } from '../src/hooks'
+import { prepareJob, cleanupJob } from '../src/hooks'
+import { v4 as uuidv4 } from 'uuid'
+import * as fs from 'fs'
+import * as path from 'path'
 import TestSetup from './test-setup'

+const prepareJobInputPath = path.resolve(
+  `${__dirname}/../../../examples/prepare-job.json`
+)
+
+const tmpOutputDir = `${__dirname}/${uuidv4()}`
+
+let prepareJobOutputPath: string
+let prepareJobData: any
+
 let testSetup: TestSetup

 jest.useRealTimers()

 describe('cleanup job', () => {
+  beforeAll(() => {
+    fs.mkdirSync(tmpOutputDir, { recursive: true })
+  })
+
+  afterAll(() => {
+    fs.rmSync(tmpOutputDir, { recursive: true })
+  })
+
   beforeEach(async () => {
+    const prepareJobRawData = fs.readFileSync(prepareJobInputPath, 'utf8')
+    prepareJobData = JSON.parse(prepareJobRawData.toString())
+
+    prepareJobOutputPath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
+    fs.writeFileSync(prepareJobOutputPath, '')
+
     testSetup = new TestSetup()
     testSetup.initialize()

-    const prepareJobDefinition = testSetup.getPrepareJobDefinition()
+    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
+    prepareJobData.args.container.systemMountVolumes =
+      testSetup.systemMountVolumes
+    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory

-    const prepareJobOutput = testSetup.createOutputFile(
-      'prepare-job-output.json'
-    )
-
-    await prepareJob(
-      prepareJobDefinition.args as PrepareJobArgs,
-      prepareJobOutput
-    )
+    await prepareJob(prepareJobData.args, prepareJobOutputPath)
   })

   afterEach(() => {
+    fs.rmSync(prepareJobOutputPath, { force: true })
     testSetup.teardown()
   })

   it('should cleanup successfully', async () => {
-    await expect(cleanupJob()).resolves.not.toThrow()
+    const prepareJobOutputContent = fs.readFileSync(
+      prepareJobOutputPath,
+      'utf-8'
+    )
+    const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
+    await expect(
+      cleanupJob(prepareJobData.args, parsedPrepareJobOutput.state, null)
+    ).resolves.not.toThrow()
   })
 })
|||||||
@@ -1,27 +0,0 @@
|
|||||||
import { containerBuild } from '../src/dockerCommands'
|
|
||||||
import TestSetup from './test-setup'
|
|
||||||
|
|
||||||
let testSetup
|
|
||||||
let runContainerStepDefinition
|
|
||||||
|
|
||||||
describe('container build', () => {
|
|
||||||
beforeEach(() => {
|
|
||||||
testSetup = new TestSetup()
|
|
||||||
testSetup.initialize()
|
|
||||||
|
|
||||||
runContainerStepDefinition = testSetup.getRunContainerStepDefinition()
|
|
||||||
})
|
|
||||||
|
|
||||||
afterEach(() => {
|
|
||||||
testSetup.teardown()
|
|
||||||
})
|
|
||||||
|
|
||||||
it('should build container', async () => {
|
|
||||||
runContainerStepDefinition.image = ''
|
|
||||||
const actionPath = testSetup.initializeDockerAction()
|
|
||||||
runContainerStepDefinition.dockerfile = `${actionPath}/Dockerfile`
|
|
||||||
await expect(
|
|
||||||
containerBuild(runContainerStepDefinition, 'example-test-tag')
|
|
||||||
).resolves.not.toThrow()
|
|
||||||
})
|
|
||||||
})
|
|
||||||
@@ -4,7 +4,7 @@ jest.useRealTimers()

 describe('container pull', () => {
   it('should fail', async () => {
-    const arg = { image: 'does-not-exist' }
+    const arg = { image: 'doesNotExist' }
     await expect(containerPull(arg.image, '')).rejects.toThrow()
   })
   it('should succeed', async () => {
@@ -1,72 +1,102 @@
-import * as fs from 'fs'
 import {
-  cleanupJob,
   prepareJob,
-  runContainerStep,
-  runScriptStep
+  cleanupJob,
+  runScriptStep,
+  runContainerStep
 } from '../src/hooks'
+import * as fs from 'fs'
+import * as path from 'path'
+import { v4 as uuidv4 } from 'uuid'
 import TestSetup from './test-setup'

-let definitions
+const prepareJobJson = fs.readFileSync(
+  path.resolve(__dirname + '/../../../examples/prepare-job.json'),
+  'utf8'
+)
+
+const containerStepJson = fs.readFileSync(
+  path.resolve(__dirname + '/../../../examples/run-container-step.json'),
+  'utf8'
+)
+
+const tmpOutputDir = `${__dirname}/_temp/${uuidv4()}`
+
+let prepareJobData: any
+let scriptStepJson: any
+let scriptStepData: any
+let containerStepData: any
+
+let prepareJobOutputFilePath: string
+
 let testSetup: TestSetup

 describe('e2e', () => {
+  beforeAll(() => {
+    fs.mkdirSync(tmpOutputDir, { recursive: true })
+  })
+
+  afterAll(() => {
+    fs.rmSync(tmpOutputDir, { recursive: true })
+  })
+
   beforeEach(() => {
+    // init dirs
     testSetup = new TestSetup()
     testSetup.initialize()

-    definitions = {
-      prepareJob: testSetup.getPrepareJobDefinition(),
-      runScriptStep: testSetup.getRunScriptStepDefinition(),
-      runContainerStep: testSetup.getRunContainerStepDefinition()
-    }
+    prepareJobData = JSON.parse(prepareJobJson)
+    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
+    prepareJobData.args.container.systemMountVolumes =
+      testSetup.systemMountVolumes
+    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
+
+    scriptStepJson = fs.readFileSync(
+      path.resolve(__dirname + '/../../../examples/run-script-step.json'),
+      'utf8'
+    )
+    scriptStepData = JSON.parse(scriptStepJson)
+    scriptStepData.args.workingDirectory = testSetup.workingDirectory
+
+    containerStepData = JSON.parse(containerStepJson)
+    containerStepData.args.workingDirectory = testSetup.workingDirectory
+    containerStepData.args.userMountVolumes = testSetup.userMountVolumes
+    containerStepData.args.systemMountVolumes = testSetup.systemMountVolumes
+
+    prepareJobOutputFilePath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
+    fs.writeFileSync(prepareJobOutputFilePath, '')
   })

   afterEach(() => {
+    fs.rmSync(prepareJobOutputFilePath, { force: true })
     testSetup.teardown()
   })

   it('should prepare job, then run script step, then run container step then cleanup', async () => {
-    const prepareJobOutput = testSetup.createOutputFile(
-      'prepare-job-output.json'
-    )
-
     await expect(
-      prepareJob(definitions.prepareJob.args, prepareJobOutput)
+      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
     ).resolves.not.toThrow()
-    let rawState = fs.readFileSync(prepareJobOutput, 'utf-8')
+    let rawState = fs.readFileSync(prepareJobOutputFilePath, 'utf-8')
     let resp = JSON.parse(rawState)

     await expect(
-      runScriptStep(definitions.runScriptStep.args, resp.state)
+      runScriptStep(scriptStepData.args, resp.state)
     ).resolves.not.toThrow()

     await expect(
-      runContainerStep(definitions.runContainerStep.args, resp.state)
+      runContainerStep(containerStepData.args, resp.state)
     ).resolves.not.toThrow()
-    await expect(cleanupJob()).resolves.not.toThrow()
+    await expect(cleanupJob(resp, resp.state, null)).resolves.not.toThrow()
   })

   it('should prepare job, then run script step, then run container step with Dockerfile then cleanup', async () => {
-    const prepareJobOutput = testSetup.createOutputFile(
-      'prepare-job-output.json'
-    )
-
     await expect(
-      prepareJob(definitions.prepareJob.args, prepareJobOutput)
+      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
     ).resolves.not.toThrow()
-    let rawState = fs.readFileSync(prepareJobOutput, 'utf-8')
+    let rawState = fs.readFileSync(prepareJobOutputFilePath, 'utf-8')
     let resp = JSON.parse(rawState)

     await expect(
-      runScriptStep(definitions.runScriptStep.args, resp.state)
+      runScriptStep(scriptStepData.args, resp.state)
     ).resolves.not.toThrow()

-    const dockerfilePath = `${testSetup.workingDirectory}/Dockerfile`
+    const dockerfilePath = `${tmpOutputDir}/Dockerfile`
     fs.writeFileSync(
       dockerfilePath,
       `FROM ubuntu:latest
|
`FROM ubuntu:latest
|
||||||
@@ -74,17 +104,14 @@ ENV TEST=test
|
|||||||
ENTRYPOINT [ "tail", "-f", "/dev/null" ]
|
ENTRYPOINT [ "tail", "-f", "/dev/null" ]
|
||||||
`
|
`
|
||||||
)
|
)
|
||||||
|
const containerStepDataCopy = JSON.parse(JSON.stringify(containerStepData))
|
||||||
const containerStepDataCopy = JSON.parse(
|
process.env.GITHUB_WORKSPACE = tmpOutputDir
|
||||||
JSON.stringify(definitions.runContainerStep)
|
|
||||||
)
|
|
||||||
|
|
||||||
containerStepDataCopy.args.dockerfile = 'Dockerfile'
|
containerStepDataCopy.args.dockerfile = 'Dockerfile'
|
||||||
|
containerStepDataCopy.args.context = '.'
|
||||||
|
console.log(containerStepDataCopy.args)
|
||||||
await expect(
|
await expect(
|
||||||
runContainerStep(containerStepDataCopy.args, resp.state)
|
runContainerStep(containerStepDataCopy.args, resp.state)
|
||||||
).resolves.not.toThrow()
|
).resolves.not.toThrow()
|
||||||
|
await expect(cleanupJob(resp, resp.state, null)).resolves.not.toThrow()
|
||||||
await expect(cleanupJob()).resolves.not.toThrow()
|
|
||||||
})
|
})
|
||||||
})
|
})
|
||||||
|
|||||||
@@ -1,18 +1,40 @@
 import * as fs from 'fs'
+import { v4 as uuidv4 } from 'uuid'
 import { prepareJob } from '../src/hooks'
 import TestSetup from './test-setup'
 
 jest.useRealTimers()
 
-let prepareJobDefinition
+let prepareJobOutputPath: string
+let prepareJobData: any
+const tmpOutputDir = `${__dirname}/_temp/${uuidv4()}`
+const prepareJobInputPath = `${__dirname}/../../../examples/prepare-job.json`
 
 let testSetup: TestSetup
 
 describe('prepare job', () => {
-beforeEach(() => {
+beforeAll(() => {
+fs.mkdirSync(tmpOutputDir, { recursive: true })
+})
+
+afterAll(() => {
+fs.rmSync(tmpOutputDir, { recursive: true })
+})
+
+beforeEach(async () => {
 testSetup = new TestSetup()
 testSetup.initialize()
-prepareJobDefinition = testSetup.getPrepareJobDefinition()
+let prepareJobRawData = fs.readFileSync(prepareJobInputPath, 'utf8')
+prepareJobData = JSON.parse(prepareJobRawData.toString())
+
+prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
+prepareJobData.args.container.systemMountVolumes =
+testSetup.systemMountVolumes
+prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
+
+prepareJobOutputPath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
+fs.writeFileSync(prepareJobOutputPath, '')
 })
 
 afterEach(() => {
@@ -20,68 +42,38 @@ describe('prepare job', () => {
 })
 
 it('should not throw', async () => {
-const prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output.json'
-)
 await expect(
-prepareJob(prepareJobDefinition.args, prepareJobOutput)
+prepareJob(prepareJobData.args, prepareJobOutputPath)
 ).resolves.not.toThrow()
 
-expect(() => fs.readFileSync(prepareJobOutput, 'utf-8')).not.toThrow()
+expect(() => fs.readFileSync(prepareJobOutputPath, 'utf-8')).not.toThrow()
 })
 
 it('should have JSON output written to a file', async () => {
-const prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output.json'
-)
-await prepareJob(prepareJobDefinition.args, prepareJobOutput)
-const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
+await prepareJob(prepareJobData.args, prepareJobOutputPath)
+const prepareJobOutputContent = fs.readFileSync(
+prepareJobOutputPath,
+'utf-8'
+)
 expect(() => JSON.parse(prepareJobOutputContent)).not.toThrow()
 })
 
 it('should have context written to a file', async () => {
-const prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output.json'
-)
-await prepareJob(prepareJobDefinition.args, prepareJobOutput)
-const parsedPrepareJobOutput = JSON.parse(
-fs.readFileSync(prepareJobOutput, 'utf-8')
-)
+await prepareJob(prepareJobData.args, prepareJobOutputPath)
+const prepareJobOutputContent = fs.readFileSync(
+prepareJobOutputPath,
+'utf-8'
+)
+const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
 expect(parsedPrepareJobOutput.context).toBeDefined()
 })
 
-it('should have isAlpine field set correctly', async () => {
-let prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output-alpine.json'
-)
-const prepareJobArgsClone = JSON.parse(
-JSON.stringify(prepareJobDefinition.args)
-)
-prepareJobArgsClone.container.image = 'alpine:latest'
-await prepareJob(prepareJobArgsClone, prepareJobOutput)
-
-let parsedPrepareJobOutput = JSON.parse(
-fs.readFileSync(prepareJobOutput, 'utf-8')
-)
-expect(parsedPrepareJobOutput.isAlpine).toBe(true)
-
-prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output-ubuntu.json'
-)
-prepareJobArgsClone.container.image = 'ubuntu:latest'
-await prepareJob(prepareJobArgsClone, prepareJobOutput)
-parsedPrepareJobOutput = JSON.parse(
-fs.readFileSync(prepareJobOutput, 'utf-8')
-)
-expect(parsedPrepareJobOutput.isAlpine).toBe(false)
-})
-
 it('should have container ids written to file', async () => {
-const prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output.json'
-)
-await prepareJob(prepareJobDefinition.args, prepareJobOutput)
-const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
+await prepareJob(prepareJobData.args, prepareJobOutputPath)
+const prepareJobOutputContent = fs.readFileSync(
+prepareJobOutputPath,
+'utf-8'
+)
 const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
 
 expect(parsedPrepareJobOutput.context.container.id).toBeDefined()
@@ -90,11 +82,11 @@ describe('prepare job', () => {
 })
 
 it('should have ports for context written in form [containerPort]:[hostPort]', async () => {
-const prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output.json'
-)
-await prepareJob(prepareJobDefinition.args, prepareJobOutput)
-const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
+await prepareJob(prepareJobData.args, prepareJobOutputPath)
+const prepareJobOutputContent = fs.readFileSync(
+prepareJobOutputPath,
+'utf-8'
+)
 const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
 
 const mainContainerPorts = parsedPrepareJobOutput.context.container.ports
@@ -108,14 +100,4 @@ describe('prepare job', () => {
 expect(redisServicePorts['80']).toBe('8080')
 expect(redisServicePorts['8080']).toBe('8088')
 })
 
-it('should run prepare job without job container without exception', async () => {
-prepareJobDefinition.args.container = null
-const prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output.json'
-)
-await expect(
-prepareJob(prepareJobDefinition.args, prepareJobOutput)
-).resolves.not.toThrow()
-})
-
 })
@@ -1,63 +0,0 @@
-import * as fs from 'fs'
-import { PrepareJobResponse } from 'hooklib/lib'
-import { prepareJob, runScriptStep } from '../src/hooks'
-import TestSetup from './test-setup'
-
-jest.useRealTimers()
-
-let testSetup: TestSetup
-
-let definitions
-
-let prepareJobResponse: PrepareJobResponse
-
-describe('run script step', () => {
-beforeEach(async () => {
-testSetup = new TestSetup()
-testSetup.initialize()
-
-definitions = {
-prepareJob: testSetup.getPrepareJobDefinition(),
-runScriptStep: testSetup.getRunScriptStepDefinition()
-}
-
-const prepareJobOutput = testSetup.createOutputFile(
-'prepare-job-output.json'
-)
-await prepareJob(definitions.prepareJob.args, prepareJobOutput)
-
-prepareJobResponse = JSON.parse(fs.readFileSync(prepareJobOutput, 'utf-8'))
-})
-
-it('Should run script step without exceptions', async () => {
-await expect(
-runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
-).resolves.not.toThrow()
-})
-
-it('Should have path variable changed in container with prepend path string', async () => {
-definitions.runScriptStep.args.prependPath = '/some/path'
-definitions.runScriptStep.args.entryPoint = '/bin/bash'
-definitions.runScriptStep.args.entryPointArgs = [
-'-c',
-`if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath}:"* ]]; then exit 1; fi`
-]
-await expect(
-runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
-).resolves.not.toThrow()
-})
-
-it('Should have path variable changed in container with prepend path string array', async () => {
-definitions.runScriptStep.args.prependPath = ['/some/other/path']
-definitions.runScriptStep.args.entryPoint = '/bin/bash'
-definitions.runScriptStep.args.entryPointArgs = [
-'-c',
-`if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath.join(
-':'
-)}:"* ]]; then exit 1; fi`
-]
-await expect(
-runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
-).resolves.not.toThrow()
-})
-})
@@ -1,15 +1,11 @@
 import * as fs from 'fs'
-import { Mount } from 'hooklib'
-import { HookData } from 'hooklib/lib'
-import * as path from 'path'
-import { env } from 'process'
 import { v4 as uuidv4 } from 'uuid'
+import { env } from 'process'
+import { Mount } from 'hooklib'
 
 export default class TestSetup {
 private testdir: string
 private runnerMockDir: string
-readonly runnerOutputDir: string
 
 private runnerMockSubdirs = {
 work: '_work',
 externals: 'externals',
@@ -20,16 +16,15 @@ export default class TestSetup {
 githubWorkflow: '_work/_temp/_github_workflow'
 }
 
-private readonly projectName = 'repo'
+private readonly projectName = 'example'
 
 constructor() {
 this.testdir = `${__dirname}/_temp/${uuidv4()}`
 this.runnerMockDir = `${this.testdir}/runner/_layout`
-this.runnerOutputDir = `${this.testdir}/outputs`
 }
 
 private get allTestDirectories() {
-const resp = [this.testdir, this.runnerMockDir, this.runnerOutputDir]
+const resp = [this.testdir, this.runnerMockDir]
 
 for (const [key, value] of Object.entries(this.runnerMockSubdirs)) {
 resp.push(`${this.runnerMockDir}/${value}`)
@@ -43,27 +38,30 @@ export default class TestSetup {
 }
 
 public initialize(): void {
-env['GITHUB_WORKSPACE'] = this.workingDirectory
+for (const dir of this.allTestDirectories) {
+fs.mkdirSync(dir, { recursive: true })
+}
 env['RUNNER_NAME'] = 'test'
 env[
 'RUNNER_TEMP'
 ] = `${this.runnerMockDir}/${this.runnerMockSubdirs.workTemp}`
 
-for (const dir of this.allTestDirectories) {
-fs.mkdirSync(dir, { recursive: true })
-}
-
-fs.copyFileSync(
-path.resolve(`${__dirname}/../../../examples/example-script.sh`),
-`${env.RUNNER_TEMP}/example-script.sh`
-)
 }
 
 public teardown(): void {
 fs.rmdirSync(this.testdir, { recursive: true })
 }
 
-private get systemMountVolumes(): Mount[] {
+public get userMountVolumes(): Mount[] {
+return [
+{
+sourceVolumePath: 'my_docker_volume',
+targetVolumePath: '/volume_mount',
+readOnly: false
+}
+]
+}
+
+public get systemMountVolumes(): Mount[] {
 return [
 {
 sourceVolumePath: '/var/run/docker.sock',
@@ -108,89 +106,7 @@ export default class TestSetup {
 ]
 }
 
-public createOutputFile(name: string): string {
-let filePath = path.join(this.runnerOutputDir, name || `${uuidv4()}.json`)
-fs.writeFileSync(filePath, '')
-return filePath
-}
-
 public get workingDirectory(): string {
-return `${this.runnerMockDir}/_work/${this.projectName}/${this.projectName}`
-}
-
-public get containerWorkingDirectory(): string {
 return `/__w/${this.projectName}/${this.projectName}`
 }
 
-public initializeDockerAction(): string {
-const actionPath = `${this.testdir}/_actions/example-handle/example-repo/example-branch/mock-directory`
-fs.mkdirSync(actionPath, { recursive: true })
-this.writeDockerfile(actionPath)
-this.writeEntrypoint(actionPath)
-return actionPath
-}
-
-private writeDockerfile(actionPath: string) {
-const content = `FROM alpine:3.10
-COPY entrypoint.sh /entrypoint.sh
-ENTRYPOINT ["/entrypoint.sh"]`
-fs.writeFileSync(`${actionPath}/Dockerfile`, content)
-}
-
-private writeEntrypoint(actionPath) {
-const content = `#!/bin/sh -l
-echo "Hello $1"
-time=$(date)
-echo "::set-output name=time::$time"`
-const entryPointPath = `${actionPath}/entrypoint.sh`
-fs.writeFileSync(entryPointPath, content)
-fs.chmodSync(entryPointPath, 0o755)
-}
-
-public getPrepareJobDefinition(): HookData {
-const prepareJob = JSON.parse(
-fs.readFileSync(
-path.resolve(__dirname + '/../../../examples/prepare-job.json'),
-'utf8'
-)
-)
-
-prepareJob.args.container.systemMountVolumes = this.systemMountVolumes
-prepareJob.args.container.workingDirectory = this.workingDirectory
-prepareJob.args.container.userMountVolumes = undefined
-prepareJob.args.container.registry = null
-prepareJob.args.services.forEach(s => {
-s.registry = null
-})
-
-return prepareJob
-}
-
-public getRunScriptStepDefinition(): HookData {
-const runScriptStep = JSON.parse(
-fs.readFileSync(
-path.resolve(__dirname + '/../../../examples/run-script-step.json'),
-'utf8'
-)
-)
-
-runScriptStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
-return runScriptStep
-}
-
-public getRunContainerStepDefinition(): HookData {
-const runContainerStep = JSON.parse(
-fs.readFileSync(
-path.resolve(__dirname + '/../../../examples/run-container-step.json'),
-'utf8'
-)
-)
-
-runContainerStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
-runContainerStep.args.systemMountVolumes = this.systemMountVolumes
-runContainerStep.args.workingDirectory = this.workingDirectory
-runContainerStep.args.userMountVolumes = undefined
-runContainerStep.args.registry = null
-return runContainerStep
-}
 }
@@ -1,4 +1,4 @@
-import { optionsWithDockerEnvs, sanitize } from '../src/utils'
+import { sanitize } from '../src/utils'
 
 describe('Utilities', () => {
 it('should return sanitized image name', () => {
@@ -9,41 +9,4 @@ describe('Utilities', () => {
 const validStr = 'teststr8_one'
 expect(sanitize(validStr)).toBe(validStr)
 })
-
-describe('with docker options', () => {
-it('should augment options with docker environment variables', () => {
-process.env.DOCKER_HOST = 'unix:///run/user/1001/docker.sock'
-process.env.DOCKER_NOTEXIST = 'notexist'
-
-const optionDefinitions: any = [
-undefined,
-{},
-{ env: {} },
-{ env: { DOCKER_HOST: 'unix://var/run/docker.sock' } }
-]
-for (const opt of optionDefinitions) {
-let options = optionsWithDockerEnvs(opt)
-expect(options).toBeDefined()
-expect(options?.env).toBeDefined()
-expect(options?.env?.DOCKER_HOST).toBe(process.env.DOCKER_HOST)
-expect(options?.env?.DOCKER_NOTEXIST).toBeUndefined()
-}
-})
-
-it('should not overwrite other options', () => {
-process.env.DOCKER_HOST = 'unix:///run/user/1001/docker.sock'
-const opt = {
-workingDir: 'test',
-input: Buffer.from('test')
-}
-
-const options = optionsWithDockerEnvs(opt)
-expect(options).toBeDefined()
-expect(options?.workingDir).toBe(opt.workingDir)
-expect(options?.input).toBe(opt.input)
-expect(options?.env).toStrictEqual({
-DOCKER_HOST: process.env.DOCKER_HOST
-})
-})
-})
 })
packages/hooklib/package-lock.json (generated) | 61
@@ -9,7 +9,7 @@
 "version": "0.1.0",
 "license": "MIT",
 "dependencies": {
-"@actions/core": "^1.9.1"
+"@actions/core": "^1.6.0"
 },
 "devDependencies": {
 "@types/node": "^17.0.23",
@@ -22,20 +22,19 @@
 }
 },
 "node_modules/@actions/core": {
-"version": "1.9.1",
-"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz",
-"integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==",
+"version": "1.6.0",
+"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
+"integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
 "dependencies": {
-"@actions/http-client": "^2.0.1",
-"uuid": "^8.3.2"
+"@actions/http-client": "^1.0.11"
 }
 },
 "node_modules/@actions/http-client": {
-"version": "2.0.1",
-"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz",
-"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==",
+"version": "1.0.11",
+"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
+"integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
 "dependencies": {
-"tunnel": "^0.0.6"
+"tunnel": "0.0.6"
 }
 },
 "node_modules/@eslint/eslintrc": {
@@ -1742,9 +1741,9 @@
 "dev": true
 },
 "node_modules/json5": {
-"version": "1.0.2",
-"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
-"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
+"version": "1.0.1",
+"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
+"integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
 "dev": true,
 "dependencies": {
 "minimist": "^1.2.0"
@@ -2486,14 +2485,6 @@
 "punycode": "^2.1.0"
 }
 },
-"node_modules/uuid": {
-"version": "8.3.2",
-"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
-"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==",
-"bin": {
-"uuid": "dist/bin/uuid"
-}
-},
 "node_modules/v8-compile-cache": {
 "version": "2.3.0",
 "resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.3.0.tgz",
@@ -2555,20 +2546,19 @@
 },
 "dependencies": {
 "@actions/core": {
-"version": "1.9.1",
-"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz",
-"integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==",
+"version": "1.6.0",
+"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
+"integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
 "requires": {
-"@actions/http-client": "^2.0.1",
-"uuid": "^8.3.2"
+"@actions/http-client": "^1.0.11"
 }
 },
 "@actions/http-client": {
-"version": "2.0.1",
-"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz",
-"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==",
+"version": "1.0.11",
+"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
+"integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
 "requires": {
-"tunnel": "^0.0.6"
+"tunnel": "0.0.6"
 }
 },
 "@eslint/eslintrc": {
@@ -3789,9 +3779,9 @@
 "dev": true
 },
 "json5": {
-"version": "1.0.2",
-"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
-"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
+"version": "1.0.1",
+"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
+"integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
 "dev": true,
 "requires": {
 "minimist": "^1.2.0"
@@ -4310,11 +4300,6 @@
 "punycode": "^2.1.0"
 }
 },
-"uuid": {
-"version": "8.3.2",
-"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
-"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg=="
-},
 "v8-compile-cache": {
 "version": "2.3.0",
 "resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.3.0.tgz",
@@ -23,6 +23,6 @@
 "typescript": "^4.6.3"
 },
 "dependencies": {
-"@actions/core": "^1.9.1"
+"@actions/core": "^1.6.0"
 }
 }
@@ -34,7 +34,6 @@ export interface ContainerInfo {
 createOptions?: string
 environmentVariables?: { [key: string]: string }
 userMountVolumes?: Mount[]
-systemMountVolumes?: Mount[]
 registry?: Registry
 portMappings?: string[]
 }
@@ -74,6 +73,14 @@ export enum Protocol {
 UDP = 'udp'
 }
+
+export enum PodPhase {
+PENDING = 'Pending',
+RUNNING = 'Running',
+SUCCEEDED = 'Succeded',
+FAILED = 'Failed',
+UNKNOWN = 'Unknown'
+}
+
 export interface PrepareJobResponse {
 state?: object
 context?: ContainerContext
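The `PodPhase` enum added above mirrors the Kubernetes pod lifecycle (note that Kubernetes itself reports the terminal success phase as the string `Succeeded`). A minimal sketch of how a hook might use such an enum to decide when to stop polling a pod; the local enum copy and the helper name `isTerminalPhase` are illustrative, not part of the hooklib API:

```typescript
// Local copy of the enum, using the phase strings as Kubernetes reports them.
enum PodPhase {
  PENDING = 'Pending',
  RUNNING = 'Running',
  SUCCEEDED = 'Succeeded',
  FAILED = 'Failed',
  UNKNOWN = 'Unknown'
}

// A pod is terminal once it has succeeded or failed;
// Pending/Running/Unknown mean the caller should keep polling.
function isTerminalPhase(phase: PodPhase): boolean {
  return phase === PodPhase.SUCCEEDED || phase === PodPhase.FAILED
}

console.log(isTerminalPhase(PodPhase.RUNNING)) // false
console.log(isTerminalPhase(PodPhase.FAILED)) // true
```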
@@ -1,3 +1,4 @@
+import * as core from '@actions/core'
 import * as events from 'events'
 import * as fs from 'fs'
 import * as os from 'os'
@@ -12,6 +13,7 @@ export async function getInputFromStdin(): Promise<HookData> {
 })
 
 rl.on('line', line => {
+core.debug(`Line from STDIN: ${line}`)
 input = line
 })
 await events.default.once(rl, 'close')
@@ -6,40 +6,7 @@ This implementation provides a way to dynamically spin up jobs to run container
 ## Pre-requisites
 Some things are expected to be set when using these hooks
 - The runner itself should be running in a pod, with a service account with the following permissions
-```
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  namespace: default
-  name: runner-role
-rules:
-  - apiGroups: [""]
-    resources: ["pods"]
-    verbs: ["get", "list", "create", "delete"]
-  - apiGroups: [""]
-    resources: ["pods/exec"]
-    verbs: ["get", "create"]
-  - apiGroups: [""]
-    resources: ["pods/log"]
-    verbs: ["get", "list", "watch",]
-  - apiGroups: ["batch"]
-    resources: ["jobs"]
-    verbs: ["get", "list", "create", "delete"]
-  - apiGroups: [""]
-    resources: ["secrets"]
-    verbs: ["get", "list", "create", "delete"]
-```
+- The `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER=true` should be set to true
 - The `ACTIONS_RUNNER_POD_NAME` env should be set to the name of the pod
-- The `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` env should be set to true to prevent the runner from running any jobs outside of a container
 - The runner pod should map a persistent volume claim into the `_work` directory
-- The `ACTIONS_RUNNER_CLAIM_NAME` env should be set to the persistent volume claim that contains the runner's working directory, otherwise it defaults to `${ACTIONS_RUNNER_POD_NAME}-work`
+- The `ACTIONS_RUNNER_CLAIM_NAME` should be set to the persistent volume claim that contains the runner's working directory
-- Some actions runner env's are expected to be set. These are set automatically by the runner.
-  - `RUNNER_WORKSPACE` is expected to be set to the workspace of the runner
-  - `GITHUB_WORKSPACE` is expected to be set to the workspace of the job
-
-
-## Limitations
-- A [job containers](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) will be required for all jobs
-- Building container actions from a dockerfile is not supported at this time
-- Container actions will not have access to the services network or job container network
-- Docker [create options](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontaineroptions) are not supported
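The claim-name rule that this README hunk rewrites can be restated as a small sketch. `volumeClaimName` and its env-record parameter are stand-ins for the real `getVolumeClaimName` helper, which reads `process.env` directly:

```typescript
// Sketch of the v0.3.2 claim-name rule described above:
// ACTIONS_RUNNER_CLAIM_NAME wins when set, otherwise the name defaults to
// `${ACTIONS_RUNNER_POD_NAME}-work`.
function volumeClaimName(env: { [key: string]: string | undefined }): string {
  const name = env.ACTIONS_RUNNER_CLAIM_NAME
  if (!name) {
    return `${env.ACTIONS_RUNNER_POD_NAME}-work`
  }
  return name
}

console.log(volumeClaimName({ ACTIONS_RUNNER_POD_NAME: 'runner-abc12' }))
// runner-abc12-work
```

The compared branch drops the fallback and throws instead, as the constants.ts hunk further down shows.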
@@ -1 +1 @@
-jest.setTimeout(500000)
+jest.setTimeout(90000)
packages/k8s/package-lock.json (generated)
File diff suppressed because it is too large
@@ -13,10 +13,10 @@
   "author": "",
   "license": "MIT",
   "dependencies": {
-    "@actions/core": "^1.9.1",
+    "@actions/core": "^1.6.0",
     "@actions/exec": "^1.1.1",
     "@actions/io": "^1.1.2",
-    "@kubernetes/client-node": "^0.18.1",
+    "@kubernetes/client-node": "^0.16.3",
     "hooklib": "file:../hooklib"
   },
   "devDependencies": {
@@ -1,5 +1,5 @@
-import { prunePods, pruneSecrets } from '../k8s'
+import { podPrune } from '../k8s'
 
 export async function cleanupJob(): Promise<void> {
-  await Promise.all([prunePods(), pruneSecrets()])
+  await podPrune()
 }
@@ -20,33 +20,28 @@ export function getJobPodName(): string {
 export function getStepPodName(): string {
   return `${getRunnerPodName().substring(
     0,
-    MAX_POD_NAME_LENGTH - ('-step-'.length + STEP_POD_NAME_SUFFIX_LENGTH)
+    MAX_POD_NAME_LENGTH - ('-step'.length + STEP_POD_NAME_SUFFIX_LENGTH)
   )}-step-${uuidv4().substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
 }
 
 export function getVolumeClaimName(): string {
   const name = process.env.ACTIONS_RUNNER_CLAIM_NAME
   if (!name) {
-    return `${getRunnerPodName()}-work`
+    throw new Error(
+      "'ACTIONS_RUNNER_CLAIM_NAME' is required, please contact your self hosted runner administrator"
+    )
   }
   return name
 }
 
-export function getSecretName(): string {
-  return `${getRunnerPodName().substring(
-    0,
-    MAX_POD_NAME_LENGTH - ('-secret-'.length + STEP_POD_NAME_SUFFIX_LENGTH)
-  )}-secret-${uuidv4().substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
-}
-
-export const MAX_POD_NAME_LENGTH = 63
-export const STEP_POD_NAME_SUFFIX_LENGTH = 8
+const MAX_POD_NAME_LENGTH = 63
+const STEP_POD_NAME_SUFFIX_LENGTH = 8
 export const JOB_CONTAINER_NAME = 'job'
 
 export class RunnerInstanceLabel {
-  private podName: string
+  runnerhook: string
   constructor() {
-    this.podName = getRunnerPodName()
+    this.runnerhook = process.env.ACTIONS_RUNNER_POD_NAME as string
   }
 
   get key(): string {
@@ -54,10 +49,10 @@ export class RunnerInstanceLabel {
   }
 
   get value(): string {
-    return this.podName
+    return this.runnerhook
   }
 
   toString(): string {
-    return `runner-pod=${this.podName}`
+    return `runner-pod=${this.runnerhook}`
   }
 }
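The truncation in `getStepPodName` above keeps generated names inside the 63-character Kubernetes object-name limit. The sketch below restates the v0.3.2 arithmetic with the random suffix passed in explicitly (the real helper draws it from `uuidv4()`):

```typescript
// Restatement of the v0.3.2 getStepPodName truncation: the runner pod name is
// cut so that base + '-step-' + an 8-char suffix never exceeds 63 characters.
const MAX_POD_NAME_LENGTH = 63
const STEP_POD_NAME_SUFFIX_LENGTH = 8

function stepPodName(runnerPodName: string, suffix: string): string {
  const base = runnerPodName.substring(
    0,
    MAX_POD_NAME_LENGTH - ('-step-'.length + STEP_POD_NAME_SUFFIX_LENGTH)
  )
  return `${base}-step-${suffix.substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
}

console.log(stepPodName('a'.repeat(80), 'abcdef12').length) // 63
```

Note that the branch side budgets with `'-step'.length` (5 instead of 6), so for a long runner name the same arithmetic yields a 64-character name, one past the limit.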
@@ -1,21 +1,27 @@
 import * as core from '@actions/core'
 import * as io from '@actions/io'
 import * as k8s from '@kubernetes/client-node'
-import { ContextPorts, prepareJobArgs, writeToResponseFile } from 'hooklib'
+import {
+  ContextPorts,
+  PodPhase,
+  prepareJobArgs,
+  writeToResponseFile
+} from 'hooklib'
 import path from 'path'
 import {
   containerPorts,
   createPod,
+  isAuthPermissionsOK,
   isPodContainerAlpine,
-  prunePods,
+  namespace,
+  podPrune,
+  requiredPermissions,
   waitForPodPhases
 } from '../k8s'
 import {
   containerVolumes,
   DEFAULT_CONTAINER_ENTRY_POINT,
-  DEFAULT_CONTAINER_ENTRY_POINT_ARGS,
-  generateContainerName,
-  PodPhase
+  DEFAULT_CONTAINER_ENTRY_POINT_ARGS
 } from '../k8s/utils'
 import { JOB_CONTAINER_NAME } from './constants'
 
@@ -23,23 +29,26 @@ export async function prepareJob(
   args: prepareJobArgs,
   responseFile
 ): Promise<void> {
-  if (!args.container) {
-    throw new Error('Job Container is required.')
+  await podPrune()
+  if (!(await isAuthPermissionsOK())) {
+    throw new Error(
+      `The Service account needs the following permissions ${JSON.stringify(
+        requiredPermissions
+      )} on the pod resource in the '${namespace}' namespace. Please contact your self hosted runner administrator.`
+    )
   }
 
-  await prunePods()
   await copyExternalsToRoot()
   let container: k8s.V1Container | undefined = undefined
   if (args.container?.image) {
-    core.debug(`Using image '${args.container.image}' for job image`)
-    container = createContainerSpec(args.container, JOB_CONTAINER_NAME, true)
+    core.info(`Using image '${args.container.image}' for job image`)
+    container = createPodSpec(args.container, JOB_CONTAINER_NAME, true)
   }
 
   let services: k8s.V1Container[] = []
   if (args.services?.length) {
     services = args.services.map(service => {
-      core.debug(`Adding service '${service.image}' to pod definition`)
-      return createContainerSpec(service, generateContainerName(service.image))
+      core.info(`Adding service '${service.image}' to pod definition`)
+      return createPodSpec(service, service.image.split(':')[0])
     })
   }
   if (!container && !services?.length) {
@@ -47,18 +56,15 @@ export async function prepareJob(
   }
   let createdPod: k8s.V1Pod | undefined = undefined
   try {
-    createdPod = await createPod(container, services, args.container.registry)
+    createdPod = await createPod(container, services, args.registry)
   } catch (err) {
-    await prunePods()
+    await podPrune()
     throw new Error(`failed to create job pod: ${err}`)
   }
 
   if (!createdPod?.metadata?.name) {
     throw new Error('created pod should have metadata.name')
   }
-  core.debug(
-    `Job pod created, waiting for it to come online ${createdPod?.metadata?.name}`
-  )
 
   try {
     await waitForPodPhases(
@@ -67,11 +73,11 @@ export async function prepareJob(
       new Set([PodPhase.PENDING])
     )
   } catch (err) {
-    await prunePods()
+    await podPrune()
     throw new Error(`Pod failed to come online with error: ${err}`)
   }
 
-  core.debug('Job pod is ready for traffic')
+  core.info('Pod is ready for traffic')
 
   let isAlpine = false
   try {
@@ -82,7 +88,7 @@ export async function prepareJob(
   } catch (err) {
     throw new Error(`Failed to determine if the pod is alpine: ${err}`)
   }
-  core.debug(`Setting isAlpine to ${isAlpine}`)
+
   generateResponseFile(responseFile, createdPod, isAlpine)
 }
 
@@ -91,13 +97,8 @@ function generateResponseFile(
   appPod: k8s.V1Pod,
   isAlpine
 ): void {
-  if (!appPod.metadata?.name) {
-    throw new Error('app pod must have metadata.name specified')
-  }
   const response = {
-    state: {
-      jobPod: appPod.metadata.name
-    },
+    state: {},
     context: {},
     isAlpine
   }
@@ -125,11 +126,13 @@ function generateResponseFile(
   )
   if (serviceContainers?.length) {
     response.context['services'] = serviceContainers.map(c => {
+      if (!c.ports) {
+        return
+      }
+
       const ctxPorts: ContextPorts = {}
-      if (c.ports?.length) {
-        for (const port of c.ports) {
-          ctxPorts[port.containerPort] = port.hostPort
-        }
+      for (const port of c.ports) {
+        ctxPorts[port.containerPort] = port.hostPort
       }
 
       return {
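The service-port mapping in the hunk above can be restated standalone; the interfaces here are trimmed stand-ins for the `@kubernetes/client-node` container-port types:

```typescript
// Sketch of the per-service port mapping above: collect a
// containerPort -> hostPort record for each service container.
interface PortMapping {
  containerPort: number
  hostPort: number
}
type ContextPorts = { [containerPort: string]: number }

function mapPorts(ports: PortMapping[] | undefined): ContextPorts | undefined {
  if (!ports) {
    return undefined // mirrors the branch's early `return` for port-less services
  }
  const ctxPorts: ContextPorts = {}
  for (const port of ports) {
    ctxPorts[port.containerPort] = port.hostPort
  }
  return ctxPorts
}

console.log(mapPorts([{ containerPort: 5432, hostPort: 30432 }]))
// { '5432': 30432 }
```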
@@ -152,31 +155,33 @@ async function copyExternalsToRoot(): Promise<void> {
   }
 }
 
-export function createContainerSpec(
+function createPodSpec(
   container,
   name: string,
   jobContainer = false
 ): k8s.V1Container {
-  if (!container.entryPoint && jobContainer) {
-    container.entryPoint = DEFAULT_CONTAINER_ENTRY_POINT
+  core.info(JSON.stringify(container))
+  if (!container.entryPointArgs) {
     container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
   }
+  container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
+  if (!container.entryPoint) {
+    container.entryPoint = DEFAULT_CONTAINER_ENTRY_POINT
+  }
   const podContainer = {
     name,
     image: container.image,
+    command: [container.entryPoint],
+    args: container.entryPointArgs,
     ports: containerPorts(container)
   } as k8s.V1Container
 
   if (container.workingDirectory) {
     podContainer.workingDir = container.workingDirectory
   }
 
-  if (container.entryPoint) {
-    podContainer.command = [container.entryPoint]
-  }
-
-  if (container.entryPointArgs?.length > 0) {
-    podContainer.args = container.entryPointArgs
+  if (container.createOptions) {
+    podContainer.resources = getResourceRequirements(container.createOptions)
   }
 
   podContainer.env = []
@@ -195,3 +200,62 @@ export function createContainerSpec(
 
   return podContainer
 }
+
+function getResourceRequirements(
+  createOptions: string
+): k8s.V1ResourceRequirements {
+  const rr = new k8s.V1ResourceRequirements()
+  rr.limits = {}
+  rr.requests = {}
+
+  const options = parseOptions(createOptions)
+  for (const [key, value] of Object.entries(options)) {
+    switch (key) {
+      case '--cpus':
+        rr.requests.cpu = value
+        break
+      case '--memory':
+      case '-m':
+        rr.limits.memory = value
+        break
+      default:
+        core.warning(
+          `Container option ${key} is not supported. Supported options are ['--cpus', '--memory', '-m']`
+        )
+    }
+  }
+
+  return rr
+}
+
+function parseOptions(options: string): { [option: string]: string } {
+  const rv: { [option: string]: string } = {}
+
+  const spaceSplit = options.split(' ')
+  for (let i = 0; i < spaceSplit.length; i++) {
+    if (!spaceSplit[i].startsWith('-')) {
+      throw new Error(`Options specified in wrong format: ${options}`)
+    }
+
+    const optSplit = spaceSplit[i].split('=')
+    const optName = optSplit[0]
+    let optValue = ''
+    switch (optSplit.length) {
+      case 1:
+        if (spaceSplit.length <= i + 1) {
+          throw new Error(`Option ${optName} must have a value`)
+        }
+        optValue = spaceSplit[++i]
+        break
+      case 2:
+        optValue = optSplit[1]
+        break
+      default:
+        throw new Error(`failed to parse option ${spaceSplit[i]}`)
+    }
+
+    rv[optName] = optValue
+  }
+
+  return rv
+}
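The `parseOptions` helper this hunk adds accepts docker-style `--opt=value` and `--opt value` pairs; it can be exercised standalone, lifted verbatim from the diff:

```typescript
// Standalone copy of the parseOptions helper added above: parses
// `--opt=value` or `--opt value` pairs into a string map, throwing on
// anything that does not start with a dash.
function parseOptions(options: string): { [option: string]: string } {
  const rv: { [option: string]: string } = {}
  const spaceSplit = options.split(' ')
  for (let i = 0; i < spaceSplit.length; i++) {
    if (!spaceSplit[i].startsWith('-')) {
      throw new Error(`Options specified in wrong format: ${options}`)
    }
    const optSplit = spaceSplit[i].split('=')
    const optName = optSplit[0]
    let optValue = ''
    switch (optSplit.length) {
      case 1:
        // `--opt value` form: the value is the next space-separated token
        if (spaceSplit.length <= i + 1) {
          throw new Error(`Option ${optName} must have a value`)
        }
        optValue = spaceSplit[++i]
        break
      case 2:
        optValue = optSplit[1] // `--opt=value` form
        break
      default:
        throw new Error(`failed to parse option ${spaceSplit[i]}`)
    }
    rv[optName] = optValue
  }
  return rv
}

console.log(parseOptions('--cpus=2 --memory 1Gi'))
// { '--cpus': '2', '--memory': '1Gi' }
```

Note that values containing spaces (only possible with quoting in the original workflow YAML) are not handled by the plain `split(' ')`.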
@@ -1,39 +1,22 @@
-import * as core from '@actions/core'
 import * as k8s from '@kubernetes/client-node'
-import { RunContainerStepArgs } from 'hooklib'
+import * as core from '@actions/core'
+import { PodPhase } from 'hooklib'
 import {
   createJob,
-  createSecretForEnvs,
   getContainerJobPodName,
   getPodLogs,
   getPodStatus,
   waitForJobToComplete,
   waitForPodPhases
 } from '../k8s'
-import {
-  containerVolumes,
-  DEFAULT_CONTAINER_ENTRY_POINT,
-  DEFAULT_CONTAINER_ENTRY_POINT_ARGS,
-  PodPhase,
-  writeEntryPointScript
-} from '../k8s/utils'
 import { JOB_CONTAINER_NAME } from './constants'
+import { containerVolumes } from '../k8s/utils'
 
-export async function runContainerStep(
-  stepContainer: RunContainerStepArgs
-): Promise<number> {
+export async function runContainerStep(stepContainer): Promise<number> {
   if (stepContainer.dockerfile) {
     throw new Error('Building container actions is not currently supported')
   }
-  let secretName: string | undefined = undefined
-  if (stepContainer.environmentVariables) {
-    secretName = await createSecretForEnvs(stepContainer.environmentVariables)
-  }
-
-  core.debug(`Created secret ${secretName} for container job envs`)
-  const container = createPodSpec(stepContainer, secretName)
-
+  const container = createPodSpec(stepContainer)
   const job = await createJob(container)
   if (!job.metadata?.name) {
     throw new Error(
@@ -42,69 +25,45 @@ export async function runContainerStep(
     )} to have correctly set the metadata.name`
     )
   }
-  core.debug(`Job created, waiting for pod to start: ${job.metadata?.name}`)
 
   const podName = await getContainerJobPodName(job.metadata.name)
   await waitForPodPhases(
     podName,
-    new Set([PodPhase.COMPLETED, PodPhase.RUNNING, PodPhase.SUCCEEDED]),
-    new Set([PodPhase.PENDING, PodPhase.UNKNOWN])
+    new Set([PodPhase.COMPLETED, PodPhase.RUNNING]),
+    new Set([PodPhase.PENDING])
   )
-  core.debug('Container step is running or complete, pulling logs')
 
   await getPodLogs(podName, JOB_CONTAINER_NAME)
 
-  core.debug('Waiting for container job to complete')
   await waitForJobToComplete(job.metadata.name)
   // pod has failed so pull the status code from the container
   const status = await getPodStatus(podName)
-  if (status?.phase === 'Succeeded') {
+  if (!status?.containerStatuses?.length) {
+    core.warning(`Can't determine container status`)
     return 0
   }
-  if (!status?.containerStatuses?.length) {
-    core.error(
-      `Can't determine container status from response: ${JSON.stringify(
-        status
-      )}`
-    )
-    return 1
-  }
   const exitCode =
     status.containerStatuses[status.containerStatuses.length - 1].state
       ?.terminated?.exitCode
-  return Number(exitCode) || 1
+  return Number(exitCode) || 0
 }
 
-function createPodSpec(
-  container: RunContainerStepArgs,
-  secretName?: string
-): k8s.V1Container {
+function createPodSpec(container): k8s.V1Container {
   const podContainer = new k8s.V1Container()
   podContainer.name = JOB_CONTAINER_NAME
   podContainer.image = container.image
-  const { entryPoint, entryPointArgs } = container
-  container.entryPoint = 'sh'
-
-  const { containerPath } = writeEntryPointScript(
-    container.workingDirectory,
-    entryPoint || DEFAULT_CONTAINER_ENTRY_POINT,
-    entryPoint ? entryPointArgs || [] : DEFAULT_CONTAINER_ENTRY_POINT_ARGS
-  )
-  container.entryPointArgs = ['-e', containerPath]
-  podContainer.command = [container.entryPoint, ...container.entryPointArgs]
-
-  if (secretName) {
-    podContainer.envFrom = [
-      {
-        secretRef: {
-          name: secretName,
-          optional: false
-        }
-      }
-    ]
+  if (container.entryPoint) {
+    podContainer.command = [container.entryPoint, ...container.entryPointArgs]
   }
-  podContainer.volumeMounts = containerVolumes(undefined, false, true)
+  podContainer.env = []
+  for (const [key, value] of Object.entries(
+    container['environmentVariables']
+  )) {
+    if (value && key !== 'HOME') {
+      podContainer.env.push({ name: key, value: value as string })
+    }
+  }
+  podContainer.volumeMounts = containerVolumes()
 
   return podContainer
 }
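The exit-code readout at the end of `runContainerStep` can be restated with a trimmed stand-in for the pod status type. On the v0.3.2 side a `Succeeded` phase returns early, so by the time the container statuses are read the fallback is 1 (failure); the compared branch defaulted to 0 instead:

```typescript
// Restatement of the v0.3.2 exit-code logic above. ContainerStatus is a
// trimmed stand-in for the @kubernetes/client-node status shape.
interface ContainerStatus {
  state?: { terminated?: { exitCode?: number } }
}

function stepExitCode(containerStatuses: ContainerStatus[]): number {
  const exitCode =
    containerStatuses[containerStatuses.length - 1].state?.terminated?.exitCode
  // `|| 1`: a missing, NaN, or zero code counts as failure here, because a
  // genuinely successful pod was already handled by the phase check.
  return Number(exitCode) || 1
}

console.log(stepExitCode([{ state: { terminated: { exitCode: 137 } } }])) // 137
```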
@@ -1,8 +1,6 @@
 /* eslint-disable @typescript-eslint/no-unused-vars */
-import * as fs from 'fs'
 import { RunScriptStepArgs } from 'hooklib'
 import { execPodStep } from '../k8s'
-import { writeEntryPointScript } from '../k8s/utils'
 import { JOB_CONTAINER_NAME } from './constants'
 
 export async function runScriptStep(
@@ -10,26 +8,31 @@ export async function runScriptStep(
   state,
   responseFile
 ): Promise<void> {
-  const { entryPoint, entryPointArgs, environmentVariables } = args
-  const { containerPath, runnerPath } = writeEntryPointScript(
-    args.workingDirectory,
-    entryPoint,
-    entryPointArgs,
-    args.prependPath,
-    environmentVariables
+  const cb = new CommandsBuilder(
+    args.entryPoint,
+    args.entryPointArgs,
+    args.environmentVariables
   )
+  await execPodStep(cb.command, state.jobPod, JOB_CONTAINER_NAME)
+}
 
-  args.entryPoint = 'sh'
-  args.entryPointArgs = ['-e', containerPath]
-  try {
-    await execPodStep(
-      [args.entryPoint, ...args.entryPointArgs],
-      state.jobPod,
-      JOB_CONTAINER_NAME
-    )
-  } catch (err) {
-    throw new Error(`failed to run script step: ${err}`)
-  } finally {
-    fs.rmSync(runnerPath)
+class CommandsBuilder {
+  constructor(
+    private entryPoint: string,
+    private entryPointArgs: string[],
+    private environmentVariables: { [key: string]: string }
+  ) {}
+
+  get command(): string[] {
+    const envCommands: string[] = []
+    if (
+      this.environmentVariables &&
+      Object.entries(this.environmentVariables).length
+    ) {
+      for (const [key, value] of Object.entries(this.environmentVariables)) {
+        envCommands.push(`${key}=${value}`)
+      }
+    }
+    return ['env', ...envCommands, this.entryPoint, ...this.entryPointArgs]
   }
 }
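The `CommandsBuilder.command` getter introduced above can be restated as a plain function: it prefixes the step's entrypoint with `env KEY=VALUE ...` so the variables reach the exec'd process without writing a wrapper script:

```typescript
// Restatement of the CommandsBuilder.command getter from the hunk above.
function buildCommand(
  entryPoint: string,
  entryPointArgs: string[],
  environmentVariables?: { [key: string]: string }
): string[] {
  const envCommands: string[] = []
  for (const [key, value] of Object.entries(environmentVariables ?? {})) {
    envCommands.push(`${key}=${value}`)
  }
  return ['env', ...envCommands, entryPoint, ...entryPointArgs]
}

console.log(buildCommand('sh', ['-c', 'echo hi'], { FOO: 'bar' }))
// [ 'env', 'FOO=bar', 'sh', '-c', 'echo hi' ]
```

This relies on the job container shipping an `env` binary; the `writeEntryPointScript` approach on the v0.3.2 side of these hunks avoids that dependency by mounting a generated script instead.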
@@ -1,4 +1,3 @@
-import * as core from '@actions/core'
 import { Command, getInputFromStdin, prepareJobArgs } from 'hooklib'
 import {
   cleanupJob,
@@ -6,45 +5,40 @@ import {
   runContainerStep,
   runScriptStep
 } from './hooks'
-import { isAuthPermissionsOK, namespace, requiredPermissions } from './k8s'
 
 async function run(): Promise<void> {
+  const input = await getInputFromStdin()
+
+  const args = input['args']
+  const command = input['command']
+  const responseFile = input['responseFile']
+  const state = input['state']
+
+  let exitCode = 0
   try {
-    const input = await getInputFromStdin()
-
-    const args = input['args']
-    const command = input['command']
-    const responseFile = input['responseFile']
-    const state = input['state']
-    if (!(await isAuthPermissionsOK())) {
-      throw new Error(
-        `The Service account needs the following permissions ${JSON.stringify(
-          requiredPermissions
-        )} on the pod resource in the '${namespace()}' namespace. Please contact your self hosted runner administrator.`
-      )
-    }
-
-    let exitCode = 0
     switch (command) {
       case Command.PrepareJob:
         await prepareJob(args as prepareJobArgs, responseFile)
-        return process.exit(0)
+        break
       case Command.CleanupJob:
         await cleanupJob()
-        return process.exit(0)
+        break
       case Command.RunScriptStep:
         await runScriptStep(args, state, null)
-        return process.exit(0)
+        break
       case Command.RunContainerStep:
         exitCode = await runContainerStep(args)
-        return process.exit(exitCode)
+        break
+      case Command.runContainerStep:
       default:
        throw new Error(`Command not recognized: ${command}`)
     }
   } catch (error) {
-    core.error(error as Error)
-    process.exit(1)
+    // eslint-disable-next-line no-console
+    console.log(error)
+    exitCode = 1
   }
+  process.exitCode = exitCode
 }
 
 void run()
@@ -1,16 +1,13 @@
-import * as core from '@actions/core'
 import * as k8s from '@kubernetes/client-node'
-import { ContainerInfo, Registry } from 'hooklib'
+import { ContainerInfo, PodPhase, Registry } from 'hooklib'
 import * as stream from 'stream'
+import { v4 as uuidv4 } from 'uuid'
 import {
   getJobPodName,
   getRunnerPodName,
-  getSecretName,
-  getStepPodName,
   getVolumeClaimName,
   RunnerInstanceLabel
 } from '../hooks/constants'
-import { PodPhase } from './utils'
 
 const kc = new k8s.KubeConfig()
 
@@ -46,15 +43,16 @@ export const requiredPermissions = [
     verbs: ['get', 'list', 'create', 'delete'],
     resource: 'jobs',
     subresource: ''
-  },
-  {
-    group: '',
-    verbs: ['create', 'delete', 'get', 'list'],
-    resource: 'secrets',
-    subresource: ''
   }
 ]
 
+const secretPermission = {
+  group: '',
+  verbs: ['get', 'list', 'create', 'delete'],
+  resource: 'secrets',
+  subresource: ''
+}
+
 export async function createPod(
   jobContainer?: k8s.V1Container,
   services?: k8s.V1Container[],
@@ -94,13 +92,19 @@ export async function createPod(
   ]
 
   if (registry) {
-    const secret = await createDockerSecret(registry)
-    if (!secret?.metadata?.name) {
-      throw new Error(`created secret does not have secret.metadata.name`)
+    if (await isSecretsAuthOK()) {
+      const secret = await createDockerSecret(registry)
+      if (!secret?.metadata?.name) {
+        throw new Error(`created secret does not have secret.metadata.name`)
+      }
+      const secretReference = new k8s.V1LocalObjectReference()
+      secretReference.name = secret.metadata.name
+      appPod.spec.imagePullSecrets = [secretReference]
+    } else {
+      throw new Error(
+        `Pulls from private registry is not allowed. Please contact your self hosted runner administrator. Service account needs permissions for ${secretPermission.verbs} in resource ${secretPermission.resource}`
+      )
     }
-    const secretReference = new k8s.V1LocalObjectReference()
-    secretReference.name = secret.metadata.name
-    appPod.spec.imagePullSecrets = [secretReference]
   }
 
   const { body } = await k8sApi.createNamespacedPod(namespace(), appPod)
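Both sides of the registry branch above wire the created docker secret into the pod spec the same way; a trimmed sketch with a plain object shape in place of the client classes:

```typescript
// Sketch of the imagePullSecrets wiring above; PodSpec is a trimmed stand-in
// for k8s.V1PodSpec, and the secret name would come from the created secret's
// metadata.name.
interface PodSpec {
  imagePullSecrets?: { name: string }[]
}

function attachPullSecret(spec: PodSpec, secretName?: string): PodSpec {
  if (!secretName) {
    // mirrors the guard on secret.metadata.name in both versions
    throw new Error('created secret does not have secret.metadata.name')
  }
  spec.imagePullSecrets = [{ name: secretName }]
  return spec
}

console.log(attachPullSecret({}, 'registry-cred').imagePullSecrets)
// [ { name: 'registry-cred' } ]
```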
@@ -110,14 +114,13 @@ export async function createPod(
 export async function createJob(
   container: k8s.V1Container
 ): Promise<k8s.V1Job> {
-  const runnerInstanceLabel = new RunnerInstanceLabel()
-
   const job = new k8s.V1Job()
 
   job.apiVersion = 'batch/v1'
   job.kind = 'Job'
   job.metadata = new k8s.V1ObjectMeta()
-  job.metadata.name = getStepPodName()
-  job.metadata.labels = { [runnerInstanceLabel.key]: runnerInstanceLabel.value }
+  job.metadata.name = getJobPodName()
+  job.metadata.labels = { 'runner-pod': getRunnerPodName() }
 
   job.spec = new k8s.V1JobSpec()
   job.spec.ttlSecondsAfterFinished = 300
@@ -129,7 +132,7 @@ export async function createJob(
   job.spec.template.spec.restartPolicy = 'Never'
   job.spec.template.spec.nodeName = await getCurrentNodeName()
 
-  const claimName = getVolumeClaimName()
+  const claimName = `${runnerName()}-work`
   job.spec.template.spec.volumes = [
     {
       name: 'work',
@@ -170,13 +173,7 @@ export async function getContainerJobPodName(jobName: string): Promise<string> {
|
|||||||
}
|
}
|
||||||
|
|
||||||
export async function deletePod(podName: string): Promise<void> {
|
export async function deletePod(podName: string): Promise<void> {
|
||||||
await k8sApi.deleteNamespacedPod(
|
await k8sApi.deleteNamespacedPod(podName, namespace())
|
||||||
podName,
|
|
||||||
namespace(),
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
0
|
|
||||||
)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
export async function execPodStep(
|
export async function execPodStep(
|
||||||
@@ -185,32 +182,36 @@ export async function execPodStep(
|
|||||||
containerName: string,
|
containerName: string,
|
||||||
stdin?: stream.Readable
|
stdin?: stream.Readable
|
||||||
): Promise<void> {
|
): Promise<void> {
|
||||||
|
// TODO, we need to add the path from `prependPath` to the PATH variable. How can we do that? Maybe another exec before running this one?
|
||||||
|
// Maybe something like, get the current path, if these entries aren't in it, add them, then set the current path to that?
|
||||||
|
|
||||||
|
// TODO: how do we set working directory? There doesn't seem to be an easy way to do it. Should we cd then execute our bash script?
|
||||||
const exec = new k8s.Exec(kc)
|
const exec = new k8s.Exec(kc)
|
||||||
await new Promise(async function (resolve, reject) {
|
return new Promise(async function (resolve, reject) {
|
||||||
await exec.exec(
|
try {
|
||||||
namespace(),
|
await exec.exec(
|
||||||
podName,
|
namespace(),
|
||||||
containerName,
|
podName,
|
||||||
command,
|
containerName,
|
||||||
process.stdout,
|
command,
|
||||||
process.stderr,
|
process.stdout,
|
||||||
stdin ?? null,
|
process.stderr,
|
||||||
false /* tty */,
|
stdin ?? null,
|
||||||
resp => {
|
false /* tty */,
|
||||||
// kube.exec returns an error if exit code is not 0, but we can't actually get the exit code
|
resp => {
|
||||||
if (resp.status === 'Success') {
|
// kube.exec returns an error if exit code is not 0, but we can't actually get the exit code
|
||||||
resolve(resp.code)
|
if (resp.status === 'Success') {
|
||||||
} else {
|
resolve()
|
||||||
core.debug(
|
} else {
|
||||||
JSON.stringify({
|
reject(
|
||||||
message: resp?.message,
|
JSON.stringify({ message: resp?.message, details: resp?.details })
|
||||||
details: resp?.details
|
)
|
||||||
})
|
}
|
||||||
)
|
|
||||||
reject(resp?.message)
|
|
||||||
}
|
}
|
||||||
}
|
)
|
||||||
)
|
} catch (error) {
|
||||||
|
reject(error)
|
||||||
|
}
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -233,100 +234,46 @@ export async function createDockerSecret(
|
|||||||
): Promise<k8s.V1Secret> {
|
): Promise<k8s.V1Secret> {
|
||||||
const authContent = {
|
const authContent = {
|
||||||
auths: {
|
auths: {
|
||||||
[registry.serverUrl || 'https://index.docker.io/v1/']: {
|
[registry.serverUrl]: {
|
||||||
username: registry.username,
|
username: registry.username,
|
||||||
password: registry.password,
|
password: registry.password,
|
||||||
auth: Buffer.from(`${registry.username}:${registry.password}`).toString(
|
auth: Buffer.from(
|
||||||
|
`${registry.username}:${registry.password}`,
|
||||||
'base64'
|
'base64'
|
||||||
)
|
).toString()
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
const secretName = generateSecretName()
|
||||||
const runnerInstanceLabel = new RunnerInstanceLabel()
|
|
||||||
|
|
||||||
const secretName = getSecretName()
|
|
||||||
const secret = new k8s.V1Secret()
|
const secret = new k8s.V1Secret()
|
||||||
secret.immutable = true
|
secret.immutable = true
|
||||||
secret.apiVersion = 'v1'
|
secret.apiVersion = 'v1'
|
||||||
secret.metadata = new k8s.V1ObjectMeta()
|
secret.metadata = new k8s.V1ObjectMeta()
|
||||||
secret.metadata.name = secretName
|
secret.metadata.name = secretName
|
||||||
secret.metadata.namespace = namespace()
|
|
||||||
secret.metadata.labels = {
|
|
||||||
[runnerInstanceLabel.key]: runnerInstanceLabel.value
|
|
||||||
}
|
|
||||||
secret.type = 'kubernetes.io/dockerconfigjson'
|
|
||||||
secret.kind = 'Secret'
|
secret.kind = 'Secret'
|
||||||
secret.data = {
|
secret.data = {
|
||||||
'.dockerconfigjson': Buffer.from(JSON.stringify(authContent)).toString(
|
'.dockerconfigjson': Buffer.from(
|
||||||
|
JSON.stringify(authContent),
|
||||||
'base64'
|
'base64'
|
||||||
)
|
).toString()
|
||||||
}
|
}
|
||||||
|
|
||||||
const { body } = await k8sApi.createNamespacedSecret(namespace(), secret)
|
const { body } = await k8sApi.createNamespacedSecret(namespace(), secret)
|
||||||
return body
|
return body
|
||||||
}
|
}
|
||||||
|
|
||||||
export async function createSecretForEnvs(envs: {
|
|
||||||
[key: string]: string
|
|
||||||
}): Promise<string> {
|
|
||||||
const runnerInstanceLabel = new RunnerInstanceLabel()
|
|
||||||
|
|
||||||
const secret = new k8s.V1Secret()
|
|
||||||
const secretName = getSecretName()
|
|
||||||
secret.immutable = true
|
|
||||||
secret.apiVersion = 'v1'
|
|
||||||
secret.metadata = new k8s.V1ObjectMeta()
|
|
||||||
secret.metadata.name = secretName
|
|
||||||
|
|
||||||
secret.metadata.labels = {
|
|
||||||
[runnerInstanceLabel.key]: runnerInstanceLabel.value
|
|
||||||
}
|
|
||||||
secret.kind = 'Secret'
|
|
||||||
secret.data = {}
|
|
||||||
for (const [key, value] of Object.entries(envs)) {
|
|
||||||
secret.data[key] = Buffer.from(value).toString('base64')
|
|
||||||
}
|
|
||||||
|
|
||||||
await k8sApi.createNamespacedSecret(namespace(), secret)
|
|
||||||
return secretName
|
|
||||||
}
|
|
||||||
|
|
||||||
export async function deleteSecret(secretName: string): Promise<void> {
|
|
||||||
await k8sApi.deleteNamespacedSecret(secretName, namespace())
|
|
||||||
}
|
|
||||||
|
|
||||||
export async function pruneSecrets(): Promise<void> {
|
|
||||||
const secretList = await k8sApi.listNamespacedSecret(
|
|
||||||
namespace(),
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
new RunnerInstanceLabel().toString()
|
|
||||||
)
|
|
||||||
if (!secretList.body.items.length) {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
await Promise.all(
|
|
||||||
secretList.body.items.map(
|
|
||||||
secret => secret.metadata?.name && deleteSecret(secret.metadata.name)
|
|
||||||
)
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
export async function waitForPodPhases(
|
export async function waitForPodPhases(
|
||||||
podName: string,
|
podName: string,
|
||||||
awaitingPhases: Set<PodPhase>,
|
awaitingPhases: Set<PodPhase>,
|
||||||
backOffPhases: Set<PodPhase>,
|
backOffPhases: Set<PodPhase>,
|
||||||
maxTimeSeconds = 10 * 60 // 10 min
|
maxTimeSeconds = 45 * 60 // 45 min
|
||||||
): Promise<void> {
|
): Promise<void> {
|
||||||
const backOffManager = new BackOffManager(maxTimeSeconds)
|
const backOffManager = new BackOffManager(maxTimeSeconds)
|
||||||
let phase: PodPhase = PodPhase.UNKNOWN
|
let phase: PodPhase = PodPhase.UNKNOWN
|
||||||
try {
|
try {
|
||||||
while (true) {
|
while (true) {
|
||||||
phase = await getPodPhase(podName)
|
phase = await getPodPhase(podName)
|
||||||
|
|
||||||
if (awaitingPhases.has(phase)) {
|
if (awaitingPhases.has(phase)) {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -357,7 +304,7 @@ async function getPodPhase(podName: string): Promise<PodPhase> {
|
|||||||
if (!pod.status?.phase || !podPhaseLookup.has(pod.status.phase)) {
|
if (!pod.status?.phase || !podPhaseLookup.has(pod.status.phase)) {
|
||||||
return PodPhase.UNKNOWN
|
return PodPhase.UNKNOWN
|
||||||
}
|
}
|
||||||
return pod.status?.phase as PodPhase
|
return pod.status?.phase
|
||||||
}
|
}
|
||||||
|
|
||||||
async function isJobSucceeded(jobName: string): Promise<boolean> {
|
async function isJobSucceeded(jobName: string): Promise<boolean> {
|
||||||
@@ -381,7 +328,7 @@ export async function getPodLogs(
|
|||||||
})
|
})
|
||||||
|
|
||||||
logStream.on('error', err => {
|
logStream.on('error', err => {
|
||||||
process.stderr.write(err.message)
|
process.stderr.write(JSON.stringify(err))
|
||||||
})
|
})
|
||||||
|
|
||||||
const r = await log.log(namespace(), podName, containerName, logStream, {
|
const r = await log.log(namespace(), podName, containerName, logStream, {
|
||||||
@@ -393,7 +340,7 @@ export async function getPodLogs(
|
|||||||
await new Promise(resolve => r.on('close', () => resolve(null)))
|
await new Promise(resolve => r.on('close', () => resolve(null)))
|
||||||
}
|
}
|
||||||
|
|
||||||
export async function prunePods(): Promise<void> {
|
export async function podPrune(): Promise<void> {
|
||||||
const podList = await k8sApi.listNamespacedPod(
|
const podList = await k8sApi.listNamespacedPod(
|
||||||
namespace(),
|
namespace(),
|
||||||
undefined,
|
undefined,
|
||||||
@@ -442,6 +389,26 @@ export async function isAuthPermissionsOK(): Promise<boolean> {
|
|||||||
return responses.every(resp => resp.body.status?.allowed)
|
return responses.every(resp => resp.body.status?.allowed)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
export async function isSecretsAuthOK(): Promise<boolean> {
|
||||||
|
const sar = new k8s.V1SelfSubjectAccessReview()
|
||||||
|
const asyncs: Promise<{
|
||||||
|
response: unknown
|
||||||
|
body: k8s.V1SelfSubjectAccessReview
|
||||||
|
}>[] = []
|
||||||
|
for (const verb of secretPermission.verbs) {
|
||||||
|
sar.spec = new k8s.V1SelfSubjectAccessReviewSpec()
|
||||||
|
sar.spec.resourceAttributes = new k8s.V1ResourceAttributes()
|
||||||
|
sar.spec.resourceAttributes.verb = verb
|
||||||
|
sar.spec.resourceAttributes.namespace = namespace()
|
||||||
|
sar.spec.resourceAttributes.group = secretPermission.group
|
||||||
|
sar.spec.resourceAttributes.resource = secretPermission.resource
|
||||||
|
sar.spec.resourceAttributes.subresource = secretPermission.subresource
|
||||||
|
asyncs.push(k8sAuthorizationV1Api.createSelfSubjectAccessReview(sar))
|
||||||
|
}
|
||||||
|
const responses = await Promise.all(asyncs)
|
||||||
|
return responses.every(resp => resp.body.status?.allowed)
|
||||||
|
}
|
||||||
|
|
||||||
export async function isPodContainerAlpine(
|
export async function isPodContainerAlpine(
|
||||||
podName: string,
|
podName: string,
|
||||||
containerName: string
|
containerName: string
|
||||||
@@ -487,6 +454,20 @@ export function namespace(): string {
|
|||||||
return context.namespace
|
return context.namespace
|
||||||
}
|
}
|
||||||
|
|
||||||
|
function generateSecretName(): string {
|
||||||
|
return `github-secret-${uuidv4()}`
|
||||||
|
}
|
||||||
|
|
||||||
|
function runnerName(): string {
|
||||||
|
const name = process.env.ACTIONS_RUNNER_POD_NAME
|
||||||
|
if (!name) {
|
||||||
|
throw new Error(
|
||||||
|
'Failed to determine runner name. "ACTIONS_RUNNER_POD_NAME" env variables should be set.'
|
||||||
|
)
|
||||||
|
}
|
||||||
|
return name
|
||||||
|
}
|
||||||
|
|
||||||
class BackOffManager {
|
class BackOffManager {
|
||||||
private backOffSeconds = 1
|
private backOffSeconds = 1
|
||||||
totalTime = 0
|
totalTime = 0
|
||||||
@@ -516,40 +497,27 @@ class BackOffManager {
|
|||||||
export function containerPorts(
|
export function containerPorts(
|
||||||
container: ContainerInfo
|
container: ContainerInfo
|
||||||
): k8s.V1ContainerPort[] {
|
): k8s.V1ContainerPort[] {
|
||||||
|
// 8080:8080/tcp
|
||||||
|
const portFormat = /(\d{1,5})(:(\d{1,5}))?(\/(tcp|udp))?/
|
||||||
|
|
||||||
const ports: k8s.V1ContainerPort[] = []
|
const ports: k8s.V1ContainerPort[] = []
|
||||||
if (!container.portMappings?.length) {
|
|
||||||
return ports
|
|
||||||
}
|
|
||||||
for (const portDefinition of container.portMappings) {
|
for (const portDefinition of container.portMappings) {
|
||||||
const portProtoSplit = portDefinition.split('/')
|
const submatches = portFormat.exec(portDefinition)
|
||||||
if (portProtoSplit.length > 2) {
|
if (!submatches) {
|
||||||
throw new Error(`Unexpected port format: ${portDefinition}`)
|
throw new Error(
|
||||||
|
`Port definition "${portDefinition}" is in incorrect format`
|
||||||
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
const port = new k8s.V1ContainerPort()
|
const port = new k8s.V1ContainerPort()
|
||||||
port.protocol =
|
port.hostPort = Number(submatches[1])
|
||||||
portProtoSplit.length === 2 ? portProtoSplit[1].toUpperCase() : 'TCP'
|
if (submatches[3]) {
|
||||||
|
port.containerPort = Number(submatches[3])
|
||||||
const portSplit = portProtoSplit[0].split(':')
|
|
||||||
if (portSplit.length > 2) {
|
|
||||||
throw new Error('ports should have at most one ":" separator')
|
|
||||||
}
|
}
|
||||||
|
if (submatches[5]) {
|
||||||
const parsePort = (p: string): number => {
|
port.protocol = submatches[5].toUpperCase()
|
||||||
const num = Number(p)
|
|
||||||
if (!Number.isInteger(num) || num < 1 || num > 65535) {
|
|
||||||
throw new Error(`invalid container port: ${p}`)
|
|
||||||
}
|
|
||||||
return num
|
|
||||||
}
|
|
||||||
|
|
||||||
if (portSplit.length === 1) {
|
|
||||||
port.containerPort = parsePort(portSplit[0])
|
|
||||||
} else {
|
} else {
|
||||||
port.hostPort = parsePort(portSplit[0])
|
port.protocol = 'TCP'
|
||||||
port.containerPort = parsePort(portSplit[1])
|
|
||||||
}
|
}
|
||||||
|
|
||||||
ports.push(port)
|
ports.push(port)
|
||||||
}
|
}
|
||||||
return ports
|
return ports
|
||||||
|
|||||||
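For context, the v0.3.2 side of the `containerPorts` hunk above parses `hostPort[:containerPort][/protocol]` mappings by splitting on `/` and `:` and range-checking each number. The same logic restated as a standalone sketch (the `parsePortMapping` wrapper name and plain-object return shape are illustrative, not from the source):

```typescript
// Standalone sketch of the v0.3.2-side port-mapping parsing shown above.
function parsePortMapping(portDefinition: string): {
  containerPort: number
  hostPort?: number
  protocol: string
} {
  const portProtoSplit = portDefinition.split('/')
  if (portProtoSplit.length > 2) {
    throw new Error(`Unexpected port format: ${portDefinition}`)
  }
  // protocol defaults to TCP when no "/tcp" or "/udp" suffix is given
  const protocol =
    portProtoSplit.length === 2 ? portProtoSplit[1].toUpperCase() : 'TCP'

  const portSplit = portProtoSplit[0].split(':')
  if (portSplit.length > 2) {
    throw new Error('ports should have at most one ":" separator')
  }
  // ports must be integers in 1-65535
  const parsePort = (p: string): number => {
    const num = Number(p)
    if (!Number.isInteger(num) || num < 1 || num > 65535) {
      throw new Error(`invalid container port: ${p}`)
    }
    return num
  }
  if (portSplit.length === 1) {
    // a single value is a container port only
    return { containerPort: parsePort(portSplit[0]), protocol }
  }
  // "host:container" pair
  return {
    hostPort: parsePort(portSplit[0]),
    containerPort: parsePort(portSplit[1]),
    protocol
  }
}
```

For example, `parsePortMapping('8080:80/udp')` yields host port 8080, container port 80, protocol `UDP`, while a bare `'9090'` sets only the container port.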
@@ -1,8 +1,6 @@
 import * as k8s from '@kubernetes/client-node'
-import * as fs from 'fs'
 import { Mount } from 'hooklib'
 import * as path from 'path'
-import { v1 as uuidv4 } from 'uuid'
 import { POD_VOLUME_NAME } from './index'
 
 export const DEFAULT_CONTAINER_ENTRY_POINT_ARGS = [`-f`, `/dev/null`]
@@ -10,8 +8,7 @@ export const DEFAULT_CONTAINER_ENTRY_POINT = 'tail'
 
 export function containerVolumes(
   userMountVolumes: Mount[] = [],
-  jobContainer = true,
-  containerAction = false
+  jobContainer = true
 ): k8s.V1VolumeMount[] {
   const mounts: k8s.V1VolumeMount[] = [
     {
@@ -20,25 +17,6 @@ export function containerVolumes(
     }
   ]
 
-  const workspacePath = process.env.GITHUB_WORKSPACE as string
-  if (containerAction) {
-    const i = workspacePath.lastIndexOf('_work/')
-    const workspaceRelativePath = workspacePath.slice(i + '_work/'.length)
-    mounts.push(
-      {
-        name: POD_VOLUME_NAME,
-        mountPath: '/github/workspace',
-        subPath: workspaceRelativePath
-      },
-      {
-        name: POD_VOLUME_NAME,
-        mountPath: '/github/file_commands',
-        subPath: '_temp/_runner_file_commands'
-      }
-    )
-    return mounts
-  }
-
   if (!jobContainer) {
     return mounts
   }
@@ -66,20 +44,14 @@ export function containerVolumes(
   }
 
   for (const userVolume of userMountVolumes) {
-    let sourceVolumePath = ''
-    if (path.isAbsolute(userVolume.sourceVolumePath)) {
-      if (!userVolume.sourceVolumePath.startsWith(workspacePath)) {
-        throw new Error(
-          'Volume mounts outside of the work folder are not supported'
-        )
-      }
-      // source volume path should be relative path
-      sourceVolumePath = userVolume.sourceVolumePath.slice(
-        workspacePath.length + 1
-      )
-    } else {
-      sourceVolumePath = userVolume.sourceVolumePath
-    }
+    const sourceVolumePath = `${
+      path.isAbsolute(userVolume.sourceVolumePath)
+        ? userVolume.sourceVolumePath
+        : path.join(
+            process.env.GITHUB_WORKSPACE as string,
+            userVolume.sourceVolumePath
+          )
+    }`
 
     mounts.push({
       name: POD_VOLUME_NAME,
@@ -91,78 +63,3 @@ export function containerVolumes(
 
   return mounts
 }
-
-export function writeEntryPointScript(
-  workingDirectory: string,
-  entryPoint: string,
-  entryPointArgs?: string[],
-  prependPath?: string[],
-  environmentVariables?: { [key: string]: string }
-): { containerPath: string; runnerPath: string } {
-  let exportPath = ''
-  if (prependPath?.length) {
-    // TODO: remove compatibility with typeof prependPath === 'string' as we bump to next major version, the hooks will lose PrependPath compat with runners 2.293.0 and older
-    const prepend =
-      typeof prependPath === 'string' ? prependPath : prependPath.join(':')
-    exportPath = `export PATH=${prepend}:$PATH`
-  }
-  let environmentPrefix = ''
-
-  if (environmentVariables && Object.entries(environmentVariables).length) {
-    const envBuffer: string[] = []
-    for (const [key, value] of Object.entries(environmentVariables)) {
-      if (
-        key.includes(`=`) ||
-        key.includes(`'`) ||
-        key.includes(`"`) ||
-        key.includes(`$`)
-      ) {
-        throw new Error(
-          `environment key ${key} is invalid - the key must not contain =, $, ', or "`
-        )
-      }
-      envBuffer.push(
-        `"${key}=${value
-          .replace(/\\/g, '\\\\')
-          .replace(/"/g, '\\"')
-          .replace(/\$/g, '\\$')}"`
-      )
-    }
-    environmentPrefix = `env ${envBuffer.join(' ')} `
-  }
-
-  const content = `#!/bin/sh -l
-${exportPath}
-cd ${workingDirectory} && \
-exec ${environmentPrefix} ${entryPoint} ${
-    entryPointArgs?.length ? entryPointArgs.join(' ') : ''
-  }
-`
-  const filename = `${uuidv4()}.sh`
-  const entryPointPath = `${process.env.RUNNER_TEMP}/$(unknown)`
-  fs.writeFileSync(entryPointPath, content)
-  return {
-    containerPath: `/__w/_temp/$(unknown)`,
-    runnerPath: entryPointPath
-  }
-}
-
-export function generateContainerName(image: string): string {
-  const nameWithTag = image.split('/').pop()
-  const name = nameWithTag?.split(':').at(0)
-
-  if (!name) {
-    throw new Error(`Image definition '${image}' is invalid`)
-  }
-
-  return name
-}
-
-export enum PodPhase {
-  PENDING = 'Pending',
-  RUNNING = 'Running',
-  SUCCEEDED = 'Succeeded',
-  FAILED = 'Failed',
-  UNKNOWN = 'Unknown',
-  COMPLETED = 'Completed'
-}
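The `writeEntryPointScript` present on the v0.3.2 side of the hunk above validates environment keys and escapes values before embedding them in an `env "KEY=VALUE" …` prefix inside a shell script. A minimal self-contained sketch of just that validation and escaping (the `escapeEnvPair` name is illustrative, not from the source):

```typescript
// Sketch of the env-pair validation/escaping from writeEntryPointScript.
function escapeEnvPair(key: string, value: string): string {
  // keys that could break out of the `env "KEY=VALUE"` form are rejected
  if (
    key.includes('=') ||
    key.includes("'") ||
    key.includes('"') ||
    key.includes('$')
  ) {
    throw new Error(
      `environment key ${key} is invalid - the key must not contain =, $, ', or "`
    )
  }
  // escape backslash first, then double quote and "$", so the value
  // survives double-quoted shell interpolation unchanged
  return `"${key}=${value
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')}"`
}
```

Ordering matters here: escaping the backslash first prevents the later `\"` and `\$` replacements from being double-escaped.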
@@ -1,65 +1,31 @@
-import * as k8s from '@kubernetes/client-node'
-import { cleanupJob, prepareJob } from '../src/hooks'
-import { RunnerInstanceLabel } from '../src/hooks/constants'
-import { namespace } from '../src/k8s'
-import { TestHelper } from './test-setup'
+import * as path from 'path'
+import * as fs from 'fs'
+import { prepareJob, cleanupJob } from '../src/hooks'
+import { TestTempOutput } from './test-setup'
 
-let testHelper: TestHelper
+let testTempOutput: TestTempOutput
 
+const prepareJobJsonPath = path.resolve(
+  `${__dirname}/../../../examples/prepare-job.json`
+)
+
+let prepareJobOutputFilePath: string
+
 describe('Cleanup Job', () => {
   beforeEach(async () => {
-    testHelper = new TestHelper()
-    await testHelper.initialize()
-    let prepareJobData = testHelper.getPrepareJobDefinition()
-    const prepareJobOutputFilePath = testHelper.createFile(
+    const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
+    let prepareJobData = JSON.parse(prepareJobJson.toString())
+    testTempOutput = new TestTempOutput()
+    testTempOutput.initialize()
+    prepareJobOutputFilePath = testTempOutput.createFile(
       'prepare-job-output.json'
     )
     await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
   })
 
-  afterEach(async () => {
-    await testHelper.cleanup()
-  })
-
   it('should not throw', async () => {
+    const outputJson = fs.readFileSync(prepareJobOutputFilePath)
+    const outputData = JSON.parse(outputJson.toString())
     await expect(cleanupJob()).resolves.not.toThrow()
   })
-
-  it('should have no runner linked pods running', async () => {
-    await cleanupJob()
-    const kc = new k8s.KubeConfig()
-
-    kc.loadFromDefault()
-    const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
-
-    const podList = await k8sApi.listNamespacedPod(
-      namespace(),
-      undefined,
-      undefined,
-      undefined,
-      undefined,
-      new RunnerInstanceLabel().toString()
-    )
-
-    expect(podList.body.items.length).toBe(0)
-  })
-
-  it('should have no runner linked secrets', async () => {
-    await cleanupJob()
-    const kc = new k8s.KubeConfig()
-
-    kc.loadFromDefault()
-    const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
-
-    const secretList = await k8sApi.listNamespacedSecret(
-      namespace(),
-      undefined,
-      undefined,
-      undefined,
-      undefined,
-      new RunnerInstanceLabel().toString()
-    )
-
-    expect(secretList.body.items.length).toBe(0)
-  })
 })
@@ -1,182 +0,0 @@
-import {
-  getJobPodName,
-  getRunnerPodName,
-  getSecretName,
-  getStepPodName,
-  getVolumeClaimName,
-  JOB_CONTAINER_NAME,
-  MAX_POD_NAME_LENGTH,
-  RunnerInstanceLabel,
-  STEP_POD_NAME_SUFFIX_LENGTH
-} from '../src/hooks/constants'
-
-describe('constants', () => {
-  describe('runner instance label', () => {
-    beforeEach(() => {
-      process.env.ACTIONS_RUNNER_POD_NAME = 'example'
-    })
-    it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
-      delete process.env.ACTIONS_RUNNER_POD_NAME
-      expect(() => new RunnerInstanceLabel()).toThrow()
-    })
-
-    it('should have key truthy', () => {
-      const runnerInstanceLabel = new RunnerInstanceLabel()
-      expect(typeof runnerInstanceLabel.key).toBe('string')
-      expect(runnerInstanceLabel.key).toBeTruthy()
-      expect(runnerInstanceLabel.key.length).toBeGreaterThan(0)
-    })
-
-    it('should have value as runner pod name', () => {
-      const name = process.env.ACTIONS_RUNNER_POD_NAME as string
-      const runnerInstanceLabel = new RunnerInstanceLabel()
-      expect(typeof runnerInstanceLabel.value).toBe('string')
-      expect(runnerInstanceLabel.value).toBe(name)
-    })
-
-    it('should have toString combination of key and value', () => {
-      const runnerInstanceLabel = new RunnerInstanceLabel()
-      expect(runnerInstanceLabel.toString()).toBe(
-        `${runnerInstanceLabel.key}=${runnerInstanceLabel.value}`
-      )
-    })
-  })
-
-  describe('getRunnerPodName', () => {
-    it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
-      delete process.env.ACTIONS_RUNNER_POD_NAME
-      expect(() => getRunnerPodName()).toThrow()
-
-      process.env.ACTIONS_RUNNER_POD_NAME = ''
-      expect(() => getRunnerPodName()).toThrow()
-    })
-
-    it('should return corrent ACTIONS_RUNNER_POD_NAME name', () => {
-      const name = 'example'
-      process.env.ACTIONS_RUNNER_POD_NAME = name
-      expect(getRunnerPodName()).toBe(name)
-    })
-  })
-
-  describe('getJobPodName', () => {
-    it('should throw on getJobPodName if ACTIONS_RUNNER_POD_NAME env is not set', () => {
-      delete process.env.ACTIONS_RUNNER_POD_NAME
-      expect(() => getJobPodName()).toThrow()
-
-      process.env.ACTIONS_RUNNER_POD_NAME = ''
-      expect(() => getRunnerPodName()).toThrow()
-    })
-
-    it('should contain suffix -workflow', () => {
-      const tableTests = [
-        {
-          podName: 'test',
-          expect: 'test-workflow'
-        },
-        {
-          // podName.length == 63
-          podName:
-            'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
-          expect:
-            'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-workflow'
-        }
-      ]
-
-      for (const tt of tableTests) {
-        process.env.ACTIONS_RUNNER_POD_NAME = tt.podName
-        const actual = getJobPodName()
-        expect(actual).toBe(tt.expect)
-      }
-    })
-  })
-
-  describe('getVolumeClaimName', () => {
-    it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
-      delete process.env.ACTIONS_RUNNER_CLAIM_NAME
-      delete process.env.ACTIONS_RUNNER_POD_NAME
-      expect(() => getVolumeClaimName()).toThrow()
-
-      process.env.ACTIONS_RUNNER_POD_NAME = ''
-      expect(() => getVolumeClaimName()).toThrow()
-    })
-
-    it('should return ACTIONS_RUNNER_CLAIM_NAME env if set', () => {
-      const claimName = 'testclaim'
-      process.env.ACTIONS_RUNNER_CLAIM_NAME = claimName
-      process.env.ACTIONS_RUNNER_POD_NAME = 'example'
-      expect(getVolumeClaimName()).toBe(claimName)
-    })
-
-    it('should contain suffix -work if ACTIONS_RUNNER_CLAIM_NAME is not set', () => {
-      delete process.env.ACTIONS_RUNNER_CLAIM_NAME
-      process.env.ACTIONS_RUNNER_POD_NAME = 'example'
-      expect(getVolumeClaimName()).toBe('example-work')
-    })
-  })
-
-  describe('getSecretName', () => {
-    it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
-      delete process.env.ACTIONS_RUNNER_POD_NAME
-      expect(() => getSecretName()).toThrow()
-
-      process.env.ACTIONS_RUNNER_POD_NAME = ''
-      expect(() => getSecretName()).toThrow()
-    })
-
-    it('should contain suffix -secret- and name trimmed', () => {
-      const podNames = [
-        'test',
-        'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
-      ]
-
-      for (const podName of podNames) {
-        process.env.ACTIONS_RUNNER_POD_NAME = podName
-        const actual = getSecretName()
-        const re = new RegExp(
-          `${podName.substring(
-            MAX_POD_NAME_LENGTH -
-              '-secret-'.length -
-              STEP_POD_NAME_SUFFIX_LENGTH
-          )}-secret-[a-z0-9]{8,}`
-        )
-        expect(actual).toMatch(re)
-      }
-    })
-  })
-
-  describe('getStepPodName', () => {
-    it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
-      delete process.env.ACTIONS_RUNNER_POD_NAME
-      expect(() => getStepPodName()).toThrow()
-
-      process.env.ACTIONS_RUNNER_POD_NAME = ''
-      expect(() => getStepPodName()).toThrow()
-    })
-
-    it('should contain suffix -step- and name trimmed', () => {
-      const podNames = [
-        'test',
-        'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
-      ]
-
-      for (const podName of podNames) {
-        process.env.ACTIONS_RUNNER_POD_NAME = podName
-        const actual = getStepPodName()
-        const re = new RegExp(
-          `${podName.substring(
-            MAX_POD_NAME_LENGTH - '-step-'.length - STEP_POD_NAME_SUFFIX_LENGTH
-          )}-step-[a-z0-9]{8,}`
-        )
-        expect(actual).toMatch(re)
-      }
-    })
-  })
-
-  describe('const values', () => {
-    it('should have constants set', () => {
-      expect(JOB_CONTAINER_NAME).toBeTruthy()
-      expect(MAX_POD_NAME_LENGTH).toBeGreaterThan(0)
-      expect(STEP_POD_NAME_SUFFIX_LENGTH).toBeGreaterThan(0)
-    })
-  })
-})
@@ -1,36 +1,51 @@
 import * as fs from 'fs'
+import * as path from 'path'
 import {
   cleanupJob,
   prepareJob,
   runContainerStep,
   runScriptStep
 } from '../src/hooks'
-import { TestHelper } from './test-setup'
+import { TestTempOutput } from './test-setup'
 
 jest.useRealTimers()
 
-let testHelper: TestHelper
+let testTempOutput: TestTempOutput
 
+const prepareJobJsonPath = path.resolve(
+  `${__dirname}/../../../../examples/prepare-job.json`
+)
+const runScriptStepJsonPath = path.resolve(
+  `${__dirname}/../../../../examples/run-script-step.json`
+)
+let runContainerStepJsonPath = path.resolve(
+  `${__dirname}/../../../../examples/run-container-step.json`
+)
+
 let prepareJobData: any
 
 let prepareJobOutputFilePath: string
 describe('e2e', () => {
-  beforeEach(async () => {
-    testHelper = new TestHelper()
-    await testHelper.initialize()
+  beforeEach(() => {
+    const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
+    prepareJobData = JSON.parse(prepareJobJson.toString())
 
-    prepareJobData = testHelper.getPrepareJobDefinition()
-    prepareJobOutputFilePath = testHelper.createFile('prepare-job-output.json')
+    testTempOutput = new TestTempOutput()
+    testTempOutput.initialize()
+    prepareJobOutputFilePath = testTempOutput.createFile(
+      'prepare-job-output.json'
+    )
   })
   afterEach(async () => {
-    await testHelper.cleanup()
+    testTempOutput.cleanup()
   })
   it('should prepare job, run script step, run container step then cleanup without errors', async () => {
     await expect(
       prepareJob(prepareJobData.args, prepareJobOutputFilePath)
     ).resolves.not.toThrow()
 
-    const scriptStepData = testHelper.getRunScriptStepDefinition()
+    const scriptStepContent = fs.readFileSync(runScriptStepJsonPath)
+    const scriptStepData = JSON.parse(scriptStepContent.toString())
 
     const prepareJobOutputJson = fs.readFileSync(prepareJobOutputFilePath)
     const prepareJobOutputData = JSON.parse(prepareJobOutputJson.toString())
@@ -39,7 +54,8 @@ describe('e2e', () => {
       runScriptStep(scriptStepData.args, prepareJobOutputData.state, null)
     ).resolves.not.toThrow()
 
-    const runContainerStepData = testHelper.getRunContainerStepDefinition()
+    const runContainerStepContent = fs.readFileSync(runContainerStepJsonPath)
+    const runContainerStepData = JSON.parse(runContainerStepContent.toString())
 
     await expect(
       runContainerStep(runContainerStepData.args)
@@ -1,331 +0,0 @@
-import * as fs from 'fs'
-import { containerPorts, POD_VOLUME_NAME } from '../src/k8s'
-import {
-  containerVolumes,
-  generateContainerName,
-  writeEntryPointScript
-} from '../src/k8s/utils'
-import { TestHelper } from './test-setup'
-
-let testHelper: TestHelper
-
-describe('k8s utils', () => {
-  describe('write entrypoint', () => {
-    beforeEach(async () => {
-      testHelper = new TestHelper()
-      await testHelper.initialize()
-    })
-
-    afterEach(async () => {
-      await testHelper.cleanup()
-    })
-
-    it('should not throw', () => {
-      expect(() =>
-        writeEntryPointScript(
-          '/test',
-          'sh',
-          ['-e', 'script.sh'],
-          ['/prepend/path'],
-          {
-            SOME_ENV: 'SOME_VALUE'
-          }
-        )
-      ).not.toThrow()
-    })
-
-    it('should throw if RUNNER_TEMP is not set', () => {
-      delete process.env.RUNNER_TEMP
-      expect(() =>
-        writeEntryPointScript(
-          '/test',
-          'sh',
-          ['-e', 'script.sh'],
-          ['/prepend/path'],
-          {
-            SOME_ENV: 'SOME_VALUE'
-          }
-        )
-      ).toThrow()
-    })
-
-    it('should throw if environment variable name contains double quote', () => {
-      expect(() =>
-        writeEntryPointScript(
-          '/test',
-          'sh',
-          ['-e', 'script.sh'],
-          ['/prepend/path'],
-          {
-            'SOME"_ENV': 'SOME_VALUE'
-          }
-        )
-      ).toThrow()
-    })
-
-    it('should throw if environment variable name contains =', () => {
-      expect(() =>
-        writeEntryPointScript(
-          '/test',
-          'sh',
-          ['-e', 'script.sh'],
-          ['/prepend/path'],
-          {
-            'SOME=ENV': 'SOME_VALUE'
-          }
-        )
-      ).toThrow()
-    })
-
-    it('should throw if environment variable name contains single quote', () => {
-      expect(() =>
-        writeEntryPointScript(
-          '/test',
-          'sh',
-          ['-e', 'script.sh'],
-          ['/prepend/path'],
-          {
-            "SOME'_ENV": 'SOME_VALUE'
-          }
-        )
-      ).toThrow()
-    })
-
-    it('should throw if environment variable name contains dollar', () => {
-      expect(() =>
-        writeEntryPointScript(
-          '/test',
-          'sh',
-          ['-e', 'script.sh'],
-          ['/prepend/path'],
-          {
-            SOME_$_ENV: 'SOME_VALUE'
-          }
-        )
-      ).toThrow()
-    })
-
-    it('should escape double quote, dollar and backslash in environment variable values', () => {
-      const { runnerPath } = writeEntryPointScript(
-        '/test',
-        'sh',
-        ['-e', 'script.sh'],
-        ['/prepend/path'],
-        {
-          DQUOTE: '"',
-          BACK_SLASH: '\\',
-          DOLLAR: '$'
-        }
-      )
-      expect(fs.existsSync(runnerPath)).toBe(true)
-      const script = fs.readFileSync(runnerPath, 'utf8')
-      expect(script).toContain('"DQUOTE=\\"')
-      expect(script).toContain('"BACK_SLASH=\\\\"')
-      expect(script).toContain('"DOLLAR=\\$"')
-    })
-
-    it('should return object with containerPath and runnerPath', () => {
-      const { containerPath, runnerPath } = writeEntryPointScript(
-        '/test',
-        'sh',
-        ['-e', 'script.sh'],
-        ['/prepend/path'],
-        {
-          SOME_ENV: 'SOME_VALUE'
-        }
-      )
-      expect(containerPath).toMatch(/\/__w\/_temp\/.*\.sh/)
-      const re = new RegExp(`${process.env.RUNNER_TEMP}/.*\\.sh`)
-      expect(runnerPath).toMatch(re)
-    })
-
-    it('should write entrypoint path and the file should exist', () => {
-      const { runnerPath } = writeEntryPointScript(
-        '/test',
-        'sh',
-        ['-e', 'script.sh'],
-        ['/prepend/path'],
-        {
-          SOME_ENV: 'SOME_VALUE'
-        }
-      )
-      expect(fs.existsSync(runnerPath)).toBe(true)
-    })
-  })
-
-  describe('container volumes', () => {
-    beforeEach(async () => {
-      testHelper = new TestHelper()
-      await testHelper.initialize()
-    })
-
-    afterEach(async () => {
-      await testHelper.cleanup()
-    })
-
-    it('should throw if container action and GITHUB_WORKSPACE env is not set', () => {
-      delete process.env.GITHUB_WORKSPACE
-      expect(() => containerVolumes([], true, true)).toThrow()
-      expect(() => containerVolumes([], false, true)).toThrow()
-    })
-
-    it('should always have work mount', () => {
-      let volumes = containerVolumes([], true, true)
-      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
-      volumes = containerVolumes([], true, false)
-      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
-      volumes = containerVolumes([], false, true)
-      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
-      volumes = containerVolumes([], false, false)
-      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
-    })
-
-    it('should have container action volumes', () => {
-      let volumes = containerVolumes([], true, true)
-      let workspace = volumes.find(e => e.mountPath === '/github/workspace')
-      let fileCommands = volumes.find(
-        e => e.mountPath === '/github/file_commands'
-      )
-      expect(workspace).toBeTruthy()
-      expect(workspace?.subPath).toBe('repo/repo')
-      expect(fileCommands).toBeTruthy()
-      expect(fileCommands?.subPath).toBe('_temp/_runner_file_commands')
-
-      volumes = containerVolumes([], false, true)
-      workspace = volumes.find(e => e.mountPath === '/github/workspace')
-      fileCommands = volumes.find(e => e.mountPath === '/github/file_commands')
-      expect(workspace).toBeTruthy()
-      expect(workspace?.subPath).toBe('repo/repo')
-      expect(fileCommands).toBeTruthy()
-      expect(fileCommands?.subPath).toBe('_temp/_runner_file_commands')
-    })
-
-    it('should have externals, github home and github workflow mounts if job container', () => {
-      const volumes = containerVolumes()
-      expect(volumes.find(e => e.mountPath === '/__e')).toBeTruthy()
-      expect(volumes.find(e => e.mountPath === '/github/home')).toBeTruthy()
-      expect(volumes.find(e => e.mountPath === '/github/workflow')).toBeTruthy()
-    })
-
-    it('should throw if user volume source volume path is not in workspace', () => {
-      expect(() =>
-        containerVolumes(
-          [
-            {
-              sourceVolumePath: '/outside/of/workdir'
-            }
-          ],
-          true,
-          false
-        )
-      ).toThrow()
-    })
-
-    it(`all volumes should have name ${POD_VOLUME_NAME}`, () => {
-      let volumes = containerVolumes([], true, true)
-      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
-      volumes = containerVolumes([], true, false)
-      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
-      volumes = containerVolumes([], false, true)
-      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
-      volumes = containerVolumes([], false, false)
-      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
-    })
-
-    it('should parse container ports', () => {
-      const tt = [
-        {
-          spec: '8080:80',
-          want: {
-            containerPort: 80,
-            hostPort: 8080,
-            protocol: 'TCP'
-          }
-        },
-        {
-          spec: '8080:80/udp',
-          want: {
-            containerPort: 80,
-            hostPort: 8080,
-            protocol: 'UDP'
-          }
-        },
-        {
-          spec: '8080/udp',
-          want: {
-            containerPort: 8080,
-            hostPort: undefined,
-            protocol: 'UDP'
-          }
-        },
-        {
-          spec: '8080',
-          want: {
-            containerPort: 8080,
-            hostPort: undefined,
-            protocol: 'TCP'
-          }
-        }
-      ]
-
-      for (const tc of tt) {
-        const got = containerPorts({ portMappings: [tc.spec] })
-        for (const [key, value] of Object.entries(tc.want)) {
-          expect(got[0][key]).toBe(value)
-        }
-      }
-    })
-
-    it('should throw when ports are out of range (0, 65536)', () => {
-      expect(() => containerPorts({ portMappings: ['65536'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['0'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['65536/udp'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['0/udp'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['1:65536'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['65536:1'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['1:65536/tcp'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['65536:1/tcp'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['1:'] })).toThrow()
-      expect(() => containerPorts({ portMappings: [':1'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['1:/tcp'] })).toThrow()
-      expect(() => containerPorts({ portMappings: [':1/tcp'] })).toThrow()
-    })
-
-    it('should throw on multi ":" splits', () => {
-      expect(() => containerPorts({ portMappings: ['1:1:1'] })).toThrow()
-    })
-
-    it('should throw on multi "/" splits', () => {
-      expect(() => containerPorts({ portMappings: ['1:1/tcp/udp'] })).toThrow()
-      expect(() => containerPorts({ portMappings: ['1/tcp/udp'] })).toThrow()
-    })
-  })
-
-  describe('generate container name', () => {
-    it('should return the container name from image string', () => {
-      expect(
-        generateContainerName('public.ecr.aws/localstack/localstack')
-      ).toEqual('localstack')
-      expect(
-        generateContainerName(
-          'public.ecr.aws/url/with/multiple/slashes/postgres:latest'
-        )
-      ).toEqual('postgres')
-      expect(generateContainerName('postgres')).toEqual('postgres')
-      expect(generateContainerName('postgres:latest')).toEqual('postgres')
-      expect(generateContainerName('localstack/localstack')).toEqual(
-        'localstack'
-      )
-      expect(generateContainerName('localstack/localstack:latest')).toEqual(
-        'localstack'
-      )
-    })
-
-    it('should throw on invalid image string', () => {
-      expect(() =>
-        generateContainerName('localstack/localstack/:latest')
-      ).toThrow()
-      expect(() => generateContainerName(':latest')).toThrow()
-    })
-  })
-})
@@ -1,29 +1,36 @@
 import * as fs from 'fs'
 import * as path from 'path'
 import { cleanupJob } from '../src/hooks'
-import { createContainerSpec, prepareJob } from '../src/hooks/prepare-job'
-import { TestHelper } from './test-setup'
-import { generateContainerName } from '../src/k8s/utils'
-import { V1Container } from '@kubernetes/client-node'
+import { prepareJob } from '../src/hooks/prepare-job'
+import { TestTempOutput } from './test-setup'
 
 jest.useRealTimers()
 
-let testHelper: TestHelper
+let testTempOutput: TestTempOutput
 
+const prepareJobJsonPath = path.resolve(
+  `${__dirname}/../../../examples/prepare-job.json`
+)
 let prepareJobData: any
 
 let prepareJobOutputFilePath: string
 
 describe('Prepare job', () => {
-  beforeEach(async () => {
-    testHelper = new TestHelper()
-    await testHelper.initialize()
-    prepareJobData = testHelper.getPrepareJobDefinition()
-    prepareJobOutputFilePath = testHelper.createFile('prepare-job-output.json')
+  beforeEach(() => {
+    const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
+    prepareJobData = JSON.parse(prepareJobJson.toString())
+
+    testTempOutput = new TestTempOutput()
+    testTempOutput.initialize()
+    prepareJobOutputFilePath = testTempOutput.createFile(
+      'prepare-job-output.json'
+    )
   })
   afterEach(async () => {
+    const outputJson = fs.readFileSync(prepareJobOutputFilePath)
+    const outputData = JSON.parse(outputJson.toString())
     await cleanupJob()
-    await testHelper.cleanup()
+    testTempOutput.cleanup()
   })
 
   it('should not throw exception', async () => {
@@ -37,63 +44,4 @@ describe('Prepare job', () => {
     const content = fs.readFileSync(prepareJobOutputFilePath)
     expect(() => JSON.parse(content.toString())).not.toThrow()
   })
-
-  it('should prepare job with absolute path for userVolumeMount', async () => {
-    prepareJobData.args.container.userMountVolumes = [
-      {
-        sourceVolumePath: path.join(
-          process.env.GITHUB_WORKSPACE as string,
-          '/myvolume'
-        ),
-        targetVolumePath: '/volume_mount',
-        readOnly: false
-      }
-    ]
-    await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
-    ).resolves.not.toThrow()
-  })
-
-  it('should throw an exception if the user volume mount is absolute path outside of GITHUB_WORKSPACE', async () => {
-    prepareJobData.args.container.userMountVolumes = [
-      {
-        sourceVolumePath: '/somewhere/not/in/gh-workspace',
-        targetVolumePath: '/containermount',
-        readOnly: false
-      }
-    ]
-    await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
-    ).rejects.toThrow()
-  })
-
-  it('should not run prepare job without the job container', async () => {
-    prepareJobData.args.container = undefined
-    await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
-    ).rejects.toThrow()
-  })
-
-  it('should not set command + args for service container if not passed in args', async () => {
-    const services = prepareJobData.args.services.map(service => {
-      return createContainerSpec(service, generateContainerName(service.image))
-    }) as [V1Container]
-
-    expect(services[0].command).toBe(undefined)
-    expect(services[0].args).toBe(undefined)
-  })
-
-  test.each([undefined, null, []])(
-    'should not throw exception when portMapping=%p',
-    async pm => {
-      prepareJobData.args.services.forEach(s => {
-        s.portMappings = pm
-      })
-      await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
-      const content = JSON.parse(
-        fs.readFileSync(prepareJobOutputFilePath).toString()
-      )
-      expect(() => content.context.services[0].image).not.toThrow()
-    }
-  )
 })
@@ -1,39 +1,25 @@
+import { TestTempOutput } from './test-setup'
+import * as path from 'path'
 import { runContainerStep } from '../src/hooks'
-import { TestHelper } from './test-setup'
+import * as fs from 'fs'
 
 jest.useRealTimers()
 
-let testHelper: TestHelper
+let testTempOutput: TestTempOutput
 
+let runContainerStepJsonPath = path.resolve(
+  `${__dirname}/../../../examples/run-container-step.json`
+)
 
 let runContainerStepData: any
 
 describe('Run container step', () => {
-  beforeEach(async () => {
-    testHelper = new TestHelper()
-    await testHelper.initialize()
-    runContainerStepData = testHelper.getRunContainerStepDefinition()
+  beforeAll(() => {
+    const content = fs.readFileSync(runContainerStepJsonPath)
+    runContainerStepData = JSON.parse(content.toString())
+    process.env.RUNNER_NAME = 'testjob'
   })
 
-  afterEach(async () => {
-    await testHelper.cleanup()
-  })
-
   it('should not throw', async () => {
-    const exitCode = await runContainerStep(runContainerStepData.args)
-    expect(exitCode).toBe(0)
-  })
-
-  it('should fail if the working directory does not exist', async () => {
-    runContainerStepData.args.workingDirectory = '/foo/bar'
-    await expect(runContainerStep(runContainerStepData.args)).rejects.toThrow()
-  })
-
-  it('should shold have env variables available', async () => {
-    runContainerStepData.args.entryPoint = 'bash'
-    runContainerStepData.args.entryPointArgs = [
-      '-c',
-      "'if [[ -z $NODE_ENV ]]; then exit 1; fi'"
-    ]
     await expect(
       runContainerStep(runContainerStepData.args)
     ).resolves.not.toThrow()
@@ -1,26 +1,31 @@
+import { prepareJob, cleanupJob, runScriptStep } from '../src/hooks'
+import { TestTempOutput } from './test-setup'
+import * as path from 'path'
 import * as fs from 'fs'
-import { cleanupJob, prepareJob, runScriptStep } from '../src/hooks'
-import { TestHelper } from './test-setup'
 
 jest.useRealTimers()
 
-let testHelper: TestHelper
+let testTempOutput: TestTempOutput
 
+const prepareJobJsonPath = path.resolve(
+  `${__dirname}/../../../examples/prepare-job.json`
+)
+let prepareJobData: any
 
+let prepareJobOutputFilePath: string
 let prepareJobOutputData: any
 
-let runScriptStepDefinition
-
 describe('Run script step', () => {
   beforeEach(async () => {
-    testHelper = new TestHelper()
-    await testHelper.initialize()
-    const prepareJobOutputFilePath = testHelper.createFile(
+    const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
+    prepareJobData = JSON.parse(prepareJobJson.toString())
+    console.log(prepareJobData)
+
+    testTempOutput = new TestTempOutput()
+    testTempOutput.initialize()
+    prepareJobOutputFilePath = testTempOutput.createFile(
       'prepare-job-output.json'
     )
 
-    const prepareJobData = testHelper.getPrepareJobDefinition()
-    runScriptStepDefinition = testHelper.getRunScriptStepDefinition()
-
     await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
     const outputContent = fs.readFileSync(prepareJobOutputFilePath)
     prepareJobOutputData = JSON.parse(outputContent.toString())
@@ -28,7 +33,7 @@ describe('Run script step', () => {
 
   afterEach(async () => {
     await cleanupJob()
-    await testHelper.cleanup()
+    testTempOutput.cleanup()
   })
 
   // NOTE: To use this test, do kubectl apply -f podspec.yaml (from podspec examples)
@@ -36,97 +41,21 @@ describe('Run script step', () => {
   // npm run test run-script-step
 
   it('should not throw an exception', async () => {
-    await expect(
-      runScriptStep(
-        runScriptStepDefinition.args,
-        prepareJobOutputData.state,
-        null
-      )
-    ).resolves.not.toThrow()
-  })
-
-  it('should fail if the working directory does not exist', async () => {
-    runScriptStepDefinition.args.workingDirectory = '/foo/bar'
-    await expect(
-      runScriptStep(
-        runScriptStepDefinition.args,
-        prepareJobOutputData.state,
-        null
-      )
-    ).rejects.toThrow()
-  })
-
-  it('should shold have env variables available', async () => {
-    runScriptStepDefinition.args.entryPoint = 'bash'
-    runScriptStepDefinition.args.entryPointArgs = [
-      '-c',
-      "'if [[ -z $NODE_ENV ]]; then exit 1; fi'"
-    ]
-    await expect(
-      runScriptStep(
-        runScriptStepDefinition.args,
-        prepareJobOutputData.state,
-        null
-      )
-    ).resolves.not.toThrow()
-  })
-
-  it('Should have path variable changed in container with prepend path string', async () => {
-    runScriptStepDefinition.args.prependPath = '/some/path'
-    runScriptStepDefinition.args.entryPoint = '/bin/bash'
-    runScriptStepDefinition.args.entryPointArgs = [
-      '-c',
-      `'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath}:"* ]]; then exit 1; fi'`
-    ]
-
-    await expect(
-      runScriptStep(
-        runScriptStepDefinition.args,
-        prepareJobOutputData.state,
-        null
-      )
-    ).resolves.not.toThrow()
-  })
-
-  it('Dollar symbols in environment variables should not be expanded', async () => {
-    runScriptStepDefinition.args.environmentVariables = {
-      VARIABLE1: '$VAR',
-      VARIABLE2: '${VAR}',
-      VARIABLE3: '$(VAR)'
+    const args = {
+      entryPointArgs: ['echo "test"'],
+      entryPoint: '/bin/bash',
+      environmentVariables: {
+        NODE_ENV: 'development'
+      },
+      prependPath: ['/foo/bar', 'bar/foo'],
+      workingDirectory: '/__w/thboop-test2/thboop-test2'
     }
-    runScriptStepDefinition.args.entryPointArgs = [
-      '-c',
-      '\'if [[ -z "$VARIABLE1" ]]; then exit 1; fi\'',
-      '\'if [[ -z "$VARIABLE2" ]]; then exit 2; fi\'',
-      '\'if [[ -z "$VARIABLE3" ]]; then exit 3; fi\''
-    ]
+    const state = {
+      jobPod: prepareJobOutputData.state.jobPod
+    }
+    const responseFile = null
 
     await expect(
-      runScriptStep(
-        runScriptStepDefinition.args,
-        prepareJobOutputData.state,
-        null
-      )
-    ).resolves.not.toThrow()
-  })
-
-  it('Should have path variable changed in container with prepend path string array', async () => {
-    runScriptStepDefinition.args.prependPath = ['/some/other/path']
-    runScriptStepDefinition.args.entryPoint = '/bin/bash'
-    runScriptStepDefinition.args.entryPointArgs = [
-      '-c',
-      `'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath.join(
-        ':'
-      )}:"* ]]; then exit 1; fi'`
-    ]
-
-    await expect(
-      runScriptStep(
-        runScriptStepDefinition.args,
-        prepareJobOutputData.state,
-        null
-      )
+      runScriptStep(args, state, responseFile)
     ).resolves.not.toThrow()
   })
 })
@@ -1,18 +0,0 @@
-kind: Cluster
-apiVersion: kind.x-k8s.io/v1alpha4
-nodes:
-- role: control-plane
-  # add a mount from /path/to/my/files on the host to /files on the node
-  extraMounts:
-  - hostPath: {{PATHTOREPO}}
-    containerPath: {{PATHTOREPO}}
-    # optional: if set, the mount is read-only.
-    # default false
-    readOnly: false
-    # optional: if set, the mount needs SELinux relabeling.
-    # default false
-    selinuxRelabel: false
-    # optional: set propagation mode (None, HostToContainer or Bidirectional)
-    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
-    # default None
-    propagation: None
@@ -1,80 +1,20 @@
|
|||||||
import * as k8s from '@kubernetes/client-node'
|
|
||||||
import * as fs from 'fs'
|
import * as fs from 'fs'
|
||||||
import { HookData } from 'hooklib/lib'
|
|
||||||
import * as path from 'path'
|
|
||||||
import { v4 as uuidv4 } from 'uuid'
|
import { v4 as uuidv4 } from 'uuid'
|
||||||
|
|
||||||
const kc = new k8s.KubeConfig()
|
export class TestTempOutput {
|
||||||
|
|
||||||
kc.loadFromDefault()
|
|
||||||
|
|
||||||
const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
|
|
||||||
const k8sStorageApi = kc.makeApiClient(k8s.StorageV1Api)
|
|
||||||
|
|
||||||
export class TestHelper {
|
|
||||||
private tempDirPath: string
|
private tempDirPath: string
|
||||||
private podName: string
|
|
||||||
constructor() {
|
constructor() {
|
||||||
this.tempDirPath = `${__dirname}/_temp/runner`
|
this.tempDirPath = `${__dirname}/_temp/${uuidv4()}`
|
||||||
this.podName = uuidv4().replace(/-/g, '')
|
|
||||||
}
|
}
|
||||||
|
|
||||||
public async initialize(): Promise<void> {
|
public initialize(): void {
|
||||||
process.env['ACTIONS_RUNNER_POD_NAME'] = `${this.podName}`
|
fs.mkdirSync(this.tempDirPath, { recursive: true })
|
||||||
process.env['RUNNER_WORKSPACE'] = `${this.tempDirPath}/_work/repo`
|
|
||||||
process.env['RUNNER_TEMP'] = `${this.tempDirPath}/_work/_temp`
|
|
||||||
process.env['GITHUB_WORKSPACE'] = `${this.tempDirPath}/_work/repo/repo`
|
|
||||||
process.env['ACTIONS_RUNNER_KUBERNETES_NAMESPACE'] = 'default'
|
|
||||||
|
|
||||||
fs.mkdirSync(`${this.tempDirPath}/_work/repo/repo`, { recursive: true })
|
|
||||||
fs.mkdirSync(`${this.tempDirPath}/externals`, { recursive: true })
|
|
||||||
fs.mkdirSync(process.env.RUNNER_TEMP, { recursive: true })
|
|
||||||
|
|
||||||
fs.copyFileSync(
|
|
||||||
path.resolve(`${__dirname}/../../../examples/example-script.sh`),
|
|
||||||
`${process.env.RUNNER_TEMP}/example-script.sh`
|
|
||||||
)
|
|
||||||
|
|
||||||
await this.cleanupK8sResources()
|
|
||||||
try {
|
|
||||||
await this.createTestVolume()
|
|
||||||
await this.createTestJobPod()
|
|
||||||
} catch (e) {
|
|
||||||
console.log(e)
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
public async cleanup(): Promise<void> {
|
public cleanup(): void {
|
||||||
try {
|
fs.rmSync(this.tempDirPath, { recursive: true })
|
||||||
await this.cleanupK8sResources()
|
|
||||||
fs.rmSync(this.tempDirPath, { recursive: true })
|
|
||||||
} catch {}
|
|
||||||
}
|
|
||||||
public async cleanupK8sResources() {
|
|
||||||
await k8sApi
|
|
||||||
.deleteNamespacedPersistentVolumeClaim(
|
|
||||||
`${this.podName}-work`,
|
|
||||||
'default',
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
0
|
|
||||||
)
|
|
||||||
.catch(e => {})
|
|
||||||
await k8sApi.deletePersistentVolume(`${this.podName}-pv`).catch(e => {})
|
|
||||||
await k8sStorageApi.deleteStorageClass('local-storage').catch(e => {})
|
|
||||||
await k8sApi
|
|
||||||
.deleteNamespacedPod(this.podName, 'default', undefined, undefined, 0)
|
|
||||||
.catch(e => {})
|
|
||||||
await k8sApi
|
|
||||||
.deleteNamespacedPod(
|
|
||||||
`${this.podName}-workflow`,
|
|
||||||
'default',
|
|
||||||
undefined,
|
|
||||||
undefined,
|
|
||||||
0
|
|
||||||
)
|
|
||||||
.catch(e => {})
|
|
||||||
}
|
}
|
||||||
|
|
||||||
public createFile(fileName?: string): string {
|
public createFile(fileName?: string): string {
|
||||||
const filePath = `${this.tempDirPath}/${fileName || uuidv4()}`
|
const filePath = `${this.tempDirPath}/${fileName || uuidv4()}`
|
||||||
fs.writeFileSync(filePath, '')
|
fs.writeFileSync(filePath, '')
|
||||||
@@ -85,112 +25,4 @@ export class TestHelper {
     const filePath = `${this.tempDirPath}/${fileName}`
     fs.rmSync(filePath)
   }
 
-  public async createTestJobPod() {
-    const container = {
-      name: 'nginx',
-      image: 'nginx:latest',
-      imagePullPolicy: 'IfNotPresent'
-    } as k8s.V1Container
-
-    const pod: k8s.V1Pod = {
-      metadata: {
-        name: this.podName
-      },
-      spec: {
-        restartPolicy: 'Never',
-        containers: [container]
-      }
-    } as k8s.V1Pod
-    await k8sApi.createNamespacedPod('default', pod)
-  }
-
-  public async createTestVolume() {
-    var sc: k8s.V1StorageClass = {
-      metadata: {
-        name: 'local-storage'
-      },
-      provisioner: 'kubernetes.io/no-provisioner',
-      volumeBindingMode: 'Immediate'
-    }
-    await k8sStorageApi.createStorageClass(sc)
-
-    var volume: k8s.V1PersistentVolume = {
-      metadata: {
-        name: `${this.podName}-pv`
-      },
-      spec: {
-        storageClassName: 'local-storage',
-        capacity: {
-          storage: '2Gi'
-        },
-        volumeMode: 'Filesystem',
-        accessModes: ['ReadWriteOnce'],
-        hostPath: {
-          path: `${this.tempDirPath}/_work`
-        }
-      }
-    }
-    await k8sApi.createPersistentVolume(volume)
-    var volumeClaim: k8s.V1PersistentVolumeClaim = {
-      metadata: {
-        name: `${this.podName}-work`
-      },
-      spec: {
-        accessModes: ['ReadWriteOnce'],
-        volumeMode: 'Filesystem',
-        storageClassName: 'local-storage',
-        volumeName: `${this.podName}-pv`,
-        resources: {
-          requests: {
-            storage: '1Gi'
-          }
-        }
-      }
-    }
-    await k8sApi.createNamespacedPersistentVolumeClaim('default', volumeClaim)
-  }
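The `createTestVolume` helper above relies on static binding: the PVC names the PV directly via `volumeName`, shares its `storageClassName`, requests an access mode the PV offers, and asks for less storage (1Gi) than the PV provides (2Gi). A simplified sketch of those binding constraints, using minimal local types as stand-ins for `k8s.V1PersistentVolume` / `k8s.V1PersistentVolumeClaim` (the real matching is done by the Kubernetes controller, not client code):

```typescript
// Minimal local shapes; not the real @kubernetes/client-node types.
interface Pv {
  name: string
  storageClassName: string
  capacityGi: number
  accessModes: string[]
}
interface Pvc {
  storageClassName: string
  volumeName: string
  requestGi: number
  accessModes: string[]
}

// A claim can bind statically when the class matches, the claim pins the PV
// by name, every requested access mode is offered, and the request fits.
function canBind(pv: Pv, pvc: Pvc): boolean {
  return (
    pv.storageClassName === pvc.storageClassName &&
    pv.name === pvc.volumeName &&
    pvc.accessModes.every(m => pv.accessModes.includes(m)) &&
    pvc.requestGi <= pv.capacityGi
  )
}

// Values mirror the deleted helper: 2Gi hostPath PV, 1Gi claim pinned to it.
const pv: Pv = {
  name: 'test-pod-pv',
  storageClassName: 'local-storage',
  capacityGi: 2,
  accessModes: ['ReadWriteOnce']
}
const pvc: Pvc = {
  storageClassName: 'local-storage',
  volumeName: 'test-pod-pv',
  requestGi: 1,
  accessModes: ['ReadWriteOnce']
}

console.log(canBind(pv, pvc)) // prints true
```

Because the StorageClass uses `kubernetes.io/no-provisioner`, nothing would be provisioned dynamically: if any of these fields disagreed, the claim would simply stay `Pending`.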
-
-  public getPrepareJobDefinition(): HookData {
-    const prepareJob = JSON.parse(
-      fs.readFileSync(
-        path.resolve(__dirname + '/../../../examples/prepare-job.json'),
-        'utf8'
-      )
-    )
-
-    prepareJob.args.container.userMountVolumes = undefined
-    prepareJob.args.container.registry = null
-    prepareJob.args.services.forEach(s => {
-      s.registry = null
-    })
-
-    return prepareJob
-  }
-
-  public getRunScriptStepDefinition(): HookData {
-    const runScriptStep = JSON.parse(
-      fs.readFileSync(
-        path.resolve(__dirname + '/../../../examples/run-script-step.json'),
-        'utf8'
-      )
-    )
-
-    runScriptStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
-    return runScriptStep
-  }
-
-  public getRunContainerStepDefinition(): HookData {
-    const runContainerStep = JSON.parse(
-      fs.readFileSync(
-        path.resolve(__dirname + '/../../../examples/run-container-step.json'),
-        'utf8'
-      )
-    )
-
-    runContainerStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
-    runContainerStep.args.userMountVolumes = undefined
-    runContainerStep.args.registry = null
-    return runContainerStep
-  }
 }
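The removed getters all follow one pattern: read a JSON example from `examples/`, parse it, then null out fields (registries, user mount volumes) that would not apply in the test cluster. A self-contained sketch of that load-and-patch pattern; the fixture contents here are a hypothetical stand-in, not the real `examples/prepare-job.json`:

```typescript
import * as fs from 'fs'
import * as os from 'os'
import * as path from 'path'

// Write a throwaway fixture to a temp dir (stand-in for examples/prepare-job.json).
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'fixture-'))
const fixturePath = path.join(dir, 'prepare-job.json')
fs.writeFileSync(
  fixturePath,
  JSON.stringify({
    args: {
      container: {registry: {url: 'example.invalid'}, userMountVolumes: []},
      services: [{registry: {url: 'example.invalid'}}]
    }
  })
)

// Parse on every call so each test gets an independent copy to mutate.
const prepareJob = JSON.parse(fs.readFileSync(fixturePath, 'utf8'))
prepareJob.args.container.userMountVolumes = undefined
prepareJob.args.container.registry = null
prepareJob.args.services.forEach((s: {registry: unknown}) => {
  s.registry = null
})

console.log(prepareJob.args.container.registry) // prints null
fs.rmSync(dir, {recursive: true, force: true})
```

Re-parsing the file instead of caching the object is the important detail: `JSON.parse` returns a fresh deep copy each time, so one test's mutations cannot leak into the next.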
@@ -1,6 +1,7 @@
-<!-- ## Features -->
+## Features
+
+- Initial Release
 
 ## Bugs
 
 - Handle `$` symbols in environment variable names and values in k8s [#74]
 
-<!-- ## Misc -->
+## Misc