Mirror of https://github.com/actions/runner-container-hooks.git
(synced 2025-12-30 13:57:15 +08:00)

Comparing commits: `nikola-jok` ... `fhammerl/k` (73 commits)

Commit SHAs:
b3df7ec55b, 16276a2a22, abeebd2a37, 2eecf2d378, 0c38d44dbd, a62b81fc95,
ae0066ae41, 6c9241fb0e, efe66bb99b, 9a50e3a796, 16eb238caa, 8e06496e34,
e2033b29c7, eb47baaf5e, 20c19dae27, 4307828719, 5c6995dba1, bb1a033ed7,
898063bddd, 266b8edb99, 47cbf5a0d7, de4553f25a, 8ea57170d8, 643bf36fd8,
de59bd8716, d3ec1c0040, 3e04b45585, 2b386f7cbd, bf362ba0dd, 7ae8942b3d,
347e68d3c9, 7c4e0f8d51, cd310988c9, 1bfc52f466, 2aa6f9d9c8, 3d0ca83d2d,
5daaae120b, df448fbbb0, ee2554e2c0, f764d18c4c, 55761eab39, 51bd8b62a4,
150bc0503a, 9ce39e5a60, 8351f842bd, bf3707d7e0, 88b7b19db7, dd5dfb3e48,
84a57de2e3, fa680b2073, 02f0b322a0, ecb9376000, cc90cd2361, 152c4e1cc8,
d0e094649e, 3ba45d3d7e, 58ebf56ad3, e928fa3252, ddf09ad7bd, 689a74e352,
55c9198ada, bd7e053180, 0ebccbd8c6, ec8131abb7, 7010d21bff, c65ec28bbb,
c2f9b10f4d, 1e49f4ba5b, 171956673c, 3ab4ae20f9, b0cf60b678, 5ec2edbe11,
4b7efe88ef
**.github/workflows/build.yaml** (7 changes)

```diff
@@ -11,6 +11,11 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+      - run: sed -i "s|{{PATHTOREPO}}|$(pwd)|" packages/k8s/tests/test-kind.yaml
+        name: Setup kind cluster yaml config
+      - uses: helm/kind-action@v1.2.0
+        with:
+          config: packages/k8s/tests/test-kind.yaml
       - run: npm install
         name: Install dependencies
       - run: npm run bootstrap
@@ -21,6 +26,6 @@ jobs:
       - name: Check linter
         run: |
           npm run lint
-          git diff --exit-code
+          git diff --exit-code -- ':!packages/k8s/tests/test-kind.yaml'
       - name: Run tests
         run: npm run test
```
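The new workflow step templates the kind config by substituting a `{{PATHTOREPO}}` placeholder with the checkout path. A minimal reproduction of that substitution on a throwaway file; the YAML content here is an illustrative stand-in, not the real `test-kind.yaml`, and GNU `sed -i` semantics (as on `ubuntu-latest`) are assumed:

```shell
#!/bin/sh
set -e
# Stand-in for packages/k8s/tests/test-kind.yaml (contents are hypothetical)
cat > /tmp/test-kind.yaml <<'EOF'
extraMounts:
  - hostPath: {{PATHTOREPO}}/packages/k8s
EOF
# Same sed invocation shape as the workflow: `|` as the s-command delimiter,
# so the substituted path may itself contain slashes.
sed -i "s|{{PATHTOREPO}}|$(pwd)|" /tmp/test-kind.yaml
cat /tmp/test-kind.yaml
```

Using `|` as the delimiter is what makes substituting a path like `$(pwd)` safe without escaping every `/`.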
**.gitignore** (3 changes)

```diff
@@ -1,4 +1,5 @@
 node_modules/
 lib/
 dist/
 **/tests/_temp/**
+packages/k8s/tests/test-kind.yaml
```

**CODEOWNERS**

```diff
@@ -1 +1 @@
-* @actions/actions-runtime
+* @actions/actions-runtime @actions/runner-akvelon
```
**docs/adrs/0034-build-docker-with-kaniko.md** (new file, 64 lines)

# ADR 0034: Build container-action Dockerfiles with Kaniko

**Date**: 2023-01-26

**Status**: In Progress

# Background

[Building Dockerfiles in k8s using Kaniko](https://github.com/actions/runner-container-hooks/issues/23) has been on the radar since the beginning of container hooks.
Currently, this is possible in ARC using a [dind/docker-in-docker](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/runner/actions-runner-dind.dockerfile) sidecar container.
This container needs to be launched with `--privileged`, which presents a security concern.

As an alternative, a container running [Kaniko](https://github.com/GoogleContainerTools/kaniko) can be used to build these files instead.
Kaniko doesn't need to be `--privileged`.
Whether it is a dind/docker-in-docker sidecar or a Kaniko container, this ADR refers to both as '**builder containers**'.

# Guiding Principles

- **Security:** running a Kaniko builder container should be possible without the `--privileged` flag
- **Feature parity with Docker:** any Dockerfile that can be built with vanilla Docker should also be buildable with a Kaniko builder container
- **Ease of use:** the customer should be able to build and push Docker images with minimal configuration

## Limitations

### User-provided registry

The user needs to provide a remote registry (like ghcr.io or Docker Hub) and credentials for the Kaniko builder container to push to, and for k8s to pull from later. This is the user's responsibility, so that our solution remains lightweight and generic.

- Alternatively, a user-managed local Docker registry within the k8s cluster can of course be used instead.

### Kaniko feature limit

Anything Kaniko can't do is, by definition, something we can't help with. Potential incompatibilities or inconsistencies between Docker and Kaniko will naturally be inherited by our solution.

## Interface

The user will set `containerMode: kubernetes`, because this is a change to the behaviour of our k8s hooks.

The user will set two ENVs:

- `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST`: e.g. `ghcr.io/OWNER` or `dockerhandle`.
- `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_SECRET_NAME`: e.g. `docker-secret`: the name of the k8s secret resource that allows you to authenticate against the registry with the given handle above.

The workspace name is used as the image name.

The image tag is a randomly generated string.

To execute a container-action, we then run a k8s job that pulls the image from the specified registry.
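The naming rule above (workspace as image name, random string as tag) can be sketched as follows. `buildImageReference` is a hypothetical helper for illustration, not the hooks' actual API, and the workspace path shown is an example:

```typescript
import { randomBytes } from 'crypto'
import * as path from 'path'

// Derive "<registry>/<workspace-name>:<random-tag>" as described in the ADR.
export function buildImageReference(
  registryHost: string, // e.g. `ghcr.io/OWNER` or a docker handle
  workspacePath: string // e.g. /__w/my-repo/my-repo
): string {
  // Docker repository names must be lowercase.
  const imageName = path.basename(workspacePath).toLowerCase()
  const tag = randomBytes(8).toString('hex') // randomly generated tag
  return `${registryHost}/${imageName}:${tag}`
}
```

The Kaniko builder would push this reference, and the k8s job for the container-action would pull the same reference.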
## Additional configuration

Users may want to use different registry URLs when pushing and when pulling an image, as the two operations are performed by different machines on different networks:

- The **Kaniko builder container pushes the image** after building; it runs in a pod that belongs to the runner pod.
- The **kubelet pulls the image** before starting a pod.

These two might not resolve all host names identically, so it makes sense to allow different push and pull URLs.

The ENVs `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH` and `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL` will be preferred if set.

### Example

As an example, a cluster-local Docker registry could be a long-running pod exposed both as a service _and_ as a NodePort.

The Kaniko builder pod would push to `my-local-registry.default.svc.cluster.local:12345/foohandle` (`ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH`).
This URL cannot be resolved by the kubelet to pull the image, so we need a secondary URL to pull it - in this case, using the NodePort, that URL is `localhost:NODEPORT/foohandle` (`ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL`).

## Consequences

- Users can build container-actions from a local Dockerfile in their k8s cluster without a privileged Docker builder container.
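The host-selection rule above (prefer the `_PUSH`/`_PULL` variant, fall back to the common host) can be sketched like this. The env names come from the ADR; the function itself is illustrative, not part of the hooks:

```typescript
// Resolve the registry host for a push or a pull, preferring the
// direction-specific ENV and falling back to the shared one.
export function getRegistryHost(
  env: Record<string, string | undefined>,
  direction: 'push' | 'pull'
): string {
  const specific =
    env[
      direction === 'push'
        ? 'ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH'
        : 'ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL'
    ]
  const fallback = env['ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST']
  const host = specific ?? fallback
  if (!host) {
    throw new Error('no registry host configured')
  }
  return host
}
```

With only `..._REGISTRY_HOST` set, both directions resolve to the same host; setting the `_PUSH`/`_PULL` variants splits them, as in the NodePort example above.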
**examples/example-script.sh** (new file, 3 lines)

```bash
#!/bin/bash

echo "Hello World"
```
**examples/prepare-job.json**

```diff
@@ -5,7 +5,7 @@
   "args": {
     "container": {
       "image": "node:14.16",
-      "workingDirectory": "/__w/thboop-test2/thboop-test2",
+      "workingDirectory": "/__w/repo/repo",
       "createOptions": "--cpus 1",
       "environmentVariables": {
         "NODE_ENV": "development"
@@ -24,37 +24,37 @@
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work",
         "targetVolumePath": "/__w",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/externals",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/externals",
         "targetVolumePath": "/__e",
         "readOnly": true
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp",
         "targetVolumePath": "/__w/_temp",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_actions",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_actions",
         "targetVolumePath": "/__w/_actions",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_tool",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_tool",
         "targetVolumePath": "/__w/_tool",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_home",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_home",
         "targetVolumePath": "/github/home",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
         "targetVolumePath": "/github/workflow",
         "readOnly": false
       }
```
**examples/run-container-step.json**

```diff
@@ -12,11 +12,11 @@
     "image": "node:14.16",
     "dockerfile": null,
     "entryPointArgs": [
-      "-c",
-      "echo \"hello world2\""
+      "-e",
+      "example-script.sh"
     ],
     "entryPoint": "bash",
-    "workingDirectory": "/__w/thboop-test2/thboop-test2",
+    "workingDirectory": "/__w/repo/repo",
     "createOptions": "--cpus 1",
     "environmentVariables": {
       "NODE_ENV": "development"
@@ -34,27 +34,27 @@
   ],
   "systemMountVolumes": [
     {
-      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work",
+      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work",
       "targetVolumePath": "/__w",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "//Users/thomas/git/runner/_layout/externals",
+      "sourceVolumePath": "/Users/thomas/git/runner/_layout/externals",
       "targetVolumePath": "/__e",
       "readOnly": true
     },
     {
-      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp",
+      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp",
       "targetVolumePath": "/__w/_temp",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_actions",
+      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_actions",
       "targetVolumePath": "/__w/_actions",
       "readOnly": false
     },
     {
-      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_tool",
+      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_tool",
       "targetVolumePath": "/__w/_tool",
       "readOnly": false
     },
@@ -64,7 +64,7 @@
       "readOnly": false
     },
     {
-      "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
+      "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
       "targetVolumePath": "/github/workflow",
       "readOnly": false
     }
```
**run-script-step example JSON**

```diff
@@ -10,8 +10,8 @@
   },
   "args": {
     "entryPointArgs": [
-      "-c",
-      "echo \"hello world\""
+      "-e",
+      "example-script.sh"
     ],
     "entryPoint": "bash",
     "environmentVariables": {
@@ -21,6 +21,6 @@
       "/foo/bar",
       "bar/foo"
     ],
-    "workingDirectory": "/__w/thboop-test2/thboop-test2"
+    "workingDirectory": "/__w/repo/repo"
   }
 }
```
**package-lock.json** (generated, 4 changes)

```diff
@@ -1,12 +1,12 @@
 {
   "name": "hooks",
-  "version": "0.1.0",
+  "version": "0.1.3",
   "lockfileVersion": 2,
   "requires": true,
   "packages": {
     "": {
       "name": "hooks",
-      "version": "0.1.0",
+      "version": "0.1.3",
       "license": "MIT",
       "devDependencies": {
         "@types/jest": "^27.5.1",
```

**package.json**

```diff
@@ -1,13 +1,13 @@
 {
   "name": "hooks",
-  "version": "0.1.0",
+  "version": "0.1.3",
   "description": "Three projects are included - k8s: a kubernetes hook implementation that spins up pods dynamically to run a job - docker: A hook implementation of the runner's docker implementation - A hook lib, which contains shared typescript definitions and utilities that the other packages consume",
   "main": "",
   "directories": {
     "doc": "docs"
   },
   "scripts": {
-    "test": "npm run test --prefix packages/docker",
+    "test": "npm run test --prefix packages/docker && npm run test --prefix packages/k8s",
     "bootstrap": "npm install --prefix packages/hooklib && npm install --prefix packages/k8s && npm install --prefix packages/docker",
     "format": "prettier --write '**/*.ts'",
     "format-check": "prettier --check '**/*.ts'",
```

**jest setup**

```diff
@@ -1 +1 @@
-jest.setTimeout(90000)
+jest.setTimeout(500000)
```
**packages/docker/src/dockerCommands/container.ts**

```diff
@@ -2,12 +2,11 @@ import * as core from '@actions/core'
 import * as fs from 'fs'
 import {
   ContainerInfo,
   JobContainerInfo,
+  Registry,
   RunContainerStepArgs,
-  ServiceContainerInfo,
-  StepContainerInfo
+  ServiceContainerInfo
 } from 'hooklib/lib'
-import path from 'path'
+import * as path from 'path'
 import { env } from 'process'
 import { v4 as uuidv4 } from 'uuid'
 import { runDockerCommand, RunDockerCommandOptions } from '../utils'
@@ -43,19 +42,15 @@ export async function createContainer(
   }

   if (args.environmentVariables) {
-    for (const [key, value] of Object.entries(args.environmentVariables)) {
+    for (const [key] of Object.entries(args.environmentVariables)) {
       dockerArgs.push('-e')
-      if (!value) {
-        dockerArgs.push(`"${key}"`)
-      } else {
-        dockerArgs.push(`"${key}=${value}"`)
-      }
+      dockerArgs.push(key)
     }
   }

   const mountVolumes = [
     ...(args.userMountVolumes || []),
-    ...((args as JobContainerInfo | StepContainerInfo).systemMountVolumes || [])
+    ...(args.systemMountVolumes || [])
   ]
   for (const mountVolume of mountVolumes) {
     dockerArgs.push(
@@ -74,7 +69,9 @@ export async function createContainer(
     }
   }

-  const id = (await runDockerCommand(dockerArgs)).trim()
+  const id = (
+    await runDockerCommand(dockerArgs, { env: args.environmentVariables })
+  ).trim()
   if (!id) {
     throw new Error('Could not read id from docker command')
   }
```
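The pattern this change switches to is worth spelling out: instead of placing `-e "KEY=value"` on the docker command line (where values, including secrets, are visible in argv and logs), it passes only `-e KEY` and supplies the value through the spawned process's environment, which the docker CLI forwards. A standalone sketch; `buildEnvArgs` is an illustrative helper, not part of the hooks' API:

```typescript
// Build `-e KEY` flags and the environment to run the docker CLI with.
// The secret values never appear in the argument vector.
export function buildEnvArgs(environmentVariables: {
  [key: string]: string
}): { dockerArgs: string[]; env: { [key: string]: string } } {
  const dockerArgs: string[] = []
  for (const [key] of Object.entries(environmentVariables)) {
    dockerArgs.push('-e', key) // value intentionally omitted from argv
  }
  // When `-e KEY` has no `=`, docker reads KEY from its own environment.
  return { dockerArgs, env: environmentVariables }
}
```

This is why the `runDockerCommand(dockerArgs, { env: ... })` calls in the diff now thread the variables through the command options.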
```diff
@@ -146,17 +143,41 @@ export async function containerBuild(
   args: RunContainerStepArgs,
   tag: string
 ): Promise<void> {
-  const context = path.dirname(`${env.GITHUB_WORKSPACE}/${args.dockerfile}`)
+  if (!args.dockerfile) {
+    throw new Error("Container build expects 'args.dockerfile' to be set")
+  }
+
   const dockerArgs: string[] = ['build']
   dockerArgs.push('-t', tag)
-  dockerArgs.push('-f', `${env.GITHUB_WORKSPACE}/${args.dockerfile}`)
-  dockerArgs.push(context)
+  // TODO: figure out build working directory
+  dockerArgs.push('-f', args.dockerfile)
+  dockerArgs.push(getBuildContext(args.dockerfile))

   await runDockerCommand(dockerArgs, {
-    workingDir: args['buildWorkingDirectory']
+    workingDir: getWorkingDir(args.dockerfile)
   })
 }

+function getBuildContext(dockerfilePath: string): string {
+  return path.dirname(dockerfilePath)
+}
+
+function getWorkingDir(dockerfilePath: string): string {
+  const workspace = env.GITHUB_WORKSPACE as string
+  let workingDir = workspace
+  if (!dockerfilePath?.includes(workspace)) {
+    // This is a container action
+    const pathSplit = dockerfilePath.split('/')
+    const actionIndex = pathSplit?.findIndex(d => d === '_actions')
+    if (actionIndex) {
+      const actionSubdirectoryDepth = 3 // handle + repo + [branch | tag]
+      pathSplit.splice(actionIndex + actionSubdirectoryDepth + 1)
+      workingDir = pathSplit.join('/')
+    }
+  }
+
+  return workingDir
+}
+
 export async function containerLogs(id: string): Promise<void> {
   const dockerArgs: string[] = ['logs']
   dockerArgs.push('--details')
```
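The `getWorkingDir` logic above is compact enough to deserve a worked example: when the Dockerfile lives under an `_actions` directory rather than the workspace, the path is truncated to `.../_actions/<handle>/<repo>/<ref>`. A standalone re-creation that takes the workspace as a parameter (so it can run without a runner environment) and uses an explicit `!== -1` index check; paths below are examples:

```typescript
// Truncate a container-action Dockerfile path to the action's root directory.
export function actionWorkingDir(
  dockerfilePath: string,
  workspace: string
): string {
  let workingDir = workspace
  if (!dockerfilePath.includes(workspace)) {
    // Dockerfile lives under _actions: keep handle + repo + [branch | tag]
    const pathSplit = dockerfilePath.split('/')
    const actionIndex = pathSplit.findIndex(d => d === '_actions')
    if (actionIndex !== -1) {
      const actionSubdirectoryDepth = 3 // handle + repo + [branch | tag]
      pathSplit.splice(actionIndex + actionSubdirectoryDepth + 1)
      workingDir = pathSplit.join('/')
    }
  }
  return workingDir
}
```

For a Dockerfile inside the workspace itself, the workspace is returned unchanged.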
```diff
@@ -171,6 +192,18 @@ export async function containerNetworkRemove(network: string): Promise<void> {
   await runDockerCommand(dockerArgs)
 }

+export async function containerNetworkPrune(): Promise<void> {
+  const dockerArgs = [
+    'network',
+    'prune',
+    '--force',
+    '--filter',
+    `label=${getRunnerLabel()}`
+  ]
+
+  await runDockerCommand(dockerArgs)
+}
+
 export async function containerPrune(): Promise<void> {
   const dockerPSArgs: string[] = [
     'ps',
@@ -238,22 +271,36 @@ export async function healthCheck({
 export async function containerPorts(id: string): Promise<string[]> {
   const dockerArgs = ['port', id]
   const portMappings = (await runDockerCommand(dockerArgs)).trim()
-  return portMappings.split('\n')
+  return portMappings.split('\n').filter(p => !!p)
 }

-export async function registryLogin(args): Promise<string> {
-  if (!args.registry) {
+export async function getContainerEnvValue(
+  id: string,
+  name: string
+): Promise<string> {
+  const dockerArgs = [
+    'inspect',
+    `--format='{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "${name}"}}{{index (split $value "=") 1}}{{end}}{{end}}'`,
+    id
+  ]
+  const value = (await runDockerCommand(dockerArgs)).trim()
+  const lines = value.split('\n')
+  return lines.length ? lines[0].replace(/^'/, '').replace(/'$/, '') : ''
+}
+
+export async function registryLogin(registry?: Registry): Promise<string> {
+  if (!registry) {
     return ''
   }
   const credentials = {
-    username: args.registry.username,
-    password: args.registry.password
+    username: registry.username,
+    password: registry.password
   }

   const configLocation = `${env.RUNNER_TEMP}/.docker_${uuidv4()}`
   fs.mkdirSync(configLocation)
   try {
-    await dockerLogin(configLocation, args.registry.serverUrl, credentials)
+    await dockerLogin(configLocation, registry.serverUrl, credentials)
   } catch (error) {
     fs.rmdirSync(configLocation, { recursive: true })
     throw error
```
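The Go template passed to `docker inspect` above walks `.Config.Env` ("KEY=VALUE" strings), compares the segment before the first `=` to the requested name, and emits the segment after it. The same lookup re-created in plain TypeScript, for illustration and offline testing; note that, like the template's `index (split $value "=") 1`, it returns only the segment between the first and second `=`:

```typescript
// Find the value of a named variable in a docker `.Config.Env`-style array.
export function envValueFromConfig(configEnv: string[], name: string): string {
  for (const entry of configEnv) {
    const parts = entry.split('=')
    // Mirror the Go template: compare segment 0, return segment 1.
    if (parts[0] === name) {
      return parts[1] ?? ''
    }
  }
  return ''
}
```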
```diff
@@ -271,7 +318,7 @@ export async function registryLogout(configLocation: string): Promise<void> {
 async function dockerLogin(
   configLocation: string,
   registry: string,
-  credentials: { username: string; password: string }
+  credentials: { username?: string; password?: string }
 ): Promise<void> {
   const credentialsArgs =
     credentials.username && credentials.password
```
```diff
@@ -307,30 +354,36 @@ export async function containerExecStep(
 ): Promise<void> {
   const dockerArgs: string[] = ['exec', '-i']
   dockerArgs.push(`--workdir=${args.workingDirectory}`)
-  for (const [key, value] of Object.entries(args['environmentVariables'])) {
+  for (const [key] of Object.entries(args['environmentVariables'])) {
     dockerArgs.push('-e')
-    if (!value) {
-      dockerArgs.push(`"${key}"`)
-    } else {
-      dockerArgs.push(`"${key}=${value}"`)
-    }
+    dockerArgs.push(key)
   }

-  // Todo figure out prepend path and update it here
-  // (we need to pass path in as -e Path={fullpath}) where {fullpath is the prepend path added to the current containers path}
+  if (args.prependPath?.length) {
+    // TODO: remove compatibility with typeof prependPath === 'string' as we bump to the next major version; the hooks will lose prependPath compat with runners 2.293.0 and older
+    const prependPath =
+      typeof args.prependPath === 'string'
+        ? args.prependPath
+        : args.prependPath.join(':')
+
+    dockerArgs.push(
+      '-e',
+      `PATH=${prependPath}:${await getContainerEnvValue(containerId, 'PATH')}`
+    )
+  }
+
   dockerArgs.push(containerId)
   dockerArgs.push(args.entryPoint)
   for (const entryPointArg of args.entryPointArgs) {
     dockerArgs.push(entryPointArg)
   }
-  await runDockerCommand(dockerArgs)
+  await runDockerCommand(dockerArgs, { env: args.environmentVariables })
 }
```
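The prependPath normalization above handles two wire formats: newer runners send an array of path segments, while runners 2.293.0 and older send a single pre-joined string. A minimal sketch of that rule; `currentPath` stands in for the value `getContainerEnvValue(containerId, 'PATH')` would return:

```typescript
// Build the PATH assignment passed to `docker exec` via `-e`.
export function buildPathVariable(
  prependPath: string | string[],
  currentPath: string
): string {
  // Arrays (new runners) are joined with ':'; strings (old runners) pass through.
  const prefix =
    typeof prependPath === 'string' ? prependPath : prependPath.join(':')
  return `PATH=${prefix}:${currentPath}`
}
```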
```diff
 export async function containerRun(
   args: RunContainerStepArgs,
   name: string,
-  network: string
+  network?: string
 ): Promise<void> {
   if (!args.image) {
     throw new Error('expected image to be set')
@@ -340,15 +393,15 @@ export async function containerRun(
   dockerArgs.push('--name', name)
   dockerArgs.push(`--workdir=${args.workingDirectory}`)
   dockerArgs.push(`--label=${getRunnerLabel()}`)
-  dockerArgs.push(`--network=${network}`)
+  if (network) {
+    dockerArgs.push(`--network=${network}`)
+  }

   if (args.createOptions) {
     dockerArgs.push(...args.createOptions.split(' '))
   }
   if (args.environmentVariables) {
-    for (const [key, value] of Object.entries(args.environmentVariables)) {
-      // Pass in this way to avoid printing secrets
-      env[key] = value ?? undefined
+    for (const [key] of Object.entries(args.environmentVariables)) {
+      dockerArgs.push('-e')
+      dockerArgs.push(key)
     }
@@ -378,7 +431,7 @@ export async function containerRun(
     }
   }

-  await runDockerCommand(dockerArgs)
+  await runDockerCommand(dockerArgs, { env: args.environmentVariables })
 }

 export async function isContainerAlpine(containerId: string): Promise<boolean> {
```
**cleanupJob hook (docker package)**

```diff
@@ -1,21 +1,9 @@
 import {
-  containerRemove,
-  containerNetworkRemove
+  containerNetworkPrune,
+  containerPrune
 } from '../dockerCommands/container'

-// eslint-disable-next-line @typescript-eslint/no-unused-vars
-export async function cleanupJob(args, state, responseFile): Promise<void> {
-  const containerIds: string[] = []
-  if (state?.container) {
-    containerIds.push(state.container)
-  }
-  if (state?.services) {
-    containerIds.push(state.services)
-  }
-  if (containerIds.length > 0) {
-    await containerRemove(containerIds)
-  }
-  if (state.network) {
-    await containerNetworkRemove(state.network)
-  }
-}
+export async function cleanupJob(): Promise<void> {
+  await containerPrune()
+  await containerNetworkPrune()
+}
```
**prepareJob hook (docker package)**

```diff
@@ -48,6 +48,7 @@ export async function prepareJob(
   } finally {
     await registryLogout(configLocation)
   }
+
   containerMetadata = await createContainer(
     container,
     generateContainerName(container.image),
@@ -78,6 +79,7 @@ export async function prepareJob(
     generateContainerName(service.image),
     networkName
   )
+
   servicesMetadata.push(response)
   await containerStart(response.id)
 }
@@ -94,7 +96,10 @@ export async function prepareJob(
   )
 }

-  const isAlpine = await isContainerAlpine(containerMetadata!.id)
+  let isAlpine = false
+  if (containerMetadata?.id) {
+    isAlpine = await isContainerAlpine(containerMetadata.id)
+  }

   if (containerMetadata?.id) {
     containerMetadata.ports = await containerPorts(containerMetadata.id)
@@ -105,7 +110,10 @@ export async function prepareJob(
   }
 }

-  const healthChecks: Promise<void>[] = [healthCheck(containerMetadata!)]
+  const healthChecks: Promise<void>[] = []
+  if (containerMetadata) {
+    healthChecks.push(healthCheck(containerMetadata))
+  }
   for (const service of servicesMetadata) {
     healthChecks.push(healthCheck(service))
   }
@@ -133,7 +141,6 @@ function generateResponseFile(
   servicesMetadata?: ContainerMetadata[],
   isAlpine = false
 ): void {
-  // todo figure out if we are alpine
   const response = {
     state: { network: networkName },
     context: {},
```
```diff
@@ -186,15 +193,15 @@ function transformDockerPortsToContextPorts(
   meta: ContainerMetadata
 ): ContextPorts {
   // ex: '80/tcp -> 0.0.0.0:80'
-  const re = /^(\d+)\/(\w+)? -> (.*):(\d+)$/
+  const re = /^(\d+)(\/\w+)? -> (.*):(\d+)$/
   const contextPorts: ContextPorts = {}

-  if (meta.ports) {
+  if (meta.ports?.length) {
     for (const port of meta.ports) {
       const matches = port.match(re)
       if (!matches) {
         throw new Error(
-          'Container ports could not match the regex: "^(\\d+)\\/(\\w+)? -> (.*):(\\d+)$"'
+          'Container ports could not match the regex: "^(\\d+)(\\/\\w+)? -> (.*):(\\d+)$"'
        )
       }
       contextPorts[matches[1]] = matches[matches.length - 1]
```
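The regex fix above makes the protocol suffix (`/tcp`) optional as a single group, so lines like `443 -> 0.0.0.0:8443` with no suffix also match. A standalone re-creation of the mapping from `docker port` output lines to a container-port/host-port record, for illustration:

```typescript
// Map `docker port` output lines to { containerPort: hostPort }.
export function portsToContext(lines: string[]): { [port: string]: string } {
  // ex: '80/tcp -> 0.0.0.0:8080'
  const re = /^(\d+)(\/\w+)? -> (.*):(\d+)$/
  const out: { [port: string]: string } = {}
  for (const line of lines) {
    const matches = line.match(re)
    if (!matches) {
      throw new Error(`unexpected docker port line: ${line}`)
    }
    // Group 1 is the container port; the last group is the host port.
    out[matches[1]] = matches[matches.length - 1]
  }
  return out
}
```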
**runContainerStep hook (docker package)**

```diff
@@ -1,13 +1,12 @@
-import { RunContainerStepArgs } from 'hooklib/lib'
-import { v4 as uuidv4 } from 'uuid'
 import {
   containerBuild,
-  registryLogin,
-  registryLogout,
   containerPull,
-  containerRun
+  containerRun,
+  registryLogin,
+  registryLogout
 } from '../dockerCommands'
+import { v4 as uuidv4 } from 'uuid'
+import * as core from '@actions/core'
+import { RunContainerStepArgs } from 'hooklib/lib'
 import { getRunnerLabel } from '../dockerCommands/constants'
@@ -15,23 +14,23 @@ export async function runContainerStep(
   state
 ): Promise<void> {
   const tag = generateBuildTag() // for docker build
-  if (!args.image) {
-    core.error('expected an image')
-  } else {
-    if (args.dockerfile) {
-      await containerBuild(args, tag)
-      args.image = tag
-    } else {
-      const configLocation = await registryLogin(args)
-      try {
-        await containerPull(args.image, configLocation)
-      } finally {
-        await registryLogout(configLocation)
-      }
-    }
+  if (args.image) {
+    const configLocation = await registryLogin(args.registry)
+    try {
+      await containerPull(args.image, configLocation)
+    } finally {
+      await registryLogout(configLocation)
+    }
+  } else if (args.dockerfile) {
+    await containerBuild(args, tag)
+    args.image = tag
+  } else {
+    throw new Error(
+      'run container step should have image or dockerfile fields specified'
+    )
+  }
   // container will get pruned at the end of the job based on the label, no need to cleanup here
-  await containerRun(args, tag.split(':')[1], state.network)
+  await containerRun(args, tag.split(':')[1], state?.network)
 }

 function generateBuildTag(): string {
```
**hook entrypoint (docker package)**

```diff
@@ -13,6 +13,7 @@ import {
   runContainerStep,
   runScriptStep
 } from './hooks'
+import { checkEnvironment } from './utils'

 async function run(): Promise<void> {
   const input = await getInputFromStdin()
@@ -23,12 +24,13 @@ async function run(): Promise<void> {
   const state = input['state']

   try {
+    checkEnvironment()
     switch (command) {
       case Command.PrepareJob:
         await prepareJob(args as PrepareJobArgs, responseFile)
         return exit(0)
       case Command.CleanupJob:
-        await cleanupJob(null, state, null)
+        await cleanupJob()
         return exit(0)
       case Command.RunScriptStep:
         await runScriptStep(args as RunScriptStepArgs, state)
```
**utils (docker package)**

```diff
@@ -2,12 +2,14 @@
 /* eslint-disable @typescript-eslint/no-require-imports */
 /* eslint-disable import/no-commonjs */
 import * as core from '@actions/core'
+import { env } from 'process'
 // Import this way otherwise typescript has errors
 const exec = require('@actions/exec')

 export interface RunDockerCommandOptions {
   workingDir?: string
   input?: Buffer
+  env?: { [key: string]: string }
 }

 export async function runDockerCommand(
@@ -42,6 +44,12 @@ export function sanitize(val: string): string {
   return newNameBuilder.join('')
 }

+export function checkEnvironment(): void {
+  if (!env.GITHUB_WORKSPACE) {
+    throw new Error('GITHUB_WORKSPACE is not set')
+  }
+}
+
 // isAlpha accepts single character and checks if
 // that character is [a-zA-Z]
 function isAlpha(val: string): boolean {
```
**cleanup-job test (docker package)**

```diff
@@ -1,62 +1,33 @@
-import { prepareJob, cleanupJob } from '../src/hooks'
-import { v4 as uuidv4 } from 'uuid'
-import * as fs from 'fs'
-import * as path from 'path'
+import { PrepareJobArgs } from 'hooklib/lib'
+import { cleanupJob, prepareJob } from '../src/hooks'
+import TestSetup from './test-setup'

-const prepareJobInputPath = path.resolve(
-  `${__dirname}/../../../examples/prepare-job.json`
-)
-
-const tmpOutputDir = `${__dirname}/${uuidv4()}`
-
-let prepareJobOutputPath: string
-let prepareJobData: any
+let testSetup: TestSetup

 jest.useRealTimers()

 describe('cleanup job', () => {
-  beforeAll(() => {
-    fs.mkdirSync(tmpOutputDir, { recursive: true })
-  })
-
-  afterAll(() => {
-    fs.rmSync(tmpOutputDir, { recursive: true })
-  })
-
   beforeEach(async () => {
-    const prepareJobRawData = fs.readFileSync(prepareJobInputPath, 'utf8')
-    prepareJobData = JSON.parse(prepareJobRawData.toString())
-
-    prepareJobOutputPath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
-    fs.writeFileSync(prepareJobOutputPath, '')
-
     testSetup = new TestSetup()
     testSetup.initialize()

-    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
-    prepareJobData.args.container.systemMountVolumes =
-      testSetup.systemMountVolumes
-    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
+    const prepareJobDefinition = testSetup.getPrepareJobDefinition()

-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+
+    await prepareJob(
+      prepareJobDefinition.args as PrepareJobArgs,
+      prepareJobOutput
+    )
   })

   afterEach(() => {
-    fs.rmSync(prepareJobOutputPath, { force: true })
     testSetup.teardown()
   })

   it('should cleanup successfully', async () => {
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
-    const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
-    await expect(
-      cleanupJob(prepareJobData.args, parsedPrepareJobOutput.state, null)
-    ).resolves.not.toThrow()
+    await expect(cleanupJob()).resolves.not.toThrow()
   })
 })
```
**packages/docker/tests/container-build-test.ts** (new file, 27 lines)

```typescript
import { containerBuild } from '../src/dockerCommands'
import TestSetup from './test-setup'

let testSetup
let runContainerStepDefinition

describe('container build', () => {
  beforeEach(() => {
    testSetup = new TestSetup()
    testSetup.initialize()

    runContainerStepDefinition = testSetup.getRunContainerStepDefinition()
  })

  afterEach(() => {
    testSetup.teardown()
  })

  it('should build container', async () => {
    runContainerStepDefinition.image = ''
    const actionPath = testSetup.initializeDockerAction()
    runContainerStepDefinition.dockerfile = `${actionPath}/Dockerfile`
    await expect(
      containerBuild(runContainerStepDefinition, 'example-test-tag')
    ).resolves.not.toThrow()
  })
})
```
@@ -4,7 +4,7 @@ jest.useRealTimers()

 describe('container pull', () => {
   it('should fail', async () => {
-    const arg = { image: 'doesNotExist' }
+    const arg = { image: 'does-not-exist' }
     await expect(containerPull(arg.image, '')).rejects.toThrow()
   })
   it('should succeed', async () => {
@@ -1,102 +1,72 @@
-import {
-  prepareJob,
-  cleanupJob,
-  runScriptStep,
-  runContainerStep
-} from '../src/hooks'
 import * as fs from 'fs'
 import * as path from 'path'
 import { v4 as uuidv4 } from 'uuid'
+import {
+  cleanupJob,
+  prepareJob,
+  runContainerStep,
+  runScriptStep
+} from '../src/hooks'
 import TestSetup from './test-setup'

-const prepareJobJson = fs.readFileSync(
-  path.resolve(__dirname + '/../../../examples/prepare-job.json'),
-  'utf8'
-)
-
-const containerStepJson = fs.readFileSync(
-  path.resolve(__dirname + '/../../../examples/run-container-step.json'),
-  'utf8'
-)
-
 const tmpOutputDir = `${__dirname}/_temp/${uuidv4()}`

-let prepareJobData: any
-let scriptStepJson: any
-let scriptStepData: any
-let containerStepData: any
-
-let prepareJobOutputFilePath: string
+let definitions

 let testSetup: TestSetup

 describe('e2e', () => {
   beforeAll(() => {
     fs.mkdirSync(tmpOutputDir, { recursive: true })
   })

   afterAll(() => {
     fs.rmSync(tmpOutputDir, { recursive: true })
   })

   beforeEach(() => {
     // init dirs
     testSetup = new TestSetup()
     testSetup.initialize()

-    prepareJobData = JSON.parse(prepareJobJson)
-    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
-    prepareJobData.args.container.systemMountVolumes =
-      testSetup.systemMountVolumes
-    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
-
-    scriptStepJson = fs.readFileSync(
-      path.resolve(__dirname + '/../../../examples/run-script-step.json'),
-      'utf8'
-    )
-    scriptStepData = JSON.parse(scriptStepJson)
-    scriptStepData.args.workingDirectory = testSetup.workingDirectory
-
-    containerStepData = JSON.parse(containerStepJson)
-    containerStepData.args.workingDirectory = testSetup.workingDirectory
-    containerStepData.args.userMountVolumes = testSetup.userMountVolumes
-    containerStepData.args.systemMountVolumes = testSetup.systemMountVolumes
-
-    prepareJobOutputFilePath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
-    fs.writeFileSync(prepareJobOutputFilePath, '')
+    definitions = {
+      prepareJob: testSetup.getPrepareJobDefinition(),
+      runScriptStep: testSetup.getRunScriptStepDefinition(),
+      runContainerStep: testSetup.getRunContainerStepDefinition()
+    }
   })

   afterEach(() => {
-    fs.rmSync(prepareJobOutputFilePath, { force: true })
     testSetup.teardown()
   })

   it('should prepare job, then run script step, then run container step then cleanup', async () => {
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+
     await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
+      prepareJob(definitions.prepareJob.args, prepareJobOutput)
     ).resolves.not.toThrow()
-    let rawState = fs.readFileSync(prepareJobOutputFilePath, 'utf-8')
+
+    let rawState = fs.readFileSync(prepareJobOutput, 'utf-8')
     let resp = JSON.parse(rawState)

     await expect(
-      runScriptStep(scriptStepData.args, resp.state)
+      runScriptStep(definitions.runScriptStep.args, resp.state)
     ).resolves.not.toThrow()

     await expect(
-      runContainerStep(containerStepData.args, resp.state)
+      runContainerStep(definitions.runContainerStep.args, resp.state)
     ).resolves.not.toThrow()
-    await expect(cleanupJob(resp, resp.state, null)).resolves.not.toThrow()
+
+    await expect(cleanupJob()).resolves.not.toThrow()
   })

   it('should prepare job, then run script step, then run container step with Dockerfile then cleanup', async () => {
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+
     await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
-    ).resolves.not.toThrow()
-    let rawState = fs.readFileSync(prepareJobOutputFilePath, 'utf-8')
-    let resp = JSON.parse(rawState)
-    await expect(
-      runScriptStep(scriptStepData.args, resp.state)
+      prepareJob(definitions.prepareJob.args, prepareJobOutput)
     ).resolves.not.toThrow()

-    const dockerfilePath = `${tmpOutputDir}/Dockerfile`
+    let rawState = fs.readFileSync(prepareJobOutput, 'utf-8')
+    let resp = JSON.parse(rawState)
+
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, resp.state)
+    ).resolves.not.toThrow()
+
+    const dockerfilePath = `${testSetup.workingDirectory}/Dockerfile`
     fs.writeFileSync(
       dockerfilePath,
       `FROM ubuntu:latest
@@ -104,14 +74,17 @@ ENV TEST=test
 ENTRYPOINT [ "tail", "-f", "/dev/null" ]
 `
     )
-    const containerStepDataCopy = JSON.parse(JSON.stringify(containerStepData))
-    process.env.GITHUB_WORKSPACE = tmpOutputDir
+
+    const containerStepDataCopy = JSON.parse(
+      JSON.stringify(definitions.runContainerStep)
+    )
+
     containerStepDataCopy.args.dockerfile = 'Dockerfile'
     containerStepDataCopy.args.context = '.'
     console.log(containerStepDataCopy.args)

     await expect(
       runContainerStep(containerStepDataCopy.args, resp.state)
     ).resolves.not.toThrow()
-    await expect(cleanupJob(resp, resp.state, null)).resolves.not.toThrow()
+
+    await expect(cleanupJob()).resolves.not.toThrow()
   })
 })
@@ -1,40 +1,18 @@
 import * as fs from 'fs'
-import { v4 as uuidv4 } from 'uuid'
 import { prepareJob } from '../src/hooks'
 import TestSetup from './test-setup'

 jest.useRealTimers()

-let prepareJobOutputPath: string
-let prepareJobData: any
-const tmpOutputDir = `${__dirname}/_temp/${uuidv4()}`
-const prepareJobInputPath = `${__dirname}/../../../examples/prepare-job.json`
+let prepareJobDefinition

 let testSetup: TestSetup

 describe('prepare job', () => {
-  beforeAll(() => {
-    fs.mkdirSync(tmpOutputDir, { recursive: true })
-  })
-
-  afterAll(() => {
-    fs.rmSync(tmpOutputDir, { recursive: true })
-  })
-
-  beforeEach(async () => {
+  beforeEach(() => {
     testSetup = new TestSetup()
     testSetup.initialize()

-    let prepareJobRawData = fs.readFileSync(prepareJobInputPath, 'utf8')
-    prepareJobData = JSON.parse(prepareJobRawData.toString())
-
-    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
-    prepareJobData.args.container.systemMountVolumes =
-      testSetup.systemMountVolumes
-    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
-
-    prepareJobOutputPath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
-    fs.writeFileSync(prepareJobOutputPath, '')
+    prepareJobDefinition = testSetup.getPrepareJobDefinition()
   })

   afterEach(() => {
@@ -42,38 +20,68 @@ describe('prepare job', () => {
   })

   it('should not throw', async () => {
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
     await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputPath)
+      prepareJob(prepareJobDefinition.args, prepareJobOutput)
     ).resolves.not.toThrow()

-    expect(() => fs.readFileSync(prepareJobOutputPath, 'utf-8')).not.toThrow()
+    expect(() => fs.readFileSync(prepareJobOutput, 'utf-8')).not.toThrow()
   })

   it('should have JSON output written to a file', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
     expect(() => JSON.parse(prepareJobOutputContent)).not.toThrow()
   })

   it('should have context written to a file', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
-    const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const parsedPrepareJobOutput = JSON.parse(
+      fs.readFileSync(prepareJobOutput, 'utf-8')
+    )
     expect(parsedPrepareJobOutput.context).toBeDefined()
   })

-  it('should have container ids written to file', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
+  it('should have isAlpine field set correctly', async () => {
+    let prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output-alpine.json'
+    )
+    const prepareJobArgsClone = JSON.parse(
+      JSON.stringify(prepareJobDefinition.args)
+    )
+    prepareJobArgsClone.container.image = 'alpine:latest'
+    await prepareJob(prepareJobArgsClone, prepareJobOutput)
+
+    let parsedPrepareJobOutput = JSON.parse(
+      fs.readFileSync(prepareJobOutput, 'utf-8')
+    )
+    expect(parsedPrepareJobOutput.isAlpine).toBe(true)
+
+    prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output-ubuntu.json'
+    )
+    prepareJobArgsClone.container.image = 'ubuntu:latest'
+    await prepareJob(prepareJobArgsClone, prepareJobOutput)
+    parsedPrepareJobOutput = JSON.parse(
+      fs.readFileSync(prepareJobOutput, 'utf-8')
+    )
+    expect(parsedPrepareJobOutput.isAlpine).toBe(false)
+  })
+
+  it('should have container ids written to file', async () => {
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
     const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)

     expect(parsedPrepareJobOutput.context.container.id).toBeDefined()

@@ -82,11 +90,11 @@ describe('prepare job', () => {
   })

   it('should have ports for context written in form [containerPort]:[hostPort]', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
     const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)

     const mainContainerPorts = parsedPrepareJobOutput.context.container.ports

@@ -100,4 +108,14 @@ describe('prepare job', () => {
     expect(redisServicePorts['80']).toBe('8080')
     expect(redisServicePorts['8080']).toBe('8088')
   })

+  it('should run prepare job without job container without exception', async () => {
+    prepareJobDefinition.args.container = null
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await expect(
+      prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    ).resolves.not.toThrow()
+  })
 })
63 packages/docker/tests/run-script-step-test.ts Normal file
@@ -0,0 +1,63 @@
+import * as fs from 'fs'
+import { PrepareJobResponse } from 'hooklib/lib'
+import { prepareJob, runScriptStep } from '../src/hooks'
+import TestSetup from './test-setup'
+
+jest.useRealTimers()
+
+let testSetup: TestSetup
+
+let definitions
+
+let prepareJobResponse: PrepareJobResponse
+
+describe('run script step', () => {
+  beforeEach(async () => {
+    testSetup = new TestSetup()
+    testSetup.initialize()
+
+    definitions = {
+      prepareJob: testSetup.getPrepareJobDefinition(),
+      runScriptStep: testSetup.getRunScriptStepDefinition()
+    }
+
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(definitions.prepareJob.args, prepareJobOutput)
+
+    prepareJobResponse = JSON.parse(fs.readFileSync(prepareJobOutput, 'utf-8'))
+  })
+
+  it('Should run script step without exceptions', async () => {
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+  })
+
+  it('Should have path variable changed in container with prepend path string', async () => {
+    definitions.runScriptStep.args.prependPath = '/some/path'
+    definitions.runScriptStep.args.entryPoint = '/bin/bash'
+    definitions.runScriptStep.args.entryPointArgs = [
+      '-c',
+      `if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath}:"* ]]; then exit 1; fi`
+    ]
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+  })
+
+  it('Should have path variable changed in container with prepend path string array', async () => {
+    definitions.runScriptStep.args.prependPath = ['/some/other/path']
+    definitions.runScriptStep.args.entryPoint = '/bin/bash'
+    definitions.runScriptStep.args.entryPointArgs = [
+      '-c',
+      `if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath.join(
+        ':'
+      )}:"* ]]; then exit 1; fi`
+    ]
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+  })
+})
@@ -1,11 +1,15 @@
 import * as fs from 'fs'
-import { v4 as uuidv4 } from 'uuid'
-import { env } from 'process'
 import { Mount } from 'hooklib'
+import { HookData } from 'hooklib/lib'
 import * as path from 'path'
+import { env } from 'process'
+import { v4 as uuidv4 } from 'uuid'

 export default class TestSetup {
   private testdir: string
   private runnerMockDir: string
+  readonly runnerOutputDir: string

   private runnerMockSubdirs = {
     work: '_work',
     externals: 'externals',
@@ -16,15 +20,16 @@ export default class TestSetup {
     githubWorkflow: '_work/_temp/_github_workflow'
   }

-  private readonly projectName = 'example'
+  private readonly projectName = 'repo'

   constructor() {
     this.testdir = `${__dirname}/_temp/${uuidv4()}`
     this.runnerMockDir = `${this.testdir}/runner/_layout`
+    this.runnerOutputDir = `${this.testdir}/outputs`
   }

   private get allTestDirectories() {
-    const resp = [this.testdir, this.runnerMockDir]
+    const resp = [this.testdir, this.runnerMockDir, this.runnerOutputDir]

     for (const [key, value] of Object.entries(this.runnerMockSubdirs)) {
       resp.push(`${this.runnerMockDir}/${value}`)
@@ -38,30 +43,27 @@ export default class TestSetup {
   }

   public initialize(): void {
-    for (const dir of this.allTestDirectories) {
-      fs.mkdirSync(dir, { recursive: true })
-    }
     env['GITHUB_WORKSPACE'] = this.workingDirectory
     env['RUNNER_NAME'] = 'test'
     env[
       'RUNNER_TEMP'
     ] = `${this.runnerMockDir}/${this.runnerMockSubdirs.workTemp}`
+
+    for (const dir of this.allTestDirectories) {
+      fs.mkdirSync(dir, { recursive: true })
+    }

     fs.copyFileSync(
       path.resolve(`${__dirname}/../../../examples/example-script.sh`),
       `${env.RUNNER_TEMP}/example-script.sh`
     )
   }

   public teardown(): void {
     fs.rmdirSync(this.testdir, { recursive: true })
   }

   public get userMountVolumes(): Mount[] {
     return [
       {
         sourceVolumePath: 'my_docker_volume',
         targetVolumePath: '/volume_mount',
         readOnly: false
       }
     ]
   }

-  public get systemMountVolumes(): Mount[] {
+  private get systemMountVolumes(): Mount[] {
     return [
       {
         sourceVolumePath: '/var/run/docker.sock',
@@ -106,7 +108,89 @@ export default class TestSetup {
     ]
   }

+  public createOutputFile(name: string): string {
+    let filePath = path.join(this.runnerOutputDir, name || `${uuidv4()}.json`)
+    fs.writeFileSync(filePath, '')
+    return filePath
+  }
+
   public get workingDirectory(): string {
     return `${this.runnerMockDir}/_work/${this.projectName}/${this.projectName}`
   }

   public get containerWorkingDirectory(): string {
     return `/__w/${this.projectName}/${this.projectName}`
   }

+  public initializeDockerAction(): string {
+    const actionPath = `${this.testdir}/_actions/example-handle/example-repo/example-branch/mock-directory`
+    fs.mkdirSync(actionPath, { recursive: true })
+    this.writeDockerfile(actionPath)
+    this.writeEntrypoint(actionPath)
+    return actionPath
+  }
+
+  private writeDockerfile(actionPath: string) {
+    const content = `FROM alpine:3.10
+COPY entrypoint.sh /entrypoint.sh
+ENTRYPOINT ["/entrypoint.sh"]`
+    fs.writeFileSync(`${actionPath}/Dockerfile`, content)
+  }
+
+  private writeEntrypoint(actionPath) {
+    const content = `#!/bin/sh -l
+echo "Hello $1"
+time=$(date)
+echo "::set-output name=time::$time"`
+    const entryPointPath = `${actionPath}/entrypoint.sh`
+    fs.writeFileSync(entryPointPath, content)
+    fs.chmodSync(entryPointPath, 0o755)
+  }
+
+  public getPrepareJobDefinition(): HookData {
+    const prepareJob = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/prepare-job.json'),
+        'utf8'
+      )
+    )
+
+    prepareJob.args.container.systemMountVolumes = this.systemMountVolumes
+    prepareJob.args.container.workingDirectory = this.workingDirectory
+    prepareJob.args.container.userMountVolumes = undefined
+    prepareJob.args.container.registry = null
+    prepareJob.args.services.forEach(s => {
+      s.registry = null
+    })
+
+    return prepareJob
+  }
+
+  public getRunScriptStepDefinition(): HookData {
+    const runScriptStep = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/run-script-step.json'),
+        'utf8'
+      )
+    )
+
+    runScriptStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
+    return runScriptStep
+  }
+
+  public getRunContainerStepDefinition(): HookData {
+    const runContainerStep = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/run-container-step.json'),
+        'utf8'
+      )
+    )
+
+    runContainerStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
+    runContainerStep.args.systemMountVolumes = this.systemMountVolumes
+    runContainerStep.args.workingDirectory = this.workingDirectory
+    runContainerStep.args.userMountVolumes = undefined
+    runContainerStep.args.registry = null
+    return runContainerStep
+  }
 }
@@ -34,6 +34,7 @@ export interface ContainerInfo {
   createOptions?: string
   environmentVariables?: { [key: string]: string }
   userMountVolumes?: Mount[]
   systemMountVolumes?: Mount[]
+  registry?: Registry
   portMappings?: string[]
 }

@@ -73,14 +74,6 @@ export enum Protocol {
   UDP = 'udp'
 }

-export enum PodPhase {
-  PENDING = 'Pending',
-  RUNNING = 'Running',
-  SUCCEEDED = 'Succeded',
-  FAILED = 'Failed',
-  UNKNOWN = 'Unknown'
-}
-
 export interface PrepareJobResponse {
   state?: object
   context?: ContainerContext
@@ -1,4 +1,3 @@
-import * as core from '@actions/core'
 import * as events from 'events'
 import * as fs from 'fs'
 import * as os from 'os'

@@ -13,7 +12,6 @@ export async function getInputFromStdin(): Promise<HookData> {
   })

   rl.on('line', line => {
-    core.debug(`Line from STDIN: ${line}`)
     input = line
   })
   await events.default.once(rl, 'close')
@@ -6,7 +6,40 @@ This implementation provides a way to dynamically spin up jobs to run container
 ## Pre-requisites
 Some things are expected to be set when using these hooks
 - The runner itself should be running in a pod, with a service account with the following permissions
-- The `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER=true` should be set to true
+```
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  namespace: default
+  name: runner-role
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["pods/exec"]
+    verbs: ["get", "create"]
+  - apiGroups: [""]
+    resources: ["pods/log"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: ["batch"]
+    resources: ["jobs"]
+    verbs: ["get", "list", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "list", "create", "delete"]
+```
+- The `ACTIONS_RUNNER_POD_NAME` env should be set to the name of the pod
+- The `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` env should be set to true to prevent the runner from running any jobs outside of a container
+- The runner pod should map a persistent volume claim into the `_work` directory
-- The `ACTIONS_RUNNER_CLAIM_NAME` should be set to the persistent volume claim that contains the runner's working directory
+- The `ACTIONS_RUNNER_CLAIM_NAME` env should be set to the persistent volume claim that contains the runner's working directory, otherwise it defaults to `${ACTIONS_RUNNER_POD_NAME}-work`
+- Some actions runner envs are expected to be set. These are set automatically by the runner.
+  - `RUNNER_WORKSPACE` is expected to be set to the workspace of the runner
+  - `GITHUB_WORKSPACE` is expected to be set to the workspace of the job
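The environment contract the pre-requisites describe can be sanity-checked before the runner starts. The sketch below is illustrative and not part of the hooks themselves; the variable names come from the list above, and the claim-name fallback mirrors the documented `${ACTIONS_RUNNER_POD_NAME}-work` default:

```typescript
// Minimal sketch: validate the env contract described in the pre-requisites.
function validateRunnerEnv(env: Record<string, string | undefined>): string[] {
  const problems: string[] = []
  if (!env.ACTIONS_RUNNER_POD_NAME) {
    problems.push('ACTIONS_RUNNER_POD_NAME must be set to the runner pod name')
  }
  if (env.ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER !== 'true') {
    problems.push('ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER should be "true"')
  }
  return problems
}

// Resolve the volume claim the hooks will use, with the documented default.
function resolveClaimName(env: Record<string, string | undefined>): string {
  return env.ACTIONS_RUNNER_CLAIM_NAME ?? `${env.ACTIONS_RUNNER_POD_NAME}-work`
}

console.log(validateRunnerEnv({ ACTIONS_RUNNER_POD_NAME: 'runner-abc' }))
console.log(resolveClaimName({ ACTIONS_RUNNER_POD_NAME: 'runner-abc' }))
```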
## Limitations
- A [job container](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) is required for all jobs
- Building container actions from a Dockerfile is not supported at this time
- Container actions will not have access to the services network or job container network
- Docker [create options](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontaineroptions) are not supported
@@ -1 +1 @@
-jest.setTimeout(90000)
+jest.setTimeout(500000)
@@ -1,5 +1,5 @@
-import { podPrune } from '../k8s'
+import { prunePods, pruneSecrets } from '../k8s'

 export async function cleanupJob(): Promise<void> {
-  await podPrune()
+  await Promise.all([prunePods(), pruneSecrets()])
 }
@@ -20,28 +20,33 @@ export function getJobPodName(): string {
 export function getStepPodName(): string {
   return `${getRunnerPodName().substring(
     0,
-    MAX_POD_NAME_LENGTH - ('-step'.length + STEP_POD_NAME_SUFFIX_LENGTH)
+    MAX_POD_NAME_LENGTH - ('-step-'.length + STEP_POD_NAME_SUFFIX_LENGTH)
   )}-step-${uuidv4().substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
 }

 export function getVolumeClaimName(): string {
   const name = process.env.ACTIONS_RUNNER_CLAIM_NAME
   if (!name) {
-    throw new Error(
-      "'ACTIONS_RUNNER_CLAIM_NAME' is required, please contact your self hosted runner administrator"
-    )
+    return `${getRunnerPodName()}-work`
   }
   return name
 }

-const MAX_POD_NAME_LENGTH = 63
-const STEP_POD_NAME_SUFFIX_LENGTH = 8
+export function getSecretName(): string {
+  return `${getRunnerPodName().substring(
+    0,
+    MAX_POD_NAME_LENGTH - ('-secret-'.length + STEP_POD_NAME_SUFFIX_LENGTH)
+  )}-secret-${uuidv4().substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
+}
+
+export const MAX_POD_NAME_LENGTH = 63
+export const STEP_POD_NAME_SUFFIX_LENGTH = 8
 export const JOB_CONTAINER_NAME = 'job'

 export class RunnerInstanceLabel {
-  runnerhook: string
+  private podName: string
   constructor() {
-    this.runnerhook = process.env.ACTIONS_RUNNER_POD_NAME as string
+    this.podName = getRunnerPodName()
   }

   get key(): string {
@@ -49,10 +54,10 @@ export class RunnerInstanceLabel {
   }

   get value(): string {
-    return this.runnerhook
+    return this.podName
   }

   toString(): string {
-    return `runner-pod=${this.runnerhook}`
+    return `runner-pod=${this.podName}`
   }
 }
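The truncation scheme used by `getStepPodName` and `getSecretName` above can be sketched standalone: the base runner pod name is cut so that, after appending a `-step-` or `-secret-` infix plus an 8-character random suffix, the result stays within Kubernetes' 63-character name limit. The names below are illustrative only:

```typescript
// Standalone sketch of the pod/secret name truncation scheme shown above.
const MAX_POD_NAME_LENGTH = 63
const SUFFIX_LENGTH = 8

function scopedName(base: string, infix: string, suffix: string): string {
  // Budget left for the base name once the infix and suffix are accounted for.
  const keep = MAX_POD_NAME_LENGTH - (infix.length + SUFFIX_LENGTH)
  return `${base.substring(0, keep)}${infix}${suffix.substring(0, SUFFIX_LENGTH)}`
}

const name = scopedName('x'.repeat(80), '-step-', 'abcdef123456')
console.log(name.length) // never exceeds 63
```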
@@ -1,27 +1,20 @@
 import * as core from '@actions/core'
 import * as io from '@actions/io'
 import * as k8s from '@kubernetes/client-node'
-import {
-  ContextPorts,
-  PodPhase,
-  prepareJobArgs,
-  writeToResponseFile
-} from 'hooklib'
+import { ContextPorts, prepareJobArgs, writeToResponseFile } from 'hooklib'
 import path from 'path'
 import {
   containerPorts,
   createPod,
-  isAuthPermissionsOK,
   isPodContainerAlpine,
-  namespace,
-  podPrune,
-  requiredPermissions,
+  prunePods,
   waitForPodPhases
 } from '../k8s'
 import {
   containerVolumes,
   DEFAULT_CONTAINER_ENTRY_POINT,
-  DEFAULT_CONTAINER_ENTRY_POINT_ARGS
+  DEFAULT_CONTAINER_ENTRY_POINT_ARGS,
+  PodPhase
 } from '../k8s/utils'
 import { JOB_CONTAINER_NAME } from './constants'

@@ -29,25 +22,22 @@ export async function prepareJob(
   args: prepareJobArgs,
   responseFile
 ): Promise<void> {
-  await podPrune()
-  if (!(await isAuthPermissionsOK())) {
-    throw new Error(
-      `The Service account needs the following permissions ${JSON.stringify(
-        requiredPermissions
-      )} on the pod resource in the '${namespace}' namespace. Please contact your self hosted runner administrator.`
-    )
+  if (!args.container) {
+    throw new Error('Job Container is required.')
   }
+
+  await prunePods()
   await copyExternalsToRoot()
+
   let container: k8s.V1Container | undefined = undefined
   if (args.container?.image) {
-    core.info(`Using image '${args.container.image}' for job image`)
+    core.debug(`Using image '${args.container.image}' for job image`)
     container = createPodSpec(args.container, JOB_CONTAINER_NAME, true)
   }

   let services: k8s.V1Container[] = []
   if (args.services?.length) {
     services = args.services.map(service => {
-      core.info(`Adding service '${service.image}' to pod definition`)
+      core.debug(`Adding service '${service.image}' to pod definition`)
       return createPodSpec(service, service.image.split(':')[0])
     })
   }

@@ -56,15 +46,18 @@ export async function prepareJob(
   }
   let createdPod: k8s.V1Pod | undefined = undefined
   try {
-    createdPod = await createPod(container, services, args.registry)
+    createdPod = await createPod(container, services, args.container.registry)
   } catch (err) {
-    await podPrune()
+    await prunePods()
     throw new Error(`failed to create job pod: ${err}`)
   }

+  if (!createdPod?.metadata?.name) {
+    throw new Error('created pod should have metadata.name')
+  }
   core.debug(
     `Job pod created, waiting for it to come online ${createdPod?.metadata?.name}`
   )

   try {
     await waitForPodPhases(
@@ -73,11 +66,11 @@ export async function prepareJob(
       new Set([PodPhase.PENDING])
     )
   } catch (err) {
-    await podPrune()
+    await prunePods()
     throw new Error(`Pod failed to come online with error: ${err}`)
   }

-  core.info('Pod is ready for traffic')
+  core.debug('Job pod is ready for traffic')

   let isAlpine = false
   try {
@@ -88,7 +81,7 @@ export async function prepareJob(
   } catch (err) {
     throw new Error(`Failed to determine if the pod is alpine: ${err}`)
   }
-
+  core.debug(`Setting isAlpine to ${isAlpine}`)
   generateResponseFile(responseFile, createdPod, isAlpine)
 }

@@ -97,8 +90,13 @@ function generateResponseFile(
   appPod: k8s.V1Pod,
   isAlpine
 ): void {
+  if (!appPod.metadata?.name) {
+    throw new Error('app pod must have metadata.name specified')
+  }
   const response = {
-    state: {},
+    state: {
+      jobPod: appPod.metadata.name
+    },
     context: {},
     isAlpine
   }

@@ -160,14 +158,11 @@ function createPodSpec(
   name: string,
   jobContainer = false
 ): k8s.V1Container {
-  core.info(JSON.stringify(container))
-  if (!container.entryPointArgs) {
-    container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
-  }
+  container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
   if (!container.entryPoint) {
     container.entryPoint = DEFAULT_CONTAINER_ENTRY_POINT
     container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
   }

   const podContainer = {
     name,
     image: container.image,
@@ -175,15 +170,10 @@ function createPodSpec(
     args: container.entryPointArgs,
     ports: containerPorts(container)
   } as k8s.V1Container

   if (container.workingDirectory) {
     podContainer.workingDir = container.workingDirectory
   }

-  if (container.createOptions) {
-    podContainer.resources = getResourceRequirements(container.createOptions)
-  }
-
   podContainer.env = []
   for (const [key, value] of Object.entries(
     container['environmentVariables']
@@ -200,62 +190,3 @@ function createPodSpec(

   return podContainer
 }

-function getResourceRequirements(
-  createOptions: string
-): k8s.V1ResourceRequirements {
-  const rr = new k8s.V1ResourceRequirements()
-  rr.limits = {}
-  rr.requests = {}
-
-  const options = parseOptions(createOptions)
-  for (const [key, value] of Object.entries(options)) {
-    switch (key) {
-      case '--cpus':
-        rr.requests.cpu = value
-        break
-      case '--memory':
-      case '-m':
-        rr.limits.memory = value
-        break
-      default:
-        core.warning(
-          `Container option ${key} is not supported. Supported options are ['--cpus', '--memory', '-m']`
-        )
-    }
-  }
-
-  return rr
-}
-
-function parseOptions(options: string): { [option: string]: string } {
-  const rv: { [option: string]: string } = {}
-
-  const spaceSplit = options.split(' ')
-  for (let i = 0; i < spaceSplit.length; i++) {
-    if (!spaceSplit[i].startsWith('-')) {
-      throw new Error(`Options specified in wrong format: ${options}`)
-    }
-
-    const optSplit = spaceSplit[i].split('=')
-    const optName = optSplit[0]
-    let optValue = ''
-    switch (optSplit.length) {
-      case 1:
-        if (spaceSplit.length <= i + 1) {
-          throw new Error(`Option ${optName} must have a value`)
-        }
-        optValue = spaceSplit[++i]
-        break
-      case 2:
-        optValue = optSplit[1]
-        break
-      default:
-        throw new Error(`failed to parse option ${spaceSplit[i]}`)
|
||||
}
|
||||
|
||||
rv[optName] = optValue
|
||||
}
|
||||
|
||||
return rv
|
||||
}
|
||||
|
||||
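The `createOptions` handling above parses docker-style flags in both `--opt=value` and `--opt value` forms. A minimal standalone sketch of the same parsing logic (the surrounding k8s client types are left out, so this is illustrative only):

```typescript
// Standalone sketch of the docker-style `createOptions` parsing shown in
// the hunk above: supports `--opt=value` and `--opt value` token forms.
function parseOptions(options: string): { [option: string]: string } {
  const rv: { [option: string]: string } = {}
  const spaceSplit = options.split(' ')
  for (let i = 0; i < spaceSplit.length; i++) {
    if (!spaceSplit[i].startsWith('-')) {
      throw new Error(`Options specified in wrong format: ${options}`)
    }
    const optSplit = spaceSplit[i].split('=')
    const optName = optSplit[0]
    let optValue = ''
    switch (optSplit.length) {
      case 1:
        // `--opt value` form: the value is the next token
        if (spaceSplit.length <= i + 1) {
          throw new Error(`Option ${optName} must have a value`)
        }
        optValue = spaceSplit[++i]
        break
      case 2:
        // `--opt=value` form
        optValue = optSplit[1]
        break
      default:
        throw new Error(`failed to parse option ${spaceSplit[i]}`)
    }
    rv[optName] = optValue
  }
  return rv
}

// Both flag forms parse to the same map:
const parsed = parseOptions('--cpus=2 --memory 1Gi')
console.log(parsed) // { '--cpus': '2', '--memory': '1Gi' }
```

With this shape, `--cpus` maps onto the pod's CPU request and `--memory`/`-m` onto its memory limit, as `getResourceRequirements` does above.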
@@ -1,22 +1,39 @@
import * as k8s from '@kubernetes/client-node'
import * as core from '@actions/core'
import { PodPhase } from 'hooklib'
import * as k8s from '@kubernetes/client-node'
import { RunContainerStepArgs } from 'hooklib'
import {
createJob,
createSecretForEnvs,
getContainerJobPodName,
getPodLogs,
getPodStatus,
waitForJobToComplete,
waitForPodPhases
} from '../k8s'
import {
containerVolumes,
DEFAULT_CONTAINER_ENTRY_POINT,
DEFAULT_CONTAINER_ENTRY_POINT_ARGS,
PodPhase,
writeEntryPointScript
} from '../k8s/utils'
import { JOB_CONTAINER_NAME } from './constants'
import { containerVolumes } from '../k8s/utils'

export async function runContainerStep(stepContainer): Promise<number> {
export async function runContainerStep(
stepContainer: RunContainerStepArgs
): Promise<number> {
if (stepContainer.dockerfile) {
throw new Error('Building container actions is not currently supported')
}
const container = createPodSpec(stepContainer)

let secretName: string | undefined = undefined
if (stepContainer.environmentVariables) {
secretName = await createSecretForEnvs(stepContainer.environmentVariables)
}

core.debug(`Created secret ${secretName} for container job envs`)
const container = createPodSpec(stepContainer, secretName)

const job = await createJob(container)
if (!job.metadata?.name) {
throw new Error(
@@ -25,45 +42,69 @@ export async function runContainerStep(stepContainer): Promise<number> {
)} to have correctly set the metadata.name`
)
}
core.debug(`Job created, waiting for pod to start: ${job.metadata?.name}`)

const podName = await getContainerJobPodName(job.metadata.name)
await waitForPodPhases(
podName,
new Set([PodPhase.COMPLETED, PodPhase.RUNNING]),
new Set([PodPhase.PENDING])
new Set([PodPhase.COMPLETED, PodPhase.RUNNING, PodPhase.SUCCEEDED]),
new Set([PodPhase.PENDING, PodPhase.UNKNOWN])
)
core.debug('Container step is running or complete, pulling logs')

await getPodLogs(podName, JOB_CONTAINER_NAME)

core.debug('Waiting for container job to complete')
await waitForJobToComplete(job.metadata.name)
// pod has failed so pull the status code from the container
const status = await getPodStatus(podName)
if (!status?.containerStatuses?.length) {
core.warning(`Can't determine container status`)
if (status?.phase === 'Succeeded') {
return 0
}

if (!status?.containerStatuses?.length) {
core.error(
`Can't determine container status from response: ${JSON.stringify(
status
)}`
)
return 1
}
const exitCode =
status.containerStatuses[status.containerStatuses.length - 1].state
?.terminated?.exitCode
return Number(exitCode) || 0
return Number(exitCode) || 1
}

function createPodSpec(container): k8s.V1Container {
function createPodSpec(
container: RunContainerStepArgs,
secretName?: string
): k8s.V1Container {
const podContainer = new k8s.V1Container()
podContainer.name = JOB_CONTAINER_NAME
podContainer.image = container.image
if (container.entryPoint) {
podContainer.command = [container.entryPoint, ...container.entryPointArgs]
}

podContainer.env = []
for (const [key, value] of Object.entries(
container['environmentVariables']
)) {
if (value && key !== 'HOME') {
podContainer.env.push({ name: key, value: value as string })
}
const { entryPoint, entryPointArgs } = container
container.entryPoint = 'sh'

const { containerPath } = writeEntryPointScript(
container.workingDirectory,
entryPoint || DEFAULT_CONTAINER_ENTRY_POINT,
entryPoint ? entryPointArgs || [] : DEFAULT_CONTAINER_ENTRY_POINT_ARGS
)
container.entryPointArgs = ['-e', containerPath]
podContainer.command = [container.entryPoint, ...container.entryPointArgs]

if (secretName) {
podContainer.envFrom = [
{
secretRef: {
name: secretName,
optional: false
}
}
]
}
podContainer.volumeMounts = containerVolumes()
podContainer.volumeMounts = containerVolumes(undefined, false, true)

return podContainer
}
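The rewritten tail of `runContainerStep` derives the step's exit code from the last container status and now defaults to failure when no code can be determined. A standalone sketch of that logic, with a plain object standing in for the `k8s.V1PodStatus` returned by `getPodStatus`:

```typescript
// Sketch of the exit-code extraction in the hunk above. PodStatusLike is a
// stand-in for k8s.V1PodStatus; only the fields the logic reads are typed.
interface ContainerState {
  terminated?: { exitCode?: number }
}
interface PodStatusLike {
  phase?: string
  containerStatuses?: { state?: ContainerState }[]
}

function stepExitCode(status?: PodStatusLike): number {
  // A pod that reached 'Succeeded' is a passing step regardless of statuses
  if (status?.phase === 'Succeeded') {
    return 0
  }
  const statuses = status?.containerStatuses
  if (!statuses?.length) {
    return 1 // cannot determine the container status: treat as failure
  }
  const exitCode = statuses[statuses.length - 1].state?.terminated?.exitCode
  // Number(undefined) is NaN, which is falsy, so a missing code becomes 1
  return Number(exitCode) || 1
}

console.log(stepExitCode({ phase: 'Succeeded' })) // 0
console.log(
  stepExitCode({
    phase: 'Failed',
    containerStatuses: [{ state: { terminated: { exitCode: 42 } } }]
  })
) // 42
```

Note the `|| 1` fallback means an explicit exit code of 0 on a non-succeeded pod is also reported as failure, which is why the `Succeeded` phase is checked first.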
@@ -1,6 +1,8 @@
/* eslint-disable @typescript-eslint/no-unused-vars */
import * as fs from 'fs'
import { RunScriptStepArgs } from 'hooklib'
import { execPodStep } from '../k8s'
import { writeEntryPointScript } from '../k8s/utils'
import { JOB_CONTAINER_NAME } from './constants'

export async function runScriptStep(
@@ -8,31 +10,26 @@ export async function runScriptStep(
state,
responseFile
): Promise<void> {
const cb = new CommandsBuilder(
args.entryPoint,
args.entryPointArgs,
args.environmentVariables
const { entryPoint, entryPointArgs, environmentVariables } = args
const { containerPath, runnerPath } = writeEntryPointScript(
args.workingDirectory,
entryPoint,
entryPointArgs,
args.prependPath,
environmentVariables
)
await execPodStep(cb.command, state.jobPod, JOB_CONTAINER_NAME)
}

class CommandsBuilder {
constructor(
private entryPoint: string,
private entryPointArgs: string[],
private environmentVariables: { [key: string]: string }
) {}

get command(): string[] {
const envCommands: string[] = []
if (
this.environmentVariables &&
Object.entries(this.environmentVariables).length
) {
for (const [key, value] of Object.entries(this.environmentVariables)) {
envCommands.push(`${key}=${value}`)
}
}
return ['env', ...envCommands, this.entryPoint, ...this.entryPointArgs]
args.entryPoint = 'sh'
args.entryPointArgs = ['-e', containerPath]
try {
await execPodStep(
[args.entryPoint, ...args.entryPointArgs],
state.jobPod,
JOB_CONTAINER_NAME
)
} catch (err) {
throw new Error(`failed to run script step: ${err}`)
} finally {
fs.rmSync(runnerPath)
}
}
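The change above replaces the in-line `env KEY=VAL … entrypoint args` command with a generated script executed via `sh -e`, so failures in any line abort the step. A small sketch of the command array the rewritten step hands to `execPodStep` (`containerPath` is a stand-in for the path returned by `writeEntryPointScript`):

```typescript
// Sketch of the exec command built in the hunk above: the step no longer
// prefixes `env` variables on the command line, it runs a generated script
// with `sh -e` instead.
function buildExecCommand(containerPath: string): string[] {
  const entryPoint = 'sh'
  const entryPointArgs = ['-e', containerPath]
  return [entryPoint, ...entryPointArgs]
}

console.log(buildExecCommand('/__w/_temp/abc.sh')) // [ 'sh', '-e', '/__w/_temp/abc.sh' ]
```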
@@ -1,3 +1,4 @@
import * as core from '@actions/core'
import { Command, getInputFromStdin, prepareJobArgs } from 'hooklib'
import {
cleanupJob,
@@ -5,6 +6,7 @@ import {
runContainerStep,
runScriptStep
} from './hooks'
import { isAuthPermissionsOK, namespace, requiredPermissions } from './k8s'

async function run(): Promise<void> {
const input = await getInputFromStdin()
@@ -16,6 +18,13 @@ async function run(): Promise<void> {

let exitCode = 0
try {
if (!(await isAuthPermissionsOK())) {
throw new Error(
`The Service account needs the following permissions ${JSON.stringify(
requiredPermissions
)} on the pod resource in the '${namespace()}' namespace. Please contact your self hosted runner administrator.`
)
}
switch (command) {
case Command.PrepareJob:
await prepareJob(args as prepareJobArgs, responseFile)
@@ -34,8 +43,7 @@ async function run(): Promise<void> {
throw new Error(`Command not recognized: ${command}`)
}
} catch (error) {
// eslint-disable-next-line no-console
console.log(error)
core.error(error as Error)
exitCode = 1
}
process.exitCode = exitCode
@@ -1,13 +1,16 @@
import * as core from '@actions/core'
import * as k8s from '@kubernetes/client-node'
import { ContainerInfo, PodPhase, Registry } from 'hooklib'
import { ContainerInfo, Registry } from 'hooklib'
import * as stream from 'stream'
import { v4 as uuidv4 } from 'uuid'
import {
getJobPodName,
getRunnerPodName,
getSecretName,
getStepPodName,
getVolumeClaimName,
RunnerInstanceLabel
} from '../hooks/constants'
import { PodPhase } from './utils'

const kc = new k8s.KubeConfig()

@@ -43,16 +46,15 @@ export const requiredPermissions = [
verbs: ['get', 'list', 'create', 'delete'],
resource: 'jobs',
subresource: ''
},
{
group: '',
verbs: ['create', 'delete', 'get', 'list'],
resource: 'secrets',
subresource: ''
}
]

const secretPermission = {
group: '',
verbs: ['get', 'list', 'create', 'delete'],
resource: 'secrets',
subresource: ''
}

export async function createPod(
jobContainer?: k8s.V1Container,
services?: k8s.V1Container[],
@@ -92,19 +94,13 @@ export async function createPod(
]

if (registry) {
if (await isSecretsAuthOK()) {
const secret = await createDockerSecret(registry)
if (!secret?.metadata?.name) {
throw new Error(`created secret does not have secret.metadata.name`)
}
const secretReference = new k8s.V1LocalObjectReference()
secretReference.name = secret.metadata.name
appPod.spec.imagePullSecrets = [secretReference]
} else {
throw new Error(
`Pulls from private registry is not allowed. Please contact your self hosted runner administrator. Service account needs permissions for ${secretPermission.verbs} in resource ${secretPermission.resource}`
)
const secret = await createDockerSecret(registry)
if (!secret?.metadata?.name) {
throw new Error(`created secret does not have secret.metadata.name`)
}
const secretReference = new k8s.V1LocalObjectReference()
secretReference.name = secret.metadata.name
appPod.spec.imagePullSecrets = [secretReference]
}

const { body } = await k8sApi.createNamespacedPod(namespace(), appPod)
@@ -114,13 +110,14 @@ export async function createPod(
export async function createJob(
container: k8s.V1Container
): Promise<k8s.V1Job> {
const job = new k8s.V1Job()
const runnerInstanceLabel = new RunnerInstanceLabel()

const job = new k8s.V1Job()
job.apiVersion = 'batch/v1'
job.kind = 'Job'
job.metadata = new k8s.V1ObjectMeta()
job.metadata.name = getJobPodName()
job.metadata.labels = { 'runner-pod': getRunnerPodName() }
job.metadata.name = getStepPodName()
job.metadata.labels = { [runnerInstanceLabel.key]: runnerInstanceLabel.value }

job.spec = new k8s.V1JobSpec()
job.spec.ttlSecondsAfterFinished = 300
@@ -132,7 +129,7 @@ export async function createJob(
job.spec.template.spec.restartPolicy = 'Never'
job.spec.template.spec.nodeName = await getCurrentNodeName()

const claimName = `${runnerName()}-work`
const claimName = getVolumeClaimName()
job.spec.template.spec.volumes = [
{
name: 'work',
@@ -173,7 +170,13 @@ export async function getContainerJobPodName(jobName: string): Promise<string> {
}

export async function deletePod(podName: string): Promise<void> {
await k8sApi.deleteNamespacedPod(podName, namespace())
await k8sApi.deleteNamespacedPod(
podName,
namespace(),
undefined,
undefined,
0
)
}

export async function execPodStep(
@@ -182,36 +185,32 @@ export async function execPodStep(
containerName: string,
stdin?: stream.Readable
): Promise<void> {
// TODO, we need to add the path from `prependPath` to the PATH variable. How can we do that? Maybe another exec before running this one?
// Maybe something like, get the current path, if these entries aren't in it, add them, then set the current path to that?

// TODO: how do we set working directory? There doesn't seem to be an easy way to do it. Should we cd then execute our bash script?
const exec = new k8s.Exec(kc)
return new Promise(async function (resolve, reject) {
try {
await exec.exec(
namespace(),
podName,
containerName,
command,
process.stdout,
process.stderr,
stdin ?? null,
false /* tty */,
resp => {
// kube.exec returns an error if exit code is not 0, but we can't actually get the exit code
if (resp.status === 'Success') {
resolve()
} else {
reject(
JSON.stringify({ message: resp?.message, details: resp?.details })
)
}
await new Promise(async function (resolve, reject) {
await exec.exec(
namespace(),
podName,
containerName,
command,
process.stdout,
process.stderr,
stdin ?? null,
false /* tty */,
resp => {
// kube.exec returns an error if exit code is not 0, but we can't actually get the exit code
if (resp.status === 'Success') {
resolve(resp.code)
} else {
core.debug(
JSON.stringify({
message: resp?.message,
details: resp?.details
})
)
reject(resp?.message)
}
)
} catch (error) {
reject(error)
}
}
)
})
}
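As the comment in the hunk above notes, the Kubernetes exec API does not expose the container's exit code directly; the hook decides success or failure from the `V1Status` object delivered to its callback. A minimal synchronous sketch of that decision (`StatusLike` is a stand-in for `k8s.V1Status`):

```typescript
// Sketch of the exec status handling in the hunk above: resolve on
// 'Success', otherwise surface the status message as the failure reason.
interface StatusLike {
  status?: string
  message?: string
  details?: unknown
}

function execOutcome(resp: StatusLike): { ok: boolean; error?: string } {
  if (resp.status === 'Success') {
    return { ok: true }
  }
  // the actual hook logs message/details via core.debug before rejecting
  return { ok: false, error: resp?.message }
}

console.log(execOutcome({ status: 'Success' })) // { ok: true }
```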
@@ -234,46 +233,100 @@ export async function createDockerSecret(
): Promise<k8s.V1Secret> {
const authContent = {
auths: {
[registry.serverUrl]: {
[registry.serverUrl || 'https://index.docker.io/v1/']: {
username: registry.username,
password: registry.password,
auth: Buffer.from(
`${registry.username}:${registry.password}`,
auth: Buffer.from(`${registry.username}:${registry.password}`).toString(
'base64'
).toString()
)
}
}
}
const secretName = generateSecretName()

const runnerInstanceLabel = new RunnerInstanceLabel()

const secretName = getSecretName()
const secret = new k8s.V1Secret()
secret.immutable = true
secret.apiVersion = 'v1'
secret.metadata = new k8s.V1ObjectMeta()
secret.metadata.name = secretName
secret.metadata.namespace = namespace()
secret.metadata.labels = {
[runnerInstanceLabel.key]: runnerInstanceLabel.value
}
secret.type = 'kubernetes.io/dockerconfigjson'
secret.kind = 'Secret'
secret.data = {
'.dockerconfigjson': Buffer.from(
JSON.stringify(authContent),
'.dockerconfigjson': Buffer.from(JSON.stringify(authContent)).toString(
'base64'
).toString()
)
}

const { body } = await k8sApi.createNamespacedSecret(namespace(), secret)
return body
}

export async function createSecretForEnvs(envs: {
[key: string]: string
}): Promise<string> {
const runnerInstanceLabel = new RunnerInstanceLabel()

const secret = new k8s.V1Secret()
const secretName = getSecretName()
secret.immutable = true
secret.apiVersion = 'v1'
secret.metadata = new k8s.V1ObjectMeta()
secret.metadata.name = secretName

secret.metadata.labels = {
[runnerInstanceLabel.key]: runnerInstanceLabel.value
}
secret.kind = 'Secret'
secret.data = {}
for (const [key, value] of Object.entries(envs)) {
secret.data[key] = Buffer.from(value).toString('base64')
}

await k8sApi.createNamespacedSecret(namespace(), secret)
return secretName
}

export async function deleteSecret(secretName: string): Promise<void> {
await k8sApi.deleteNamespacedSecret(secretName, namespace())
}

export async function pruneSecrets(): Promise<void> {
const secretList = await k8sApi.listNamespacedSecret(
namespace(),
undefined,
undefined,
undefined,
undefined,
new RunnerInstanceLabel().toString()
)
if (!secretList.body.items.length) {
return
}

await Promise.all(
secretList.body.items.map(
secret => secret.metadata?.name && deleteSecret(secret.metadata.name)
)
)
}
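The `createDockerSecret` change above fixes the `Buffer` usage: the `auth` field is `user:password` base64-encoded, and the whole JSON document is base64-encoded again for the Secret's `.dockerconfigjson` data entry. A standalone sketch with illustrative credentials (the registry values below are made up):

```typescript
// Sketch of the .dockerconfigjson payload built in createDockerSecret above.
// An undefined serverUrl falls back to Docker Hub's legacy auth endpoint.
function buildDockerConfigJson(
  serverUrl: string | undefined,
  username: string,
  password: string
): string {
  const authContent = {
    auths: {
      [serverUrl || 'https://index.docker.io/v1/']: {
        username,
        password,
        // `auth` is the base64 of "user:password"
        auth: Buffer.from(`${username}:${password}`).toString('base64')
      }
    }
  }
  // Secret data values must themselves be base64-encoded
  return Buffer.from(JSON.stringify(authContent)).toString('base64')
}

const encoded = buildDockerConfigJson(undefined, 'octocat', 's3cret')
const decoded = JSON.parse(Buffer.from(encoded, 'base64').toString())
console.log(Object.keys(decoded.auths)) // [ 'https://index.docker.io/v1/' ]
```

This is the standard `kubernetes.io/dockerconfigjson` Secret format that `imagePullSecrets` consumes.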
export async function waitForPodPhases(
podName: string,
awaitingPhases: Set<PodPhase>,
backOffPhases: Set<PodPhase>,
maxTimeSeconds = 45 * 60 // 45 min
maxTimeSeconds = 10 * 60 // 10 min
): Promise<void> {
const backOffManager = new BackOffManager(maxTimeSeconds)
let phase: PodPhase = PodPhase.UNKNOWN
try {
while (true) {
phase = await getPodPhase(podName)

if (awaitingPhases.has(phase)) {
return
}
@@ -304,7 +357,7 @@ async function getPodPhase(podName: string): Promise<PodPhase> {
if (!pod.status?.phase || !podPhaseLookup.has(pod.status.phase)) {
return PodPhase.UNKNOWN
}
return pod.status?.phase
return pod.status?.phase as PodPhase
}

async function isJobSucceeded(jobName: string): Promise<boolean> {
@@ -328,7 +381,7 @@ export async function getPodLogs(
})

logStream.on('error', err => {
process.stderr.write(JSON.stringify(err))
process.stderr.write(err.message)
})

const r = await log.log(namespace(), podName, containerName, logStream, {
@@ -340,7 +393,7 @@ export async function getPodLogs(
await new Promise(resolve => r.on('close', () => resolve(null)))
}

export async function podPrune(): Promise<void> {
export async function prunePods(): Promise<void> {
const podList = await k8sApi.listNamespacedPod(
namespace(),
undefined,
@@ -389,26 +442,6 @@ export async function isAuthPermissionsOK(): Promise<boolean> {
return responses.every(resp => resp.body.status?.allowed)
}

export async function isSecretsAuthOK(): Promise<boolean> {
const sar = new k8s.V1SelfSubjectAccessReview()
const asyncs: Promise<{
response: unknown
body: k8s.V1SelfSubjectAccessReview
}>[] = []
for (const verb of secretPermission.verbs) {
sar.spec = new k8s.V1SelfSubjectAccessReviewSpec()
sar.spec.resourceAttributes = new k8s.V1ResourceAttributes()
sar.spec.resourceAttributes.verb = verb
sar.spec.resourceAttributes.namespace = namespace()
sar.spec.resourceAttributes.group = secretPermission.group
sar.spec.resourceAttributes.resource = secretPermission.resource
sar.spec.resourceAttributes.subresource = secretPermission.subresource
asyncs.push(k8sAuthorizationV1Api.createSelfSubjectAccessReview(sar))
}
const responses = await Promise.all(asyncs)
return responses.every(resp => resp.body.status?.allowed)
}

export async function isPodContainerAlpine(
podName: string,
containerName: string
@@ -454,20 +487,6 @@ export function namespace(): string {
return context.namespace
}

function generateSecretName(): string {
return `github-secret-${uuidv4()}`
}

function runnerName(): string {
const name = process.env.ACTIONS_RUNNER_POD_NAME
if (!name) {
throw new Error(
'Failed to determine runner name. "ACTIONS_RUNNER_POD_NAME" env variables should be set.'
)
}
return name
}

class BackOffManager {
private backOffSeconds = 1
totalTime = 0
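The diff truncates `BackOffManager` right after its fields, so the scheduling below is an assumption rather than the project's implementation: a doubling back-off with an assumed cap, aborting once the accumulated wait exceeds `maxTimeSeconds` (the budget `waitForPodPhases` passes in, now 10 minutes):

```typescript
// Hypothetical sketch of a back-off manager with the fields shown above
// (backOffSeconds starting at 1, a running totalTime). The doubling rule
// and the 60s cap are assumptions, not taken from the truncated diff.
class BackOffSketch {
  private backOffSeconds = 1
  totalTime = 0

  constructor(private maxTimeSeconds: number) {}

  nextDelaySeconds(): number {
    if (this.totalTime + this.backOffSeconds > this.maxTimeSeconds) {
      throw new Error('back-off budget exhausted')
    }
    const delay = this.backOffSeconds
    this.totalTime += delay
    this.backOffSeconds = Math.min(this.backOffSeconds * 2, 60) // assumed cap
    return delay
  }
}

const b = new BackOffSketch(10)
console.log(b.nextDelaySeconds(), b.nextDelaySeconds(), b.nextDelaySeconds()) // 1 2 4
```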
@@ -1,6 +1,8 @@
import * as k8s from '@kubernetes/client-node'
import * as fs from 'fs'
import { Mount } from 'hooklib'
import * as path from 'path'
import { v1 as uuidv4 } from 'uuid'
import { POD_VOLUME_NAME } from './index'

export const DEFAULT_CONTAINER_ENTRY_POINT_ARGS = [`-f`, `/dev/null`]
@@ -8,7 +10,8 @@ export const DEFAULT_CONTAINER_ENTRY_POINT = 'tail'

export function containerVolumes(
userMountVolumes: Mount[] = [],
jobContainer = true
jobContainer = true,
containerAction = false
): k8s.V1VolumeMount[] {
const mounts: k8s.V1VolumeMount[] = [
{
@@ -17,6 +20,23 @@ export function containerVolumes(
}
]

const workspacePath = process.env.GITHUB_WORKSPACE as string
if (containerAction) {
mounts.push(
{
name: POD_VOLUME_NAME,
mountPath: '/github/workspace',
subPath: workspacePath.substring(workspacePath.indexOf('work/') + 1)
},
{
name: POD_VOLUME_NAME,
mountPath: '/github/file_commands',
subPath: workspacePath.substring(workspacePath.indexOf('work/') + 1)
}
)
return mounts
}

if (!jobContainer) {
return mounts
}
@@ -44,14 +64,20 @@ export function containerVolumes(
}

for (const userVolume of userMountVolumes) {
const sourceVolumePath = `${
path.isAbsolute(userVolume.sourceVolumePath)
? userVolume.sourceVolumePath
: path.join(
process.env.GITHUB_WORKSPACE as string,
userVolume.sourceVolumePath
)
}`
let sourceVolumePath = ''
if (path.isAbsolute(userVolume.sourceVolumePath)) {
if (!userVolume.sourceVolumePath.startsWith(workspacePath)) {
throw new Error(
'Volume mounts outside of the work folder are not supported'
)
}
// source volume path should be relative path
sourceVolumePath = userVolume.sourceVolumePath.slice(
workspacePath.length + 1
)
} else {
sourceVolumePath = userVolume.sourceVolumePath
}

mounts.push({
name: POD_VOLUME_NAME,
@@ -63,3 +89,57 @@ export function containerVolumes(

return mounts
}

export function writeEntryPointScript(
workingDirectory: string,
entryPoint: string,
entryPointArgs?: string[],
prependPath?: string[],
environmentVariables?: { [key: string]: string }
): { containerPath: string; runnerPath: string } {
let exportPath = ''
if (prependPath?.length) {
// TODO: remove compatibility with typeof prependPath === 'string' as we bump to next major version, the hooks will lose PrependPath compat with runners 2.293.0 and older
const prepend =
typeof prependPath === 'string' ? prependPath : prependPath.join(':')
exportPath = `export PATH=${prepend}:$PATH`
}
let environmentPrefix = ''

if (environmentVariables && Object.entries(environmentVariables).length) {
const envBuffer: string[] = []
for (const [key, value] of Object.entries(environmentVariables)) {
envBuffer.push(
`"${key}=${value
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/=/g, '\\=')}"`
)
}
environmentPrefix = `env ${envBuffer.join(' ')} `
}

const content = `#!/bin/sh -l
${exportPath}
cd ${workingDirectory} && \
exec ${environmentPrefix} ${entryPoint} ${
entryPointArgs?.length ? entryPointArgs.join(' ') : ''
}
`
const filename = `${uuidv4()}.sh`
const entryPointPath = `${process.env.RUNNER_TEMP}/${filename}`
fs.writeFileSync(entryPointPath, content)
return {
containerPath: `/__w/_temp/${filename}`,
runnerPath: entryPointPath
}
}

export enum PodPhase {
PENDING = 'Pending',
RUNNING = 'Running',
SUCCEEDED = 'Succeeded',
FAILED = 'Failed',
UNKNOWN = 'Unknown',
COMPLETED = 'Completed'
}
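The `env` prefix built in `writeEntryPointScript` above quotes each variable and escapes characters that would break the generated shell line. A standalone sketch of just that prefix construction, using the escape set as it appears in the hunk:

```typescript
// Sketch of the env prefix construction from writeEntryPointScript above.
// Values are wrapped in double quotes with backslash, double-quote and '='
// escaped, then joined into a single `env ...` prefix for the script line.
function buildEnvPrefix(environmentVariables?: {
  [key: string]: string
}): string {
  if (!environmentVariables || !Object.entries(environmentVariables).length) {
    return ''
  }
  const envBuffer: string[] = []
  for (const [key, value] of Object.entries(environmentVariables)) {
    envBuffer.push(
      `"${key}=${value
        .replace(/\\/g, '\\\\')
        .replace(/"/g, '\\"')
        .replace(/=/g, '\\=')}"`
    )
  }
  // trailing space so the prefix composes with `${entryPoint} ${args}`
  return `env ${envBuffer.join(' ')} `
}

console.log(buildEnvPrefix({ FOO: 'a"b' })) // env "FOO=a\"b"
```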
@@ -1,31 +1,65 @@
import * as path from 'path'
import * as fs from 'fs'
import { prepareJob, cleanupJob } from '../src/hooks'
import { TestTempOutput } from './test-setup'
import * as k8s from '@kubernetes/client-node'
import { cleanupJob, prepareJob } from '../src/hooks'
import { RunnerInstanceLabel } from '../src/hooks/constants'
import { namespace } from '../src/k8s'
import { TestHelper } from './test-setup'

let testTempOutput: TestTempOutput

const prepareJobJsonPath = path.resolve(
`${__dirname}/../../../examples/prepare-job.json`
)

let prepareJobOutputFilePath: string
let testHelper: TestHelper

describe('Cleanup Job', () => {
beforeEach(async () => {
const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
let prepareJobData = JSON.parse(prepareJobJson.toString())

testTempOutput = new TestTempOutput()
testTempOutput.initialize()
prepareJobOutputFilePath = testTempOutput.createFile(
testHelper = new TestHelper()
await testHelper.initialize()
let prepareJobData = testHelper.getPrepareJobDefinition()
const prepareJobOutputFilePath = testHelper.createFile(
'prepare-job-output.json'
)
await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
})

afterEach(async () => {
await testHelper.cleanup()
})

it('should not throw', async () => {
const outputJson = fs.readFileSync(prepareJobOutputFilePath)
const outputData = JSON.parse(outputJson.toString())
await expect(cleanupJob()).resolves.not.toThrow()
})

it('should have no runner linked pods running', async () => {
await cleanupJob()
const kc = new k8s.KubeConfig()

kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CoreV1Api)

const podList = await k8sApi.listNamespacedPod(
namespace(),
undefined,
undefined,
undefined,
undefined,
new RunnerInstanceLabel().toString()
)

expect(podList.body.items.length).toBe(0)
})

it('should have no runner linked secrets', async () => {
await cleanupJob()
const kc = new k8s.KubeConfig()

kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CoreV1Api)

const secretList = await k8sApi.listNamespacedSecret(
namespace(),
undefined,
undefined,
undefined,
undefined,
new RunnerInstanceLabel().toString()
)

expect(secretList.body.items.length).toBe(0)
})
})
173
packages/k8s/tests/constants-test.ts
Normal file
@@ -0,0 +1,173 @@
import {
getJobPodName,
getRunnerPodName,
getSecretName,
getStepPodName,
getVolumeClaimName,
MAX_POD_NAME_LENGTH,
RunnerInstanceLabel,
STEP_POD_NAME_SUFFIX_LENGTH
} from '../src/hooks/constants'

describe('constants', () => {
describe('runner instance label', () => {
beforeEach(() => {
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
})
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => new RunnerInstanceLabel()).toThrow()
})

it('should have key truthy', () => {
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(typeof runnerInstanceLabel.key).toBe('string')
expect(runnerInstanceLabel.key).toBeTruthy()
expect(runnerInstanceLabel.key.length).toBeGreaterThan(0)
})

it('should have value as runner pod name', () => {
const name = process.env.ACTIONS_RUNNER_POD_NAME as string
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(typeof runnerInstanceLabel.value).toBe('string')
expect(runnerInstanceLabel.value).toBe(name)
})

it('should have toString combination of key and value', () => {
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(runnerInstanceLabel.toString()).toBe(
`${runnerInstanceLabel.key}=${runnerInstanceLabel.value}`
)
})
})

describe('getRunnerPodName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getRunnerPodName()).toThrow()

process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getRunnerPodName()).toThrow()
})

it('should return corrent ACTIONS_RUNNER_POD_NAME name', () => {
const name = 'example'
process.env.ACTIONS_RUNNER_POD_NAME = name
expect(getRunnerPodName()).toBe(name)
})
})

describe('getJobPodName', () => {
it('should throw on getJobPodName if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getJobPodName()).toThrow()

process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getRunnerPodName()).toThrow()
})

it('should contain suffix -workflow', () => {
const tableTests = [
{
podName: 'test',
expect: 'test-workflow'
},
{
// podName.length == 63
podName:
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
expect:
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-workflow'
}
]

for (const tt of tableTests) {
process.env.ACTIONS_RUNNER_POD_NAME = tt.podName
const actual = getJobPodName()
expect(actual).toBe(tt.expect)
}
})
})

describe('getVolumeClaimName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_CLAIM_NAME
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getVolumeClaimName()).toThrow()

process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getVolumeClaimName()).toThrow()
})

it('should return ACTIONS_RUNNER_CLAIM_NAME env if set', () => {
const claimName = 'testclaim'
process.env.ACTIONS_RUNNER_CLAIM_NAME = claimName
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
expect(getVolumeClaimName()).toBe(claimName)
})

it('should contain suffix -work if ACTIONS_RUNNER_CLAIM_NAME is not set', () => {
delete process.env.ACTIONS_RUNNER_CLAIM_NAME
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
expect(getVolumeClaimName()).toBe('example-work')
})
})

describe('getSecretName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getSecretName()).toThrow()

process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getSecretName()).toThrow()
})

it('should contain suffix -secret- and name trimmed', () => {
const podNames = [
|
||||
'test',
|
||||
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
|
||||
]
|
||||
|
||||
for (const podName of podNames) {
|
||||
process.env.ACTIONS_RUNNER_POD_NAME = podName
|
||||
const actual = getSecretName()
|
||||
const re = new RegExp(
|
||||
`${podName.substring(
|
||||
MAX_POD_NAME_LENGTH -
|
||||
'-secret-'.length -
|
||||
STEP_POD_NAME_SUFFIX_LENGTH
|
||||
)}-secret-[a-z0-9]{8,}`
|
||||
)
|
||||
expect(actual).toMatch(re)
|
||||
}
|
||||
})
|
||||
})
|
||||
|
||||
describe('getStepPodName', () => {
|
||||
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
|
||||
delete process.env.ACTIONS_RUNNER_POD_NAME
|
||||
expect(() => getStepPodName()).toThrow()
|
||||
|
||||
process.env.ACTIONS_RUNNER_POD_NAME = ''
|
||||
expect(() => getStepPodName()).toThrow()
|
||||
})
|
||||
|
||||
it('should contain suffix -step- and name trimmed', () => {
|
||||
const podNames = [
|
||||
'test',
|
||||
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
|
||||
]
|
||||
|
||||
for (const podName of podNames) {
|
||||
process.env.ACTIONS_RUNNER_POD_NAME = podName
|
||||
const actual = getStepPodName()
|
||||
const re = new RegExp(
|
||||
`${podName.substring(
|
||||
MAX_POD_NAME_LENGTH - '-step-'.length - STEP_POD_NAME_SUFFIX_LENGTH
|
||||
)}-step-[a-z0-9]{8,}`
|
||||
)
|
||||
expect(actual).toMatch(re)
|
||||
}
|
||||
})
|
||||
})
|
||||
})
|
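The `getJobPodName` table test above encodes the Kubernetes 63-character object-name limit: the runner pod name is trimmed so that the `-workflow` suffix still fits. A minimal sketch of the trimming the test expectations imply (a hypothetical re-implementation for illustration only; the real helper lives in the `packages/k8s` sources):

```typescript
// Kubernetes object names are capped at 63 characters, so the runner pod
// name is truncated before the "-workflow" suffix is appended.
const MAX_POD_NAME_LENGTH = 63

function jobPodName(runnerPodName: string): string {
  const suffix = '-workflow'
  return (
    runnerPodName.substring(0, MAX_POD_NAME_LENGTH - suffix.length) + suffix
  )
}

console.log(jobPodName('test')) // test-workflow
console.log(jobPodName('a'.repeat(63)).length) // 63
```

This matches both table rows: a short name passes through untouched, and a 63-character name is cut to 54 characters so the result is exactly 63 characters long.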
@@ -1,51 +1,36 @@
 import * as fs from 'fs'
 import * as path from 'path'
 import {
   cleanupJob,
   prepareJob,
   runContainerStep,
   runScriptStep
 } from '../src/hooks'
-import { TestTempOutput } from './test-setup'
+import { TestHelper } from './test-setup'

 jest.useRealTimers()

-let testTempOutput: TestTempOutput
-
-const prepareJobJsonPath = path.resolve(
-  `${__dirname}/../../../../examples/prepare-job.json`
-)
-const runScriptStepJsonPath = path.resolve(
-  `${__dirname}/../../../../examples/run-script-step.json`
-)
-let runContainerStepJsonPath = path.resolve(
-  `${__dirname}/../../../../examples/run-container-step.json`
-)
+let testHelper: TestHelper

 let prepareJobData: any

 let prepareJobOutputFilePath: string
 describe('e2e', () => {
-  beforeEach(() => {
-    const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
-    prepareJobData = JSON.parse(prepareJobJson.toString())
+  beforeEach(async () => {
+    testHelper = new TestHelper()
+    await testHelper.initialize()

-    testTempOutput = new TestTempOutput()
-    testTempOutput.initialize()
-    prepareJobOutputFilePath = testTempOutput.createFile(
-      'prepare-job-output.json'
-    )
+    prepareJobData = testHelper.getPrepareJobDefinition()
+    prepareJobOutputFilePath = testHelper.createFile('prepare-job-output.json')
   })
   afterEach(async () => {
-    testTempOutput.cleanup()
+    await testHelper.cleanup()
   })
   it('should prepare job, run script step, run container step then cleanup without errors', async () => {
     await expect(
       prepareJob(prepareJobData.args, prepareJobOutputFilePath)
     ).resolves.not.toThrow()

-    const scriptStepContent = fs.readFileSync(runScriptStepJsonPath)
-    const scriptStepData = JSON.parse(scriptStepContent.toString())
+    const scriptStepData = testHelper.getRunScriptStepDefinition()

     const prepareJobOutputJson = fs.readFileSync(prepareJobOutputFilePath)
     const prepareJobOutputData = JSON.parse(prepareJobOutputJson.toString())
@@ -54,8 +39,7 @@ describe('e2e', () => {
       runScriptStep(scriptStepData.args, prepareJobOutputData.state, null)
     ).resolves.not.toThrow()

-    const runContainerStepContent = fs.readFileSync(runContainerStepJsonPath)
-    const runContainerStepData = JSON.parse(runContainerStepContent.toString())
+    const runContainerStepData = testHelper.getRunContainerStepDefinition()

     await expect(
       runContainerStep(runContainerStepData.args)
packages/k8s/tests/k8s-utils-test.ts (new file, 153 lines)
@@ -0,0 +1,153 @@
import * as fs from 'fs'
import { POD_VOLUME_NAME } from '../src/k8s'
import { containerVolumes, writeEntryPointScript } from '../src/k8s/utils'
import { TestHelper } from './test-setup'

let testHelper: TestHelper

describe('k8s utils', () => {
  describe('write entrypoint', () => {
    beforeEach(async () => {
      testHelper = new TestHelper()
      await testHelper.initialize()
    })

    afterEach(async () => {
      await testHelper.cleanup()
    })

    it('should not throw', () => {
      expect(() =>
        writeEntryPointScript(
          '/test',
          'sh',
          ['-e', 'script.sh'],
          ['/prepend/path'],
          {
            SOME_ENV: 'SOME_VALUE'
          }
        )
      ).not.toThrow()
    })

    it('should throw if RUNNER_TEMP is not set', () => {
      delete process.env.RUNNER_TEMP
      expect(() =>
        writeEntryPointScript(
          '/test',
          'sh',
          ['-e', 'script.sh'],
          ['/prepend/path'],
          {
            SOME_ENV: 'SOME_VALUE'
          }
        )
      ).toThrow()
    })

    it('should return object with containerPath and runnerPath', () => {
      const { containerPath, runnerPath } = writeEntryPointScript(
        '/test',
        'sh',
        ['-e', 'script.sh'],
        ['/prepend/path'],
        {
          SOME_ENV: 'SOME_VALUE'
        }
      )
      expect(containerPath).toMatch(/\/__w\/_temp\/.*\.sh/)
      const re = new RegExp(`${process.env.RUNNER_TEMP}/.*\\.sh`)
      expect(runnerPath).toMatch(re)
    })

    it('should write entrypoint path and the file should exist', () => {
      const { runnerPath } = writeEntryPointScript(
        '/test',
        'sh',
        ['-e', 'script.sh'],
        ['/prepend/path'],
        {
          SOME_ENV: 'SOME_VALUE'
        }
      )
      expect(fs.existsSync(runnerPath)).toBe(true)
    })
  })

  describe('container volumes', () => {
    beforeEach(async () => {
      testHelper = new TestHelper()
      await testHelper.initialize()
    })

    afterEach(async () => {
      await testHelper.cleanup()
    })

    it('should throw if container action and GITHUB_WORKSPACE env is not set', () => {
      delete process.env.GITHUB_WORKSPACE
      expect(() => containerVolumes([], true, true)).toThrow()
      expect(() => containerVolumes([], false, true)).toThrow()
    })

    it('should always have work mount', () => {
      let volumes = containerVolumes([], true, true)
      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
      volumes = containerVolumes([], true, false)
      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
      volumes = containerVolumes([], false, true)
      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
      volumes = containerVolumes([], false, false)
      expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
    })

    it('should have container action volumes', () => {
      let volumes = containerVolumes([], true, true)
      expect(
        volumes.find(e => e.mountPath === '/github/workspace')
      ).toBeTruthy()
      expect(
        volumes.find(e => e.mountPath === '/github/file_commands')
      ).toBeTruthy()
      volumes = containerVolumes([], false, true)
      expect(
        volumes.find(e => e.mountPath === '/github/workspace')
      ).toBeTruthy()
      expect(
        volumes.find(e => e.mountPath === '/github/file_commands')
      ).toBeTruthy()
    })

    it('should have externals, github home and github workflow mounts if job container', () => {
      const volumes = containerVolumes()
      expect(volumes.find(e => e.mountPath === '/__e')).toBeTruthy()
      expect(volumes.find(e => e.mountPath === '/github/home')).toBeTruthy()
      expect(volumes.find(e => e.mountPath === '/github/workflow')).toBeTruthy()
    })

    it('should throw if user volume source volume path is not in workspace', () => {
      expect(() =>
        containerVolumes(
          [
            {
              sourceVolumePath: '/outside/of/workdir'
            }
          ],
          true,
          false
        )
      ).toThrow()
    })

    it(`all volumes should have name ${POD_VOLUME_NAME}`, () => {
      let volumes = containerVolumes([], true, true)
      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
      volumes = containerVolumes([], true, false)
      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
      volumes = containerVolumes([], false, true)
      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
      volumes = containerVolumes([], false, false)
      expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
    })
  })
})
@@ -2,35 +2,26 @@ import * as fs from 'fs'
 import * as path from 'path'
 import { cleanupJob } from '../src/hooks'
 import { prepareJob } from '../src/hooks/prepare-job'
-import { TestTempOutput } from './test-setup'
+import { TestHelper } from './test-setup'

 jest.useRealTimers()

-let testTempOutput: TestTempOutput
+let testHelper: TestHelper

-const prepareJobJsonPath = path.resolve(
-  `${__dirname}/../../../examples/prepare-job.json`
-)
 let prepareJobData: any

 let prepareJobOutputFilePath: string

 describe('Prepare job', () => {
-  beforeEach(() => {
-    const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
-    prepareJobData = JSON.parse(prepareJobJson.toString())
-
-    testTempOutput = new TestTempOutput()
-    testTempOutput.initialize()
-    prepareJobOutputFilePath = testTempOutput.createFile(
-      'prepare-job-output.json'
-    )
+  beforeEach(async () => {
+    testHelper = new TestHelper()
+    await testHelper.initialize()
+    prepareJobData = testHelper.getPrepareJobDefinition()
+    prepareJobOutputFilePath = testHelper.createFile('prepare-job-output.json')
   })
   afterEach(async () => {
     const outputJson = fs.readFileSync(prepareJobOutputFilePath)
     const outputData = JSON.parse(outputJson.toString())
     await cleanupJob()
-    testTempOutput.cleanup()
+    await testHelper.cleanup()
   })

   it('should not throw exception', async () => {
@@ -44,4 +35,40 @@ describe('Prepare job', () => {
     const content = fs.readFileSync(prepareJobOutputFilePath)
     expect(() => JSON.parse(content.toString())).not.toThrow()
   })
+
+  it('should prepare job with absolute path for userVolumeMount', async () => {
+    prepareJobData.args.container.userMountVolumes = [
+      {
+        sourceVolumePath: path.join(
+          process.env.GITHUB_WORKSPACE as string,
+          '/myvolume'
+        ),
+        targetVolumePath: '/volume_mount',
+        readOnly: false
+      }
+    ]
+    await expect(
+      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
+    ).resolves.not.toThrow()
+  })
+
+  it('should throw an exception if the user volume mount is absolute path outside of GITHUB_WORKSPACE', async () => {
+    prepareJobData.args.container.userMountVolumes = [
+      {
+        sourceVolumePath: '/somewhere/not/in/gh-workspace',
+        targetVolumePath: '/containermount',
+        readOnly: false
+      }
+    ]
+    await expect(
+      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
+    ).rejects.toThrow()
+  })
+
+  it('should not run prepare job without the job container', async () => {
+    prepareJobData.args.container = undefined
+    await expect(
+      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
+    ).rejects.toThrow()
+  })
 })
@@ -1,25 +1,39 @@
-import { TestTempOutput } from './test-setup'
-import * as path from 'path'
 import { runContainerStep } from '../src/hooks'
-import * as fs from 'fs'
+import { TestHelper } from './test-setup'

 jest.useRealTimers()

-let testTempOutput: TestTempOutput
-
-let runContainerStepJsonPath = path.resolve(
-  `${__dirname}/../../../examples/run-container-step.json`
-)
+let testHelper: TestHelper

 let runContainerStepData: any

 describe('Run container step', () => {
-  beforeAll(() => {
-    const content = fs.readFileSync(runContainerStepJsonPath)
-    runContainerStepData = JSON.parse(content.toString())
-    process.env.RUNNER_NAME = 'testjob'
+  beforeEach(async () => {
+    testHelper = new TestHelper()
+    await testHelper.initialize()
+    runContainerStepData = testHelper.getRunContainerStepDefinition()
   })

+  afterEach(async () => {
+    await testHelper.cleanup()
+  })
+
   it('should not throw', async () => {
     const exitCode = await runContainerStep(runContainerStepData.args)
     expect(exitCode).toBe(0)
   })
+
+  it('should fail if the working directory does not exist', async () => {
+    runContainerStepData.args.workingDirectory = '/foo/bar'
+    await expect(runContainerStep(runContainerStepData.args)).rejects.toThrow()
+  })
+
+  it('should have env variables available', async () => {
+    runContainerStepData.args.entryPoint = 'bash'
+    runContainerStepData.args.entryPointArgs = [
+      '-c',
+      "'if [[ -z $NODE_ENV ]]; then exit 1; fi'"
+    ]
+    await expect(
+      runContainerStep(runContainerStepData.args)
+    ).resolves.not.toThrow()
@@ -1,31 +1,26 @@
-import { prepareJob, cleanupJob, runScriptStep } from '../src/hooks'
-import { TestTempOutput } from './test-setup'
-import * as path from 'path'
 import * as fs from 'fs'
+import { cleanupJob, prepareJob, runScriptStep } from '../src/hooks'
+import { TestHelper } from './test-setup'

 jest.useRealTimers()

-let testTempOutput: TestTempOutput
+let testHelper: TestHelper

-const prepareJobJsonPath = path.resolve(
-  `${__dirname}/../../../examples/prepare-job.json`
-)
 let prepareJobData: any

 let prepareJobOutputFilePath: string
 let prepareJobOutputData: any

 let runScriptStepDefinition

 describe('Run script step', () => {
   beforeEach(async () => {
-    const prepareJobJson = fs.readFileSync(prepareJobJsonPath)
-    prepareJobData = JSON.parse(prepareJobJson.toString())
-    console.log(prepareJobData)
-
-    testTempOutput = new TestTempOutput()
-    testTempOutput.initialize()
-    prepareJobOutputFilePath = testTempOutput.createFile(
+    testHelper = new TestHelper()
+    await testHelper.initialize()
+    const prepareJobOutputFilePath = testHelper.createFile(
       'prepare-job-output.json'
     )

+    const prepareJobData = testHelper.getPrepareJobDefinition()
+    runScriptStepDefinition = testHelper.getRunScriptStepDefinition()
+
     await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
     const outputContent = fs.readFileSync(prepareJobOutputFilePath)
     prepareJobOutputData = JSON.parse(outputContent.toString())
@@ -33,7 +28,7 @@ describe('Run script step', () => {

   afterEach(async () => {
     await cleanupJob()
-    testTempOutput.cleanup()
+    await testHelper.cleanup()
   })

   // NOTE: To use this test, do kubectl apply -f podspec.yaml (from podspec examples)
@@ -41,21 +36,75 @@ describe('Run script step', () => {
   // npm run test run-script-step

   it('should not throw an exception', async () => {
-    const args = {
-      entryPointArgs: ['echo "test"'],
-      entryPoint: '/bin/bash',
-      environmentVariables: {
-        NODE_ENV: 'development'
-      },
-      prependPath: ['/foo/bar', 'bar/foo'],
-      workingDirectory: '/__w/thboop-test2/thboop-test2'
-    }
-    const state = {
-      jobPod: prepareJobOutputData.state.jobPod
-    }
-    const responseFile = null
     await expect(
-      runScriptStep(args, state, responseFile)
+      runScriptStep(
+        runScriptStepDefinition.args,
+        prepareJobOutputData.state,
+        null
+      )
     ).resolves.not.toThrow()
   })
+
+  it('should fail if the working directory does not exist', async () => {
+    runScriptStepDefinition.args.workingDirectory = '/foo/bar'
+    await expect(
+      runScriptStep(
+        runScriptStepDefinition.args,
+        prepareJobOutputData.state,
+        null
+      )
+    ).rejects.toThrow()
+  })
+
+  it('should have env variables available', async () => {
+    runScriptStepDefinition.args.entryPoint = 'bash'
+
+    runScriptStepDefinition.args.entryPointArgs = [
+      '-c',
+      "'if [[ -z $NODE_ENV ]]; then exit 1; fi'"
+    ]
+    await expect(
+      runScriptStep(
+        runScriptStepDefinition.args,
+        prepareJobOutputData.state,
+        null
+      )
+    ).resolves.not.toThrow()
+  })
+
+  it('Should have path variable changed in container with prepend path string', async () => {
+    runScriptStepDefinition.args.prependPath = '/some/path'
+    runScriptStepDefinition.args.entryPoint = '/bin/bash'
+    runScriptStepDefinition.args.entryPointArgs = [
+      '-c',
+      `'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath}:"* ]]; then exit 1; fi'`
+    ]
+
+    await expect(
+      runScriptStep(
+        runScriptStepDefinition.args,
+        prepareJobOutputData.state,
+        null
+      )
+    ).resolves.not.toThrow()
+  })
+
+  it('Should have path variable changed in container with prepend path string array', async () => {
+    runScriptStepDefinition.args.prependPath = ['/some/other/path']
+    runScriptStepDefinition.args.entryPoint = '/bin/bash'
+    runScriptStepDefinition.args.entryPointArgs = [
+      '-c',
+      `'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath.join(
+        ':'
+      )}:"* ]]; then exit 1; fi'`
+    ]
+
+    await expect(
+      runScriptStep(
+        runScriptStepDefinition.args,
+        prepareJobOutputData.state,
+        null
+      )
+    ).resolves.not.toThrow()
+  })
 })
packages/k8s/tests/test-kind.yaml (new file, 18 lines)
@@ -0,0 +1,18 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: {{PATHTOREPO}}
    containerPath: {{PATHTOREPO}}
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: None
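The `{{PATHTOREPO}}` placeholders above are filled in by the build workflow before the kind cluster is created: the `sed` step added to `.github/workflows/build.yaml` in this compare substitutes the absolute repo path so kind can mount the checkout into the control-plane node. A minimal sketch of that substitution against a throwaway copy of the config (the file name and contents here are illustrative):

```shell
# Same substitution the workflow runs, demonstrated on a temp file.
cfg=$(mktemp)
printf 'extraMounts:\n- hostPath: {{PATHTOREPO}}\n  containerPath: {{PATHTOREPO}}\n' > "$cfg"
sed -i "s|{{PATHTOREPO}}|$(pwd)|" "$cfg"   # replaces every placeholder with the absolute path
cat "$cfg"
```

`|` is used as the `sed` delimiter precisely because the replacement is a filesystem path containing `/`.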
@@ -1,20 +1,80 @@
+import * as k8s from '@kubernetes/client-node'
 import * as fs from 'fs'
+import { HookData } from 'hooklib/lib'
+import * as path from 'path'
 import { v4 as uuidv4 } from 'uuid'

-export class TestTempOutput {
+const kc = new k8s.KubeConfig()
+
+kc.loadFromDefault()
+
+const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
+const k8sStorageApi = kc.makeApiClient(k8s.StorageV1Api)
+
+export class TestHelper {
   private tempDirPath: string
+  private podName: string
   constructor() {
-    this.tempDirPath = `${__dirname}/_temp/${uuidv4()}`
+    this.tempDirPath = `${__dirname}/_temp/runner`
+    this.podName = uuidv4().replace(/-/g, '')
   }

-  public initialize(): void {
-    fs.mkdirSync(this.tempDirPath, { recursive: true })
+  public async initialize(): Promise<void> {
+    process.env['ACTIONS_RUNNER_POD_NAME'] = `${this.podName}`
+    process.env['RUNNER_WORKSPACE'] = `${this.tempDirPath}/_work/repo`
+    process.env['RUNNER_TEMP'] = `${this.tempDirPath}/_work/_temp`
+    process.env['GITHUB_WORKSPACE'] = `${this.tempDirPath}/_work/repo/repo`
+    process.env['ACTIONS_RUNNER_KUBERNETES_NAMESPACE'] = 'default'
+
+    fs.mkdirSync(`${this.tempDirPath}/_work/repo/repo`, { recursive: true })
+    fs.mkdirSync(`${this.tempDirPath}/externals`, { recursive: true })
+    fs.mkdirSync(process.env.RUNNER_TEMP, { recursive: true })
+
+    fs.copyFileSync(
+      path.resolve(`${__dirname}/../../../examples/example-script.sh`),
+      `${process.env.RUNNER_TEMP}/example-script.sh`
+    )
+
+    await this.cleanupK8sResources()
+    try {
+      await this.createTestVolume()
+      await this.createTestJobPod()
+    } catch (e) {
+      console.log(e)
+    }
   }

-  public cleanup(): void {
-    fs.rmSync(this.tempDirPath, { recursive: true })
+  public async cleanup(): Promise<void> {
+    try {
+      await this.cleanupK8sResources()
+      fs.rmSync(this.tempDirPath, { recursive: true })
+    } catch {}
   }
+  public async cleanupK8sResources() {
+    await k8sApi
+      .deleteNamespacedPersistentVolumeClaim(
+        `${this.podName}-work`,
+        'default',
+        undefined,
+        undefined,
+        0
+      )
+      .catch(e => {})
+    await k8sApi.deletePersistentVolume(`${this.podName}-pv`).catch(e => {})
+    await k8sStorageApi.deleteStorageClass('local-storage').catch(e => {})
+    await k8sApi
+      .deleteNamespacedPod(this.podName, 'default', undefined, undefined, 0)
+      .catch(e => {})
+    await k8sApi
+      .deleteNamespacedPod(
+        `${this.podName}-workflow`,
+        'default',
+        undefined,
+        undefined,
+        0
+      )
+      .catch(e => {})
+  }

   public createFile(fileName?: string): string {
     const filePath = `${this.tempDirPath}/${fileName || uuidv4()}`
     fs.writeFileSync(filePath, '')
@@ -25,4 +85,112 @@ export class TestTempOutput {
     const filePath = `${this.tempDirPath}/${fileName}`
     fs.rmSync(filePath)
   }
+
+  public async createTestJobPod() {
+    const container = {
+      name: 'nginx',
+      image: 'nginx:latest',
+      imagePullPolicy: 'IfNotPresent'
+    } as k8s.V1Container
+
+    const pod: k8s.V1Pod = {
+      metadata: {
+        name: this.podName
+      },
+      spec: {
+        restartPolicy: 'Never',
+        containers: [container]
+      }
+    } as k8s.V1Pod
+    await k8sApi.createNamespacedPod('default', pod)
+  }
+
+  public async createTestVolume() {
+    var sc: k8s.V1StorageClass = {
+      metadata: {
+        name: 'local-storage'
+      },
+      provisioner: 'kubernetes.io/no-provisioner',
+      volumeBindingMode: 'Immediate'
+    }
+    await k8sStorageApi.createStorageClass(sc)
+
+    var volume: k8s.V1PersistentVolume = {
+      metadata: {
+        name: `${this.podName}-pv`
+      },
+      spec: {
+        storageClassName: 'local-storage',
+        capacity: {
+          storage: '2Gi'
+        },
+        volumeMode: 'Filesystem',
+        accessModes: ['ReadWriteOnce'],
+        hostPath: {
+          path: `${this.tempDirPath}/_work`
+        }
+      }
+    }
+    await k8sApi.createPersistentVolume(volume)
+    var volumeClaim: k8s.V1PersistentVolumeClaim = {
+      metadata: {
+        name: `${this.podName}-work`
+      },
+      spec: {
+        accessModes: ['ReadWriteOnce'],
+        volumeMode: 'Filesystem',
+        storageClassName: 'local-storage',
+        volumeName: `${this.podName}-pv`,
+        resources: {
+          requests: {
+            storage: '1Gi'
+          }
+        }
+      }
+    }
+    await k8sApi.createNamespacedPersistentVolumeClaim('default', volumeClaim)
+  }
+
+  public getPrepareJobDefinition(): HookData {
+    const prepareJob = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/prepare-job.json'),
+        'utf8'
+      )
+    )
+
+    prepareJob.args.container.userMountVolumes = undefined
+    prepareJob.args.container.registry = null
+    prepareJob.args.services.forEach(s => {
+      s.registry = null
+    })
+
+    return prepareJob
+  }
+
+  public getRunScriptStepDefinition(): HookData {
+    const runScriptStep = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/run-script-step.json'),
+        'utf8'
+      )
+    )
+
+    runScriptStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
+    return runScriptStep
+  }
+
+  public getRunContainerStepDefinition(): HookData {
+    const runContainerStep = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/run-container-step.json'),
+        'utf8'
+      )
+    )
+
+    runContainerStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
+    runContainerStep.args.userMountVolumes = undefined
+    runContainerStep.args.registry = null
+    return runContainerStep
+  }
 }
@@ -1,7 +1,6 @@
## Features
- Initial Release

## Bugs

- Fixed an issue where default private registry images did not pull correctly [#25]

## Misc