Compare commits

..

17 Commits

Author SHA1 Message Date
Ferenc Hammerl
b3df7ec55b Update 0034-build-docker-with-kaniko.md 2023-01-26 17:58:10 +01:00
Ferenc Hammerl
16276a2a22 Update 0034-build-docker-with-kaniko.md 2023-01-09 17:44:09 +01:00
Ferenc Hammerl
abeebd2a37 Update 0034-build-docker-with-kaniko.md 2023-01-09 16:48:42 +01:00
Ferenc Hammerl
2eecf2d378 Update 0034-build-docker-with-kaniko.md 2023-01-04 12:20:08 +01:00
Ferenc Hammerl
0c38d44dbd Update 0034-build-docker-with-kaniko.md 2023-01-04 12:18:51 +01:00
Ferenc Hammerl
a62b81fc95 Update 0034-build-docker-with-kaniko.md 2022-12-15 14:59:53 +01:00
Ferenc Hammerl
ae0066ae41 Update 0034-build-docker-with-kaniko.md 2022-12-15 14:05:32 +01:00
Ferenc Hammerl
6c9241fb0e Update 0034-build-docker-with-kaniko.md 2022-10-04 16:33:41 +02:00
Ferenc Hammerl
efe66bb99b Update id of md file 2022-10-04 13:24:13 +02:00
Ferenc Hammerl
9a50e3a796 Add Kaniko ADR 2022-10-04 13:02:19 +02:00
Thomas Boop
16eb238caa 0.1.3 release notes (#26) 2022-08-16 15:43:31 +02:00
Nikola Jokic
8e06496e34 fixing defaulting to docker hub on private registry, and b64 encoding (#25) 2022-08-16 09:30:58 -04:00
Thomas Boop
e2033b29c7 0.1.2 release (#22)
* 0.1.2 release

* trace the error and show a user readable message
2022-06-23 08:57:14 -04:00
Nikola Jokic
eb47baaf5e Adding more tests and minor changes in code (#21)
* added cleanup job checks, started testing constants file

* added getVolumeClaimName test

* added write entrypoint tests

* added tests around k8s utils

* fixed new regexp

* added tests around runner instance label

* 100% test coverage of constants
2022-06-22 14:15:42 -04:00
Nikola Jokic
20c19dae27 refactor around job claim name and runner instance labels (#20)
* refactor around job claim name, and runner instance labels

* repaired failing test
2022-06-22 09:32:50 -04:00
Thomas Boop
4307828719 Don't use JSON.stringify for errors (#19)
* better error handling

* remove unneeded catch

* Update index.ts
2022-06-22 15:20:48 +02:00
Thomas Boop
5c6995dba1 Add Akvelon to codeowners 2022-06-22 09:06:20 -04:00
17 changed files with 509 additions and 75 deletions


@@ -1 +1 @@
-* @actions/actions-runtime
+* @actions/actions-runtime @actions/runner-akvelon


@@ -0,0 +1,64 @@
# ADR 0034: Build container-action Dockerfiles with Kaniko
**Date**: 2023-01-26
**Status**: In Progress
# Background
[Building Dockerfiles in k8s using Kaniko](https://github.com/actions/runner-container-hooks/issues/23) has been on the radar since the beginning of container hooks.
Currently, this is possible in ARC using a [dind/docker-in-docker](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/runner/actions-runner-dind.dockerfile) sidecar container.
This container needs to be launched using `--privileged`, which presents a security concern.
As an alternative tool, a container running [Kaniko](https://github.com/GoogleContainerTools/kaniko) can be used to build these files instead.
Kaniko doesn't need to be `--privileged`.
Whether using a dind/docker-in-docker sidecar or Kaniko, this ADR refers to these containers as '**builder containers**'.
# Guiding Principles
- **Security:** running a Kaniko builder container should be possible without the `--privileged` flag
- **Feature parity with Docker:** Any 'Dockerfile' that can be built with vanilla Docker should also be possible to build using a Kaniko build container
- **Ease of Use:** The customer should be able to build and push Docker images with minimal configuration
## Limitations
### User provided registry
The user needs to provide a remote registry (like ghcr.io or Docker Hub) and credentials for the Kaniko builder container to push to and for k8s to pull from later. This is the user's responsibility, so that our solution remains lightweight and generic.
- Alternatively, a user-managed local Docker Registry within the k8s cluster can of course be used instead
### Kaniko feature limit
Anything Kaniko cannot do is, by definition, something we cannot help with. Potential incompatibilities and inconsistencies between Docker and Kaniko will naturally be inherited by our solution.
## Interface
The user will set `containerMode:kubernetes`, because this is a change to the behaviour of our k8s hooks.
The user will set two ENVs:
- `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST`: e.g. `ghcr.io/OWNER` or `dockerhandle`.
- `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_SECRET_NAME`: e.g. `docker-secret`: the name of the `k8s` secret resource that allows you to authenticate against the registry with the given handle above
The workspace is used as the image name.
The image tag is a randomly generated string.
To execute a container-action, we then run a k8s job that loads the image from the specified registry.
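As a rough sketch (not prescribed by this ADR), the pieces above could be combined into a full image reference along these lines; the helper name, the use of `GITHUB_WORKSPACE`, and the tag length are illustrative assumptions:

```typescript
import * as path from 'path'
import { randomBytes } from 'crypto'

// Illustrative sketch only: assemble the image reference the Kaniko builder
// would push and the k8s job would later pull, based on the interface above.
function buildImageReference(): string {
  const registry = process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST // e.g. 'ghcr.io/OWNER'
  if (!registry) {
    throw new Error('ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST must be set')
  }
  // The workspace (here assumed to come from GITHUB_WORKSPACE) is used as the image name.
  const imageName = path.basename(process.env.GITHUB_WORKSPACE as string).toLowerCase()
  // The image tag is a randomly generated string.
  const tag = randomBytes(8).toString('hex')
  return `${registry}/${imageName}:${tag}`
}
```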
## Additional configuration
Users may want to use different URLs for the registry when pushing and pulling an image as they will be invoked by different machines on different networks.
- The **Kaniko builder container pushes the image** after building; it runs in a pod that belongs to the runner pod.
- The **kubelet pulls the image** before starting a pod.
The above two might not resolve all host names in exactly the same way, so it makes sense to allow different push and pull URLs.
ENVs `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH` and `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL` will be preferred if set.
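A minimal sketch of that precedence, assuming only the environment variable names above; the function itself is illustrative rather than the hook's actual implementation:

```typescript
// Illustrative sketch of the push/pull registry resolution described above.
// The dedicated *_PUSH / *_PULL variables win; otherwise both directions fall
// back to the shared ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST value.
function getRegistryHost(direction: 'push' | 'pull'): string {
  const specific =
    direction === 'push'
      ? process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH
      : process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL
  const host =
    specific || process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST
  if (!host) {
    throw new Error('No registry host configured for the Kaniko builder')
  }
  return host
}
```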
### Example
As an example, a cluster-local Docker registry could be a long-running pod exposed as a service _and_ as a NodePort.
The Kaniko builder pod would push to `my-local-registry.default.svc.cluster.local:12345/foohandle`. (`ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH`)
This URL cannot be resolved by the kubelet to pull the image, so we need a secondary URL to pull it - in this case, using the NodePort, this URL is `localhost:NODEPORT/foohandle` (`ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL`).
## Consequences
- Users can build container-actions from a local Dockerfile in their k8s cluster without a privileged Docker builder container.

package-lock.json generated

@@ -1,12 +1,12 @@
 {
   "name": "hooks",
-  "version": "0.1.1",
+  "version": "0.1.3",
   "lockfileVersion": 2,
   "requires": true,
   "packages": {
     "": {
       "name": "hooks",
-      "version": "0.1.1",
+      "version": "0.1.3",
       "license": "MIT",
       "devDependencies": {
         "@types/jest": "^27.5.1",


@@ -1,6 +1,6 @@
 {
   "name": "hooks",
-  "version": "0.1.1",
+  "version": "0.1.3",
   "description": "Three projects are included - k8s: a kubernetes hook implementation that spins up pods dynamically to run a job - docker: A hook implementation of the runner's docker implementation - A hook lib, which contains shared typescript definitions and utilities that the other packages consume",
   "main": "",
   "directories": {


@@ -52,7 +52,9 @@ describe('run script step', () => {
     definitions.runScriptStep.args.entryPoint = '/bin/bash'
     definitions.runScriptStep.args.entryPointArgs = [
       '-c',
-      `if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath}:"* ]]; then exit 1; fi`
+      `if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath.join(
+        ':'
+      )}:"* ]]; then exit 1; fi`
     ]
     await expect(
       runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)


@@ -39,14 +39,14 @@ export function getSecretName(): string {
   )}-secret-${uuidv4().substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
 }
-const MAX_POD_NAME_LENGTH = 63
-const STEP_POD_NAME_SUFFIX_LENGTH = 8
+export const MAX_POD_NAME_LENGTH = 63
+export const STEP_POD_NAME_SUFFIX_LENGTH = 8
 export const JOB_CONTAINER_NAME = 'job'
 export class RunnerInstanceLabel {
-  runnerhook: string
+  private podName: string
   constructor() {
-    this.runnerhook = process.env.ACTIONS_RUNNER_POD_NAME as string
+    this.podName = getRunnerPodName()
   }
   get key(): string {
@@ -54,10 +54,10 @@ export class RunnerInstanceLabel {
   }
   get value(): string {
-    return this.runnerhook
+    return this.podName
   }
   toString(): string {
-    return `runner-pod=${this.runnerhook}`
+    return `runner-pod=${this.podName}`
   }
 }


@@ -46,10 +46,10 @@ export async function prepareJob(
   }
   let createdPod: k8s.V1Pod | undefined = undefined
   try {
-    createdPod = await createPod(container, services, args.registry)
+    createdPod = await createPod(container, services, args.container.registry)
   } catch (err) {
     await prunePods()
-    throw new Error(`failed to create job pod: ${JSON.stringify(err)}`)
+    throw new Error(`failed to create job pod: ${err}`)
   }
   if (!createdPod?.metadata?.name) {


@@ -28,7 +28,7 @@ export async function runScriptStep(
       JOB_CONTAINER_NAME
     )
   } catch (err) {
-    throw new Error(`failed to run script step: ${JSON.stringify(err)}`)
+    throw new Error(`failed to run script step: ${err}`)
   } finally {
     fs.rmSync(runnerPath)
   }


@@ -22,7 +22,7 @@ async function run(): Promise<void> {
     throw new Error(
       `The Service account needs the following permissions ${JSON.stringify(
         requiredPermissions
-      )} on the pod resource in the '${namespace}' namespace. Please contact your self hosted runner administrator.`
+      )} on the pod resource in the '${namespace()}' namespace. Please contact your self hosted runner administrator.`
     )
   }
   switch (command) {
switch (command) { switch (command) {


@@ -1,3 +1,4 @@
+import * as core from '@actions/core'
 import * as k8s from '@kubernetes/client-node'
 import { ContainerInfo, Registry } from 'hooklib'
 import * as stream from 'stream'
@@ -109,13 +110,14 @@ export async function createPod(
 export async function createJob(
   container: k8s.V1Container
 ): Promise<k8s.V1Job> {
+  const runnerInstanceLabel = new RunnerInstanceLabel()
   const job = new k8s.V1Job()
   job.apiVersion = 'batch/v1'
   job.kind = 'Job'
   job.metadata = new k8s.V1ObjectMeta()
   job.metadata.name = getStepPodName()
-  job.metadata.labels = { 'runner-pod': getRunnerPodName() }
+  job.metadata.labels = { [runnerInstanceLabel.key]: runnerInstanceLabel.value }
   job.spec = new k8s.V1JobSpec()
   job.spec.ttlSecondsAfterFinished = 300
@@ -127,7 +129,7 @@ export async function createJob(
   job.spec.template.spec.restartPolicy = 'Never'
   job.spec.template.spec.nodeName = await getCurrentNodeName()
-  const claimName = `${runnerName()}-work`
+  const claimName = getVolumeClaimName()
   job.spec.template.spec.volumes = [
     {
       name: 'work',
@@ -185,7 +187,6 @@ export async function execPodStep(
 ): Promise<void> {
   const exec = new k8s.Exec(kc)
   await new Promise(async function (resolve, reject) {
-    try {
       await exec.exec(
         namespace(),
         podName,
@@ -200,18 +201,16 @@ export async function execPodStep(
         if (resp.status === 'Success') {
           resolve(resp.code)
         } else {
-          reject(
+          core.debug(
             JSON.stringify({
               message: resp?.message,
               details: resp?.details
             })
           )
+          reject(resp?.message)
         }
       }
     )
-    } catch (error) {
-      reject(JSON.stringify(error))
-    }
   })
 }
@@ -234,29 +233,34 @@ export async function createDockerSecret(
 ): Promise<k8s.V1Secret> {
   const authContent = {
     auths: {
-      [registry.serverUrl]: {
+      [registry.serverUrl || 'https://index.docker.io/v1/']: {
         username: registry.username,
         password: registry.password,
-        auth: Buffer.from(
-          `${registry.username}:${registry.password}`,
+        auth: Buffer.from(`${registry.username}:${registry.password}`).toString(
           'base64'
-        ).toString()
+        )
       }
     }
   }
+  const runnerInstanceLabel = new RunnerInstanceLabel()
   const secretName = getSecretName()
   const secret = new k8s.V1Secret()
   secret.immutable = true
   secret.apiVersion = 'v1'
   secret.metadata = new k8s.V1ObjectMeta()
   secret.metadata.name = secretName
-  secret.metadata.labels = { 'runner-pod': getRunnerPodName() }
+  secret.metadata.namespace = namespace()
+  secret.metadata.labels = {
+    [runnerInstanceLabel.key]: runnerInstanceLabel.value
+  }
+  secret.type = 'kubernetes.io/dockerconfigjson'
   secret.kind = 'Secret'
   secret.data = {
-    '.dockerconfigjson': Buffer.from(
-      JSON.stringify(authContent),
+    '.dockerconfigjson': Buffer.from(JSON.stringify(authContent)).toString(
       'base64'
-    ).toString()
+    )
   }
   const { body } = await k8sApi.createNamespacedSecret(namespace(), secret)
@@ -266,13 +270,18 @@ export async function createDockerSecret(
 export async function createSecretForEnvs(envs: {
   [key: string]: string
 }): Promise<string> {
+  const runnerInstanceLabel = new RunnerInstanceLabel()
   const secret = new k8s.V1Secret()
   const secretName = getSecretName()
   secret.immutable = true
   secret.apiVersion = 'v1'
   secret.metadata = new k8s.V1ObjectMeta()
   secret.metadata.name = secretName
-  secret.metadata.labels = { 'runner-pod': getRunnerPodName() }
+  secret.metadata.labels = {
+    [runnerInstanceLabel.key]: runnerInstanceLabel.value
+  }
   secret.kind = 'Secret'
   secret.data = {}
   for (const [key, value] of Object.entries(envs)) {
@@ -372,7 +381,7 @@ export async function getPodLogs(
   })
   logStream.on('error', err => {
-    process.stderr.write(JSON.stringify(err))
+    process.stderr.write(err.message)
   })
   const r = await log.log(namespace(), podName, containerName, logStream, {
@@ -478,16 +487,6 @@ export function namespace(): string {
   return context.namespace
 }
-function runnerName(): string {
-  const name = process.env.ACTIONS_RUNNER_POD_NAME
-  if (!name) {
-    throw new Error(
-      'Failed to determine runner name. "ACTIONS_RUNNER_POD_NAME" env variables should be set.'
-    )
-  }
-  return name
-}
 class BackOffManager {
   private backOffSeconds = 1
   totalTime = 0


@@ -20,18 +20,18 @@ export function containerVolumes(
     }
   ]
+  const workspacePath = process.env.GITHUB_WORKSPACE as string
   if (containerAction) {
-    const workspace = process.env.GITHUB_WORKSPACE as string
     mounts.push(
       {
         name: POD_VOLUME_NAME,
         mountPath: '/github/workspace',
-        subPath: workspace.substring(workspace.indexOf('work/') + 1)
+        subPath: workspacePath.substring(workspacePath.indexOf('work/') + 1)
       },
       {
         name: POD_VOLUME_NAME,
         mountPath: '/github/file_commands',
-        subPath: workspace.substring(workspace.indexOf('work/') + 1)
+        subPath: workspacePath.substring(workspacePath.indexOf('work/') + 1)
       }
     )
     return mounts
@@ -63,7 +63,6 @@ export function containerVolumes(
     return mounts
   }
-  const workspacePath = process.env.GITHUB_WORKSPACE as string
   for (const userVolume of userMountVolumes) {
     let sourceVolumePath = ''
     if (path.isAbsolute(userVolume.sourceVolumePath)) {


@@ -1,4 +1,7 @@
+import * as k8s from '@kubernetes/client-node'
 import { cleanupJob, prepareJob } from '../src/hooks'
+import { RunnerInstanceLabel } from '../src/hooks/constants'
+import { namespace } from '../src/k8s'
 import { TestHelper } from './test-setup'
 let testHelper: TestHelper
@@ -13,10 +16,50 @@ describe('Cleanup Job', () => {
     )
     await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
   })
-  it('should not throw', async () => {
-    await expect(cleanupJob()).resolves.not.toThrow()
-  })
   afterEach(async () => {
     await testHelper.cleanup()
   })
+  it('should not throw', async () => {
+    await expect(cleanupJob()).resolves.not.toThrow()
+  })
+  it('should have no runner linked pods running', async () => {
+    await cleanupJob()
+    const kc = new k8s.KubeConfig()
+    kc.loadFromDefault()
+    const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
+    const podList = await k8sApi.listNamespacedPod(
+      namespace(),
+      undefined,
+      undefined,
+      undefined,
+      undefined,
+      new RunnerInstanceLabel().toString()
+    )
+    expect(podList.body.items.length).toBe(0)
+  })
+  it('should have no runner linked secrets', async () => {
+    await cleanupJob()
+    const kc = new k8s.KubeConfig()
+    kc.loadFromDefault()
+    const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
+    const secretList = await k8sApi.listNamespacedSecret(
+      namespace(),
+      undefined,
+      undefined,
+      undefined,
+      undefined,
+      new RunnerInstanceLabel().toString()
+    )
+    expect(secretList.body.items.length).toBe(0)
+  })
 })


@@ -0,0 +1,173 @@
import {
getJobPodName,
getRunnerPodName,
getSecretName,
getStepPodName,
getVolumeClaimName,
MAX_POD_NAME_LENGTH,
RunnerInstanceLabel,
STEP_POD_NAME_SUFFIX_LENGTH
} from '../src/hooks/constants'
describe('constants', () => {
describe('runner instance label', () => {
beforeEach(() => {
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
})
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => new RunnerInstanceLabel()).toThrow()
})
it('should have key truthy', () => {
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(typeof runnerInstanceLabel.key).toBe('string')
expect(runnerInstanceLabel.key).toBeTruthy()
expect(runnerInstanceLabel.key.length).toBeGreaterThan(0)
})
it('should have value as runner pod name', () => {
const name = process.env.ACTIONS_RUNNER_POD_NAME as string
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(typeof runnerInstanceLabel.value).toBe('string')
expect(runnerInstanceLabel.value).toBe(name)
})
it('should have toString combination of key and value', () => {
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(runnerInstanceLabel.toString()).toBe(
`${runnerInstanceLabel.key}=${runnerInstanceLabel.value}`
)
})
})
describe('getRunnerPodName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getRunnerPodName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getRunnerPodName()).toThrow()
})
it('should return corrent ACTIONS_RUNNER_POD_NAME name', () => {
const name = 'example'
process.env.ACTIONS_RUNNER_POD_NAME = name
expect(getRunnerPodName()).toBe(name)
})
})
describe('getJobPodName', () => {
it('should throw on getJobPodName if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getJobPodName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getRunnerPodName()).toThrow()
})
it('should contain suffix -workflow', () => {
const tableTests = [
{
podName: 'test',
expect: 'test-workflow'
},
{
// podName.length == 63
podName:
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
expect:
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-workflow'
}
]
for (const tt of tableTests) {
process.env.ACTIONS_RUNNER_POD_NAME = tt.podName
const actual = getJobPodName()
expect(actual).toBe(tt.expect)
}
})
})
describe('getVolumeClaimName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_CLAIM_NAME
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getVolumeClaimName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getVolumeClaimName()).toThrow()
})
it('should return ACTIONS_RUNNER_CLAIM_NAME env if set', () => {
const claimName = 'testclaim'
process.env.ACTIONS_RUNNER_CLAIM_NAME = claimName
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
expect(getVolumeClaimName()).toBe(claimName)
})
it('should contain suffix -work if ACTIONS_RUNNER_CLAIM_NAME is not set', () => {
delete process.env.ACTIONS_RUNNER_CLAIM_NAME
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
expect(getVolumeClaimName()).toBe('example-work')
})
})
describe('getSecretName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getSecretName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getSecretName()).toThrow()
})
it('should contain suffix -secret- and name trimmed', () => {
const podNames = [
'test',
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
]
for (const podName of podNames) {
process.env.ACTIONS_RUNNER_POD_NAME = podName
const actual = getSecretName()
const re = new RegExp(
`${podName.substring(
MAX_POD_NAME_LENGTH -
'-secret-'.length -
STEP_POD_NAME_SUFFIX_LENGTH
)}-secret-[a-z0-9]{8,}`
)
expect(actual).toMatch(re)
}
})
})
describe('getStepPodName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getStepPodName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getStepPodName()).toThrow()
})
it('should contain suffix -step- and name trimmed', () => {
const podNames = [
'test',
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
]
for (const podName of podNames) {
process.env.ACTIONS_RUNNER_POD_NAME = podName
const actual = getStepPodName()
const re = new RegExp(
`${podName.substring(
MAX_POD_NAME_LENGTH - '-step-'.length - STEP_POD_NAME_SUFFIX_LENGTH
)}-step-[a-z0-9]{8,}`
)
expect(actual).toMatch(re)
}
})
})
})


@@ -0,0 +1,153 @@
import * as fs from 'fs'
import { POD_VOLUME_NAME } from '../src/k8s'
import { containerVolumes, writeEntryPointScript } from '../src/k8s/utils'
import { TestHelper } from './test-setup'
let testHelper: TestHelper
describe('k8s utils', () => {
describe('write entrypoint', () => {
beforeEach(async () => {
testHelper = new TestHelper()
await testHelper.initialize()
})
afterEach(async () => {
await testHelper.cleanup()
})
it('should not throw', () => {
expect(() =>
writeEntryPointScript(
'/test',
'sh',
['-e', 'script.sh'],
['/prepend/path'],
{
SOME_ENV: 'SOME_VALUE'
}
)
).not.toThrow()
})
it('should throw if RUNNER_TEMP is not set', () => {
delete process.env.RUNNER_TEMP
expect(() =>
writeEntryPointScript(
'/test',
'sh',
['-e', 'script.sh'],
['/prepend/path'],
{
SOME_ENV: 'SOME_VALUE'
}
)
).toThrow()
})
it('should return object with containerPath and runnerPath', () => {
const { containerPath, runnerPath } = writeEntryPointScript(
'/test',
'sh',
['-e', 'script.sh'],
['/prepend/path'],
{
SOME_ENV: 'SOME_VALUE'
}
)
expect(containerPath).toMatch(/\/__w\/_temp\/.*\.sh/)
const re = new RegExp(`${process.env.RUNNER_TEMP}/.*\\.sh`)
expect(runnerPath).toMatch(re)
})
it('should write entrypoint path and the file should exist', () => {
const { runnerPath } = writeEntryPointScript(
'/test',
'sh',
['-e', 'script.sh'],
['/prepend/path'],
{
SOME_ENV: 'SOME_VALUE'
}
)
expect(fs.existsSync(runnerPath)).toBe(true)
})
})
describe('container volumes', () => {
beforeEach(async () => {
testHelper = new TestHelper()
await testHelper.initialize()
})
afterEach(async () => {
await testHelper.cleanup()
})
it('should throw if container action and GITHUB_WORKSPACE env is not set', () => {
delete process.env.GITHUB_WORKSPACE
expect(() => containerVolumes([], true, true)).toThrow()
expect(() => containerVolumes([], false, true)).toThrow()
})
it('should always have work mount', () => {
let volumes = containerVolumes([], true, true)
expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
volumes = containerVolumes([], true, false)
expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
volumes = containerVolumes([], false, true)
expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
volumes = containerVolumes([], false, false)
expect(volumes.find(e => e.mountPath === '/__w')).toBeTruthy()
})
it('should have container action volumes', () => {
let volumes = containerVolumes([], true, true)
expect(
volumes.find(e => e.mountPath === '/github/workspace')
).toBeTruthy()
expect(
volumes.find(e => e.mountPath === '/github/file_commands')
).toBeTruthy()
volumes = containerVolumes([], false, true)
expect(
volumes.find(e => e.mountPath === '/github/workspace')
).toBeTruthy()
expect(
volumes.find(e => e.mountPath === '/github/file_commands')
).toBeTruthy()
})
it('should have externals, github home and github workflow mounts if job container', () => {
const volumes = containerVolumes()
expect(volumes.find(e => e.mountPath === '/__e')).toBeTruthy()
expect(volumes.find(e => e.mountPath === '/github/home')).toBeTruthy()
expect(volumes.find(e => e.mountPath === '/github/workflow')).toBeTruthy()
})
it('should throw if user volume source volume path is not in workspace', () => {
expect(() =>
containerVolumes(
[
{
sourceVolumePath: '/outside/of/workdir'
}
],
true,
false
)
).toThrow()
})
it(`all volumes should have name ${POD_VOLUME_NAME}`, () => {
let volumes = containerVolumes([], true, true)
expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
volumes = containerVolumes([], true, false)
expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
volumes = containerVolumes([], false, true)
expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
volumes = containerVolumes([], false, false)
expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
})
})
})


@@ -94,7 +94,9 @@ describe('Run script step', () => {
     runScriptStepDefinition.args.entryPoint = '/bin/bash'
     runScriptStepDefinition.args.entryPointArgs = [
       '-c',
-      `'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath}:"* ]]; then exit 1; fi'`
+      `'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath.join(
+        ':'
+      )}:"* ]]; then exit 1; fi'`
     ]
     await expect(

@@ -40,7 +40,7 @@ export class TestHelper {
       await this.createTestVolume()
       await this.createTestJobPod()
     } catch (e) {
-      console.log(JSON.stringify(e))
+      console.log(e)
     }
   }


@@ -1,7 +1,6 @@
 ## Features
+- Loosened the restriction on `ACTIONS_RUNNER_CLAIM_NAME` to be optional, not required for k8s hooks
 ## Bugs
+- Fixed an issue where default private registry images did not pull correctly [#25]
 ## Misc