Compare commits

..

10 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ferenc Hammerl | b3df7ec55b | Update 0034-build-docker-with-kaniko.md | 2023-01-26 17:58:10 +01:00 |
| Ferenc Hammerl | 16276a2a22 | Update 0034-build-docker-with-kaniko.md | 2023-01-09 17:44:09 +01:00 |
| Ferenc Hammerl | abeebd2a37 | Update 0034-build-docker-with-kaniko.md | 2023-01-09 16:48:42 +01:00 |
| Ferenc Hammerl | 2eecf2d378 | Update 0034-build-docker-with-kaniko.md | 2023-01-04 12:20:08 +01:00 |
| Ferenc Hammerl | 0c38d44dbd | Update 0034-build-docker-with-kaniko.md | 2023-01-04 12:18:51 +01:00 |
| Ferenc Hammerl | a62b81fc95 | Update 0034-build-docker-with-kaniko.md | 2022-12-15 14:59:53 +01:00 |
| Ferenc Hammerl | ae0066ae41 | Update 0034-build-docker-with-kaniko.md | 2022-12-15 14:05:32 +01:00 |
| Ferenc Hammerl | 6c9241fb0e | Update 0034-build-docker-with-kaniko.md | 2022-10-04 16:33:41 +02:00 |
| Ferenc Hammerl | efe66bb99b | Update id of md file | 2022-10-04 13:24:13 +02:00 |
| Ferenc Hammerl | 9a50e3a796 | Add Kaniko ADR | 2022-10-04 13:02:19 +02:00 |
23 changed files with 1020 additions and 618 deletions

View File

@@ -13,7 +13,7 @@ You'll need a runner compatible with hooks, a repository with container workflow
- You'll need a runner compatible with hooks, a repository with container workflows to which you can register the runner and the hooks from this repository. - You'll need a runner compatible with hooks, a repository with container workflows to which you can register the runner and the hooks from this repository.
- See [the runner contributing.md](../../github/CONTRIBUTING.MD) for how to get started with runner development. - See [the runner contributing.md](../../github/CONTRIBUTING.MD) for how to get started with runner development.
- Build your hook using `npm run build` - Build your hook using `npm run build`
- Enable the hooks by setting `ACTIONS_RUNNER_CONTAINER_HOOKS=./packages/{libraryname}/dist/index.js` file generated by [ncc](https://github.com/vercel/ncc) - Enable the hooks by setting `ACTIONS_RUNNER_CONTAINER_HOOK=./packages/{libraryname}/dist/index.js` file generated by [ncc](https://github.com/vercel/ncc)
- Configure your self hosted runner against the a repository you have admin access - Configure your self hosted runner against the a repository you have admin access
- Run a workflow with a container job, for example - Run a workflow with a container job, for example
``` ```

View File

@@ -0,0 +1,64 @@
# ADR 0034: Build container-action Dockerfiles with Kaniko
**Date**: 2023-01-26
**Status**: In Progress
# Background
[Building Dockerfiles in k8s using Kaniko](https://github.com/actions/runner-container-hooks/issues/23) has been on the radar since the beginning of container hooks.
Currently, this is possible in ARC using a [dind/docker-in-docker](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/runner/actions-runner-dind.dockerfile) sidecar container.
This container needs to be launched using `--privileged`, which presents a security concern.
As an alternative, a container running [Kaniko](https://github.com/GoogleContainerTools/kaniko) can be used to build these Dockerfiles.
Kaniko does not need to run with `--privileged`.
Whether a dind/docker-in-docker sidecar or Kaniko is used, this ADR refers to these containers as '**builder containers**'.
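For orientation, below is a minimal sketch (an illustration, not this ADR's implementation) of what a Kaniko builder container could look like when declared with `@kubernetes/client-node`, the library the k8s hooks already use; the image tag, context path and destination value are assumed placeholders.
```
import * as k8s from '@kubernetes/client-node'

// Sketch only: a Kaniko executor container that builds a Dockerfile taken from
// the shared work volume and pushes the result to a user-provided registry.
// The mount path and destination format are illustrative assumptions.
export function kanikoBuilderContainer(destination: string): k8s.V1Container {
  const container = new k8s.V1Container()
  container.name = 'kaniko-builder'
  container.image = 'gcr.io/kaniko-project/executor:latest'
  container.args = [
    '--dockerfile=Dockerfile',
    '--context=dir:///workspace', // build context mounted from the runner's work volume
    `--destination=${destination}` // e.g. ghcr.io/OWNER/IMAGE:TAG
  ]
  // Unlike the dind sidecar, no securityContext with `privileged: true` is required.
  return container
}
```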
# Guiding Principles
- **Security:** running a Kaniko builder container should be possible without the `--privileged` flag
- **Feature parity with Docker:** Any Dockerfile that can be built with vanilla Docker should also be possible to build with a Kaniko builder container
- **Ease of Use:** The customer should be able to build and push Docker images with minimal configuration
## Limitations
### User provided registry
The user needs to provide a remote registry (such as ghcr.io or Docker Hub) and credentials for the Kaniko builder container to push to and for k8s to pull from later. This remains the user's responsibility so that our solution stays lightweight and generic.
- Alternatively, a user-managed local Docker registry within the k8s cluster can be used instead
### Kaniko feature limit
Anything Kaniko cannot do is, by definition, out of scope for this solution. Any incompatibilities or inconsistencies between Docker and Kaniko will be inherited by our solution.
## Interface
The user will set `containerMode:kubernetes`, because this is a change to the behaviour of our k8s hooks.
The user will set two ENVs:
- `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST`: e.g. `ghcr.io/OWNER` or `dockerhandle`.
- `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_SECRET_NAME`: e.g. `docker-secret`; the name of the `k8s` secret resource that allows you to authenticate against the registry with the handle given above.
The workspace is used as the image name.
The image tag is a randomly generated string.
To execute a container-action, we then run a k8s job that pulls the image from the specified registry.
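As an illustration of this naming scheme, here is a hedged sketch; `buildImageReference` is a hypothetical helper, not an existing function in the hooks, and the exact tag format is an assumption.
```
import * as path from 'path'
import { v4 as uuidv4 } from 'uuid'

// Hypothetical helper: derive the image reference that the Kaniko builder pushes
// and the job pod later pulls, following the scheme described above.
export function buildImageReference(): string {
  const registry = process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST
  if (!registry) {
    throw new Error('ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST is not set')
  }
  // The workspace directory name is used as the image name (registries expect lowercase).
  const imageName = path
    .basename(process.env.GITHUB_WORKSPACE || 'workspace')
    .toLowerCase()
  // The image tag is a randomly generated string.
  const tag = uuidv4().replace(/-/g, '').slice(0, 12)
  return `${registry}/${imageName}:${tag}`
}
```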
## Additional configuration
Users may want to use different registry URLs for pushing and pulling an image, since these operations are performed by different machines on different networks:
- The **Kaniko builder container pushes the image** after building; it runs in a pod that belongs to the runner pod.
- The **kubelet pulls the image** before starting a pod.
These two might not resolve all host names identically, so it makes sense to allow different push and pull URLs.
The ENVs `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH` and `ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL` will be preferred if set.
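A minimal sketch of that precedence follows; `registryHost` is a hypothetical helper used only to illustrate the fallback described above.
```
// Prefer the dedicated push/pull host when set, otherwise fall back to the
// single registry host. Purely illustrative of the precedence described above.
function registryHost(direction: 'push' | 'pull'): string | undefined {
  const specific =
    direction === 'push'
      ? process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH
      : process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL
  return specific || process.env.ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST
}
```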
### Example
As an example, a cluster-local Docker registry could be a long-running pod exposed as a Service _and_ as a NodePort.
The Kaniko builder pod would push to `my-local-registry.default.svc.cluster.local:12345/foohandle` (`ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PUSH`).
This URL cannot be resolved by the kubelet when it pulls the image, so we need a secondary URL to pull it. In this case, using the NodePort, this URL is `localhost:NODEPORT/foohandle` (`ACTIONS_RUNNER_CONTAINER_HOOKS_K8S_REGISTRY_HOST_PULL`).
## Consequences
- Users can build container-actions from a local Dockerfile in their k8s cluster without a privileged Docker builder container

View File

@@ -73,8 +73,6 @@
"contextName": "redis", "contextName": "redis",
"image": "redis", "image": "redis",
"createOptions": "--cpus 1", "createOptions": "--cpus 1",
"entrypoint": null,
"entryPointArgs": [],
"environmentVariables": {}, "environmentVariables": {},
"userMountVolumes": [ "userMountVolumes": [
{ {

16 package-lock.json generated
View File

@@ -1,12 +1,12 @@
{ {
"name": "hooks", "name": "hooks",
"version": "0.3.1", "version": "0.1.3",
"lockfileVersion": 2, "lockfileVersion": 2,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "hooks", "name": "hooks",
"version": "0.3.1", "version": "0.1.3",
"license": "MIT", "license": "MIT",
"devDependencies": { "devDependencies": {
"@types/jest": "^27.5.1", "@types/jest": "^27.5.1",
@@ -1800,9 +1800,9 @@
"dev": true "dev": true
}, },
"node_modules/json5": { "node_modules/json5": {
"version": "1.0.2", "version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"minimist": "^1.2.0" "minimist": "^1.2.0"
@@ -3926,9 +3926,9 @@
"dev": true "dev": true
}, },
"json5": { "json5": {
"version": "1.0.2", "version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"dev": true, "dev": true,
"requires": { "requires": {
"minimist": "^1.2.0" "minimist": "^1.2.0"

View File

@@ -1,6 +1,6 @@
{ {
"name": "hooks", "name": "hooks",
"version": "0.3.1", "version": "0.1.3",
"description": "Three projects are included - k8s: a kubernetes hook implementation that spins up pods dynamically to run a job - docker: A hook implementation of the runner's docker implementation - A hook lib, which contains shared typescript definitions and utilities that the other packages consume", "description": "Three projects are included - k8s: a kubernetes hook implementation that spins up pods dynamically to run a job - docker: A hook implementation of the runner's docker implementation - A hook lib, which contains shared typescript definitions and utilities that the other packages consume",
"main": "", "main": "",
"directories": { "directories": {

View File

@@ -9,7 +9,7 @@
"version": "0.1.0", "version": "0.1.0",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@actions/core": "^1.9.1", "@actions/core": "^1.6.0",
"@actions/exec": "^1.1.1", "@actions/exec": "^1.1.1",
"hooklib": "file:../hooklib", "hooklib": "file:../hooklib",
"uuid": "^8.3.2" "uuid": "^8.3.2"
@@ -30,7 +30,7 @@
"version": "0.1.0", "version": "0.1.0",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@actions/core": "^1.9.1" "@actions/core": "^1.6.0"
}, },
"devDependencies": { "devDependencies": {
"@types/node": "^17.0.23", "@types/node": "^17.0.23",
@@ -43,12 +43,11 @@
} }
}, },
"node_modules/@actions/core": { "node_modules/@actions/core": {
"version": "1.9.1", "version": "1.6.0",
"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz", "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
"integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==", "integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
"dependencies": { "dependencies": {
"@actions/http-client": "^2.0.1", "@actions/http-client": "^1.0.11"
"uuid": "^8.3.2"
} }
}, },
"node_modules/@actions/exec": { "node_modules/@actions/exec": {
@@ -60,11 +59,11 @@
} }
}, },
"node_modules/@actions/http-client": { "node_modules/@actions/http-client": {
"version": "2.0.1", "version": "1.0.11",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz", "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==", "integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
"dependencies": { "dependencies": {
"tunnel": "^0.0.6" "tunnel": "0.0.6"
} }
}, },
"node_modules/@actions/io": { "node_modules/@actions/io": {
@@ -3779,9 +3778,9 @@
"peer": true "peer": true
}, },
"node_modules/json5": { "node_modules/json5": {
"version": "2.2.3", "version": "2.2.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.1.tgz",
"integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==", "integrity": "sha512-1hqLFMSrGHRHxav9q9gNjJ5EXznIxGVO09xQRrwplcS8qs28pZ8s8hupZAmqDwZUmVZ2Qb2jnyPOWcDH8m8dlA==",
"dev": true, "dev": true,
"bin": { "bin": {
"json5": "lib/cli.js" "json5": "lib/cli.js"
@@ -4903,9 +4902,9 @@
} }
}, },
"node_modules/tsconfig-paths/node_modules/json5": { "node_modules/tsconfig-paths/node_modules/json5": {
"version": "1.0.2", "version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"minimist": "^1.2.0" "minimist": "^1.2.0"
@@ -5280,12 +5279,11 @@
}, },
"dependencies": { "dependencies": {
"@actions/core": { "@actions/core": {
"version": "1.9.1", "version": "1.6.0",
"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz", "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
"integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==", "integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
"requires": { "requires": {
"@actions/http-client": "^2.0.1", "@actions/http-client": "^1.0.11"
"uuid": "^8.3.2"
} }
}, },
"@actions/exec": { "@actions/exec": {
@@ -5297,11 +5295,11 @@
} }
}, },
"@actions/http-client": { "@actions/http-client": {
"version": "2.0.1", "version": "1.0.11",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz", "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==", "integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
"requires": { "requires": {
"tunnel": "^0.0.6" "tunnel": "0.0.6"
} }
}, },
"@actions/io": { "@actions/io": {
@@ -7378,7 +7376,7 @@
"hooklib": { "hooklib": {
"version": "file:../hooklib", "version": "file:../hooklib",
"requires": { "requires": {
"@actions/core": "^1.9.1", "@actions/core": "^1.6.0",
"@types/node": "^17.0.23", "@types/node": "^17.0.23",
"@typescript-eslint/parser": "^5.18.0", "@typescript-eslint/parser": "^5.18.0",
"@zeit/ncc": "^0.22.3", "@zeit/ncc": "^0.22.3",
@@ -8176,9 +8174,9 @@
"peer": true "peer": true
}, },
"json5": { "json5": {
"version": "2.2.3", "version": "2.2.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.1.tgz",
"integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==", "integrity": "sha512-1hqLFMSrGHRHxav9q9gNjJ5EXznIxGVO09xQRrwplcS8qs28pZ8s8hupZAmqDwZUmVZ2Qb2jnyPOWcDH8m8dlA==",
"dev": true "dev": true
}, },
"kleur": { "kleur": {
@@ -8985,9 +8983,9 @@
}, },
"dependencies": { "dependencies": {
"json5": { "json5": {
"version": "1.0.2", "version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"dev": true, "dev": true,
"requires": { "requires": {
"minimist": "^1.2.0" "minimist": "^1.2.0"

View File

@@ -10,7 +10,7 @@
"author": "", "author": "",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@actions/core": "^1.9.1", "@actions/core": "^1.6.0",
"@actions/exec": "^1.1.1", "@actions/exec": "^1.1.1",
"hooklib": "file:../hooklib", "hooklib": "file:../hooklib",
"uuid": "^8.3.2" "uuid": "^8.3.2"

View File

@@ -427,9 +427,6 @@ export async function containerRun(
dockerArgs.push(args.image) dockerArgs.push(args.image)
if (args.entryPointArgs) { if (args.entryPointArgs) {
for (const entryPointArg of args.entryPointArgs) { for (const entryPointArg of args.entryPointArgs) {
if (!entryPointArg) {
continue
}
dockerArgs.push(entryPointArg) dockerArgs.push(entryPointArg)
} }
} }

View File

@@ -16,14 +16,15 @@ import {
import { checkEnvironment } from './utils' import { checkEnvironment } from './utils'
async function run(): Promise<void> { async function run(): Promise<void> {
const input = await getInputFromStdin()
const args = input['args']
const command = input['command']
const responseFile = input['responseFile']
const state = input['state']
try { try {
checkEnvironment() checkEnvironment()
const input = await getInputFromStdin()
const args = input['args']
const command = input['command']
const responseFile = input['responseFile']
const state = input['state']
switch (command) { switch (command) {
case Command.PrepareJob: case Command.PrepareJob:
await prepareJob(args as PrepareJobArgs, responseFile) await prepareJob(args as PrepareJobArgs, responseFile)

View File

@@ -16,7 +16,6 @@ export async function runDockerCommand(
args: string[], args: string[],
options?: RunDockerCommandOptions options?: RunDockerCommandOptions
): Promise<string> { ): Promise<string> {
options = optionsWithDockerEnvs(options)
const pipes = await exec.getExecOutput('docker', args, options) const pipes = await exec.getExecOutput('docker', args, options)
if (pipes.exitCode !== 0) { if (pipes.exitCode !== 0) {
core.error(`Docker failed with exit code ${pipes.exitCode}`) core.error(`Docker failed with exit code ${pipes.exitCode}`)
@@ -25,45 +24,6 @@ export async function runDockerCommand(
return Promise.resolve(pipes.stdout) return Promise.resolve(pipes.stdout)
} }
export function optionsWithDockerEnvs(
options?: RunDockerCommandOptions
): RunDockerCommandOptions | undefined {
// From https://docs.docker.com/engine/reference/commandline/cli/#environment-variables
const dockerCliEnvs = new Set([
'DOCKER_API_VERSION',
'DOCKER_CERT_PATH',
'DOCKER_CONFIG',
'DOCKER_CONTENT_TRUST_SERVER',
'DOCKER_CONTENT_TRUST',
'DOCKER_CONTEXT',
'DOCKER_DEFAULT_PLATFORM',
'DOCKER_HIDE_LEGACY_COMMANDS',
'DOCKER_HOST',
'DOCKER_STACK_ORCHESTRATOR',
'DOCKER_TLS_VERIFY',
'BUILDKIT_PROGRESS'
])
const dockerEnvs = {}
for (const key in process.env) {
if (dockerCliEnvs.has(key)) {
dockerEnvs[key] = process.env[key]
}
}
const newOptions = {
workingDir: options?.workingDir,
input: options?.input,
env: options?.env || {}
}
// Set docker envs or overwrite provided ones
for (const [key, value] of Object.entries(dockerEnvs)) {
newOptions.env[key] = value as string
}
return newOptions
}
export function sanitize(val: string): string { export function sanitize(val: string): string {
if (!val || typeof val !== 'string') { if (!val || typeof val !== 'string') {
return '' return ''

View File

@@ -1,4 +1,4 @@
import { optionsWithDockerEnvs, sanitize } from '../src/utils' import { sanitize } from '../src/utils'
describe('Utilities', () => { describe('Utilities', () => {
it('should return sanitized image name', () => { it('should return sanitized image name', () => {
@@ -9,41 +9,4 @@ describe('Utilities', () => {
const validStr = 'teststr8_one' const validStr = 'teststr8_one'
expect(sanitize(validStr)).toBe(validStr) expect(sanitize(validStr)).toBe(validStr)
}) })
describe('with docker options', () => {
it('should augment options with docker environment variables', () => {
process.env.DOCKER_HOST = 'unix:///run/user/1001/docker.sock'
process.env.DOCKER_NOTEXIST = 'notexist'
const optionDefinitions: any = [
undefined,
{},
{ env: {} },
{ env: { DOCKER_HOST: 'unix://var/run/docker.sock' } }
]
for (const opt of optionDefinitions) {
let options = optionsWithDockerEnvs(opt)
expect(options).toBeDefined()
expect(options?.env).toBeDefined()
expect(options?.env?.DOCKER_HOST).toBe(process.env.DOCKER_HOST)
expect(options?.env?.DOCKER_NOTEXIST).toBeUndefined()
}
})
it('should not overwrite other options', () => {
process.env.DOCKER_HOST = 'unix:///run/user/1001/docker.sock'
const opt = {
workingDir: 'test',
input: Buffer.from('test')
}
const options = optionsWithDockerEnvs(opt)
expect(options).toBeDefined()
expect(options?.workingDir).toBe(opt.workingDir)
expect(options?.input).toBe(opt.input)
expect(options?.env).toStrictEqual({
DOCKER_HOST: process.env.DOCKER_HOST
})
})
})
}) })

View File

@@ -9,7 +9,7 @@
"version": "0.1.0", "version": "0.1.0",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@actions/core": "^1.9.1" "@actions/core": "^1.6.0"
}, },
"devDependencies": { "devDependencies": {
"@types/node": "^17.0.23", "@types/node": "^17.0.23",
@@ -22,20 +22,19 @@
} }
}, },
"node_modules/@actions/core": { "node_modules/@actions/core": {
"version": "1.9.1", "version": "1.6.0",
"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz", "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
"integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==", "integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
"dependencies": { "dependencies": {
"@actions/http-client": "^2.0.1", "@actions/http-client": "^1.0.11"
"uuid": "^8.3.2"
} }
}, },
"node_modules/@actions/http-client": { "node_modules/@actions/http-client": {
"version": "2.0.1", "version": "1.0.11",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz", "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==", "integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
"dependencies": { "dependencies": {
"tunnel": "^0.0.6" "tunnel": "0.0.6"
} }
}, },
"node_modules/@eslint/eslintrc": { "node_modules/@eslint/eslintrc": {
@@ -1742,9 +1741,9 @@
"dev": true "dev": true
}, },
"node_modules/json5": { "node_modules/json5": {
"version": "1.0.2", "version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"minimist": "^1.2.0" "minimist": "^1.2.0"
@@ -2486,14 +2485,6 @@
"punycode": "^2.1.0" "punycode": "^2.1.0"
} }
}, },
"node_modules/uuid": {
"version": "8.3.2",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==",
"bin": {
"uuid": "dist/bin/uuid"
}
},
"node_modules/v8-compile-cache": { "node_modules/v8-compile-cache": {
"version": "2.3.0", "version": "2.3.0",
"resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.3.0.tgz", "resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.3.0.tgz",
@@ -2555,20 +2546,19 @@
}, },
"dependencies": { "dependencies": {
"@actions/core": { "@actions/core": {
"version": "1.9.1", "version": "1.6.0",
"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz", "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.6.0.tgz",
"integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==", "integrity": "sha512-NB1UAZomZlCV/LmJqkLhNTqtKfFXJZAUPcfl/zqG7EfsQdeUJtaWO98SGbuQ3pydJ3fHl2CvI/51OKYlCYYcaw==",
"requires": { "requires": {
"@actions/http-client": "^2.0.1", "@actions/http-client": "^1.0.11"
"uuid": "^8.3.2"
} }
}, },
"@actions/http-client": { "@actions/http-client": {
"version": "2.0.1", "version": "1.0.11",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz", "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==", "integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
"requires": { "requires": {
"tunnel": "^0.0.6" "tunnel": "0.0.6"
} }
}, },
"@eslint/eslintrc": { "@eslint/eslintrc": {
@@ -3789,9 +3779,9 @@
"dev": true "dev": true
}, },
"json5": { "json5": {
"version": "1.0.2", "version": "1.0.1",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
"integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==", "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
"dev": true, "dev": true,
"requires": { "requires": {
"minimist": "^1.2.0" "minimist": "^1.2.0"
@@ -4310,11 +4300,6 @@
"punycode": "^2.1.0" "punycode": "^2.1.0"
} }
}, },
"uuid": {
"version": "8.3.2",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg=="
},
"v8-compile-cache": { "v8-compile-cache": {
"version": "2.3.0", "version": "2.3.0",
"resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.3.0.tgz", "resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.3.0.tgz",

View File

@@ -23,6 +23,6 @@
"typescript": "^4.6.3" "typescript": "^4.6.3"
}, },
"dependencies": { "dependencies": {
"@actions/core": "^1.9.1" "@actions/core": "^1.6.0"
} }
} }

File diff suppressed because it is too large

View File

@@ -13,10 +13,10 @@
"author": "", "author": "",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@actions/core": "^1.9.1", "@actions/core": "^1.6.0",
"@actions/exec": "^1.1.1", "@actions/exec": "^1.1.1",
"@actions/io": "^1.1.2", "@actions/io": "^1.1.2",
"@kubernetes/client-node": "^0.18.1", "@kubernetes/client-node": "^0.16.3",
"hooklib": "file:../hooklib" "hooklib": "file:../hooklib"
}, },
"devDependencies": { "devDependencies": {

View File

@@ -14,7 +14,6 @@ import {
containerVolumes, containerVolumes,
DEFAULT_CONTAINER_ENTRY_POINT, DEFAULT_CONTAINER_ENTRY_POINT,
DEFAULT_CONTAINER_ENTRY_POINT_ARGS, DEFAULT_CONTAINER_ENTRY_POINT_ARGS,
generateContainerName,
PodPhase PodPhase
} from '../k8s/utils' } from '../k8s/utils'
import { JOB_CONTAINER_NAME } from './constants' import { JOB_CONTAINER_NAME } from './constants'
@@ -32,14 +31,14 @@ export async function prepareJob(
let container: k8s.V1Container | undefined = undefined let container: k8s.V1Container | undefined = undefined
if (args.container?.image) { if (args.container?.image) {
core.debug(`Using image '${args.container.image}' for job image`) core.debug(`Using image '${args.container.image}' for job image`)
container = createContainerSpec(args.container, JOB_CONTAINER_NAME, true) container = createPodSpec(args.container, JOB_CONTAINER_NAME, true)
} }
let services: k8s.V1Container[] = [] let services: k8s.V1Container[] = []
if (args.services?.length) { if (args.services?.length) {
services = args.services.map(service => { services = args.services.map(service => {
core.debug(`Adding service '${service.image}' to pod definition`) core.debug(`Adding service '${service.image}' to pod definition`)
return createContainerSpec(service, generateContainerName(service.image)) return createPodSpec(service, service.image.split(':')[0])
}) })
} }
if (!container && !services?.length) { if (!container && !services?.length) {
@@ -125,11 +124,13 @@ function generateResponseFile(
) )
if (serviceContainers?.length) { if (serviceContainers?.length) {
response.context['services'] = serviceContainers.map(c => { response.context['services'] = serviceContainers.map(c => {
if (!c.ports) {
return
}
const ctxPorts: ContextPorts = {} const ctxPorts: ContextPorts = {}
if (c.ports?.length) { for (const port of c.ports) {
for (const port of c.ports) { ctxPorts[port.containerPort] = port.hostPort
ctxPorts[port.containerPort] = port.hostPort
}
} }
return { return {
@@ -152,12 +153,12 @@ async function copyExternalsToRoot(): Promise<void> {
} }
} }
export function createContainerSpec( function createPodSpec(
container, container,
name: string, name: string,
jobContainer = false jobContainer = false
): k8s.V1Container { ): k8s.V1Container {
if (!container.entryPoint && jobContainer) { if (!container.entryPoint) {
container.entryPoint = DEFAULT_CONTAINER_ENTRY_POINT container.entryPoint = DEFAULT_CONTAINER_ENTRY_POINT
container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
} }
@@ -165,20 +166,14 @@ export function createContainerSpec(
const podContainer = { const podContainer = {
name, name,
image: container.image, image: container.image,
command: [container.entryPoint],
args: container.entryPointArgs,
ports: containerPorts(container) ports: containerPorts(container)
} as k8s.V1Container } as k8s.V1Container
if (container.workingDirectory) { if (container.workingDirectory) {
podContainer.workingDir = container.workingDirectory podContainer.workingDir = container.workingDirectory
} }
if (container.entryPoint) {
podContainer.command = [container.entryPoint]
}
if (container.entryPointArgs?.length > 0) {
podContainer.args = container.entryPointArgs
}
podContainer.env = [] podContainer.env = []
for (const [key, value] of Object.entries( for (const [key, value] of Object.entries(
container['environmentVariables'] container['environmentVariables']

View File

@@ -9,13 +9,15 @@ import {
import { isAuthPermissionsOK, namespace, requiredPermissions } from './k8s' import { isAuthPermissionsOK, namespace, requiredPermissions } from './k8s'
async function run(): Promise<void> { async function run(): Promise<void> {
try { const input = await getInputFromStdin()
const input = await getInputFromStdin()
const args = input['args'] const args = input['args']
const command = input['command'] const command = input['command']
const responseFile = input['responseFile'] const responseFile = input['responseFile']
const state = input['state'] const state = input['state']
let exitCode = 0
try {
if (!(await isAuthPermissionsOK())) { if (!(await isAuthPermissionsOK())) {
throw new Error( throw new Error(
`The Service account needs the following permissions ${JSON.stringify( `The Service account needs the following permissions ${JSON.stringify(
@@ -23,28 +25,28 @@ async function run(): Promise<void> {
)} on the pod resource in the '${namespace()}' namespace. Please contact your self hosted runner administrator.` )} on the pod resource in the '${namespace()}' namespace. Please contact your self hosted runner administrator.`
) )
} }
let exitCode = 0
switch (command) { switch (command) {
case Command.PrepareJob: case Command.PrepareJob:
await prepareJob(args as prepareJobArgs, responseFile) await prepareJob(args as prepareJobArgs, responseFile)
return process.exit(0) break
case Command.CleanupJob: case Command.CleanupJob:
await cleanupJob() await cleanupJob()
return process.exit(0) break
case Command.RunScriptStep: case Command.RunScriptStep:
await runScriptStep(args, state, null) await runScriptStep(args, state, null)
return process.exit(0) break
case Command.RunContainerStep: case Command.RunContainerStep:
exitCode = await runContainerStep(args) exitCode = await runContainerStep(args)
return process.exit(exitCode) break
case Command.runContainerStep:
default: default:
throw new Error(`Command not recognized: ${command}`) throw new Error(`Command not recognized: ${command}`)
} }
} catch (error) { } catch (error) {
core.error(error as Error) core.error(error as Error)
process.exit(1) exitCode = 1
} }
process.exitCode = exitCode
} }
void run() void run()

View File

@@ -516,40 +516,27 @@ class BackOffManager {
export function containerPorts( export function containerPorts(
container: ContainerInfo container: ContainerInfo
): k8s.V1ContainerPort[] { ): k8s.V1ContainerPort[] {
// 8080:8080/tcp
const portFormat = /(\d{1,5})(:(\d{1,5}))?(\/(tcp|udp))?/
const ports: k8s.V1ContainerPort[] = [] const ports: k8s.V1ContainerPort[] = []
if (!container.portMappings?.length) {
return ports
}
for (const portDefinition of container.portMappings) { for (const portDefinition of container.portMappings) {
const portProtoSplit = portDefinition.split('/') const submatches = portFormat.exec(portDefinition)
if (portProtoSplit.length > 2) { if (!submatches) {
throw new Error(`Unexpected port format: ${portDefinition}`) throw new Error(
`Port definition "${portDefinition}" is in incorrect format`
)
} }
const port = new k8s.V1ContainerPort() const port = new k8s.V1ContainerPort()
port.protocol = port.hostPort = Number(submatches[1])
portProtoSplit.length === 2 ? portProtoSplit[1].toUpperCase() : 'TCP' if (submatches[3]) {
port.containerPort = Number(submatches[3])
const portSplit = portProtoSplit[0].split(':')
if (portSplit.length > 2) {
throw new Error('ports should have at most one ":" separator')
} }
if (submatches[5]) {
const parsePort = (p: string): number => { port.protocol = submatches[5].toUpperCase()
const num = Number(p)
if (!Number.isInteger(num) || num < 1 || num > 65535) {
throw new Error(`invalid container port: ${p}`)
}
return num
}
if (portSplit.length === 1) {
port.containerPort = parsePort(portSplit[0])
} else { } else {
port.hostPort = parsePort(portSplit[0]) port.protocol = 'TCP'
port.containerPort = parsePort(portSplit[1])
} }
ports.push(port) ports.push(port)
} }
return ports return ports

View File

@@ -22,18 +22,16 @@ export function containerVolumes(
const workspacePath = process.env.GITHUB_WORKSPACE as string const workspacePath = process.env.GITHUB_WORKSPACE as string
if (containerAction) { if (containerAction) {
const i = workspacePath.lastIndexOf('_work/')
const workspaceRelativePath = workspacePath.slice(i + '_work/'.length)
mounts.push( mounts.push(
{ {
name: POD_VOLUME_NAME, name: POD_VOLUME_NAME,
mountPath: '/github/workspace', mountPath: '/github/workspace',
subPath: workspaceRelativePath subPath: workspacePath.substring(workspacePath.indexOf('work/') + 1)
}, },
{ {
name: POD_VOLUME_NAME, name: POD_VOLUME_NAME,
mountPath: '/github/file_commands', mountPath: '/github/file_commands',
subPath: '_temp/_runner_file_commands' subPath: workspacePath.substring(workspacePath.indexOf('work/') + 1)
} }
) )
return mounts return mounts
@@ -111,13 +109,11 @@ export function writeEntryPointScript(
if (environmentVariables && Object.entries(environmentVariables).length) { if (environmentVariables && Object.entries(environmentVariables).length) {
const envBuffer: string[] = [] const envBuffer: string[] = []
for (const [key, value] of Object.entries(environmentVariables)) { for (const [key, value] of Object.entries(environmentVariables)) {
if (key.includes(`=`) || key.includes(`'`) || key.includes(`"`)) {
throw new Error(
`environment key ${key} is invalid - the key must not contain =, ' or "`
)
}
envBuffer.push( envBuffer.push(
`"${key}=${value.replace(/\\/g, '\\\\').replace(/"/g, '\\"')}"` `"${key}=${value
.replace(/\\/g, '\\\\')
.replace(/"/g, '\\"')
.replace(/=/g, '\\=')}"`
) )
} }
environmentPrefix = `env ${envBuffer.join(' ')} ` environmentPrefix = `env ${envBuffer.join(' ')} `
@@ -139,17 +135,6 @@ exec ${environmentPrefix} ${entryPoint} ${
} }
} }
export function generateContainerName(image: string): string {
const nameWithTag = image.split('/').pop()
const name = nameWithTag?.split(':').at(0)
if (!name) {
throw new Error(`Image definition '${image}' is invalid`)
}
return name
}
export enum PodPhase { export enum PodPhase {
PENDING = 'Pending', PENDING = 'Pending',
RUNNING = 'Running', RUNNING = 'Running',

View File

@@ -4,7 +4,6 @@ import {
getSecretName, getSecretName,
getStepPodName, getStepPodName,
getVolumeClaimName, getVolumeClaimName,
JOB_CONTAINER_NAME,
MAX_POD_NAME_LENGTH, MAX_POD_NAME_LENGTH,
RunnerInstanceLabel, RunnerInstanceLabel,
STEP_POD_NAME_SUFFIX_LENGTH STEP_POD_NAME_SUFFIX_LENGTH
@@ -171,12 +170,4 @@ describe('constants', () => {
} }
}) })
}) })
describe('const values', () => {
it('should have constants set', () => {
expect(JOB_CONTAINER_NAME).toBeTruthy()
expect(MAX_POD_NAME_LENGTH).toBeGreaterThan(0)
expect(STEP_POD_NAME_SUFFIX_LENGTH).toBeGreaterThan(0)
})
})
}) })

View File

@@ -1,10 +1,6 @@
import * as fs from 'fs' import * as fs from 'fs'
import { containerPorts, POD_VOLUME_NAME } from '../src/k8s' import { POD_VOLUME_NAME } from '../src/k8s'
import { import { containerVolumes, writeEntryPointScript } from '../src/k8s/utils'
containerVolumes,
generateContainerName,
writeEntryPointScript
} from '../src/k8s/utils'
import { TestHelper } from './test-setup' import { TestHelper } from './test-setup'
let testHelper: TestHelper let testHelper: TestHelper
@@ -107,22 +103,19 @@ describe('k8s utils', () => {
it('should have container action volumes', () => { it('should have container action volumes', () => {
let volumes = containerVolumes([], true, true) let volumes = containerVolumes([], true, true)
let workspace = volumes.find(e => e.mountPath === '/github/workspace') expect(
let fileCommands = volumes.find( volumes.find(e => e.mountPath === '/github/workspace')
e => e.mountPath === '/github/file_commands' ).toBeTruthy()
) expect(
expect(workspace).toBeTruthy() volumes.find(e => e.mountPath === '/github/file_commands')
expect(workspace?.subPath).toBe('repo/repo') ).toBeTruthy()
expect(fileCommands).toBeTruthy()
expect(fileCommands?.subPath).toBe('_temp/_runner_file_commands')
volumes = containerVolumes([], false, true) volumes = containerVolumes([], false, true)
workspace = volumes.find(e => e.mountPath === '/github/workspace') expect(
fileCommands = volumes.find(e => e.mountPath === '/github/file_commands') volumes.find(e => e.mountPath === '/github/workspace')
expect(workspace).toBeTruthy() ).toBeTruthy()
expect(workspace?.subPath).toBe('repo/repo') expect(
expect(fileCommands).toBeTruthy() volumes.find(e => e.mountPath === '/github/file_commands')
expect(fileCommands?.subPath).toBe('_temp/_runner_file_commands') ).toBeTruthy()
}) })
it('should have externals, github home and github workflow mounts if job container', () => { it('should have externals, github home and github workflow mounts if job container', () => {
@@ -156,101 +149,5 @@ describe('k8s utils', () => {
volumes = containerVolumes([], false, false) volumes = containerVolumes([], false, false)
expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy() expect(volumes.every(e => e.name === POD_VOLUME_NAME)).toBeTruthy()
}) })
it('should parse container ports', () => {
const tt = [
{
spec: '8080:80',
want: {
containerPort: 80,
hostPort: 8080,
protocol: 'TCP'
}
},
{
spec: '8080:80/udp',
want: {
containerPort: 80,
hostPort: 8080,
protocol: 'UDP'
}
},
{
spec: '8080/udp',
want: {
containerPort: 8080,
hostPort: undefined,
protocol: 'UDP'
}
},
{
spec: '8080',
want: {
containerPort: 8080,
hostPort: undefined,
protocol: 'TCP'
}
}
]
for (const tc of tt) {
const got = containerPorts({ portMappings: [tc.spec] })
for (const [key, value] of Object.entries(tc.want)) {
expect(got[0][key]).toBe(value)
}
}
})
it('should throw when ports are out of range (0, 65536)', () => {
expect(() => containerPorts({ portMappings: ['65536'] })).toThrow()
expect(() => containerPorts({ portMappings: ['0'] })).toThrow()
expect(() => containerPorts({ portMappings: ['65536/udp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['0/udp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:65536'] })).toThrow()
expect(() => containerPorts({ portMappings: ['65536:1'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:65536/tcp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['65536:1/tcp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:'] })).toThrow()
expect(() => containerPorts({ portMappings: [':1'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:/tcp'] })).toThrow()
expect(() => containerPorts({ portMappings: [':1/tcp'] })).toThrow()
})
it('should throw on multi ":" splits', () => {
expect(() => containerPorts({ portMappings: ['1:1:1'] })).toThrow()
})
it('should throw on multi "/" splits', () => {
expect(() => containerPorts({ portMappings: ['1:1/tcp/udp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1/tcp/udp'] })).toThrow()
})
})
describe('generate container name', () => {
it('should return the container name from image string', () => {
expect(
generateContainerName('public.ecr.aws/localstack/localstack')
).toEqual('localstack')
expect(
generateContainerName(
'public.ecr.aws/url/with/multiple/slashes/postgres:latest'
)
).toEqual('postgres')
expect(generateContainerName('postgres')).toEqual('postgres')
expect(generateContainerName('postgres:latest')).toEqual('postgres')
expect(generateContainerName('localstack/localstack')).toEqual(
'localstack'
)
expect(generateContainerName('localstack/localstack:latest')).toEqual(
'localstack'
)
})
it('should throw on invalid image string', () => {
expect(() =>
generateContainerName('localstack/localstack/:latest')
).toThrow()
expect(() => generateContainerName(':latest')).toThrow()
})
}) })
}) })

View File

@@ -1,10 +1,8 @@
import * as fs from 'fs' import * as fs from 'fs'
import * as path from 'path' import * as path from 'path'
import { cleanupJob } from '../src/hooks' import { cleanupJob } from '../src/hooks'
import { createContainerSpec, prepareJob } from '../src/hooks/prepare-job' import { prepareJob } from '../src/hooks/prepare-job'
import { TestHelper } from './test-setup' import { TestHelper } from './test-setup'
import { generateContainerName } from '../src/k8s/utils'
import { V1Container } from '@kubernetes/client-node'
jest.useRealTimers() jest.useRealTimers()
@@ -73,27 +71,4 @@ describe('Prepare job', () => {
prepareJob(prepareJobData.args, prepareJobOutputFilePath) prepareJob(prepareJobData.args, prepareJobOutputFilePath)
).rejects.toThrow() ).rejects.toThrow()
}) })
it('should not set command + args for service container if not passed in args', async () => {
const services = prepareJobData.args.services.map(service => {
return createContainerSpec(service, generateContainerName(service.image))
}) as [V1Container]
expect(services[0].command).toBe(undefined)
expect(services[0].args).toBe(undefined)
})
test.each([undefined, null, []])(
'should not throw exception when portMapping=%p',
async pm => {
prepareJobData.args.services.forEach(s => {
s.portMappings = pm
})
await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
expect(() => content.context.services[0].image).not.toThrow()
}
)
}) })

View File

@@ -1,6 +1,6 @@
<!-- ## Features --> ## Features
## Bugs ## Bugs
- Fixed an issue where default private registry images did not pull correctly [#25]
- Ensure the response file contains ports object for service containers in k8s [#70] ## Misc
<!-- ## Misc -->