Compare commits


159 Commits

Author SHA1 Message Date
Nikola Jokic
c03a5fb3c1 Prepare 0.8.0 release and bump dependencies once more (#256)
* Prepare 0.8.0 release and bump dependencies once more

* Update releaseNotes.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-04 11:53:55 +02:00
Nikola Jokic
96c35e7cc6 Remove dependency on the runner's volume (#244)
* bump actions

* experiment using init container to prepare working environment

* rm script before continuing

* fix

* Update packages/k8s/src/hooks/run-script-step.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* leverage exec stat instead of printf

* npm update

* document the new constraint

---------

Co-authored-by: DenisPalnitsky <DenisPalnitsky@users.noreply.github.com>
2025-10-02 16:23:07 +02:00
Nikola Jokic
c67938c536 bump actions (#254) 2025-10-02 16:20:55 +02:00
Nikola Jokic
464be47642 Separate CI docker and k8s tests (#250)
* Separate tests

* Update .github/workflows/build.yaml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-24 19:18:35 +02:00
Nikola Jokic
74ce64c1d0 Update codeowners to reflect the same team from the ARC (#251) 2025-09-24 12:40:42 -04:00
dependabot[bot]
9a71a3a7e9 Bump brace-expansion from 1.1.11 to 1.1.12 (#238)
Bumps [brace-expansion](https://github.com/juliangruber/brace-expansion) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/juliangruber/brace-expansion/releases)
- [Commits](https://github.com/juliangruber/brace-expansion/compare/1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: brace-expansion
  dependency-version: 1.1.12
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-29 11:24:23 +02:00
dependabot[bot]
9a858922c8 Bump form-data from 4.0.3 to 4.0.4 in /packages/k8s (#239)
Bumps [form-data](https://github.com/form-data/form-data) from 4.0.3 to 4.0.4.
- [Release notes](https://github.com/form-data/form-data/releases)
- [Changelog](https://github.com/form-data/form-data/blob/master/CHANGELOG.md)
- [Commits](https://github.com/form-data/form-data/compare/v4.0.3...v4.0.4)

---
updated-dependencies:
- dependency-name: form-data
  dependency-version: 4.0.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-29 11:24:01 +02:00
dependabot[bot]
605551ff1c Bump @eslint/plugin-kit from 0.3.3 to 0.3.4 (#240)
Bumps [@eslint/plugin-kit](https://github.com/eslint/rewrite/tree/HEAD/packages/plugin-kit) from 0.3.3 to 0.3.4.
- [Release notes](https://github.com/eslint/rewrite/releases)
- [Changelog](https://github.com/eslint/rewrite/blob/main/packages/plugin-kit/CHANGELOG.md)
- [Commits](https://github.com/eslint/rewrite/commits/plugin-kit-v0.3.4/packages/plugin-kit)

---
updated-dependencies:
- dependency-name: "@eslint/plugin-kit"
  dependency-version: 0.3.4
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-29 11:23:37 +02:00
name_snrl
878781f9c4 docker: fix readOnly volumes in createContainer (#236) 2025-07-29 11:12:01 +02:00
Nikola Jokic
1e051b849b Update codeowners (#237)
* Update codeowners

* Update CODEOWNERS

Co-authored-by: Rocio Montes <88550502+rociomontes@users.noreply.github.com>

---------

Co-authored-by: Rocio Montes <88550502+rociomontes@users.noreply.github.com>
2025-07-29 11:11:27 +02:00
Nikola Jokic
589414ea69 Bump all dependencies (#234)
* Bump all dependencies

* build and reformat

* lint

* format
2025-07-29 11:06:45 +02:00
Ben De St Paer-Gotch
dd4f7dae2c Update README.md (#224) 2025-06-05 14:03:29 +02:00
Nikola Jokic
7da5474a5d Release 0.7.0 (#218) 2025-04-17 12:34:48 +02:00
Nikola Jokic
375992cd31 Expose CI=true and GITHUB_ACTIONS env variables (#215)
* Expose CI=true and GITHUB_ACTIONS env variables

* fmt

* revert the prettier and finish this

* revert package-lock.json
2025-04-17 12:08:32 +02:00
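The commit above exposes the well-known `CI=true` and `GITHUB_ACTIONS` variables to containers, matching what runners provide to workflow steps. A minimal sketch of the idea — the names `ContainerEnv` and `withWellKnownEnv` are illustrative, not the hook's actual API:

```typescript
// Sketch: ensure CI and GITHUB_ACTIONS are present in the container
// environment. Helper and type names are hypothetical.
type ContainerEnv = Record<string, string>

function withWellKnownEnv(env: ContainerEnv): ContainerEnv {
  return {
    CI: 'true',
    GITHUB_ACTIONS: 'true',
    ...env // explicitly user-provided values take precedence
  }
}
```

Spreading the caller's env last keeps the defaults from clobbering anything a workflow set deliberately.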
Nikola Jokic
aae800a69b bump node in tests to node 22 since node14 is quite old (#216)
* bump node in tests to node 22 since node14 is quite old

* change test constants
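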
2025-04-16 15:57:59 +02:00
dependabot[bot]
e47f9b8af4 Bump jsonpath-plus from 10.1.0 to 10.3.0 in /packages/k8s (#213)
Bumps [jsonpath-plus](https://github.com/s3u/JSONPath) from 10.1.0 to 10.3.0.
- [Release notes](https://github.com/s3u/JSONPath/releases)
- [Changelog](https://github.com/JSONPath-Plus/JSONPath/blob/main/CHANGES.md)
- [Commits](https://github.com/s3u/JSONPath/compare/v10.1.0...v10.3.0)

---
updated-dependencies:
- dependency-name: jsonpath-plus
  dependency-version: 10.3.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 14:25:32 +02:00
dependabot[bot]
54e14cb7f3 Bump braces from 3.0.2 to 3.0.3 in /packages/hooklib (#194)
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-14 14:37:19 +02:00
Grant Buskey
ef2229fc0b feat(k8s): add /github/home to containerAction mounts and surface createSecretForEnvs errors #181 (#198)
* feat: add /github/home to containerAction mounts #181

* fix: add debug logging for failed secret creations #181
2025-04-14 14:12:51 +02:00
Andre Klärner
88dc98f8ef k8s: start logging from the beginning (#184) 2025-04-14 14:03:05 +02:00
Joan Miquel Luque
b388518d40 feat(k8s): Use pod affinity when KubeScheduler is enabled #201 (#212)
Signed-off-by: Joan Miquel Luque Oliver <joan.luque@dynatrace.com>
2025-04-14 13:36:21 +02:00
dependabot[bot]
7afb8f9323 Bump cross-spawn from 7.0.3 to 7.0.6 in /packages/k8s (#196)
Bumps [cross-spawn](https://github.com/moxystudio/node-cross-spawn) from 7.0.3 to 7.0.6.
- [Changelog](https://github.com/moxystudio/node-cross-spawn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.6)

---
updated-dependencies:
- dependency-name: cross-spawn
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 16:51:12 +01:00
Robin Bobbitt
d4c5425b22 support alternative network modes (#209) 2025-03-24 16:33:43 +01:00
dependabot[bot]
120636d3d7 Bump ws from 7.5.8 to 7.5.10 in /packages/k8s (#192)
Bumps [ws](https://github.com/websockets/ws) from 7.5.8 to 7.5.10.
- [Release notes](https://github.com/websockets/ws/releases)
- [Commits](https://github.com/websockets/ws/compare/7.5.8...7.5.10)

---
updated-dependencies:
- dependency-name: ws
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 11:41:28 -05:00
Josh Gross
5e805a0546 Remove dependency on deprecated release actions (#193)
* Update to the latest available actions

* Remove dependency on deprecated release actions

* Add release workflow fixes from testing
2024-11-06 11:41:09 -05:00
Josh Gross
27bae0b2b7 Update to the latest available actions (#191) 2024-11-06 11:18:49 -05:00
dependabot[bot]
8eed1ad1b6 Bump jsonpath-plus and @kubernetes/client-node in /packages/k8s (#187)
Bumps [jsonpath-plus](https://github.com/s3u/JSONPath) to 10.1.0 and updates ancestor dependency [@kubernetes/client-node](https://github.com/kubernetes-client/javascript). These dependencies need to be updated together.


Updates `jsonpath-plus` from 9.0.0 to 10.1.0
- [Release notes](https://github.com/s3u/JSONPath/releases)
- [Changelog](https://github.com/JSONPath-Plus/JSONPath/blob/main/CHANGES.md)
- [Commits](https://github.com/s3u/JSONPath/compare/v9.0.0...v10.1.0)

Updates `@kubernetes/client-node` from 0.22.0 to 0.22.2
- [Release notes](https://github.com/kubernetes-client/javascript/releases)
- [Changelog](https://github.com/kubernetes-client/javascript/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes-client/javascript/compare/0.22.0...0.22.2)

---
updated-dependencies:
- dependency-name: jsonpath-plus
  dependency-type: indirect
- dependency-name: "@kubernetes/client-node"
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 10:34:45 -05:00
dependabot[bot]
7b404841b2 Bump braces from 3.0.2 to 3.0.3 in /packages/k8s (#188)
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 10:34:16 -05:00
Josh Gross
977d53963d Remove @actions/runner-akvelon from CODEOWNERS (#190) 2024-11-05 18:13:42 -05:00
Josh Gross
77b40ac6df Prepare 0.6.2 Release (#189) 2024-11-05 14:36:03 -05:00
Oliver Radwell
ee10d95fd4 Bump kubernetes/client-node from 0.18.1 to 0.22.0 (#182) 2024-11-05 13:22:04 -05:00
Nikola Jokic
73655d4639 Release 0.6.1 (#172) 2024-06-19 13:42:23 +02:00
Nikola Jokic
ca4ea17d58 Skip writing extension containers in output context file (#154) 2024-06-19 11:49:43 +02:00
dependabot[bot]
ed70e2f8e0 Bump tar from 6.1.11 to 6.2.1 in /packages/k8s (#156)
Bumps [tar](https://github.com/isaacs/node-tar) from 6.1.11 to 6.2.1.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v6.1.11...v6.2.1)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-18 17:35:16 +02:00
dependabot[bot]
aeabaf144a Bump braces from 3.0.2 to 3.0.3 in /packages/docker (#171)
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-18 17:34:53 +02:00
dependabot[bot]
8388a36f44 Bump ws from 7.5.7 to 7.5.10 in /packages/docker (#170)
Bumps [ws](https://github.com/websockets/ws) from 7.5.7 to 7.5.10.
- [Release notes](https://github.com/websockets/ws/releases)
- [Commits](https://github.com/websockets/ws/compare/7.5.7...7.5.10)

---
updated-dependencies:
- dependency-name: ws
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-18 16:19:49 +02:00
Nikola Jokic
9705deeb08 Release 0.6.0 (#148) 2024-03-14 14:36:02 +01:00
Katarzyna
99efdeca99 Mount /github/workflow to docker action pods (#137)
* Mount /github/workflow to docker action pods, the same way as for container job pods

* Adjust tests
2024-03-14 12:36:27 +01:00
dependabot[bot]
bb09a79b22 Bump jose from 4.11.4 to 4.15.5 in /packages/k8s (#142)
Bumps [jose](https://github.com/panva/jose) from 4.11.4 to 4.15.5.
- [Release notes](https://github.com/panva/jose/releases)
- [Changelog](https://github.com/panva/jose/blob/v4.15.5/CHANGELOG.md)
- [Commits](https://github.com/panva/jose/compare/v4.11.4...v4.15.5)

---
updated-dependencies:
- dependency-name: jose
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-14 12:33:36 +01:00
Katarzyna
746e644039 ADR-0134 superseding ADR-0096 (#136)
Related to https://github.com/actions/runner-container-hooks/issues/132
2024-03-14 12:33:06 +01:00
Katarzyna
7223e1dbb2 Use ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE to extend service containers (#134)
https://github.com/actions/runner-container-hooks/issues/132

Co-authored-by: Katarzyna Radkowska <katarzyna.radkowska@sabre.com>
2024-02-20 16:19:29 +01:00
Katarzyna
af27abe1f7 Read logs also from failed child (container job/container action) pod (#135)
Co-authored-by: Katarzyna Radkowska <katarzyna.radkowska@sabre.com>
2024-02-20 12:01:11 +01:00
Nikola Jokic
638bd19c9d Release 0.5.1 (#131)
* Release 0.5.1

* Add one more PR that is part of the release
2024-02-05 15:00:35 +01:00
Nikola Jokic
50e14cf868 Switch exec pod promise to reject on websocket error (#127)
* Switch exec pod promise to reject on websocket error

* Fix incorrectly resolved merge conflict

* Apply suggestions from code review

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>

---------

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2024-02-05 14:40:15 +01:00
Nikola Jokic
921be5b85f Fix is alpine check using shlex (#130) 2024-02-05 09:50:51 +01:00
Nikola Jokic
0cce49705b Try to get response body message and log entire error response in debug mode (#123) 2023-12-15 13:01:04 +01:00
Nikola Jokic
46c92fe43e Release 0.5.0 (#119) 2023-11-22 13:15:52 +01:00
Nikola Jokic
56208347f1 Update 0096-hook-extensions.md (#118) 2023-11-20 15:10:21 +01:00
Nikola Jokic
c093f87779 Docker and K8s: Fix shell arguments when split by the runner (#115)
* Docker: Fix shell arguments when split by the runner

* Add shlex to k8s hook as well
2023-11-20 15:09:36 +01:00
Nikola Jokic
c47c74ad9e Update CODEOWNERS (#114) 2023-11-09 14:11:50 +01:00
Wout Van De Wiel
90a6236466 Add option to use the kubernetes scheduler for workflow pods (#111)
* Add option to use kube scheduler

This should only be used when rwx volumes are supported or when using a single node cluster.

* Add option to set timeout for prepare job

If the kube scheduler is used to hold jobs until sufficient resources are available,
then prepare job needs to wait for a longer period until the workflow pod is running.
This timeout will mostly need an increase in cases where many jobs are triggered
which together exceed the resources available in the cluster.
The workflows can then be gracefully handled later when sufficient resources become available again.

* Skip name override warning when names match or job extension

* Add guard for positive timeouts with a warning

* Write out ReadWriteMany in full
2023-10-31 12:51:09 +01:00
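The "guard for positive timeouts with a warning" from the commit above can be sketched as follows; the environment-variable plumbing and fallback value here are assumptions, not the hook's real configuration surface:

```typescript
// Sketch: fall back to a default when the configured prepare-job timeout
// is missing, non-numeric, or non-positive, and warn instead of failing.
// The default of 600s is illustrative.
function resolvePrepareJobTimeout(raw: string | undefined, fallbackSeconds = 600): number {
  const parsed = Number(raw)
  if (!raw || Number.isNaN(parsed) || parsed <= 0) {
    console.warn(`Invalid prepare-job timeout "${raw}"; using ${fallbackSeconds}s`)
    return fallbackSeconds
  }
  return parsed
}
```

A longer timeout matters when the kube scheduler holds workflow pods until cluster resources free up, as the commit body explains.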
dependabot[bot]
496287d61d Bump @babel/traverse from 7.18.2 to 7.23.2 in /packages/k8s (#110)
Bumps [@babel/traverse](https://github.com/babel/babel/tree/HEAD/packages/babel-traverse) from 7.18.2 to 7.23.2.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.23.2/packages/babel-traverse)

---
updated-dependencies:
- dependency-name: "@babel/traverse"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-18 09:36:40 +02:00
dependabot[bot]
5264b6cd7d Bump @babel/traverse from 7.17.9 to 7.23.2 in /packages/docker (#109)
Bumps [@babel/traverse](https://github.com/babel/babel/tree/HEAD/packages/babel-traverse) from 7.17.9 to 7.23.2.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.23.2/packages/babel-traverse)

---
updated-dependencies:
- dependency-name: "@babel/traverse"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-18 09:36:30 +02:00
Nikola Jokic
b58b13134a import fs in CD workflow (#106) 2023-09-25 14:39:20 +02:00
Nikola Jokic
8ea7e21dec Fix CD release order (#105) 2023-09-25 14:34:49 +02:00
Nikola Jokic
64000d716a Release notes for 0.4.0 version (#104) 2023-09-25 13:53:01 +02:00
Nikola Jokic
4ff4b552a6 [ADR] Hook extensions (#96)
* [ADR] Hook extensions

* Add ADR number

* Add image field to specify that it is going to be ignored

* Update env name, explain that the file is going to be applied to both job and container step pods

* rephrase job container to job pod

* update name for the job to $job
2023-09-25 13:52:48 +02:00
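Based on the ADR bullets above (the job container is addressed as `$job`, and its `image` field is ignored), an extension template pointed to by `ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE` might look roughly like this — the field values are examples, not the canonical schema:

```yaml
# Illustrative hook extension template (values are examples only).
metadata:
  labels:
    team: platform
spec:
  containers:
    - name: $job            # matches the job container; image is ignored
      resources:
        limits:
          cpu: "1"
          memory: 2Gi
```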
Nikola Jokic
4cdcf09c43 Implement yaml extensions overwriting the default pod/container spec (#75)
* Implement yaml extensions overwriting the default pod/container spec

* format files

* Extend specs for container job and include docker and k8s tests in k8s

* Create table tests for docker tests

* included warnings and extracted append logic as generic

* updated merge to allow for file read

* reverted back examples and k8s/tests

* reverted back docker tests

* Tests for extension prepare-job

* Fix lint and format and merge error

* Added basic test for container step

* revert hooklib since new definition for container options is received from a file

* revert docker options since create options are a string

* Fix revert

* Update package locks and deps

* included example of extension.yaml. Added side-car container that was missing

* Ignore spec modification for the service containers, change selector to

* fix lint error

* Add missing image override

* Add comment explaining merge object meta with job and pod

* fix test
2023-09-25 11:49:03 +02:00
Nikola Jokic
5107bb1d41 Escape backtick in writeEntryPointScript (#101) 2023-08-28 10:27:20 +02:00
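Backticks trigger command substitution inside double-quoted shell strings, so a generated entrypoint script must escape them. A minimal sketch of the kind of escaping involved — the helper name is illustrative, and the real `writeEntryPointScript` may escape differently:

```typescript
// Sketch: escape characters that are still special inside double quotes
// in POSIX shell: backslash, double quote, dollar sign, and backtick.
// Backslashes are escaped first so later replacements aren't re-escaped.
function escapeForDoubleQuotes(s: string): string {
  return s
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`')
}
```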
Nikola Jokic
547ed30dc3 Include sha256 checksums in releaseNotes (#98)
* Include sha256 checksums in releaseNotes

* Add ul for sha
2023-08-28 10:15:08 +02:00
dependabot[bot]
17fb66892c Bump word-wrap from 1.2.3 to 1.2.5 in /packages/docker (#95)
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.5.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.5)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 13:30:07 +02:00
dependabot[bot]
9319a8566a Bump word-wrap from 1.2.3 to 1.2.4 in /packages/hooklib (#88)
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.4)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 13:27:12 +02:00
dependabot[bot]
669ec6f706 Bump word-wrap from 1.2.3 to 1.2.4 in /packages/k8s (#89)
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.4)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 13:26:57 +02:00
dependabot[bot]
aa658859f8 Bump word-wrap from 1.2.3 to 1.2.4 (#90)
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.4)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 13:26:27 +02:00
Nikola Jokic
8b83223a2b Add limitation and throw if an entrypoint is not specified for container step (#77) 2023-07-17 11:02:03 +02:00
Takamasa Saichi
586a052286 Do not overwrite entrypoint if it has already been set or if it is Service container (#83) 2023-07-17 10:33:34 +02:00
dependabot[bot]
730509f702 Bump tough-cookie from 4.0.0 to 4.1.3 in /packages/docker (#87)
Bumps [tough-cookie](https://github.com/salesforce/tough-cookie) from 4.0.0 to 4.1.3.
- [Release notes](https://github.com/salesforce/tough-cookie/releases)
- [Changelog](https://github.com/salesforce/tough-cookie/blob/master/CHANGELOG.md)
- [Commits](https://github.com/salesforce/tough-cookie/compare/v4.0.0...v4.1.3)

---
updated-dependencies:
- dependency-name: tough-cookie
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 10:41:36 +02:00
Marko Zagožen
3fc91e4132 Fix argument order for 'docker pull' (#85)
The optional --config option must come *before* the pull argument.
2023-06-30 15:03:01 +02:00
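The fix above reflects docker's CLI grammar: global options such as `--config` must precede the subcommand. A sketch of argument assembly under that constraint (the function name is hypothetical):

```typescript
// Sketch: build `docker [--config DIR] pull IMAGE` argument lists.
// Global options go before the subcommand; `pull --config` would fail.
function dockerPullArgs(image: string, configDir?: string): string[] {
  const args: string[] = []
  if (configDir) {
    args.push('--config', configDir) // global option, before the subcommand
  }
  args.push('pull', image)
  return args
}
```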
Nikola Jokic
ebbe2bdaff Update package.json version (#79) 2023-05-17 11:18:23 +02:00
Nikola Jokic
17837d25d2 Release notes for 0.3.2 (#78) 2023-05-17 11:03:05 +02:00
Arthur Baars
c37c5ca584 k8s: handle $ symbols in environment variable names and values (#74)
* Add test cases

* Handle $ symbols in environment variable names and values
2023-04-18 15:14:10 +02:00
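One common way to neutralize `$` (and other shell metacharacters) when embedding values in a generated script is single-quoting; this sketch shows that general technique, not necessarily the exact approach #74 took:

```typescript
// Sketch: wrap a value in single quotes for POSIX shell. Inside single
// quotes nothing is expanded; an embedded single quote is handled by
// closing the quotes, emitting an escaped quote, and reopening them.
function shellSingleQuote(value: string): string {
  return `'${value.replace(/'/g, `'\\''`)}'`
}
```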
Bassem Dghaidi
04b58be49a ADR: using ephemeral containers (#72)
* Add ephemeral containers ADR draft

* Add ADR PR number to filename and title

* Add motivation

* Add evaluation section with details

* Add storage configuration

* Add the remaining sections

* Fix formatting

* Add guidance

* Update ADR status to rejected
2023-04-05 06:48:48 -04:00
Nikola Jokic
89ff7d1155 Release 0.3.1 (#71)
* Update releaseNotes.md

* updated package.json and package lock
2023-03-20 15:26:23 +01:00
Tingluo Huang
6dbb0b61b7 Ensure response is consistent whether or not ports are present (#70)
* Ensure response is consistent whether or not ports are present

* Update packages/k8s/src/hooks/prepare-job.ts

Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>

---------

Co-authored-by: Nikola Jokic <jokicnikola07@gmail.com>
2023-03-20 10:11:19 +01:00
Bassem Dghaidi
c92bb5544e Fix 0.3.0 release notes (#69) 2023-03-17 05:30:10 -04:00
Nikola Jokic
26f4a32c30 0.3.0 release notes (#68) 2023-03-17 10:18:56 +01:00
dependabot[bot]
10c6c0aa70 Bump cacheable-request and @kubernetes/client-node in /packages/k8s (#66)
Removes [cacheable-request](https://github.com/jaredwray/cacheable-request). It's no longer used after updating ancestor dependency [@kubernetes/client-node](https://github.com/kubernetes-client/javascript). These dependencies need to be updated together.


Removes `cacheable-request`

Updates `@kubernetes/client-node` from 0.16.3 to 0.18.1
- [Release notes](https://github.com/kubernetes-client/javascript/releases)
- [Changelog](https://github.com/kubernetes-client/javascript/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes-client/javascript/commits)

---
updated-dependencies:
- dependency-name: cacheable-request
  dependency-type: indirect
- dependency-name: "@kubernetes/client-node"
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 11:28:06 +01:00
Nikola Jokic
d735152125 Exit from run k8s not allowing promise rejection (#65)
* Exit from run k8s not allowing promise rejection

* Unused case removed k8s
2023-02-14 11:30:16 +01:00
Nikola Jokic
ae31f04223 removed equal sign from env buffer, added defensive guard against the key (#62)
* removed equal sign from env buffer, added defensive guard against the key

* Update packages/k8s/src/k8s/utils.ts

Co-authored-by: John Sudol <24583161+johnsudol@users.noreply.github.com>

* Update packages/k8s/src/k8s/utils.ts

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>

* fix format

---------

Co-authored-by: John Sudol <24583161+johnsudol@users.noreply.github.com>
Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2023-02-09 17:11:16 +01:00
dependabot[bot]
7754cb80eb Bump http-cache-semantics from 4.1.0 to 4.1.1 in /packages/k8s (#63)
Bumps [http-cache-semantics](https://github.com/kornelski/http-cache-semantics) from 4.1.0 to 4.1.1.
- [Release notes](https://github.com/kornelski/http-cache-semantics/releases)
- [Commits](https://github.com/kornelski/http-cache-semantics/compare/v4.1.0...v4.1.1)

---
updated-dependencies:
- dependency-name: http-cache-semantics
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-09 14:54:32 +01:00
Nikola Jokic
ae432db512 docker and k8s: read from stdin inside try catch block (#49)
There might be situations where reading from standard input fails. In
that case, we should catch the exception inside the try/catch block to
avoid an unhandled promise rejection and to provide more information
about the error
2023-01-23 12:46:47 +01:00
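The pattern described above can be sketched as follows — reading hook input to completion inside a try/catch so a read failure becomes a handled error rather than an unhandled rejection. Function names and the JSON-on-stdin detail are assumptions about the hooks' input protocol:

```typescript
// Sketch: collect all stdin chunks, then parse. Any failure in reading
// or parsing is caught and reported instead of escaping as an
// unhandled promise rejection.
async function readInput(stream: AsyncIterable<string | Buffer>): Promise<string> {
  const chunks: Buffer[] = []
  for await (const chunk of stream) {
    chunks.push(Buffer.from(chunk))
  }
  return Buffer.concat(chunks).toString('utf8')
}

async function main(): Promise<void> {
  try {
    const raw = await readInput(process.stdin)
    JSON.parse(raw) // hook args are assumed to arrive as JSON
  } catch (err) {
    console.error(`Failed to read hook input: ${(err as Error).message}`)
    process.exitCode = 1
  }
}
```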
Nikola Jokic
4448b61e00 Fix service port mappings when input is undefined, null, or empty (#60)
* fix: service without ports defined

* fix port mappings when ports are undefined, null, or empty

* fix

Co-authored-by: Ronald Claveau <ronald.claveau@pennylane.com>
2023-01-06 11:54:52 +01:00
dependabot[bot]
bf39b9bf16 Bump json5 from 1.0.1 to 1.0.2 in /packages/hooklib (#56)
Bumps [json5](https://github.com/json5/json5) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/json5/json5/releases)
- [Changelog](https://github.com/json5/json5/blob/main/CHANGELOG.md)
- [Commits](https://github.com/json5/json5/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: json5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-06 11:05:26 +01:00
dependabot[bot]
5b597b0fe2 Bump json5 from 2.2.1 to 2.2.3 in /packages/k8s (#57)
Bumps [json5](https://github.com/json5/json5) from 2.2.1 to 2.2.3.
- [Release notes](https://github.com/json5/json5/releases)
- [Changelog](https://github.com/json5/json5/blob/main/CHANGELOG.md)
- [Commits](https://github.com/json5/json5/compare/v2.2.1...v2.2.3)

---
updated-dependencies:
- dependency-name: json5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-06 11:05:05 +01:00
dependabot[bot]
0e1ba7bdc8 Bump json5 from 1.0.1 to 1.0.2 in /packages/docker (#58)
Bumps [json5](https://github.com/json5/json5) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/json5/json5/releases)
- [Changelog](https://github.com/json5/json5/blob/main/CHANGELOG.md)
- [Commits](https://github.com/json5/json5/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: json5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-06 11:04:42 +01:00
Niels ten Boom
73914b840c fix: naming for services & service entrypoint (#53)
* rename to container

* fix container image name bug

* fix entrypoint bug

* bump patch version

* formatting

* fix versions in package-lock

* add test

* revert version bump

* added check + test for args as well

* formatting

* remove vscode launch.json

* expand example json

* wrong version, revert to correct one

* correct lock

* throw error on invalid image definition

* change falsy check

* Update packages/k8s/src/k8s/utils.ts

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2023-01-06 10:22:41 +01:00
Nikola Jokic
b537fd4c92 Upgrade package json5 (#55) 2023-01-05 10:30:51 +01:00
Ferenc Hammerl
17d2b3b850 Release notes for v0.2.0 (#47)
* Update releaseNotes.md

* Bump version to 0.2.0
2022-12-15 15:29:15 +01:00
dependabot[bot]
ea011028f5 Bump @actions/core from 1.6.0 to 1.9.1 in /packages/hooklib (#29)
* Bump @actions/core from 1.6.0 to 1.9.1 in /packages/hooklib

Bumps [@actions/core](https://github.com/actions/toolkit/tree/HEAD/packages/core) from 1.6.0 to 1.9.1.
- [Release notes](https://github.com/actions/toolkit/releases)
- [Changelog](https://github.com/actions/toolkit/blob/main/packages/core/RELEASES.md)
- [Commits](https://github.com/actions/toolkit/commits/HEAD/packages/core)

---
updated-dependencies:
- dependency-name: "@actions/core"
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Trigger Build

* Update package lock for docker and k8s

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2022-12-15 14:58:13 +01:00
Nikola Jokic
eaae191ebb k8s: don't overwrite service entrypoint (#45) 2022-12-15 14:13:57 +01:00
dependabot[bot]
418d484160 Bump jose from 2.0.5 to 2.0.6 in /packages/k8s (#31)
Bumps [jose](https://github.com/panva/jose) from 2.0.5 to 2.0.6.
- [Release notes](https://github.com/panva/jose/releases)
- [Changelog](https://github.com/panva/jose/blob/v2.0.6/CHANGELOG.md)
- [Commits](https://github.com/panva/jose/compare/v2.0.5...v2.0.6)

---
updated-dependencies:
- dependency-name: jose
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-15 14:04:57 +01:00
Nikola Jokic
ce3c55d086 exposing env variables from runner with DOCKER_ envs to respect docker options set on host (#40)
* exposing env variables from runner with DOCKER_ prefix to respect rootless docker

* Prioritize DOCKER cli over workflow envs

* formatted
2022-12-08 08:09:51 +01:00
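The commit above forwards `DOCKER_`-prefixed variables from the runner's environment (e.g. `DOCKER_HOST` for rootless docker) into the environment used to invoke the docker CLI, and gives them priority over workflow-set values. A sketch of that merge; the function name and shape are illustrative:

```typescript
// Sketch: start from the workflow's env, then overlay any DOCKER_*
// variables from the runner host so docker CLI options set on the host
// (rootless socket, context, etc.) win over workflow-provided values.
function dockerCliEnv(
  runnerEnv: Record<string, string | undefined>,
  workflowEnv: Record<string, string>
): Record<string, string> {
  const out: Record<string, string> = { ...workflowEnv }
  for (const [key, value] of Object.entries(runnerEnv)) {
    if (key.startsWith('DOCKER_') && value !== undefined) {
      out[key] = value // runner host's docker settings take precedence
    }
  }
  return out
}
```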
Nikola Jokic
d988d965c5 fixing issue related to setting hostPort and containerPort when format is port/proto (#38)
* fixing issue related to setting hostPort and containerPort when format is port/proto

* added one more test case and refactored containerPorts to be without regexp

* added throw on ports outside of (0,65536) range with test

* repaired error message and added tests to multi splits. refactored port checking
2022-11-15 14:23:09 +01:00
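The port-mapping fix above handles the `port/proto` format and rejects ports outside (0, 65536). A simplified sketch of that parsing and range check — the accepted shapes and names here are illustrative, not the hook's exact implementation:

```typescript
// Sketch: parse "PORT" or "PORT/PROTO", defaulting the protocol to tcp
// and throwing for ports outside the valid 1-65535 range.
function parsePortSpec(spec: string): { port: number; protocol: string } {
  const [portPart, protocol = 'tcp'] = spec.split('/')
  const port = Number(portPart)
  if (!Number.isInteger(port) || port <= 0 || port >= 65536) {
    throw new Error(`Port "${portPart}" is out of range (1-65535)`)
  }
  return { port, protocol }
}
```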
Nikola Jokic
23cc6dda6f fixed substring issue with /github/workspace and /github/file_commands (#35)
* fixed substring issue with /github/workspace and /github/file_commands

* npm run format

* last 3 parts of the path are mounted to /github/workspace and /github/file_commands

* file commands now point to _temp/_runner_file_commands
2022-11-03 14:55:07 +01:00
dependabot[bot]
8986035ca8 Bump @actions/core from 1.8.2 to 1.9.1 in /packages/k8s (#28)
Bumps [@actions/core](https://github.com/actions/toolkit/tree/HEAD/packages/core) from 1.8.2 to 1.9.1.
- [Release notes](https://github.com/actions/toolkit/releases)
- [Changelog](https://github.com/actions/toolkit/blob/main/packages/core/RELEASES.md)
- [Commits](https://github.com/actions/toolkit/commits/HEAD/packages/core)

---
updated-dependencies:
- dependency-name: "@actions/core"
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-25 17:12:49 +02:00
dependabot[bot]
e975289683 Bump @actions/core from 1.6.0 to 1.9.1 in /packages/docker (#27)
Bumps [@actions/core](https://github.com/actions/toolkit/tree/HEAD/packages/core) from 1.6.0 to 1.9.1.
- [Release notes](https://github.com/actions/toolkit/releases)
- [Changelog](https://github.com/actions/toolkit/blob/main/packages/core/RELEASES.md)
- [Commits](https://github.com/actions/toolkit/commits/HEAD/packages/core)

---
updated-dependencies:
- dependency-name: "@actions/core"
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-25 17:12:17 +02:00
Nikola Jokic
a555151eef repaired env variable name in CONTRIBUTING (HOOK(S)) (#37) 2022-10-25 16:26:59 +02:00
Thomas Boop
16eb238caa 0.1.3 release notes (#26) 2022-08-16 15:43:31 +02:00
Nikola Jokic
8e06496e34 fixing defaulting to docker hub on private registry, and b64 encoding (#25) 2022-08-16 09:30:58 -04:00
Thomas Boop
e2033b29c7 0.1.2 release (#22)
* 0.1.2 release

* trace the error and show a user readable message
2022-06-23 08:57:14 -04:00
Nikola Jokic
eb47baaf5e Adding more tests and minor changes in code (#21)
* added cleanup job checks, started testing constants file

* added getVolumeClaimName test

* added write entrypoint tests

* added tests around k8s utils

* fixed new regexp

* added tests around runner instance label

* 100% test coverage of constants
2022-06-22 14:15:42 -04:00
Nikola Jokic
20c19dae27 refactor around job claim name and runner instance labels (#20)
* refactor around job claim name, and runner instance labels

* repaired failing test
2022-06-22 09:32:50 -04:00
Thomas Boop
4307828719 Don't use JSON.stringify for errors (#19)
* better error handling

* remove unneeded catch

* Update index.ts
2022-06-22 15:20:48 +02:00
Thomas Boop
5c6995dba1 Add Akvelon to codeowners 2022-06-22 09:06:20 -04:00
Thomas Boop
bb1a033ed7 Make K8s claim name optional (#18)
* make claim name optional

* update version and notes

* fix ci

* correctly invoke function
2022-06-20 15:09:04 -04:00
Nikola Jokic
898063bddd repaired docker PATH export and added tests both for docker and k8s (#17)
* repaired docker PATH export and added tests both for docker and k8s

* added todo comments about next major version and typeof prepend path
2022-06-16 09:44:40 -04:00
Thomas Boop
266b8edb99 Fix error handling for invalid pods (#16)
* update readme and fix error handling for bad pods

* update limitations
2022-06-16 09:02:55 -04:00
Thomas Boop
47cbf5a0d7 Misc Tracing cleanup (#15)
* cleanup final bits

* fix import
2022-06-15 09:28:43 -04:00
Nikola Jokic
de4553f25a added permission check for secrets (#14)
* added permission check for secrets

* typo in subresource

* moved auth check to the command receiver
2022-06-15 08:54:50 -04:00
Nikola Jokic
8ea57170d8 Fix working directory and write state for appPod to be used in run-script-step (#8)
* added initial entrypoint script

* change workingg directory working with addition to fix prepare-job state output

* added prepend path

* added run-script-step file generation, removed prepend path from container-step and prepare job

* latest changes with testing run script step

* fix the mounts real fast

* cleanup

* fix tests

* add kind test

* add kind yaml to ignore and run it during ci

* fix kind option

* remove gitignore

* lowercase pwd

* checkout first!

* ignore test file in build.yaml

* fixed wrong working directory and added test to run script step testing for the env

* handle env's/escaping better

* added single quote escape to env escapes

* surounded env value with single quote

* added spacing around run-container-step, changed examples to actually echo hello world

* refactored tests

* make sure to escape properly

* set addition mounts for container steps

* fixup container action mounts

Co-authored-by: Thomas Boop <thboop@github.com>
Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
2022-06-14 21:41:49 -04:00
Nikola Jokic
643bf36fd8 docker apply env on commands where we are using '-e' (#11)
* added wrapper for docker -e to apply env

* added envs around services as well

* added wrapping environment around execute command

* fixed setting the env variable for containerRun

* added env to exec and not to handle envs by ourself

* returned back the comment to run-container-step
2022-06-13 11:13:47 -04:00
Thomas Boop
de59bd8716 Merge pull request #12 from actions/nikola-jokic/allow-no-job-container
Repaired prepare-job hook without job container
2022-06-10 13:34:52 -04:00
Nikola Jokic
d3ec1c0040 prepare job in k8s does not allow for no job container 2022-06-10 16:38:07 +02:00
Nikola Jokic
3e04b45585 removed leftover todo comments 2022-06-10 12:45:59 +02:00
Nikola Jokic
2b386f7cbd returned logging below the try k8s prepare job 2022-06-10 12:00:49 +02:00
Nikola Jokic
bf362ba0dd fixed prepare-job for docker to allow for no job container 2022-06-10 11:56:10 +02:00
Nikola Jokic
7ae8942b3d Repaired prepare-job hook without job container 2022-06-10 11:07:50 +02:00
Thomas Boop
347e68d3c9 Merge pull request #7 from actions/thboop/refactor3
K8s hook refactor
2022-06-09 09:33:53 -04:00
Thomas Boop
7c4e0f8d51 update limitations 2022-06-08 15:32:30 -04:00
Thomas Boop
cd310988c9 slight refactor, bring pod phase to k8s lib, better types 2022-06-08 15:32:30 -04:00
Thomas Boop
1bfc52f466 Merge pull request #2 from actions/nikola-jokic/computed-build-directory
Computed action build directory. Refactored tests and added docker build test
2022-06-08 13:38:57 -04:00
Nikola Jokic
2aa6f9d9c8 added quotes back to the path 2022-06-08 17:39:37 +02:00
Nikola Jokic
3d0ca83d2d removed quotes around -e env variables 2022-06-08 17:37:43 +02:00
Thomas Boop
5daaae120b Merge pull request #9 from actions/nikola-jokic/user-volume-mounts-path
User volume mount restriction to the work directory mounts if path is absolute
2022-06-08 11:15:31 -04:00
Nikola Jokic
df448fbbb0 cleared registry for testing 2022-06-08 17:13:43 +02:00
Nikola Jokic
ee2554e2c0 filter out empty ports 2022-06-08 16:49:44 +02:00
Thomas Boop
f764d18c4c Update packages/k8s/src/k8s/utils.ts 2022-06-08 09:41:38 -04:00
Thomas Boop
55761eab39 Merge pull request #10 from actions/nikola-jokic/docker-test-refactor
Added prepend path and refactored tests, adding isAlpine test and basic run-script-step test
2022-06-08 09:40:49 -04:00
Nikola Jokic
51bd8b62a4 Merge branch 'nikola-jokic/docker-env' into nikola-jokic/computed-build-directory 2022-06-08 15:07:59 +02:00
Nikola Jokic
150bc0503a substituited all -e key=value to -e key 2022-06-08 15:02:40 +02:00
Nikola Jokic
9ce39e5a60 Merge branch 'main' into nikola-jokic/computed-build-directory 2022-06-08 14:37:13 +02:00
Nikola Jokic
8351f842bd added isAlpine test to prepare job 2022-06-08 14:29:10 +02:00
Nikola Jokic
bf3707d7e0 added guard around prependPath 2022-06-08 13:28:46 +02:00
Nikola Jokic
88b7b19db7 fixed interface for hooklib and example repos 2022-06-08 13:25:45 +02:00
Nikola Jokic
dd5dfb3e48 refactored tests to be easier to follow 2022-06-08 13:20:54 +02:00
Nikola Jokic
84a57de2e3 added tests around user volume mounts for prepare job 2022-06-08 11:23:05 +02:00
Nikola Jokic
fa680b2073 Merge branch 'nikola-jokic/user-volume-mounts-path' of https://github.com/actions/runner-container-hooks into nikola-jokic/user-volume-mounts-path 2022-06-08 11:02:49 +02:00
Nikola Jokic
02f0b322a0 fixed merge conflict, repaired paths in examples 2022-06-08 11:02:33 +02:00
Thomas Boop
ecb9376000 Merge pull request #4 from actions/thboop/setupTests
Setup CI to run k8s tests
2022-06-07 22:38:06 -04:00
Thomas Boop
cc90cd2361 pr feedback 2022-06-07 16:47:45 -04:00
Thomas Boop
152c4e1cc8 Update packages/k8s/src/k8s/utils.ts 2022-06-07 16:38:02 -04:00
Nikola Jokic
d0e094649e use variable for env GITHUB_WORKSPACE 2022-06-07 16:48:05 +02:00
Nikola Jokic
3ba45d3d7e user volume mount fix based on workspacePath 2022-06-07 16:47:05 +02:00
Nikola Jokic
58ebf56ad3 Merge branch 'main' into nikola-jokic/computed-build-directory 2022-06-07 10:52:28 +02:00
Thomas Boop
e928fa3252 Pass secrets more securely for container action 2022-06-06 18:43:57 -04:00
Thomas Boop
ddf09ad7bd Merge pull request #3 from actions/nikola-jokic/docker-label-cleanup
Cleanup now looks at the containers by the label, and not by the state
2022-06-06 18:36:59 -04:00
Thomas Boop
689a74e352 run format 2022-06-06 14:27:06 -04:00
Thomas Boop
55c9198ada new pv for each pod 2022-06-06 14:15:14 -04:00
Thomas Boop
bd7e053180 Merge pull request #5 from actions/nikola-jokic/setupTests-addition
fixed testing adding storage class and persistent volume and timeout for cleanup job
2022-06-06 09:02:00 -04:00
Nikola Jokic
0ebccbd8c6 fixed testing adding storage class and persistent volume and timeout to wait for cleanup 2022-06-06 12:56:50 +02:00
Thomas Boop
ec8131abb7 setup ci to run k8s tests 2022-06-06 00:21:44 -04:00
Nikola Jokic
7010d21bff repaired tests 2022-06-03 16:24:20 +02:00
Nikola Jokic
c65ec28bbb added force cleanup for network and cleanUp hook is cleaning up based on the label 2022-06-03 16:01:24 +02:00
Ferenc Hammerl
c2f9b10f4d Fix case sensitive image in test 2022-06-03 06:56:26 -07:00
Ferenc Hammerl
1e49f4ba5b Merge branch 'nikola-jokic/computed-build-directory' of github.com:actions/runner-container-hooks into nikola-jokic/computed-build-directory 2022-06-03 06:52:11 -07:00
Ferenc Hammerl
171956673c Handle empty registry property in input 2022-06-03 06:52:06 -07:00
Nikola Jokic
3ab4ae20f9 added network prune 2022-06-03 15:15:19 +02:00
Nikola Jokic
b0cf60b678 changed split to dirname and fixed syntax error in err message 2022-06-03 15:10:14 +02:00
Nikola Jokic
5ec2edbe11 err message suggestion
Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
2022-06-03 15:02:10 +02:00
Nikola Jokic
4b7efe88ef refactored tests and added docker build test, repaired state.network 2022-06-03 14:10:15 +02:00
73 changed files with 18754 additions and 19769 deletions


@@ -1,4 +0,0 @@
dist/
lib/
node_modules/
**/tests/**


@@ -1,56 +0,0 @@
{
"plugins": ["@typescript-eslint"],
"extends": ["plugin:github/recommended"],
"parser": "@typescript-eslint/parser",
"parserOptions": {
"ecmaVersion": 9,
"sourceType": "module",
"project": "./tsconfig.json"
},
"rules": {
"eslint-comments/no-use": "off",
"import/no-namespace": "off",
"no-constant-condition": "off",
"no-unused-vars": "off",
"i18n-text/no-en": "off",
"@typescript-eslint/no-unused-vars": "error",
"@typescript-eslint/explicit-member-accessibility": ["error", {"accessibility": "no-public"}],
"@typescript-eslint/no-require-imports": "error",
"@typescript-eslint/array-type": "error",
"@typescript-eslint/await-thenable": "error",
"camelcase": "off",
"@typescript-eslint/explicit-function-return-type": ["error", {"allowExpressions": true}],
"@typescript-eslint/func-call-spacing": ["error", "never"],
"@typescript-eslint/no-array-constructor": "error",
"@typescript-eslint/no-empty-interface": "error",
"@typescript-eslint/no-explicit-any": "warn",
"@typescript-eslint/no-extraneous-class": "error",
"@typescript-eslint/no-floating-promises": "error",
"@typescript-eslint/no-for-in-array": "error",
"@typescript-eslint/no-inferrable-types": "error",
"@typescript-eslint/no-misused-new": "error",
"@typescript-eslint/no-namespace": "error",
"@typescript-eslint/no-non-null-assertion": "warn",
"@typescript-eslint/no-unnecessary-qualifier": "error",
"@typescript-eslint/no-unnecessary-type-assertion": "error",
"@typescript-eslint/no-useless-constructor": "error",
"@typescript-eslint/no-var-requires": "error",
"@typescript-eslint/prefer-for-of": "warn",
"@typescript-eslint/prefer-function-type": "warn",
"@typescript-eslint/prefer-includes": "error",
"@typescript-eslint/prefer-string-starts-ends-with": "error",
"@typescript-eslint/promise-function-async": "error",
"@typescript-eslint/require-array-sort-compare": "error",
"@typescript-eslint/restrict-plus-operands": "error",
"semi": "off",
"@typescript-eslint/semi": ["error", "never"],
"@typescript-eslint/type-annotation-spacing": "error",
"@typescript-eslint/unbound-method": "error",
"no-shadow": "off",
"@typescript-eslint/no-shadow": ["error"]
},
"env": {
"node": true,
"es6": true
}
}

.gitattributes vendored Normal file

@@ -0,0 +1 @@
*.png filter=lfs diff=lfs merge=lfs -text


```diff
@@ -6,11 +6,13 @@ on:
     paths-ignore:
       - '**.md'
   workflow_dispatch:
 jobs:
-  build:
+  format-and-lint:
+    name: Format & Lint Checks
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v5
       - run: npm install
         name: Install dependencies
       - run: npm run bootstrap
@@ -18,9 +20,43 @@ jobs:
       - run: npm run build-all
         name: Build packages
       - run: npm run format-check
+        name: Check formatting
       - name: Check linter
         run: |
           npm run lint
-          git diff --exit-code
-      - name: Run tests
-        run: npm run test
+          git diff --exit-code -- . ':!packages/k8s/tests/test-kind.yaml'
+
+  docker-tests:
+    name: Docker Hook Tests
+    runs-on: ubuntu-latest
+    needs: format-and-lint
+    steps:
+      - uses: actions/checkout@v5
+      - run: npm install
+        name: Install dependencies
+      - run: npm run bootstrap
+        name: Bootstrap the packages
+      - run: npm run build-all
+        name: Build packages
+      - name: Run Docker tests
+        run: npm run test --prefix packages/docker
+
+  k8s-tests:
+    name: Kubernetes Hook Tests
+    runs-on: ubuntu-latest
+    needs: format-and-lint
+    steps:
+      - uses: actions/checkout@v5
+      - run: sed -i "s|{{PATHTOREPO}}|$(pwd)|" packages/k8s/tests/test-kind.yaml
+        name: Setup kind cluster yaml config
+      - uses: helm/kind-action@v1.12.0
+        with:
+          config: packages/k8s/tests/test-kind.yaml
+      - run: npm install
+        name: Install dependencies
+      - run: npm run bootstrap
+        name: Bootstrap the packages
+      - run: npm run build-all
+        name: Build packages
+      - name: Run Kubernetes tests
+        run: npm run test --prefix packages/k8s
```


```diff
@@ -38,11 +38,11 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v3
+        uses: actions/checkout@v5
       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v2
+        uses: github/codeql-action/init@v3
         with:
           languages: ${{ matrix.language }}
           # If you wish to specify custom queries, you can do so here or in a config file.
@@ -56,7 +56,7 @@ jobs:
       # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
       # If this step fails, then you should remove it and run the build manually (see below)
       - name: Autobuild
-        uses: github/codeql-action/autobuild@v2
+        uses: github/codeql-action/autobuild@v3
       # Command-line programs to run using the OS shell.
       # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
@@ -69,4 +69,4 @@ jobs:
       #   ./location_of_script_within_repo/buildscript.sh
       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v2
+        uses: github/codeql-action/analyze@v3
```


```diff
@@ -1,57 +1,70 @@
 name: CD - Release new version
 on:
   workflow_dispatch:
+permissions:
+  contents: write
 jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
-      - run: npm install
-        name: Install dependencies
-      - run: npm run bootstrap
-        name: Bootstrap the packages
-      - run: npm run build-all
-        name: Build packages
-      - uses: actions/github-script@v6
-        id: releaseNotes
+      - uses: actions/checkout@v5
+
+      - name: Install dependencies
+        run: npm install
+
+      - name: Bootstrap the packages
+        run: npm run bootstrap
+
+      - name: Build packages
+        run: npm run build-all
+
+      - uses: actions/github-script@v8
+        id: releaseVersion
         with:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          result-encoding: string
           script: |
             const fs = require('fs');
-            const hookVersion = require('./package.json').version
-            var releaseNotes = fs.readFileSync('${{ github.workspace }}/releaseNotes.md', 'utf8').replace(/<HOOK_VERSION>/g, hookVersion)
-            console.log(releaseNotes)
-            core.setOutput('version', hookVersion);
-            core.setOutput('note', releaseNotes);
+            return require('./package.json').version
+
       - name: Zip up releases
         run: |
-          zip -r -j actions-runner-hooks-docker-${{ steps.releaseNotes.outputs.version }}.zip packages/docker/dist
-          zip -r -j actions-runner-hooks-k8s-${{ steps.releaseNotes.outputs.version }}.zip packages/k8s/dist
-      - uses: actions/create-release@v1
-        id: createRelease
-        name: Create ${{ steps.releaseNotes.outputs.version }} Hook Release
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          tag_name: "v${{ steps.releaseNotes.outputs.version }}"
-          release_name: "v${{ steps.releaseNotes.outputs.version }}"
-          body: |
-            ${{ steps.releaseNotes.outputs.note }}
-      - name: Upload K8s hooks
-        uses: actions/upload-release-asset@v1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          upload_url: ${{ steps.createRelease.outputs.upload_url }}
-          asset_path: ${{ github.workspace }}/actions-runner-hooks-k8s-${{ steps.releaseNotes.outputs.version }}.zip
-          asset_name: actions-runner-hooks-k8s-${{ steps.releaseNotes.outputs.version }}.zip
-          asset_content_type: application/octet-stream
-      - name: Upload docker hooks
-        uses: actions/upload-release-asset@v1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          upload_url: ${{ steps.createRelease.outputs.upload_url }}
-          asset_path: ${{ github.workspace }}/actions-runner-hooks-docker-${{ steps.releaseNotes.outputs.version }}.zip
-          asset_name: actions-runner-hooks-docker-${{ steps.releaseNotes.outputs.version }}.zip
-          asset_content_type: application/octet-stream
+          zip -r -j actions-runner-hooks-docker-${{ steps.releaseVersion.outputs.result }}.zip packages/docker/dist
+          zip -r -j actions-runner-hooks-k8s-${{ steps.releaseVersion.outputs.result }}.zip packages/k8s/dist
+
+      - name: Calculate SHA
+        id: sha
+        shell: bash
+        run: |
+          sha_docker=$(sha256sum actions-runner-hooks-docker-${{ steps.releaseVersion.outputs.result }}.zip | awk '{print $1}')
+          echo "Docker SHA: $sha_docker"
+          echo "docker-sha=$sha_docker" >> $GITHUB_OUTPUT
+          sha_k8s=$(sha256sum actions-runner-hooks-k8s-${{ steps.releaseVersion.outputs.result }}.zip | awk '{print $1}')
+          echo "K8s SHA: $sha_k8s"
+          echo "k8s-sha=$sha_k8s" >> $GITHUB_OUTPUT
+
+      - name: Create release notes
+        id: releaseNotes
+        uses: actions/github-script@v8
+        with:
+          script: |
+            const fs = require('fs');
+            var releaseNotes = fs.readFileSync('${{ github.workspace }}/releaseNotes.md', 'utf8').replace(/<HOOK_VERSION>/g, '${{ steps.releaseVersion.outputs.result }}')
+            releaseNotes = releaseNotes.replace(/<DOCKER_SHA>/g, '${{ steps.sha.outputs.docker-sha }}')
+            releaseNotes = releaseNotes.replace(/<K8S_SHA>/g, '${{ steps.sha.outputs.k8s-sha }}')
+            console.log(releaseNotes)
+            fs.writeFileSync('${{ github.workspace }}/finalReleaseNotes.md', releaseNotes);
+
+      - name: Create ${{ steps.releaseVersion.outputs.result }} Hook Release
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        run: |
+          gh release create v${{ steps.releaseVersion.outputs.result }} \
+            --title "v${{ steps.releaseVersion.outputs.result }}" \
+            --repo ${{ github.repository }} \
+            --notes-file ${{ github.workspace }}/finalReleaseNotes.md \
+            --latest \
+            ${{ github.workspace }}/actions-runner-hooks-k8s-${{ steps.releaseVersion.outputs.result }}.zip \
+            ${{ github.workspace }}/actions-runner-hooks-docker-${{ steps.releaseVersion.outputs.result }}.zip
```

.gitignore vendored

```diff
@@ -2,3 +2,4 @@ node_modules/
 lib/
 dist/
 **/tests/_temp/**
+packages/k8s/tests/test-kind.yaml
```


```diff
@@ -1 +1 @@
-* @actions/actions-runtime
+* @actions/actions-compute @nikola-jokic
```


````diff
@@ -13,7 +13,7 @@ You'll need a runner compatible with hooks, a repository with container workflow
 - You'll need a runner compatible with hooks, a repository with container workflows to which you can register the runner and the hooks from this repository.
 - See [the runner contributing.md](../../github/CONTRIBUTING.MD) for how to get started with runner development.
 - Build your hook using `npm run build`
-- Enable the hooks by setting `ACTIONS_RUNNER_CONTAINER_HOOK=./packages/{libraryname}/dist/index.js` file generated by [ncc](https://github.com/vercel/ncc)
+- Enable the hooks by setting `ACTIONS_RUNNER_CONTAINER_HOOKS=./packages/{libraryname}/dist/index.js` file generated by [ncc](https://github.com/vercel/ncc)
 - Configure your self hosted runner against the a repository you have admin access
 - Run a workflow with a container job, for example
 ```
````


```diff
@@ -3,6 +3,24 @@ The Runner Container Hooks repo provides a set of packages that implement the co
 More information on how to implement your own hooks can be found in the [adr](https://github.com/actions/runner/pull/1891). The `examples` folder provides example inputs for each hook.

+### Note
+
+Thank you for your interest in this GitHub action, however, right now we are not taking contributions.
+
+We continue to focus our resources on strategic areas that help our customers be successful while making developers' lives easier. While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas of Actions and are not taking contributions to this repository at this time. The GitHub public roadmap is the best place to follow along for any updates on features we're working on and what stage they're in.
+
+We are taking the following steps to better direct requests related to GitHub Actions, including:
+
+1. We will be directing questions and support requests to our [Community Discussions area](https://github.com/orgs/community/discussions/categories/actions)
+2. High Priority bugs can be reported through Community Discussions or you can report these to our support team https://support.github.com/contact/bug-report.
+3. Security Issues should be handled as per our [security.md](security.md)
+
+We will still provide security updates for this project and fix major breaking changes during this time.
+
+You are welcome to still raise bugs in this repo.
+
 ## Background

 Three projects are included in the `packages` folder
@@ -10,10 +28,6 @@ Three projects are included in the `packages` folder
 - docker: A hook implementation of the runner's docker implementation. More details can be found in the [readme](./packages/docker/README.md)
 - hooklib: a shared library which contains typescript definitions and utilities that the other projects consume

-### Requirements
-
-We welcome contributions. See [how to contribute to get started](./CONTRIBUTING.md).
-
 ## License

 This project is licensed under the terms of the MIT open source license. Please refer to [MIT](./LICENSE.md) for the full terms.
```


@@ -0,0 +1,184 @@
# ADR 0072: Using Ephemeral Containers
**Date:** 27 March 2023
**Status**: Rejected <!--Accepted|Rejected|Superseded|Deprecated-->
## Context
We are evaluating using Kubernetes [ephemeral containers](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/) as a drop-in replacement for creating pods for [jobs that run in containers](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) and [service containers](https://docs.github.com/en/actions/using-containerized-services/about-service-containers).
The main motivator behind using ephemeral containers is to eliminate the need for [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Persistent Volume implementations vary depending on the provider, and we want to avoid building a dependency on them in order to provide our end-users a consistent experience.
With ephemeral containers we could leverage [emptyDir volumes](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir), which fit our use case better and whose behaviour is consistent across providers.
However, it's important to acknowledge that ephemeral containers were not designed to handle workloads but rather provide a mechanism to inspect running containers for debugging and troubleshooting purposes.
## Evaluation
The criteria that we are using to evaluate whether ephemeral containers are fit for purpose are:
- Networking
- Storage
- Security
- Resource limits
- Logs
- Customizability
### Networking
Ephemeral containers share the networking namespace of the pod they are attached to. This means that ephemeral containers can access the same network interfaces as the pod and can communicate with other containers in the same pod. However, ephemeral containers cannot have ports configured, and as such the fields `ports`, `livenessProbe`, and `readinessProbe` are not available [^1][^2]
In this scenario we have 3 containers in a pod:
- `runner`: the main container that runs the GitHub Actions job
- `debugger`: the first ephemeral container
- `debugger2`: the second ephemeral container
By sequentially opening ports on each of these containers and connecting to them we can demonstrate that the communication flow between the runner and the debuggers is feasible.
<details>
<summary>1. Runner -> Debugger communication</summary>
![runner->debugger](./images/runner-debugger.png)
</details>
<details>
<summary>2. Debugger -> Runner communication</summary>
![debugger->runner](./images/debugger-runner.png)
</details>
<details>
<summary>3. Debugger2 -> Debugger communication</summary>
![debugger2->debugger](./images/debugger2-debugger.png)
</details>
### Storage
An emptyDir volume can be successfully mounted (read/write) by the runner as well as the ephemeral containers. This means that ephemeral containers can share data with the runner and other ephemeral containers.
<details>
<summary>Configuration</summary>
```yaml
# Extracted from the values.yaml for the gha-runner-scale-set helm chart
spec:
containers:
- name: runner
image: ghcr.io/actions/actions-runner:latest
command: ["/home/runner/run.sh"]
volumeMounts:
- mountPath: /workspace
name: work-volume
volumes:
- name: work-volume
emptyDir:
sizeLimit: 1Gi
```
```bash
# The API call to the Kubernetes API used to create the ephemeral containers
POD_NAME="arc-runner-set-6sfwd-runner-k7qq6"
NAMESPACE="arc-runners"
curl -v "https://<IP>:<PORT>/api/v1/namespaces/$NAMESPACE/pods/$POD_NAME/ephemeralcontainers" \
-X PATCH \
-H 'Content-Type: application/strategic-merge-patch+json' \
--cacert <PATH_TO_CACERT> \
--cert <PATH_TO_CERT> \
--key <PATH_TO_CLIENT_KEY> \
-d '
{
"spec":
{
"ephemeralContainers":
[
{
"name": "debugger",
"command": ["sh"],
"image": "ghcr.io/actions/actions-runner:latest",
"targetContainerName": "runner",
"stdin": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/workspace",
"name": "work-volume",
"readOnly": false
}]
},
{
"name": "debugger2",
"command": ["sh"],
"image": "ghcr.io/actions/actions-runner:latest",
"targetContainerName": "runner",
"stdin": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/workspace",
"name": "work-volume",
"readOnly": false
}]
}
]
}
}'
```
</details>
<details>
<summary>emptyDir volume mount</summary>
![emptyDir volume mount](./images/emptyDir_volume.png)
</details>
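The strategic-merge-patch sent in the curl call above can also be built programmatically. The sketch below is illustrative only — the pod, container, and volume names are the placeholders from this experiment — and it constructs the same body that gets PATCHed to the pod's `/ephemeralcontainers` subresource:

```python
import json

def ephemeral_container_patch(name, image, target_container, volume, mount_path="/workspace"):
    """Build the strategic-merge-patch body for the pod's /ephemeralcontainers subresource."""
    return {
        "spec": {
            "ephemeralContainers": [
                {
                    "name": name,
                    "command": ["sh"],
                    "image": image,
                    "targetContainerName": target_container,
                    "stdin": True,
                    "tty": True,
                    "volumeMounts": [
                        {"mountPath": mount_path, "name": volume, "readOnly": False}
                    ],
                }
            ]
        }
    }

# Same debugger container as in the experiment above
patch = ephemeral_container_patch(
    "debugger", "ghcr.io/actions/actions-runner:latest", "runner", "work-volume"
)
print(json.dumps(patch, indent=2))
```

The resulting JSON can be sent with the same curl invocation shown above, or with any Kubernetes client that supports the ephemeral-containers subresource.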
### Security
According to the [ephemeral containers API specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#ephemeralcontainer-v1-core) the configuration of the `securityContext` field is possible.
Ephemeral containers share the same network namespace as the pod they are attached to. This means that ephemeral containers can access the same network interfaces as the pod and can communicate with other containers in the same pod.
It is also possible for ephemeral containers to [share the process namespace](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/) with the other containers in the pod. This is disabled by default.
The above could have unpredictable security implications.
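Process-namespace sharing is opt-in at the pod level. As a hedged illustration (this pod spec is a demonstration assumption, not something the hook creates), enabling it looks like:

```yaml
# Hypothetical pod spec: shareProcessNamespace defaults to false, so an
# ephemeral container cannot see the runner's processes unless the pod
# explicitly opts in as below.
apiVersion: v1
kind: Pod
metadata:
  name: runner-pod
spec:
  shareProcessNamespace: true
  containers:
    - name: runner
      image: ghcr.io/actions/actions-runner:latest
      command: ["/home/runner/run.sh"]
```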
### Resource limits
Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. [^1] This is a major drawback as it means that ephemeral containers cannot be configured to have resource limits.
There are no guaranteed resources for ad-hoc troubleshooting. If troubleshooting causes a pod to exceed its resource limit it may be evicted. [^3]
### Logs
Since ephemeral containers can share volumes with the runner container, it's possible to write logs to the same volume and have them available to the runner container.
### Customizability
Ephemeral containers can run any provided image and tag, and they can be customized to run arbitrary jobs. However, it's important to note that the following are not feasible:
- Lifecycle is not allowed for ephemeral containers
- Ephemeral containers will stop when their command exits, such as exiting a shell, and they will not be restarted. Unlike `kubectl exec`, processes in Ephemeral Containers will not receive an `EOF` if their connections are interrupted, so shells won't automatically exit on disconnect. There is no API support for killing or restarting an ephemeral container. The only way to exit the container is to send it an OS signal. [^4]
- Probes are not allowed for ephemeral containers.
- Ports are not allowed for ephemeral containers.
## Decision
While the evaluation shows that ephemeral containers can be used to run jobs in containers, it's important to acknowledge that ephemeral containers were not designed to handle workloads but rather provide a mechanism to inspect running containers for debugging and troubleshooting purposes.
Given the limitations of ephemeral containers, we decided not to use them outside of their intended purpose.
## Consequences
Proposal rejected, no further action required. This document will be used as a reference for future discussions.
[^1]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#ephemeralcontainer-v1-core
[^2]: https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
[^3]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/277-ephemeral-containers/README.md#notesconstraintscaveats
[^4]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/277-ephemeral-containers/README.md#ephemeral-container-lifecycle


@@ -0,0 +1,34 @@
# ADR 0096: Hook extensions
**Date:** 3 August 2023
**Status**: Superseded [^1]
## Context
The current implementation of container hooks does not allow users to customize the pods created by the hook. While the implementation is designed to be used as is or as a starting point, building and maintaining a custom hook implementation just to specify additional fields is not a good user experience.
## Decision
We have decided to add hook extensions to the container hook implementation. This will allow users to customize the pods created by the hook by specifying additional fields. The hook extensions will be implemented in a way that is backwards-compatible with the existing hook implementation.
To allow customization, the runner executing the hook should have the `ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE` environment variable pointing to a YAML file on the runner system. The extension specified in that file is applied to both job pods and container steps.
If the environment variable is set but the file can't be read, the hook fails, signaling incorrect configuration.
If the environment variable is not set, the hook applies the default spec.
When the hook is able to read the extended spec, it first creates a default configuration and then merges the modified fields in the following way:
1. The `.metadata` fields `labels` and `annotations` are appended, unless they are reserved.
2. The pod spec fields other than `containers` and `volumes` are applied from the template, possibly overwriting the default fields.
3. Volumes are appended to the default volumes.
4. The containers are merged based on their names:
    1. If the name of the container *is not* "$job", the entire container spec is added to the pod definition.
    2. If the name of the container *is* "$job", the `name` and `image` fields are ignored; `env`, `volumeMounts`, and `ports` are appended to the default container spec created by the hook, while the remaining fields overwrite those of the newly created container spec.
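The merge order above can be sketched as follows. This is an illustrative re-implementation using plain objects, not the hook's actual code; `mergePodSpec` and `PodTemplate` are hypothetical names, and the name-based container merge is elided:

```typescript
interface PodTemplate {
  metadata?: {
    labels?: Record<string, string>
    annotations?: Record<string, string>
  }
  spec?: Record<string, unknown> & { containers?: unknown[]; volumes?: unknown[] }
}

function mergePodSpec(base: PodTemplate, extension: PodTemplate): PodTemplate {
  // Start from a deep copy of the default configuration created by the hook
  const merged: PodTemplate = JSON.parse(JSON.stringify(base))
  // 1. labels and annotations are appended (extension entries win on key clashes)
  merged.metadata = {
    labels: { ...base.metadata?.labels, ...extension.metadata?.labels },
    annotations: { ...base.metadata?.annotations, ...extension.metadata?.annotations }
  }
  // 2. spec fields other than containers/volumes overwrite the defaults
  const { containers, volumes, ...rest } = extension.spec ?? {}
  void containers // containers need the name-based merge ("$job" etc.), elided here
  merged.spec = { ...merged.spec, ...rest }
  // 3. volumes are appended to the default volumes
  merged.spec.volumes = [...(base.spec?.volumes ?? []), ...(volumes ?? [])]
  return merged
}
```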
## Consequences
The addition of hook extensions will provide a better user experience for users who need to customize the pods created by the container hook. However, it will require additional effort to provide the template to the runner pod and configure it properly.
[^1]: Superseded by [ADR 0134](0134-hook-extensions.md)


@@ -0,0 +1,41 @@
# ADR 0134: Hook extensions
**Date:** 20 February 2024
**Status**: Accepted [^1]
## Context
The current implementation of container hooks does not allow users to customize the pods created by the hook.
While the implementation is designed to be used as is or as a starting point, building and maintaining a custom hook implementation just to specify additional fields is not a good user experience.
## Decision
We have decided to add hook extensions to the container hook implementation.
This will allow users to customize the pods created by the hook by specifying additional fields.
The hook extensions will be implemented in a way that is backwards-compatible with the existing hook implementation.
To allow customization, the runner executing the hook should have the `ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE` environment variable pointing to a YAML file on the runner system.
The extension specified in that file is applied to both job pods and container steps.
If the environment variable is set but the file can't be read, the hook fails, signaling incorrect configuration.
If the environment variable is not set, the hook applies the default spec.
When the hook is able to read the extended spec, it first creates a default configuration and then merges the modified fields in the following way:
1. The `.metadata` fields `labels` and `annotations` are appended, unless they are reserved.
2. The pod spec fields other than `containers` and `volumes` are applied from the template, possibly overwriting the default fields.
3. Volumes are appended to the default volumes.
4. The containers are merged based on their names:
    1. If the name of the container *is* "$job", the `name` and `image` fields are ignored; `env`, `volumeMounts`, and `ports` are appended to the default container spec created by the hook, while the remaining fields overwrite those of the newly created container spec.
    2. If the name of the container *starts with* "$" and matches the name of a [service container](https://docs.github.com/en/actions/using-containerized-services/about-service-containers), the `name` and `image` fields are ignored; `env`, `volumeMounts`, and `ports` are appended to the default spec the hook creates for that service container, while the remaining fields overwrite those of the created container spec.
    If no service container with that name is defined in the workflow, the spec extension is ignored.
    3. If the name of the container *does not start with* "$", the entire container spec is added to the pod definition.
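As an illustration, a minimal template sketch covering the three container cases; the values are hypothetical, and it assumes the workflow defines a `redis` service container:

```yaml
metadata:
  labels:
    extended-by: "template" # rule 1: appended to the default labels
spec:
  nodeSelector:             # rule 2: plain spec fields overwrite the defaults
    disktype: ssd
  containers:
    - name: $job            # rule 4.1: env appended to the job container; name/image ignored
      env:
        - name: EXTRA
          value: "1"
    - name: $redis          # rule 4.2: applied to the "redis" service container, if defined
      resources:
        limits:
          memory: "1Gi"
    - name: sidecar         # rule 4.3: added to the pod as-is
      image: busybox:1.28
```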
## Consequences
The addition of hook extensions will provide a better user experience for users who need to customize the pods created by the container hook.
However, it will require additional effort to provide the template to the runner pod and configure it properly.
[^1]: Supersedes [ADR 0096](0096-hook-extensions.md)

BIN
docs/adrs/images/debugger-runner.png (Stored with Git LFS) Normal file


BIN
docs/adrs/images/debugger2-debugger.png (Stored with Git LFS) Normal file


BIN
docs/adrs/images/emptyDir_volume.png (Stored with Git LFS) Normal file


BIN
docs/adrs/images/runner-debugger.png (Stored with Git LFS) Normal file


122
eslint.config.js Normal file

@@ -0,0 +1,122 @@
const eslint = require('@eslint/js');
const tseslint = require('@typescript-eslint/eslint-plugin');
const tsparser = require('@typescript-eslint/parser');
const globals = require('globals');
const pluginJest = require('eslint-plugin-jest');

module.exports = [
  eslint.configs.recommended,
  {
    files: ['**/*.ts'],
    languageOptions: {
      parser: tsparser,
      parserOptions: {
        ecmaVersion: 2018,
        sourceType: 'module',
        project: ['./tsconfig.json', './packages/*/tsconfig.json']
      },
      globals: {
        ...globals.node,
        ...globals.es6
      }
    },
    plugins: {
      '@typescript-eslint': tseslint,
    },
    rules: {
      // Disabled rules from original config
      'eslint-comments/no-use': 'off',
      'import/no-namespace': 'off',
      'no-constant-condition': 'off',
      'no-unused-vars': 'off',
      'i18n-text/no-en': 'off',
      'camelcase': 'off',
      'semi': 'off',
      'no-shadow': 'off',
      // TypeScript ESLint rules
      '@typescript-eslint/no-unused-vars': 'error',
      '@typescript-eslint/explicit-member-accessibility': ['error', { accessibility: 'no-public' }],
      '@typescript-eslint/no-require-imports': 'error',
      '@typescript-eslint/array-type': 'error',
      '@typescript-eslint/await-thenable': 'error',
      '@typescript-eslint/explicit-function-return-type': ['error', { allowExpressions: true }],
      '@typescript-eslint/no-array-constructor': 'error',
      '@typescript-eslint/no-empty-interface': 'error',
      '@typescript-eslint/no-explicit-any': 'off', // Fixed: removed duplicate and kept only this one
      '@typescript-eslint/no-extraneous-class': 'error',
      '@typescript-eslint/no-floating-promises': 'error',
      '@typescript-eslint/no-for-in-array': 'error',
      '@typescript-eslint/no-inferrable-types': 'error',
      '@typescript-eslint/no-misused-new': 'error',
      '@typescript-eslint/no-namespace': 'error',
      '@typescript-eslint/no-non-null-assertion': 'warn',
      '@typescript-eslint/no-unnecessary-qualifier': 'error',
      '@typescript-eslint/no-unnecessary-type-assertion': 'error',
      '@typescript-eslint/no-useless-constructor': 'error',
      '@typescript-eslint/no-var-requires': 'error',
      '@typescript-eslint/prefer-for-of': 'warn',
      '@typescript-eslint/prefer-function-type': 'warn',
      '@typescript-eslint/prefer-includes': 'error',
      '@typescript-eslint/prefer-string-starts-ends-with': 'error',
      '@typescript-eslint/promise-function-async': 'error',
      '@typescript-eslint/require-array-sort-compare': 'error',
      '@typescript-eslint/restrict-plus-operands': 'error',
      '@typescript-eslint/unbound-method': 'error',
      '@typescript-eslint/no-shadow': ['error']
    }
  },
  {
    // Test files configuration - Fixed file pattern to match .ts files
    files: ['**/*test*.ts', '**/*spec*.ts', '**/tests/**/*.ts'],
    languageOptions: {
      parser: tsparser,
      parserOptions: {
        ecmaVersion: 2018,
        sourceType: 'module',
        project: ['./tsconfig.json', './packages/*/tsconfig.json']
      },
      globals: {
        ...globals.node,
        ...globals.es6,
        // Fixed Jest globals
        describe: 'readonly',
        it: 'readonly',
        test: 'readonly',
        expect: 'readonly',
        beforeEach: 'readonly',
        afterEach: 'readonly',
        beforeAll: 'readonly',
        afterAll: 'readonly',
        jest: 'readonly'
      }
    },
    plugins: {
      '@typescript-eslint': tseslint,
      jest: pluginJest
    },
    rules: {
      // Disable no-undef for test files since Jest globals are handled above
      'no-undef': 'off',
      // Relax some rules for test files
      '@typescript-eslint/no-explicit-any': 'off',
      '@typescript-eslint/no-non-null-assertion': 'off',
      '@typescript-eslint/explicit-function-return-type': 'off'
    }
  },
  {
    files: ['**/jest.config.js', '**/jest.setup.js'],
    languageOptions: {
      globals: {
        ...globals.node,
        jest: 'readonly',
        module: 'writable'
      }
    },
    rules: {
      '@typescript-eslint/no-require-imports': 'off',
      '@typescript-eslint/no-var-requires': 'off',
      'import/no-commonjs': 'off'
    }
  }
];


@@ -0,0 +1,3 @@
#!/bin/bash
echo "Hello World"

38
examples/extension.yaml Normal file

@@ -0,0 +1,38 @@
metadata:
  annotations:
    annotated-by: "extension"
  labels:
    labeled-by: "extension"
spec:
  restartPolicy: Never
  containers:
    - name: $job # overwrites job container
      env:
        - name: ENV1
          value: "value1"
      imagePullPolicy: Always
      image: "busybox:1.28" # Ignored
      command:
        - sh
      args:
        - -c
        - sleep 50
    - name: $redis # overwrites redis service
      env:
        - name: ENV2
          value: "value2"
      image: "busybox:1.28" # Ignored
      resources:
        requests:
          memory: "1Mi"
          cpu: "1"
        limits:
          memory: "1Gi"
          cpu: "2"
    - name: side-car
      image: "ubuntu:latest" # required
      command:
        - sh
      args:
        - -c
        - sleep 60


@@ -4,8 +4,8 @@
   "state": {},
   "args": {
     "container": {
-      "image": "node:14.16",
-      "workingDirectory": "/__w/thboop-test2/thboop-test2",
+      "image": "node:22",
+      "workingDirectory": "/__w/repo/repo",
       "createOptions": "--cpus 1",
       "environmentVariables": {
         "NODE_ENV": "development"
@@ -24,37 +24,37 @@
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work",
         "targetVolumePath": "/__w",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/externals",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/externals",
         "targetVolumePath": "/__e",
         "readOnly": true
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp",
         "targetVolumePath": "/__w/_temp",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_actions",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_actions",
         "targetVolumePath": "/__w/_actions",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_tool",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_tool",
         "targetVolumePath": "/__w/_tool",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_home",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_home",
         "targetVolumePath": "/github/home",
         "readOnly": false
       },
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
         "targetVolumePath": "/github/workflow",
         "readOnly": false
       }
@@ -73,6 +73,8 @@
       "contextName": "redis",
       "image": "redis",
       "createOptions": "--cpus 1",
+      "entrypoint": null,
+      "entryPointArgs": [],
       "environmentVariables": {},
       "userMountVolumes": [
         {


@@ -9,14 +9,14 @@
     }
   },
   "args": {
-    "image": "node:14.16",
+    "image": "node:22",
     "dockerfile": null,
     "entryPointArgs": [
-      "-c",
-      "echo \"hello world2\""
+      "-e",
+      "example-script.sh"
     ],
     "entryPoint": "bash",
-    "workingDirectory": "/__w/thboop-test2/thboop-test2",
+    "workingDirectory": "/__w/repo/repo",
     "createOptions": "--cpus 1",
     "environmentVariables": {
       "NODE_ENV": "development"
@@ -34,27 +34,27 @@
     ],
     "systemMountVolumes": [
       {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work",
        "targetVolumePath": "/__w",
        "readOnly": false
      },
      {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/externals",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/externals",
        "targetVolumePath": "/__e",
        "readOnly": true
      },
      {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp",
        "targetVolumePath": "/__w/_temp",
        "readOnly": false
      },
      {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_actions",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_actions",
        "targetVolumePath": "/__w/_actions",
        "readOnly": false
      },
      {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_tool",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_tool",
        "targetVolumePath": "/__w/_tool",
        "readOnly": false
      },
@@ -64,7 +64,7 @@
        "readOnly": false
      },
      {
-        "sourceVolumePath": "//Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
+        "sourceVolumePath": "/Users/thomas/git/runner/_layout/_work/_temp/_github_workflow",
        "targetVolumePath": "/github/workflow",
        "readOnly": false
      }


@@ -10,8 +10,8 @@
   },
   "args": {
     "entryPointArgs": [
-      "-c",
-      "echo \"hello world\""
+      "-e",
+      "example-script.sh"
     ],
     "entryPoint": "bash",
     "environmentVariables": {
@@ -21,6 +21,6 @@
       "/foo/bar",
       "bar/foo"
     ],
-    "workingDirectory": "/__w/thboop-test2/thboop-test2"
+    "workingDirectory": "/__w/repo/repo"
   }
 }

6207
package-lock.json generated

File diff suppressed because it is too large.


@@ -1,13 +1,13 @@
 {
   "name": "hooks",
-  "version": "0.1.0",
+  "version": "0.8.0",
   "description": "Three projects are included - k8s: a kubernetes hook implementation that spins up pods dynamically to run a job - docker: A hook implementation of the runner's docker implementation - A hook lib, which contains shared typescript definitions and utilities that the other packages consume",
   "main": "",
   "directories": {
     "doc": "docs"
   },
   "scripts": {
-    "test": "npm run test --prefix packages/docker",
+    "test": "npm run test --prefix packages/docker && npm run test --prefix packages/k8s",
     "bootstrap": "npm install --prefix packages/hooklib && npm install --prefix packages/k8s && npm install --prefix packages/docker",
     "format": "prettier --write '**/*.ts'",
     "format-check": "prettier --check '**/*.ts'",
@@ -25,12 +25,18 @@
   },
   "homepage": "https://github.com/actions/runner-container-hooks#readme",
   "devDependencies": {
-    "@types/jest": "^27.5.1",
-    "@types/node": "^17.0.23",
-    "@typescript-eslint/parser": "^5.18.0",
-    "eslint": "^8.12.0",
-    "eslint-plugin-github": "^4.3.6",
-    "prettier": "^2.6.2",
-    "typescript": "^4.6.3"
+    "@eslint/js": "^9.31.0",
+    "@types/jest": "^30.0.0",
+    "@types/node": "^24.0.14",
+    "@typescript-eslint/eslint-plugin": "^8.37.0",
+    "@typescript-eslint/parser": "^8.37.0",
+    "eslint": "^9.31.0",
+    "eslint-plugin-github": "^6.0.0",
+    "globals": "^15.12.0",
+    "prettier": "^3.6.2",
+    "typescript": "^5.8.3"
+  },
+  "dependencies": {
+    "eslint-plugin-jest": "^29.0.1"
   }
 }


@@ -1,13 +1,26 @@
+// eslint-disable-next-line import/no-commonjs
 module.exports = {
   clearMocks: true,
-  preset: 'ts-jest',
   moduleFileExtensions: ['js', 'ts'],
   testEnvironment: 'node',
   testMatch: ['**/*-test.ts'],
   testRunner: 'jest-circus/runner',
+  verbose: true,
   transform: {
-    '^.+\\.ts$': 'ts-jest'
-  },
-  setupFilesAfterEnv: ['./jest.setup.js'],
-  verbose: true
+    '^.+\\.ts$': [
+      'ts-jest',
+      {
+        tsconfig: 'tsconfig.test.json'
+      }
+    ],
+    // Transform ESM modules to CommonJS
+    '^.+\\.(js|mjs)$': ['babel-jest', {
+      presets: [['@babel/preset-env', { targets: { node: 'current' } }]]
+    }]
+  },
+  transformIgnorePatterns: [
+    // Transform these ESM packages
+    'node_modules/(?!(shlex|@kubernetes/client-node|openid-client|oauth4webapi|jose|uuid)/)'
+  ],
+  setupFilesAfterEnv: ['./jest.setup.js']
 }


@@ -1 +1 @@
-jest.setTimeout(90000)
+jest.setTimeout(500000)

File diff suppressed because it is too large.


@@ -5,25 +5,31 @@
   "main": "lib/index.js",
   "scripts": {
     "test": "jest --runInBand",
-    "build": "npx tsc && npx ncc build"
+    "build": "npx tsc && npx ncc build",
+    "format": "prettier --write '**/*.ts'",
+    "format-check": "prettier --check '**/*.ts'",
+    "lint": "eslint src/**/*.ts"
   },
   "author": "",
   "license": "MIT",
   "dependencies": {
-    "@actions/core": "^1.6.0",
+    "@actions/core": "^1.11.1",
     "@actions/exec": "^1.1.1",
     "hooklib": "file:../hooklib",
-    "uuid": "^8.3.2"
+    "shlex": "^3.0.0",
+    "uuid": "^11.1.0"
   },
   "devDependencies": {
-    "@types/jest": "^27.4.1",
-    "@types/node": "^17.0.23",
-    "@typescript-eslint/parser": "^5.18.0",
-    "@vercel/ncc": "^0.33.4",
-    "jest": "^27.5.1",
-    "ts-jest": "^27.1.4",
-    "ts-node": "^10.7.0",
-    "tsconfig-paths": "^3.14.1",
-    "typescript": "^4.6.3"
+    "@babel/core": "^7.25.2",
+    "@babel/preset-env": "^7.25.4",
+    "@types/jest": "^30.0.0",
+    "@types/node": "^24.0.14",
+    "@typescript-eslint/parser": "^8.37.0",
+    "@vercel/ncc": "^0.38.3",
+    "jest": "^30.0.4",
+    "ts-jest": "^29.4.0",
+    "ts-node": "^10.9.2",
+    "tsconfig-paths": "^4.2.0",
+    "typescript": "^5.8.3"
   }
 }


@@ -2,12 +2,11 @@ import * as core from '@actions/core'
 import * as fs from 'fs'
 import {
   ContainerInfo,
-  JobContainerInfo,
+  Registry,
   RunContainerStepArgs,
-  ServiceContainerInfo,
-  StepContainerInfo
+  ServiceContainerInfo
 } from 'hooklib/lib'
-import path from 'path'
+import * as path from 'path'
 import { env } from 'process'
 import { v4 as uuidv4 } from 'uuid'
 import { runDockerCommand, RunDockerCommandOptions } from '../utils'
@@ -43,23 +42,26 @@ export async function createContainer(
   }
   if (args.environmentVariables) {
-    for (const [key, value] of Object.entries(args.environmentVariables)) {
-      dockerArgs.push('-e')
-      if (!value) {
-        dockerArgs.push(`"${key}"`)
-      } else {
-        dockerArgs.push(`"${key}=${value}"`)
-      }
+    for (const [key] of Object.entries(args.environmentVariables)) {
+      dockerArgs.push('-e', key)
     }
   }
+  dockerArgs.push('-e', 'GITHUB_ACTIONS=true')
+  // Use same behavior as the runner https://github.com/actions/runner/blob/27d9c886ab9a45e0013cb462529ac85d581f8c41/src/Runner.Worker/Container/DockerCommandManager.cs#L150
+  if (!('CI' in (args.environmentVariables ?? {}))) {
+    dockerArgs.push('-e', 'CI=true')
+  }
   const mountVolumes = [
     ...(args.userMountVolumes || []),
-    ...((args as JobContainerInfo | StepContainerInfo).systemMountVolumes || [])
+    ...(args.systemMountVolumes || [])
   ]
   for (const mountVolume of mountVolumes) {
     dockerArgs.push(
-      `-v=${mountVolume.sourceVolumePath}:${mountVolume.targetVolumePath}`
+      `-v=${mountVolume.sourceVolumePath}:${mountVolume.targetVolumePath}${
+        mountVolume.readOnly ? ':ro' : ''
+      }`
     )
   }
   if (args.entryPoint) {
@@ -74,7 +76,9 @@ export async function createContainer(
     }
   }
-  const id = (await runDockerCommand(dockerArgs)).trim()
+  const id = (
+    await runDockerCommand(dockerArgs, { env: args.environmentVariables })
+  ).trim()
   if (!id) {
     throw new Error('Could not read id from docker command')
   }
@@ -94,11 +98,12 @@ export async function containerPull(
   image: string,
   configLocation: string
 ): Promise<void> {
-  const dockerArgs: string[] = ['pull']
+  const dockerArgs: string[] = []
   if (configLocation) {
     dockerArgs.push('--config')
     dockerArgs.push(configLocation)
   }
+  dockerArgs.push('pull')
   dockerArgs.push(image)
   for (let i = 0; i < 3; i++) {
     try {
@@ -146,17 +151,41 @@ export async function containerBuild(
   args: RunContainerStepArgs,
   tag: string
 ): Promise<void> {
-  const context = path.dirname(`${env.GITHUB_WORKSPACE}/${args.dockerfile}`)
+  if (!args.dockerfile) {
+    throw new Error("Container build expects 'args.dockerfile' to be set")
+  }
   const dockerArgs: string[] = ['build']
   dockerArgs.push('-t', tag)
-  dockerArgs.push('-f', `${env.GITHUB_WORKSPACE}/${args.dockerfile}`)
-  dockerArgs.push(context)
-  // TODO: figure out build working directory
+  dockerArgs.push('-f', args.dockerfile)
+  dockerArgs.push(getBuildContext(args.dockerfile))
   await runDockerCommand(dockerArgs, {
-    workingDir: args['buildWorkingDirectory']
+    workingDir: getWorkingDir(args.dockerfile)
   })
 }

+function getBuildContext(dockerfilePath: string): string {
+  return path.dirname(dockerfilePath)
+}
+
+function getWorkingDir(dockerfilePath: string): string {
+  const workspace = env.GITHUB_WORKSPACE as string
+  let workingDir = workspace
+  if (!dockerfilePath?.includes(workspace)) {
+    // This is container action
+    const pathSplit = dockerfilePath.split('/')
+    const actionIndex = pathSplit?.findIndex(d => d === '_actions')
+    if (actionIndex) {
+      const actionSubdirectoryDepth = 3 // handle + repo + [branch | tag]
+      pathSplit.splice(actionIndex + actionSubdirectoryDepth + 1)
+      workingDir = pathSplit.join('/')
+    }
+  }
+  return workingDir
+}
+
 export async function containerLogs(id: string): Promise<void> {
   const dockerArgs: string[] = ['logs']
   dockerArgs.push('--details')
@@ -171,6 +200,18 @@ export async function containerNetworkRemove(network: string): Promise<void> {
   await runDockerCommand(dockerArgs)
 }

+export async function containerNetworkPrune(): Promise<void> {
+  const dockerArgs = [
+    'network',
+    'prune',
+    '--force',
+    '--filter',
+    `label=${getRunnerLabel()}`
+  ]
+  await runDockerCommand(dockerArgs)
+}
+
 export async function containerPrune(): Promise<void> {
   const dockerPSArgs: string[] = [
     'ps',
@@ -238,22 +279,36 @@ export async function healthCheck({
 export async function containerPorts(id: string): Promise<string[]> {
   const dockerArgs = ['port', id]
   const portMappings = (await runDockerCommand(dockerArgs)).trim()
-  return portMappings.split('\n')
+  return portMappings.split('\n').filter(p => !!p)
 }

-export async function registryLogin(args): Promise<string> {
-  if (!args.registry) {
+export async function getContainerEnvValue(
+  id: string,
+  name: string
+): Promise<string> {
+  const dockerArgs = [
+    'inspect',
+    `--format='{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "${name}"}}{{index (split $value "=") 1}}{{end}}{{end}}'`,
+    id
+  ]
+  const value = (await runDockerCommand(dockerArgs)).trim()
+  const lines = value.split('\n')
+  return lines.length ? lines[0].replace(/^'/, '').replace(/'$/, '') : ''
+}
+
+export async function registryLogin(registry?: Registry): Promise<string> {
+  if (!registry) {
     return ''
   }
   const credentials = {
-    username: args.registry.username,
-    password: args.registry.password
+    username: registry.username,
+    password: registry.password
   }
   const configLocation = `${env.RUNNER_TEMP}/.docker_${uuidv4()}`
   fs.mkdirSync(configLocation)
   try {
-    await dockerLogin(configLocation, args.registry.serverUrl, credentials)
+    await dockerLogin(configLocation, registry.serverUrl, credentials)
   } catch (error) {
     fs.rmdirSync(configLocation, { recursive: true })
     throw error
@@ -271,7 +326,7 @@ export async function registryLogout(configLocation: string): Promise<void> {
 async function dockerLogin(
   configLocation: string,
   registry: string,
-  credentials: { username: string; password: string }
+  credentials: { username?: string; password?: string }
 ): Promise<void> {
   const credentialsArgs =
     credentials.username && credentials.password
@@ -307,30 +362,36 @@ export async function containerExecStep(
 ): Promise<void> {
   const dockerArgs: string[] = ['exec', '-i']
   dockerArgs.push(`--workdir=${args.workingDirectory}`)
-  for (const [key, value] of Object.entries(args['environmentVariables'])) {
+  for (const [key] of Object.entries(args['environmentVariables'])) {
     dockerArgs.push('-e')
-    if (!value) {
-      dockerArgs.push(`"${key}"`)
-    } else {
-      dockerArgs.push(`"${key}=${value}"`)
-    }
+    dockerArgs.push(key)
   }
-  // Todo figure out prepend path and update it here
-  // (we need to pass path in as -e Path={fullpath}) where {fullpath is the prepend path added to the current containers path}
+  if (args.prependPath?.length) {
+    // TODO: remove compatibility with typeof prependPath === 'string' as we bump to next major version, the hooks will lose PrependPath compat with runners 2.293.0 and older
+    const prependPath =
+      typeof args.prependPath === 'string'
+        ? args.prependPath
+        : args.prependPath.join(':')
+    dockerArgs.push(
+      '-e',
+      `PATH=${prependPath}:${await getContainerEnvValue(containerId, 'PATH')}`
+    )
+  }
   dockerArgs.push(containerId)
   dockerArgs.push(args.entryPoint)
   for (const entryPointArg of args.entryPointArgs) {
     dockerArgs.push(entryPointArg)
   }
-  await runDockerCommand(dockerArgs)
+  await runDockerCommand(dockerArgs, { env: args.environmentVariables })
 }

 export async function containerRun(
   args: RunContainerStepArgs,
   name: string,
-  network: string
+  network?: string
 ): Promise<void> {
   if (!args.image) {
     throw new Error('expected image to be set')
@@ -340,20 +401,25 @@ export async function containerRun(
   dockerArgs.push('--name', name)
   dockerArgs.push(`--workdir=${args.workingDirectory}`)
   dockerArgs.push(`--label=${getRunnerLabel()}`)
-  dockerArgs.push(`--network=${network}`)
+  if (network) {
+    dockerArgs.push(`--network=${network}`)
+  }
   if (args.createOptions) {
     dockerArgs.push(...args.createOptions.split(' '))
   }
   if (args.environmentVariables) {
-    for (const [key, value] of Object.entries(args.environmentVariables)) {
-      // Pass in this way to avoid printing secrets
-      env[key] = value ?? undefined
-      dockerArgs.push('-e')
-      dockerArgs.push(key)
+    for (const [key] of Object.entries(args.environmentVariables)) {
+      dockerArgs.push('-e', key)
     }
   }
+  dockerArgs.push('-e', 'GITHUB_ACTIONS=true')
+  // Use same behavior as the runner https://github.com/actions/runner/blob/27d9c886ab9a45e0013cb462529ac85d581f8c41/src/Runner.Worker/Container/DockerCommandManager.cs#L150
+  if (!('CI' in (args.environmentVariables ?? {}))) {
+    dockerArgs.push('-e', 'CI=true')
+  }
   const mountVolumes = [
     ...(args.userMountVolumes || []),
     ...(args.systemMountVolumes || [])
@@ -374,11 +440,14 @@ export async function containerRun(
   dockerArgs.push(args.image)
   if (args.entryPointArgs) {
     for (const entryPointArg of args.entryPointArgs) {
+      if (!entryPointArg) {
+        continue
+      }
       dockerArgs.push(entryPointArg)
     }
   }
-  await runDockerCommand(dockerArgs)
+  await runDockerCommand(dockerArgs, { env: args.environmentVariables })
 }

 export async function isContainerAlpine(containerId: string): Promise<boolean> {
@@ -387,7 +456,7 @@ export async function isContainerAlpine(containerId: string): Promise<boolean> {
     containerId,
     'sh',
     '-c',
-    "[ $(cat /etc/*release* | grep -i -e '^ID=*alpine*' -c) != 0 ] || exit 1"
+    `'[ $(cat /etc/*release* | grep -i -e "^ID=*alpine*" -c) != 0 ] || exit 1'`
   ]
   try {
     await runDockerCommand(dockerArgs)


@@ -1,21 +1,9 @@
 import {
-  containerRemove,
-  containerNetworkRemove
+  containerNetworkPrune,
+  containerPrune
 } from '../dockerCommands/container'

-// eslint-disable-next-line @typescript-eslint/no-unused-vars
-export async function cleanupJob(args, state, responseFile): Promise<void> {
-  const containerIds: string[] = []
-  if (state?.container) {
-    containerIds.push(state.container)
-  }
-  if (state?.services) {
-    containerIds.push(state.services)
-  }
-  if (containerIds.length > 0) {
-    await containerRemove(containerIds)
-  }
-  if (state.network) {
-    await containerNetworkRemove(state.network)
-  }
+export async function cleanupJob(): Promise<void> {
+  await containerPrune()
+  await containerNetworkPrune()
 }


@@ -31,16 +31,20 @@ export async function prepareJob(
     core.info('No containers exist, skipping hook invocation')
     exit(0)
   }
-  const networkName = generateNetworkName()
-  // Create network
-  await networkCreate(networkName)
+  let networkName = process.env.ACTIONS_RUNNER_NETWORK_DRIVER
+  if (!networkName) {
+    networkName = generateNetworkName()
+    // Create network
+    await networkCreate(networkName)
+  }
   // Create Job Container
   let containerMetadata: ContainerMetadata | undefined = undefined
   if (!container?.image) {
     core.info('No job container provided, skipping')
   } else {
-    setupContainer(container)
+    setupContainer(container, true)
     const configLocation = await registryLogin(container.registry)
     try {
@@ -48,6 +52,7 @@ export async function prepareJob(
     } finally {
       await registryLogout(configLocation)
     }
+
     containerMetadata = await createContainer(
       container,
       generateContainerName(container.image),
@@ -78,6 +83,7 @@ export async function prepareJob(
       generateContainerName(service.image),
       networkName
     )
+
     servicesMetadata.push(response)
     await containerStart(response.id)
   }
@@ -94,7 +100,10 @@ export async function prepareJob(
     )
   }
-  const isAlpine = await isContainerAlpine(containerMetadata!.id)
+  let isAlpine = false
+  if (containerMetadata?.id) {
+    isAlpine = await isContainerAlpine(containerMetadata.id)
+  }

   if (containerMetadata?.id) {
     containerMetadata.ports = await containerPorts(containerMetadata.id)
@@ -105,7 +114,10 @@ export async function prepareJob(
     }
   }
-  const healthChecks: Promise<void>[] = [healthCheck(containerMetadata!)]
+  const healthChecks: Promise<void>[] = []
+  if (containerMetadata) {
+    healthChecks.push(healthCheck(containerMetadata))
+  }
   for (const service of servicesMetadata) {
     healthChecks.push(healthCheck(service))
   }
@@ -133,7 +145,6 @@ function generateResponseFile(
   servicesMetadata?: ContainerMetadata[],
   isAlpine = false
 ): void {
-  // todo figure out if we are alpine
   const response = {
     state: { network: networkName },
     context: {},
@@ -167,10 +178,12 @@ function generateResponseFile(
   writeToResponseFile(responseFile, JSON.stringify(response))
 }

-function setupContainer(container): void {
-  container.entryPointArgs = [`-f`, `/dev/null`]
-  container.entryPoint = 'tail'
+function setupContainer(container, jobContainer = false): void {
+  if (!container.entryPoint && jobContainer) {
+    container.entryPointArgs = [`-f`, `/dev/null`]
+    container.entryPoint = 'tail'
+  }
 }

 function generateNetworkName(): string {
   return `github_network_${uuidv4()}`
@@ -186,15 +199,15 @@ function transformDockerPortsToContextPorts(
   meta: ContainerMetadata
 ): ContextPorts {
   // ex: '80/tcp -> 0.0.0.0:80'
-  const re = /^(\d+)\/(\w+)? -> (.*):(\d+)$/
+  const re = /^(\d+)(\/\w+)? -> (.*):(\d+)$/
   const contextPorts: ContextPorts = {}
-  if (meta.ports) {
+  if (meta.ports?.length) {
     for (const port of meta.ports) {
       const matches = port.match(re)
       if (!matches) {
         throw new Error(
-          'Container ports could not match the regex: "^(\\d+)\\/(\\w+)? -> (.*):(\\d+)$"'
+          'Container ports could not match the regex: "^(\\d+)(\\/\\w+)? -> (.*):(\\d+)$"'
         )
       }
       contextPorts[matches[1]] = matches[matches.length - 1]


@@ -1,13 +1,12 @@
-import { RunContainerStepArgs } from 'hooklib/lib'
-import { v4 as uuidv4 } from 'uuid'
 import {
   containerBuild,
-  registryLogin,
-  registryLogout,
   containerPull,
-  containerRun
+  containerRun,
+  registryLogin,
+  registryLogout
 } from '../dockerCommands'
-import * as core from '@actions/core'
+import { v4 as uuidv4 } from 'uuid'
+import { RunContainerStepArgs } from 'hooklib/lib'
 import { getRunnerLabel } from '../dockerCommands/constants'

 export async function runContainerStep(
@@ -15,23 +14,23 @@ export async function runContainerStep(
   state
 ): Promise<void> {
   const tag = generateBuildTag() // for docker build
-  if (!args.image) {
-    core.error('expected an image')
-  } else {
-    if (args.dockerfile) {
-      await containerBuild(args, tag)
-      args.image = tag
-    } else {
-      const configLocation = await registryLogin(args)
-      try {
-        await containerPull(args.image, configLocation)
-      } finally {
-        await registryLogout(configLocation)
-      }
-    }
+  if (args.image) {
+    const configLocation = await registryLogin(args.registry)
+    try {
+      await containerPull(args.image, configLocation)
+    } finally {
+      await registryLogout(configLocation)
+    }
+  } else if (args.dockerfile) {
+    await containerBuild(args, tag)
+    args.image = tag
+  } else {
+    throw new Error(
+      'run container step should have image or dockerfile fields specified'
+    )
   }
   // container will get pruned at the end of the job based on the label, no need to cleanup here
-  await containerRun(args, tag.split(':')[1], state.network)
+  await containerRun(args, tag.split(':')[1], state?.network)
 }

 function generateBuildTag(): string {
function generateBuildTag(): string { function generateBuildTag(): string {


@@ -13,22 +13,23 @@ import {
   runContainerStep,
   runScriptStep
 } from './hooks'
+import { checkEnvironment } from './utils'

 async function run(): Promise<void> {
+  try {
+    checkEnvironment()
     const input = await getInputFromStdin()

     const args = input['args']
     const command = input['command']
     const responseFile = input['responseFile']
     const state = input['state']
-  try {
     switch (command) {
       case Command.PrepareJob:
         await prepareJob(args as PrepareJobArgs, responseFile)
         return exit(0)
       case Command.CleanupJob:
-        await cleanupJob(null, state, null)
+        await cleanupJob()
         return exit(0)
       case Command.RunScriptStep:
         await runScriptStep(args as RunScriptStepArgs, state)


@@ -1,19 +1,23 @@
/* eslint-disable @typescript-eslint/no-var-requires */ /* eslint-disable @typescript-eslint/no-var-requires */
/* eslint-disable @typescript-eslint/no-require-imports */ /* eslint-disable @typescript-eslint/no-require-imports */
/* eslint-disable import/no-commonjs */
import * as core from '@actions/core' import * as core from '@actions/core'
import { env } from 'process'
// Import this way otherwise typescript has errors // Import this way otherwise typescript has errors
const exec = require('@actions/exec') const exec = require('@actions/exec')
const shlex = require('shlex')
export interface RunDockerCommandOptions { export interface RunDockerCommandOptions {
workingDir?: string workingDir?: string
input?: Buffer input?: Buffer
env?: { [key: string]: string }
} }
export async function runDockerCommand( export async function runDockerCommand(
args: string[], args: string[],
options?: RunDockerCommandOptions options?: RunDockerCommandOptions
): Promise<string> { ): Promise<string> {
options = optionsWithDockerEnvs(options)
args = fixArgs(args)
const pipes = await exec.getExecOutput('docker', args, options) const pipes = await exec.getExecOutput('docker', args, options)
if (pipes.exitCode !== 0) { if (pipes.exitCode !== 0) {
core.error(`Docker failed with exit code ${pipes.exitCode}`) core.error(`Docker failed with exit code ${pipes.exitCode}`)
@@ -22,6 +26,45 @@ export async function runDockerCommand(
return Promise.resolve(pipes.stdout) return Promise.resolve(pipes.stdout)
} }
export function optionsWithDockerEnvs(
options?: RunDockerCommandOptions
): RunDockerCommandOptions | undefined {
// From https://docs.docker.com/engine/reference/commandline/cli/#environment-variables
const dockerCliEnvs = new Set([
'DOCKER_API_VERSION',
'DOCKER_CERT_PATH',
'DOCKER_CONFIG',
'DOCKER_CONTENT_TRUST_SERVER',
'DOCKER_CONTENT_TRUST',
'DOCKER_CONTEXT',
'DOCKER_DEFAULT_PLATFORM',
'DOCKER_HIDE_LEGACY_COMMANDS',
'DOCKER_HOST',
'DOCKER_STACK_ORCHESTRATOR',
'DOCKER_TLS_VERIFY',
'BUILDKIT_PROGRESS'
])
const dockerEnvs = {}
for (const key in process.env) {
if (dockerCliEnvs.has(key)) {
dockerEnvs[key] = process.env[key]
}
}
const newOptions = {
workingDir: options?.workingDir,
input: options?.input,
env: options?.env || {}
}
// Set docker envs or overwrite provided ones
for (const [key, value] of Object.entries(dockerEnvs)) {
newOptions.env[key] = value as string
}
return newOptions
}
export function sanitize(val: string): string { export function sanitize(val: string): string {
if (!val || typeof val !== 'string') { if (!val || typeof val !== 'string') {
return '' return ''
@@ -42,6 +85,16 @@ export function sanitize(val: string): string {
return newNameBuilder.join('') return newNameBuilder.join('')
} }
export function fixArgs(args: string[]): string[] {
return shlex.split(args.join(' '))
}
export function checkEnvironment(): void {
if (!env.GITHUB_WORKSPACE) {
throw new Error('GITHUB_WORKSPACE is not set')
}
}
// isAlpha accepts single character and checks if // isAlpha accepts single character and checks if
// that character is [a-zA-Z] // that character is [a-zA-Z]
function isAlpha(val: string): boolean { function isAlpha(val: string): boolean {


@@ -1,62 +1,33 @@
-import { prepareJob, cleanupJob } from '../src/hooks'
-import { v4 as uuidv4 } from 'uuid'
-import * as fs from 'fs'
-import * as path from 'path'
+import { PrepareJobArgs } from 'hooklib/lib'
+import { cleanupJob, prepareJob } from '../src/hooks'
 import TestSetup from './test-setup'

-const prepareJobInputPath = path.resolve(
-  `${__dirname}/../../../examples/prepare-job.json`
-)
-const tmpOutputDir = `${__dirname}/${uuidv4()}`
-let prepareJobOutputPath: string
-let prepareJobData: any
 let testSetup: TestSetup

 jest.useRealTimers()

 describe('cleanup job', () => {
-  beforeAll(() => {
-    fs.mkdirSync(tmpOutputDir, { recursive: true })
-  })
-
-  afterAll(() => {
-    fs.rmSync(tmpOutputDir, { recursive: true })
-  })
-
   beforeEach(async () => {
-    const prepareJobRawData = fs.readFileSync(prepareJobInputPath, 'utf8')
-    prepareJobData = JSON.parse(prepareJobRawData.toString())
-    prepareJobOutputPath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
-    fs.writeFileSync(prepareJobOutputPath, '')
-
     testSetup = new TestSetup()
     testSetup.initialize()

-    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
-    prepareJobData.args.container.systemMountVolumes =
-      testSetup.systemMountVolumes
-    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
-
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
+    const prepareJobDefinition = testSetup.getPrepareJobDefinition()
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(
+      prepareJobDefinition.args as PrepareJobArgs,
+      prepareJobOutput
+    )
   })

   afterEach(() => {
-    fs.rmSync(prepareJobOutputPath, { force: true })
     testSetup.teardown()
   })

   it('should cleanup successfully', async () => {
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
-    const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
-    await expect(
-      cleanupJob(prepareJobData.args, parsedPrepareJobOutput.state, null)
-    ).resolves.not.toThrow()
+    await expect(cleanupJob()).resolves.not.toThrow()
   })
 })


@@ -0,0 +1,27 @@
+import { containerBuild } from '../src/dockerCommands'
+import TestSetup from './test-setup'
+
+let testSetup
+let runContainerStepDefinition
+
+describe('container build', () => {
+  beforeEach(() => {
+    testSetup = new TestSetup()
+    testSetup.initialize()
+    runContainerStepDefinition = testSetup.getRunContainerStepDefinition()
+  })
+
+  afterEach(() => {
+    testSetup.teardown()
+  })
+
+  it('should build container', async () => {
+    runContainerStepDefinition.image = ''
+    const actionPath = testSetup.initializeDockerAction()
+    runContainerStepDefinition.dockerfile = `${actionPath}/Dockerfile`
+    await expect(
+      containerBuild(runContainerStepDefinition, 'example-test-tag')
+    ).resolves.not.toThrow()
+  })
+})


@@ -4,7 +4,7 @@ jest.useRealTimers()

 describe('container pull', () => {
   it('should fail', async () => {
-    const arg = { image: 'doesNotExist' }
+    const arg = { image: 'does-not-exist' }
     await expect(containerPull(arg.image, '')).rejects.toThrow()
   })

   it('should succeed', async () => {


@@ -1,102 +1,72 @@
-import {
-  prepareJob,
-  cleanupJob,
-  runScriptStep,
-  runContainerStep
-} from '../src/hooks'
 import * as fs from 'fs'
-import * as path from 'path'
-import { v4 as uuidv4 } from 'uuid'
+import {
+  cleanupJob,
+  prepareJob,
+  runContainerStep,
+  runScriptStep
+} from '../src/hooks'
 import TestSetup from './test-setup'

-const prepareJobJson = fs.readFileSync(
-  path.resolve(__dirname + '/../../../examples/prepare-job.json'),
-  'utf8'
-)
-const containerStepJson = fs.readFileSync(
-  path.resolve(__dirname + '/../../../examples/run-container-step.json'),
-  'utf8'
-)
-const tmpOutputDir = `${__dirname}/_temp/${uuidv4()}`
-let prepareJobData: any
-let scriptStepJson: any
-let scriptStepData: any
-let containerStepData: any
-let prepareJobOutputFilePath: string
+let definitions
 let testSetup: TestSetup

 describe('e2e', () => {
-  beforeAll(() => {
-    fs.mkdirSync(tmpOutputDir, { recursive: true })
-  })
-
-  afterAll(() => {
-    fs.rmSync(tmpOutputDir, { recursive: true })
-  })
-
   beforeEach(() => {
-    // init dirs
     testSetup = new TestSetup()
     testSetup.initialize()
-    prepareJobData = JSON.parse(prepareJobJson)
-    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
-    prepareJobData.args.container.systemMountVolumes =
-      testSetup.systemMountVolumes
-    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
-    scriptStepJson = fs.readFileSync(
-      path.resolve(__dirname + '/../../../examples/run-script-step.json'),
-      'utf8'
-    )
-    scriptStepData = JSON.parse(scriptStepJson)
-    scriptStepData.args.workingDirectory = testSetup.workingDirectory
-    containerStepData = JSON.parse(containerStepJson)
-    containerStepData.args.workingDirectory = testSetup.workingDirectory
-    containerStepData.args.userMountVolumes = testSetup.userMountVolumes
-    containerStepData.args.systemMountVolumes = testSetup.systemMountVolumes
-    prepareJobOutputFilePath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
-    fs.writeFileSync(prepareJobOutputFilePath, '')
+    definitions = {
+      prepareJob: testSetup.getPrepareJobDefinition(),
+      runScriptStep: testSetup.getRunScriptStepDefinition(),
+      runContainerStep: testSetup.getRunContainerStepDefinition()
+    }
   })

   afterEach(() => {
-    fs.rmSync(prepareJobOutputFilePath, { force: true })
     testSetup.teardown()
   })

   it('should prepare job, then run script step, then run container step then cleanup', async () => {
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
     await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
+      prepareJob(definitions.prepareJob.args, prepareJobOutput)
     ).resolves.not.toThrow()
-    let rawState = fs.readFileSync(prepareJobOutputFilePath, 'utf-8')
+    let rawState = fs.readFileSync(prepareJobOutput, 'utf-8')
     let resp = JSON.parse(rawState)
     await expect(
-      runScriptStep(scriptStepData.args, resp.state)
+      runScriptStep(definitions.runScriptStep.args, resp.state)
     ).resolves.not.toThrow()
     await expect(
-      runContainerStep(containerStepData.args, resp.state)
+      runContainerStep(definitions.runContainerStep.args, resp.state)
     ).resolves.not.toThrow()
-    await expect(cleanupJob(resp, resp.state, null)).resolves.not.toThrow()
+    await expect(cleanupJob()).resolves.not.toThrow()
   })

   it('should prepare job, then run script step, then run container step with Dockerfile then cleanup', async () => {
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
     await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
+      prepareJob(definitions.prepareJob.args, prepareJobOutput)
     ).resolves.not.toThrow()
-    let rawState = fs.readFileSync(prepareJobOutputFilePath, 'utf-8')
+    let rawState = fs.readFileSync(prepareJobOutput, 'utf-8')
     let resp = JSON.parse(rawState)
     await expect(
-      runScriptStep(scriptStepData.args, resp.state)
+      runScriptStep(definitions.runScriptStep.args, resp.state)
     ).resolves.not.toThrow()
-    const dockerfilePath = `${tmpOutputDir}/Dockerfile`
+    const dockerfilePath = `${testSetup.workingDirectory}/Dockerfile`
     fs.writeFileSync(
       dockerfilePath,
       `FROM ubuntu:latest
@@ -104,14 +74,17 @@ ENV TEST=test
 ENTRYPOINT [ "tail", "-f", "/dev/null" ]
 `
     )
-    const containerStepDataCopy = JSON.parse(JSON.stringify(containerStepData))
-    process.env.GITHUB_WORKSPACE = tmpOutputDir
+    const containerStepDataCopy = JSON.parse(
+      JSON.stringify(definitions.runContainerStep)
+    )
     containerStepDataCopy.args.dockerfile = 'Dockerfile'
+    containerStepDataCopy.args.context = '.'
+    console.log(containerStepDataCopy.args)
     await expect(
       runContainerStep(containerStepDataCopy.args, resp.state)
     ).resolves.not.toThrow()
-    await expect(cleanupJob(resp, resp.state, null)).resolves.not.toThrow()
+    await expect(cleanupJob()).resolves.not.toThrow()
   })
 })


@@ -1,40 +1,18 @@
 import * as fs from 'fs'
-import { v4 as uuidv4 } from 'uuid'
 import { prepareJob } from '../src/hooks'
 import TestSetup from './test-setup'

 jest.useRealTimers()

-let prepareJobOutputPath: string
-let prepareJobData: any
-const tmpOutputDir = `${__dirname}/_temp/${uuidv4()}`
-const prepareJobInputPath = `${__dirname}/../../../examples/prepare-job.json`
+let prepareJobDefinition
 let testSetup: TestSetup

 describe('prepare job', () => {
-  beforeAll(() => {
-    fs.mkdirSync(tmpOutputDir, { recursive: true })
-  })
-
-  afterAll(() => {
-    fs.rmSync(tmpOutputDir, { recursive: true })
-  })
-
-  beforeEach(async () => {
+  beforeEach(() => {
     testSetup = new TestSetup()
     testSetup.initialize()
-
-    let prepareJobRawData = fs.readFileSync(prepareJobInputPath, 'utf8')
-    prepareJobData = JSON.parse(prepareJobRawData.toString())
-    prepareJobData.args.container.userMountVolumes = testSetup.userMountVolumes
-    prepareJobData.args.container.systemMountVolumes =
-      testSetup.systemMountVolumes
-    prepareJobData.args.container.workingDirectory = testSetup.workingDirectory
-    prepareJobOutputPath = `${tmpOutputDir}/prepare-job-output-${uuidv4()}.json`
-    fs.writeFileSync(prepareJobOutputPath, '')
+    prepareJobDefinition = testSetup.getPrepareJobDefinition()
   })

   afterEach(() => {
@@ -42,38 +20,68 @@ describe('prepare job', () => {
   })

   it('should not throw', async () => {
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
     await expect(
-      prepareJob(prepareJobData.args, prepareJobOutputPath)
+      prepareJob(prepareJobDefinition.args, prepareJobOutput)
     ).resolves.not.toThrow()
-    expect(() => fs.readFileSync(prepareJobOutputPath, 'utf-8')).not.toThrow()
+    expect(() => fs.readFileSync(prepareJobOutput, 'utf-8')).not.toThrow()
   })

   it('should have JSON output written to a file', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
     expect(() => JSON.parse(prepareJobOutputContent)).not.toThrow()
   })

   it('should have context written to a file', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
-    const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const parsedPrepareJobOutput = JSON.parse(
+      fs.readFileSync(prepareJobOutput, 'utf-8')
+    )
     expect(parsedPrepareJobOutput.context).toBeDefined()
   })

+  it('should have isAlpine field set correctly', async () => {
+    let prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output-alpine.json'
+    )
+    const prepareJobArgsClone = JSON.parse(
+      JSON.stringify(prepareJobDefinition.args)
+    )
+    prepareJobArgsClone.container.image = 'alpine:latest'
+    await prepareJob(prepareJobArgsClone, prepareJobOutput)
+    let parsedPrepareJobOutput = JSON.parse(
+      fs.readFileSync(prepareJobOutput, 'utf-8')
+    )
+    expect(parsedPrepareJobOutput.isAlpine).toBe(true)
+
+    prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output-ubuntu.json'
+    )
+    prepareJobArgsClone.container.image = 'ubuntu:latest'
+    await prepareJob(prepareJobArgsClone, prepareJobOutput)
+    parsedPrepareJobOutput = JSON.parse(
+      fs.readFileSync(prepareJobOutput, 'utf-8')
+    )
+    expect(parsedPrepareJobOutput.isAlpine).toBe(false)
+  })
+
   it('should have container ids written to file', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
     const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)
     expect(parsedPrepareJobOutput.context.container.id).toBeDefined()
@@ -82,11 +90,11 @@ describe('prepare job', () => {
   })

   it('should have ports for context written in form [containerPort]:[hostPort]', async () => {
-    await prepareJob(prepareJobData.args, prepareJobOutputPath)
-    const prepareJobOutputContent = fs.readFileSync(
-      prepareJobOutputPath,
-      'utf-8'
-    )
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    const prepareJobOutputContent = fs.readFileSync(prepareJobOutput, 'utf-8')
     const parsedPrepareJobOutput = JSON.parse(prepareJobOutputContent)

     const mainContainerPorts = parsedPrepareJobOutput.context.container.ports
@@ -100,4 +108,14 @@ describe('prepare job', () => {
     expect(redisServicePorts['80']).toBe('8080')
     expect(redisServicePorts['8080']).toBe('8088')
   })
+
+  it('should run prepare job without job container without exception', async () => {
+    prepareJobDefinition.args.container = null
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await expect(
+      prepareJob(prepareJobDefinition.args, prepareJobOutput)
+    ).resolves.not.toThrow()
+  })
 })


@@ -0,0 +1,96 @@
+import * as fs from 'fs'
+import { PrepareJobResponse } from 'hooklib/lib'
+import { prepareJob, runScriptStep } from '../src/hooks'
+import TestSetup from './test-setup'
+
+jest.useRealTimers()
+
+let testSetup: TestSetup
+let definitions
+let prepareJobResponse: PrepareJobResponse
+
+describe('run script step', () => {
+  beforeEach(async () => {
+    testSetup = new TestSetup()
+    testSetup.initialize()
+    definitions = {
+      prepareJob: testSetup.getPrepareJobDefinition(),
+      runScriptStep: testSetup.getRunScriptStepDefinition()
+    }
+    const prepareJobOutput = testSetup.createOutputFile(
+      'prepare-job-output.json'
+    )
+    await prepareJob(definitions.prepareJob.args, prepareJobOutput)
+    prepareJobResponse = JSON.parse(fs.readFileSync(prepareJobOutput, 'utf-8'))
+  })
+
+  it('Should run script step without exceptions', async () => {
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+  })
+
+  it('Should have path variable changed in container with prepend path string', async () => {
+    definitions.runScriptStep.args.prependPath = '/some/path'
+    definitions.runScriptStep.args.entryPoint = '/bin/bash'
+    definitions.runScriptStep.args.entryPointArgs = [
+      '-c',
+      `'if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath}:"* ]]; then exit 1; fi'`
+    ]
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+  })
+
+  it("Should fix expansion and print correctly in container's stdout", async () => {
+    const spy = jest.spyOn(process.stdout, 'write').mockImplementation()
+    definitions.runScriptStep.args.entryPoint = 'echo'
+    definitions.runScriptStep.args.entryPointArgs = ['"Mona', 'the', `Octocat"`]
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+    expect(spy).toHaveBeenCalledWith(
+      expect.stringContaining('Mona the Octocat')
+    )
+    spy.mockRestore()
+  })
+
+  it('Should have path variable changed in container with prepend path string array', async () => {
+    definitions.runScriptStep.args.prependPath = ['/some/other/path']
+    definitions.runScriptStep.args.entryPoint = '/bin/bash'
+    definitions.runScriptStep.args.entryPointArgs = [
+      '-c',
+      `'if [[ ! $(env | grep "^PATH=") = "PATH=${definitions.runScriptStep.args.prependPath.join(
+        ':'
+      )}:"* ]]; then exit 1; fi'`
+    ]
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+  })
+
+  it('Should confirm that CI and GITHUB_ACTIONS are set', async () => {
+    definitions.runScriptStep.args.entryPoint = '/bin/bash'
+    definitions.runScriptStep.args.entryPointArgs = [
+      '-c',
+      `'if [[ ! $(env | grep "^CI=") = "CI=true" ]]; then exit 1; fi'`
+    ]
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+    definitions.runScriptStep.args.entryPointArgs = [
+      '-c',
+      `'if [[ ! $(env | grep "^GITHUB_ACTIONS=") = "GITHUB_ACTIONS=true" ]]; then exit 1; fi'`
+    ]
+    await expect(
+      runScriptStep(definitions.runScriptStep.args, prepareJobResponse.state)
+    ).resolves.not.toThrow()
+  })
+})


@@ -1,11 +1,15 @@
 import * as fs from 'fs'
-import { v4 as uuidv4 } from 'uuid'
-import { env } from 'process'
 import { Mount } from 'hooklib'
+import { HookData } from 'hooklib/lib'
+import * as path from 'path'
+import { env } from 'process'
+import { v4 as uuidv4 } from 'uuid'

 export default class TestSetup {
   private testdir: string
   private runnerMockDir: string
+  readonly runnerOutputDir: string

   private runnerMockSubdirs = {
     work: '_work',
     externals: 'externals',
@@ -16,17 +20,18 @@ export default class TestSetup {
     githubWorkflow: '_work/_temp/_github_workflow'
   }

-  private readonly projectName = 'example'
+  private readonly projectName = 'repo'

   constructor() {
     this.testdir = `${__dirname}/_temp/${uuidv4()}`
     this.runnerMockDir = `${this.testdir}/runner/_layout`
+    this.runnerOutputDir = `${this.testdir}/outputs`
   }

   private get allTestDirectories() {
-    const resp = [this.testdir, this.runnerMockDir]
+    const resp = [this.testdir, this.runnerMockDir, this.runnerOutputDir]

-    for (const [key, value] of Object.entries(this.runnerMockSubdirs)) {
+    for (const [, value] of Object.entries(this.runnerMockSubdirs)) {
       resp.push(`${this.runnerMockDir}/${value}`)
     }
@@ -37,31 +42,27 @@ export default class TestSetup {
     return resp
   }

-  public initialize(): void {
+  initialize(): void {
+    env['GITHUB_WORKSPACE'] = this.workingDirectory
+    env['RUNNER_NAME'] = 'test'
+    env['RUNNER_TEMP'] =
+      `${this.runnerMockDir}/${this.runnerMockSubdirs.workTemp}`
     for (const dir of this.allTestDirectories) {
       fs.mkdirSync(dir, { recursive: true })
     }
-    env['RUNNER_NAME'] = 'test'
-    env[
-      'RUNNER_TEMP'
-    ] = `${this.runnerMockDir}/${this.runnerMockSubdirs.workTemp}`
+    fs.copyFileSync(
+      path.resolve(`${__dirname}/../../../examples/example-script.sh`),
+      `${env.RUNNER_TEMP}/example-script.sh`
+    )
   }

-  public teardown(): void {
+  teardown(): void {
     fs.rmdirSync(this.testdir, { recursive: true })
   }

-  public get userMountVolumes(): Mount[] {
-    return [
-      {
-        sourceVolumePath: 'my_docker_volume',
-        targetVolumePath: '/volume_mount',
-        readOnly: false
-      }
-    ]
-  }
-
-  public get systemMountVolumes(): Mount[] {
+  private get systemMountVolumes(): Mount[] {
     return [
       {
         sourceVolumePath: '/var/run/docker.sock',
@@ -106,7 +107,89 @@ export default class TestSetup {
     ]
   }

-  public get workingDirectory(): string {
+  createOutputFile(name: string): string {
+    let filePath = path.join(this.runnerOutputDir, name || `${uuidv4()}.json`)
+    fs.writeFileSync(filePath, '')
+    return filePath
+  }
+
+  get workingDirectory(): string {
     return `${this.runnerMockDir}/_work/${this.projectName}/${this.projectName}`
   }

   get containerWorkingDirectory(): string {
     return `/__w/${this.projectName}/${this.projectName}`
   }

+  initializeDockerAction(): string {
+    const actionPath = `${this.testdir}/_actions/example-handle/example-repo/example-branch/mock-directory`
+    fs.mkdirSync(actionPath, { recursive: true })
+    this.writeDockerfile(actionPath)
+    this.writeEntrypoint(actionPath)
+    return actionPath
+  }
+
+  private writeDockerfile(actionPath: string) {
+    const content = `FROM alpine:3.10
+COPY entrypoint.sh /entrypoint.sh
+ENTRYPOINT ["/entrypoint.sh"]`
+    fs.writeFileSync(`${actionPath}/Dockerfile`, content)
+  }
+
+  private writeEntrypoint(actionPath) {
+    const content = `#!/bin/sh -l
+echo "Hello $1"
+time=$(date)
+echo "::set-output name=time::$time"`
+    const entryPointPath = `${actionPath}/entrypoint.sh`
+    fs.writeFileSync(entryPointPath, content)
+    fs.chmodSync(entryPointPath, 0o755)
+  }
+
+  getPrepareJobDefinition(): HookData {
+    const prepareJob = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/prepare-job.json'),
+        'utf8'
+      )
+    )
+    prepareJob.args.container.systemMountVolumes = this.systemMountVolumes
+    prepareJob.args.container.workingDirectory = this.workingDirectory
+    prepareJob.args.container.userMountVolumes = undefined
+    prepareJob.args.container.registry = null
+    prepareJob.args.services.forEach(s => {
+      s.registry = null
+    })
+
+    return prepareJob
+  }
+
+  getRunScriptStepDefinition(): HookData {
+    const runScriptStep = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/run-script-step.json'),
+        'utf8'
+      )
+    )
+
+    runScriptStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
+    return runScriptStep
+  }
+
+  getRunContainerStepDefinition(): HookData {
+    const runContainerStep = JSON.parse(
+      fs.readFileSync(
+        path.resolve(__dirname + '/../../../examples/run-container-step.json'),
+        'utf8'
+      )
+    )
+
+    runContainerStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
+    runContainerStep.args.systemMountVolumes = this.systemMountVolumes
+    runContainerStep.args.workingDirectory = this.workingDirectory
+    runContainerStep.args.userMountVolumes = undefined
+    runContainerStep.args.registry = null
+    return runContainerStep
+  }
 }


@@ -1,4 +1,4 @@
-import { sanitize } from '../src/utils'
+import { optionsWithDockerEnvs, sanitize, fixArgs } from '../src/utils'

 describe('Utilities', () => {
   it('should return sanitized image name', () => {
@@ -9,4 +9,72 @@ describe('Utilities', () => {
     const validStr = 'teststr8_one'
     expect(sanitize(validStr)).toBe(validStr)
   })
+
+  test.each([
+    [['"Hello', 'World"'], ['Hello World']],
+    [
+      [
+        'sh',
+        '-c',
+        `'[ $(cat /etc/*release* | grep -i -e "^ID=*alpine*" -c) != 0 ] || exit 1'`
+      ],
+      [
+        'sh',
+        '-c',
+        `[ $(cat /etc/*release* | grep -i -e "^ID=*alpine*" -c) != 0 ] || exit 1`
+      ]
+    ],
+    [
+      [
+        'sh',
+        '-c',
+        `'[ $(cat /etc/*release* | grep -i -e '\\''^ID=*alpine*'\\'' -c) != 0 ] || exit 1'`
+      ],
+      [
+        'sh',
+        '-c',
+        `[ $(cat /etc/*release* | grep -i -e '^ID=*alpine*' -c) != 0 ] || exit 1`
+      ]
+    ]
+  ])('should fix split arguments(%p, %p)', (args, expected) => {
+    const got = fixArgs(args)
+    expect(got).toStrictEqual(expected)
+  })
+
+  describe('with docker options', () => {
+    it('should augment options with docker environment variables', () => {
+      process.env.DOCKER_HOST = 'unix:///run/user/1001/docker.sock'
+      process.env.DOCKER_NOTEXIST = 'notexist'
+      const optionDefinitions: any = [
+        undefined,
+        {},
+        { env: {} },
+        { env: { DOCKER_HOST: 'unix://var/run/docker.sock' } }
+      ]
+      for (const opt of optionDefinitions) {
+        let options = optionsWithDockerEnvs(opt)
+        expect(options).toBeDefined()
+        expect(options?.env).toBeDefined()
+        expect(options?.env?.DOCKER_HOST).toBe(process.env.DOCKER_HOST)
+        expect(options?.env?.DOCKER_NOTEXIST).toBeUndefined()
+      }
+    })
+
+    it('should not overwrite other options', () => {
+      process.env.DOCKER_HOST = 'unix:///run/user/1001/docker.sock'
+      const opt = {
+        workingDir: 'test',
+        input: Buffer.from('test')
+      }
+      const options = optionsWithDockerEnvs(opt)
+      expect(options).toBeDefined()
+      expect(options?.workingDir).toBe(opt.workingDir)
+      expect(options?.input).toBe(opt.input)
+      expect(options?.env).toStrictEqual({
+        DOCKER_HOST: process.env.DOCKER_HOST
+      })
+    })
+  })
 })
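The `fixArgs` cases above repair entrypoint arguments that arrive pre-split with shell quoting still embedded in the tokens: the fix re-joins the argv and re-tokenizes it with shell quoting rules. A minimal sketch of that behavior (the repository depends on the `shlex` package for this; the hand-rolled tokenizer below is an assumption for illustration, not the project's implementation):

```typescript
// Tokenize a string with basic POSIX-shell quoting rules:
// single/double quotes group words, backslash escapes the next char.
function splitShellLike(input: string): string[] {
  const out: string[] = []
  let current = ''
  let inToken = false
  let i = 0
  while (i < input.length) {
    const c = input[i]
    if (c === "'" || c === '"') {
      // Quoted region: everything up to the matching quote is literal.
      inToken = true
      i++
      while (i < input.length && input[i] !== c) {
        current += input[i]
        i++
      }
      i++ // skip the closing quote
    } else if (c === '\\') {
      // Backslash escapes the next character outside quotes.
      inToken = true
      i++
      if (i < input.length) {
        current += input[i]
        i++
      }
    } else if (c === ' ' || c === '\t') {
      if (inToken) {
        out.push(current)
        current = ''
        inToken = false
      }
      i++
    } else {
      inToken = true
      current += c
      i++
    }
  }
  if (inToken) out.push(current)
  return out
}

// Re-join the pre-split argv and tokenize it again (hypothetical
// counterpart to the fixArgs under test).
function fixArgsSketch(args: string[]): string[] {
  return splitShellLike(args.join(' '))
}

console.log(fixArgsSketch(['"Hello', 'World"'])) // → [ 'Hello World' ]
```

This matches the test table: a quote opened in one token and closed in another collapses back into a single argument, and wrapping quotes around a `sh -c` script body are stripped.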


@@ -0,0 +1,6 @@
{
"compilerOptions": {
"allowJs": true
},
"extends": "./tsconfig.json"
}

File diff suppressed because it is too large

@@ -3,7 +3,7 @@
   "version": "0.1.0",
   "description": "",
   "main": "lib/index.js",
-  "types": "index.d.ts",
+  "types": "lib/index.d.ts",
   "scripts": {
     "test": "echo \"Error: no test specified\" && exit 1",
     "build": "tsc",
@@ -14,15 +14,14 @@
   "author": "",
   "license": "MIT",
   "devDependencies": {
-    "@types/node": "^17.0.23",
-    "@typescript-eslint/parser": "^5.18.0",
+    "@types/node": "^24.0.14",
     "@zeit/ncc": "^0.22.3",
-    "eslint": "^8.12.0",
-    "eslint-plugin-github": "^4.3.6",
-    "prettier": "^2.6.2",
-    "typescript": "^4.6.3"
+    "eslint": "^9.31.0",
+    "eslint-plugin-github": "^6.0.0",
+    "prettier": "^3.6.2",
+    "typescript": "^5.8.3"
   },
   "dependencies": {
-    "@actions/core": "^1.6.0"
+    "@actions/core": "^1.11.1"
   }
 }


@@ -34,6 +34,7 @@ export interface ContainerInfo {
   createOptions?: string
   environmentVariables?: { [key: string]: string }
   userMountVolumes?: Mount[]
+  systemMountVolumes?: Mount[]
   registry?: Registry
   portMappings?: string[]
 }
@@ -73,14 +74,6 @@ export enum Protocol {
   UDP = 'udp'
 }

-export enum PodPhase {
-  PENDING = 'Pending',
-  RUNNING = 'Running',
-  SUCCEEDED = 'Succeded',
-  FAILED = 'Failed',
-  UNKNOWN = 'Unknown'
-}

 export interface PrepareJobResponse {
   state?: object
   context?: ContainerContext


@@ -1,4 +1,3 @@
-import * as core from '@actions/core'
 import * as events from 'events'
 import * as fs from 'fs'
 import * as os from 'os'
@@ -13,7 +12,6 @@ export async function getInputFromStdin(): Promise<HookData> {
   })

   rl.on('line', line => {
-    core.debug(`Line from STDIN: ${line}`)
     input = line
   })

   await events.default.once(rl, 'close')


@@ -6,7 +6,38 @@ This implementation provides a way to dynamically spin up jobs to run container
 ## Pre-requisites

 Some things are expected to be set when using these hooks

 - The runner itself should be running in a pod, with a service account with the following permissions
-- The `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER=true` should be set to true
+  ```
+  apiVersion: rbac.authorization.k8s.io/v1
+  kind: Role
+  metadata:
+    namespace: default
+    name: runner-role
+  rules:
+    - apiGroups: [""]
+      resources: ["pods"]
+      verbs: ["get", "list", "create", "delete"]
+    - apiGroups: [""]
+      resources: ["pods/exec"]
+      verbs: ["get", "create"]
+    - apiGroups: [""]
+      resources: ["pods/log"]
+      verbs: ["get", "list", "watch"]
+    - apiGroups: [""]
+      resources: ["secrets"]
+      verbs: ["get", "list", "create", "delete"]
+  ```
 - The `ACTIONS_RUNNER_POD_NAME` env should be set to the name of the pod
+- The `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` env should be set to true to prevent the runner from running any jobs outside of a container
 - The runner pod should map a persistent volume claim into the `_work` directory
-- The `ACTIONS_RUNNER_CLAIM_NAME` should be set to the persistent volume claim that contains the runner's working directory
+- The `ACTIONS_RUNNER_CLAIM_NAME` env should be set to the persistent volume claim that contains the runner's working directory, otherwise it defaults to `${ACTIONS_RUNNER_POD_NAME}-work`
+- Some actions runner envs are expected to be set. These are set automatically by the runner.
+  - `RUNNER_WORKSPACE` is expected to be set to the workspace of the runner
+  - `GITHUB_WORKSPACE` is expected to be set to the workspace of the job
+
+## Limitations
+
+- A [job container](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container) will be required for all jobs
+- Building container actions from a dockerfile is not supported at this time
+- Container actions will not have access to the services network or job container network
+- Docker [create options](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontaineroptions) are not supported
+- Container actions will have to specify the entrypoint, since the default entrypoint will be overridden to run the commands from the workflow.
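The claim-name fallback documented above (use `ACTIONS_RUNNER_CLAIM_NAME` when present, otherwise derive it from the pod name) can be sketched as follows. The helper name is hypothetical; it only mirrors the documented behavior:

```typescript
// Hypothetical helper mirroring the documented default: prefer
// ACTIONS_RUNNER_CLAIM_NAME, otherwise fall back to `<pod-name>-work`.
function volumeClaimName(env: Record<string, string | undefined>): string {
  if (env.ACTIONS_RUNNER_CLAIM_NAME) {
    return env.ACTIONS_RUNNER_CLAIM_NAME
  }
  const pod = env.ACTIONS_RUNNER_POD_NAME
  if (!pod) {
    throw new Error('ACTIONS_RUNNER_POD_NAME must be set on the runner pod')
  }
  return `${pod}-work`
}

console.log(volumeClaimName({ ACTIONS_RUNNER_POD_NAME: 'runner-abc' })) // → runner-abc-work
```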


@@ -1,13 +1,26 @@
+// eslint-disable-next-line import/no-commonjs
 module.exports = {
   clearMocks: true,
-  preset: 'ts-jest',
   moduleFileExtensions: ['js', 'ts'],
   testEnvironment: 'node',
   testMatch: ['**/*-test.ts'],
   testRunner: 'jest-circus/runner',
-  verbose: true,
   transform: {
-    '^.+\\.ts$': 'ts-jest'
-  },
-  setupFilesAfterEnv: ['./jest.setup.js'],
-  verbose: true
+    '^.+\\.ts$': [
+      'ts-jest',
+      {
+        tsconfig: 'tsconfig.test.json'
+      }
+    ],
+    // Transform ESM modules to CommonJS
+    '^.+\\.(js|mjs)$': ['babel-jest', {
+      presets: [['@babel/preset-env', { targets: { node: 'current' } }]]
+    }]
+  },
+  transformIgnorePatterns: [
+    // Transform these ESM packages
+    'node_modules/(?!(shlex|@kubernetes/client-node|openid-client|oauth4webapi|jose|uuid)/)'
+  ],
+  setupFilesAfterEnv: ['./jest.setup.js']
 }


@@ -1 +1,2 @@
-jest.setTimeout(90000)
+// eslint-disable-next-line filenames/match-regex, no-undef
+jest.setTimeout(500000)

File diff suppressed because it is too large

@@ -13,18 +13,25 @@
   "author": "",
   "license": "MIT",
   "dependencies": {
-    "@actions/core": "^1.6.0",
+    "@actions/core": "^1.11.1",
     "@actions/exec": "^1.1.1",
-    "@actions/io": "^1.1.2",
-    "@kubernetes/client-node": "^0.16.3",
-    "hooklib": "file:../hooklib"
+    "@actions/io": "^1.1.3",
+    "@kubernetes/client-node": "^1.3.0",
+    "hooklib": "file:../hooklib",
+    "js-yaml": "^4.1.0",
+    "shlex": "^3.0.0",
+    "tar-fs": "^3.1.0",
+    "uuid": "^11.1.0"
   },
   "devDependencies": {
-    "@types/jest": "^27.4.1",
-    "@types/node": "^17.0.23",
-    "@vercel/ncc": "^0.33.4",
-    "jest": "^27.5.1",
-    "ts-jest": "^27.1.4",
-    "typescript": "^4.6.3"
+    "@babel/core": "^7.28.3",
+    "@babel/preset-env": "^7.28.3",
+    "@types/jest": "^30.0.0",
+    "@types/node": "^24.3.0",
+    "@vercel/ncc": "^0.38.3",
+    "babel-jest": "^30.1.1",
+    "jest": "^30.1.1",
+    "ts-jest": "^29.4.1",
+    "typescript": "^5.9.2"
   }
 }


@@ -1,5 +1,5 @@
-import { podPrune } from '../k8s'
+import { prunePods, pruneSecrets } from '../k8s'

 export async function cleanupJob(): Promise<void> {
-  await podPrune()
+  await Promise.all([prunePods(), pruneSecrets()])
 }


@@ -20,28 +20,35 @@ export function getJobPodName(): string {
 export function getStepPodName(): string {
   return `${getRunnerPodName().substring(
     0,
-    MAX_POD_NAME_LENGTH - ('-step'.length + STEP_POD_NAME_SUFFIX_LENGTH)
+    MAX_POD_NAME_LENGTH - ('-step-'.length + STEP_POD_NAME_SUFFIX_LENGTH)
   )}-step-${uuidv4().substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
 }

 export function getVolumeClaimName(): string {
   const name = process.env.ACTIONS_RUNNER_CLAIM_NAME
   if (!name) {
-    throw new Error(
-      "'ACTIONS_RUNNER_CLAIM_NAME' is required, please contact your self hosted runner administrator"
-    )
+    return `${getRunnerPodName()}-work`
   }
   return name
 }

-const MAX_POD_NAME_LENGTH = 63
-const STEP_POD_NAME_SUFFIX_LENGTH = 8
+export function getSecretName(): string {
+  return `${getRunnerPodName().substring(
+    0,
+    MAX_POD_NAME_LENGTH - ('-secret-'.length + STEP_POD_NAME_SUFFIX_LENGTH)
+  )}-secret-${uuidv4().substring(0, STEP_POD_NAME_SUFFIX_LENGTH)}`
+}
+
+export const MAX_POD_NAME_LENGTH = 63
+export const STEP_POD_NAME_SUFFIX_LENGTH = 8
+export const CONTAINER_EXTENSION_PREFIX = '$'

 export const JOB_CONTAINER_NAME = 'job'
+export const JOB_CONTAINER_EXTENSION_NAME = '$job'

 export class RunnerInstanceLabel {
-  runnerhook: string
+  private podName: string

   constructor() {
-    this.runnerhook = process.env.ACTIONS_RUNNER_POD_NAME as string
+    this.podName = getRunnerPodName()
   }

   get key(): string {
@@ -49,10 +56,10 @@ export class RunnerInstanceLabel {
   }

   get value(): string {
-    return this.runnerhook
+    return this.podName
   }

   toString(): string {
-    return `runner-pod=${this.runnerhook}`
+    return `runner-pod=${this.podName}`
   }
 }


@@ -1,83 +1,143 @@
 import * as core from '@actions/core'
-import * as io from '@actions/io'
 import * as k8s from '@kubernetes/client-node'
 import {
+  JobContainerInfo,
   ContextPorts,
-  PodPhase,
-  prepareJobArgs,
-  writeToResponseFile
+  PrepareJobArgs,
+  writeToResponseFile,
+  ServiceContainerInfo
 } from 'hooklib'
-import path from 'path'
 import {
   containerPorts,
-  createPod,
-  isAuthPermissionsOK,
+  createJobPod,
   isPodContainerAlpine,
-  namespace,
-  podPrune,
-  requiredPermissions,
-  waitForPodPhases
+  prunePods,
+  waitForPodPhases,
+  getPrepareJobTimeoutSeconds,
+  execCpToPod,
+  execPodStep
 } from '../k8s'
 import {
-  containerVolumes,
+  CONTAINER_VOLUMES,
   DEFAULT_CONTAINER_ENTRY_POINT,
-  DEFAULT_CONTAINER_ENTRY_POINT_ARGS
+  DEFAULT_CONTAINER_ENTRY_POINT_ARGS,
+  generateContainerName,
+  mergeContainerWithOptions,
+  readExtensionFromFile,
+  PodPhase,
+  fixArgs,
+  prepareJobScript
 } from '../k8s/utils'
-import { JOB_CONTAINER_NAME } from './constants'
+import {
+  CONTAINER_EXTENSION_PREFIX,
+  getJobPodName,
+  JOB_CONTAINER_NAME
+} from './constants'
+import { dirname } from 'path'

 export async function prepareJob(
-  args: prepareJobArgs,
+  args: PrepareJobArgs,
   responseFile
 ): Promise<void> {
-  await podPrune()
-  if (!(await isAuthPermissionsOK())) {
-    throw new Error(
-      `The Service account needs the following permissions ${JSON.stringify(
-        requiredPermissions
-      )} on the pod resource in the '${namespace}' namespace. Please contact your self hosted runner administrator.`
-    )
+  if (!args.container) {
+    throw new Error('Job Container is required.')
   }
-  await copyExternalsToRoot()
+  await prunePods()
+  const extension = readExtensionFromFile()

   let container: k8s.V1Container | undefined = undefined
   if (args.container?.image) {
-    core.info(`Using image '${args.container.image}' for job image`)
-    container = createPodSpec(args.container, JOB_CONTAINER_NAME, true)
+    container = createContainerSpec(
+      args.container,
+      JOB_CONTAINER_NAME,
+      true,
+      extension
+    )
   }

   let services: k8s.V1Container[] = []
   if (args.services?.length) {
     services = args.services.map(service => {
-      core.info(`Adding service '${service.image}' to pod definition`)
-      return createPodSpec(service, service.image.split(':')[0])
+      return createContainerSpec(
+        service,
+        generateContainerName(service.image),
+        false,
+        extension
+      )
     })
   }
   if (!container && !services?.length) {
     throw new Error('No containers exist, skipping hook invocation')
   }

   let createdPod: k8s.V1Pod | undefined = undefined
   try {
-    createdPod = await createPod(container, services, args.registry)
+    createdPod = await createJobPod(
+      getJobPodName(),
+      container,
+      services,
+      args.container.registry,
+      extension
+    )
   } catch (err) {
-    await podPrune()
-    throw new Error(`failed to create job pod: ${err}`)
+    await prunePods()
+    core.debug(`createPod failed: ${JSON.stringify(err)}`)
+    const message = (err as any)?.response?.body?.message || err
+    throw new Error(`failed to create job pod: ${message}`)
   }

   if (!createdPod?.metadata?.name) {
     throw new Error('created pod should have metadata.name')
   }
+  core.debug(
+    `Job pod created, waiting for it to come online ${createdPod?.metadata?.name}`
+  )
+
+  const runnerWorkspace = dirname(process.env.RUNNER_WORKSPACE as string)
+  let prepareScript: { containerPath: string; runnerPath: string } | undefined
+  if (args.container?.userMountVolumes?.length) {
+    prepareScript = prepareJobScript(args.container.userMountVolumes || [])
+  }

   try {
     await waitForPodPhases(
       createdPod.metadata.name,
       new Set([PodPhase.RUNNING]),
-      new Set([PodPhase.PENDING])
+      new Set([PodPhase.PENDING]),
+      getPrepareJobTimeoutSeconds()
     )
   } catch (err) {
-    await podPrune()
-    throw new Error(`Pod failed to come online with error: ${err}`)
+    await prunePods()
+    throw new Error(`pod failed to come online with error: ${err}`)
   }

-  core.info('Pod is ready for traffic')
+  await execCpToPod(createdPod.metadata.name, runnerWorkspace, '/__w')
+
+  if (prepareScript) {
+    await execPodStep(
+      ['sh', '-e', prepareScript.containerPath],
+      createdPod.metadata.name,
+      JOB_CONTAINER_NAME
+    )
+    const promises: Promise<void>[] = []
+    for (const vol of args?.container?.userMountVolumes || []) {
+      promises.push(
+        execCpToPod(
+          createdPod.metadata.name,
+          vol.sourceVolumePath,
+          vol.targetVolumePath
+        )
+      )
+    }
+    await Promise.all(promises)
+  }
+  core.debug('Job pod is ready for traffic')

   let isAlpine = false
   try {
@@ -86,19 +146,29 @@ export async function prepareJob(
       JOB_CONTAINER_NAME
     )
   } catch (err) {
-    throw new Error(`Failed to determine if the pod is alpine: ${err}`)
+    core.debug(
+      `Failed to determine if the pod is alpine: ${JSON.stringify(err)}`
+    )
+    const message = (err as any)?.response?.body?.message || err
+    throw new Error(`failed to determine if the pod is alpine: ${message}`)
   }
+  core.debug(`Setting isAlpine to ${isAlpine}`)

-  generateResponseFile(responseFile, createdPod, isAlpine)
+  generateResponseFile(responseFile, args, createdPod, isAlpine)
 }

 function generateResponseFile(
   responseFile: string,
+  args: PrepareJobArgs,
   appPod: k8s.V1Pod,
-  isAlpine
+  isAlpine: boolean
 ): void {
+  if (!appPod.metadata?.name) {
+    throw new Error('app pod must have metadata.name specified')
+  }
   const response = {
-    state: {},
+    state: {
+      jobPod: appPod.metadata.name
+    },
     context: {},
     isAlpine
   }
@@ -121,18 +191,20 @@ function generateResponseFile(
     }
   }

-  const serviceContainers = appPod.spec?.containers.filter(
-    c => c.name !== JOB_CONTAINER_NAME
-  )
-  if (serviceContainers?.length) {
-    response.context['services'] = serviceContainers.map(c => {
-      if (!c.ports) {
-        return
-      }
+  if (args.services?.length) {
+    const serviceContainerNames =
+      args.services?.map(s => generateContainerName(s.image)) || []
+
+    response.context['services'] = appPod?.spec?.containers
+      ?.filter(c => serviceContainerNames.includes(c.name))
+      .map(c => {
         const ctxPorts: ContextPorts = {}
-        for (const port of c.ports) {
-          ctxPorts[port.containerPort] = port.hostPort
-        }
+        if (c.ports?.length) {
+          for (const port of c.ports) {
+            if (port.containerPort && port.hostPort) {
+              ctxPorts[port.containerPort.toString()] = port.hostPort.toString()
+            }
+          }
+        }

         return {
@@ -141,57 +213,72 @@ function generateResponseFile(
         }
       })
   }

   writeToResponseFile(responseFile, JSON.stringify(response))
 }

-async function copyExternalsToRoot(): Promise<void> {
-  const workspace = process.env['RUNNER_WORKSPACE']
-  if (workspace) {
-    await io.cp(
-      path.join(workspace, '../../externals'),
-      path.join(workspace, '../externals'),
-      { force: true, recursive: true, copySourceDirectory: false }
-    )
-  }
-}
-
-function createPodSpec(
-  container,
-  name: string,
-  jobContainer = false
-): k8s.V1Container {
-  core.info(JSON.stringify(container))
-  if (!container.entryPointArgs) {
-    container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
-  }
-  container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
-  if (!container.entryPoint) {
-    container.entryPoint = DEFAULT_CONTAINER_ENTRY_POINT
-  }
+export function createContainerSpec(
+  container: JobContainerInfo | ServiceContainerInfo,
+  name: string,
+  jobContainer = false,
+  extension?: k8s.V1PodTemplateSpec
+): k8s.V1Container {
+  if (!container.entryPoint && jobContainer) {
+    container.entryPoint = DEFAULT_CONTAINER_ENTRY_POINT
+    container.entryPointArgs = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
+  }
+
   const podContainer = {
     name,
     image: container.image,
-    command: [container.entryPoint],
-    args: container.entryPointArgs,
     ports: containerPorts(container)
   } as k8s.V1Container
-  if (container.workingDirectory) {
-    podContainer.workingDir = container.workingDirectory
+  if (container['workingDirectory']) {
+    podContainer.workingDir = container['workingDirectory']
+  }
+  if (container.entryPoint) {
+    podContainer.command = [container.entryPoint]
+  }
+  if (container.entryPointArgs && container.entryPointArgs.length > 0) {
+    podContainer.args = fixArgs(container.entryPointArgs)
   }

   podContainer.env = []
   for (const [key, value] of Object.entries(
-    container['environmentVariables']
+    container['environmentVariables'] || {}
   )) {
     if (value && key !== 'HOME') {
-      podContainer.env.push({ name: key, value: value as string })
+      podContainer.env.push({ name: key, value })
     }
   }
+  podContainer.env.push({
+    name: 'GITHUB_ACTIONS',
+    value: 'true'
+  })
+  if (!('CI' in (container['environmentVariables'] || {}))) {
+    podContainer.env.push({
+      name: 'CI',
+      value: 'true'
+    })
+  }

-  podContainer.volumeMounts = containerVolumes(
-    container.userMountVolumes,
-    jobContainer
-  )
+  podContainer.volumeMounts = CONTAINER_VOLUMES
+
+  if (!extension) {
+    return podContainer
+  }
+
+  const from = extension.spec?.containers?.find(
+    c => c.name === CONTAINER_EXTENSION_PREFIX + name
+  )
+  if (from) {
+    mergeContainerWithOptions(podContainer, from)
+  }

   return podContainer
 }


@@ -1,69 +1,153 @@
-import * as k8s from '@kubernetes/client-node'
 import * as core from '@actions/core'
-import { PodPhase } from 'hooklib'
+import * as fs from 'fs'
+import * as k8s from '@kubernetes/client-node'
+import { RunContainerStepArgs } from 'hooklib'
+import { dirname } from 'path'
 import {
-  createJob,
-  getContainerJobPodName,
-  getPodLogs,
-  getPodStatus,
-  waitForJobToComplete,
+  createContainerStepPod,
+  deletePod,
+  execCpFromPod,
+  execCpToPod,
+  execPodStep,
+  getPrepareJobTimeoutSeconds,
   waitForPodPhases
 } from '../k8s'
-import { JOB_CONTAINER_NAME } from './constants'
-import { containerVolumes } from '../k8s/utils'
+import {
+  CONTAINER_VOLUMES,
+  mergeContainerWithOptions,
+  PodPhase,
+  readExtensionFromFile,
+  DEFAULT_CONTAINER_ENTRY_POINT_ARGS,
+  writeContainerStepScript
+} from '../k8s/utils'
+import {
+  getJobPodName,
+  getStepPodName,
+  JOB_CONTAINER_EXTENSION_NAME,
+  JOB_CONTAINER_NAME
+} from './constants'

-export async function runContainerStep(stepContainer): Promise<number> {
+export async function runContainerStep(
+  stepContainer: RunContainerStepArgs
+): Promise<number> {
   if (stepContainer.dockerfile) {
     throw new Error('Building container actions is not currently supported')
   }
-  const container = createPodSpec(stepContainer)
-  const job = await createJob(container)
-  if (!job.metadata?.name) {
+
+  if (!stepContainer.entryPoint) {
+    throw new Error(
+      'failed to start the container since the entrypoint is overwritten'
+    )
+  }
+
+  const envs = stepContainer.environmentVariables || {}
+  envs['GITHUB_ACTIONS'] = 'true'
+  if (!('CI' in envs)) {
+    envs.CI = 'true'
+  }
+
+  const extension = readExtensionFromFile()
+  const container = createContainerSpec(stepContainer, extension)
+
+  let pod: k8s.V1Pod
+  try {
+    pod = await createContainerStepPod(getStepPodName(), container, extension)
+  } catch (err) {
+    core.debug(`createJob failed: ${JSON.stringify(err)}`)
+    const message = (err as any)?.response?.body?.message || err
+    throw new Error(`failed to run script step: ${message}`)
+  }
+  if (!pod.metadata?.name) {
     throw new Error(
       `Expected job ${JSON.stringify(
-        job
+        pod
       )} to have correctly set the metadata.name`
     )
   }
+  const podName = pod.metadata.name

-  const podName = await getContainerJobPodName(job.metadata.name)
-  await waitForPodPhases(
-    podName,
-    new Set([PodPhase.COMPLETED, PodPhase.RUNNING]),
-    new Set([PodPhase.PENDING])
-  )
-  await getPodLogs(podName, JOB_CONTAINER_NAME)
-
-  await waitForJobToComplete(job.metadata.name)
-  // pod has failed so pull the status code from the container
-  const status = await getPodStatus(podName)
-  if (!status?.containerStatuses?.length) {
-    core.warning(`Can't determine container status`)
-    return 0
-  }
-  const exitCode =
-    status.containerStatuses[status.containerStatuses.length - 1].state
-      ?.terminated?.exitCode
-  return Number(exitCode) || 0
-}
+  try {
+    await waitForPodPhases(
+      podName,
+      new Set([PodPhase.RUNNING]),
+      new Set([PodPhase.PENDING, PodPhase.UNKNOWN]),
+      getPrepareJobTimeoutSeconds()
+    )
+
+    const runnerWorkspace = dirname(process.env.RUNNER_WORKSPACE as string)
+    const githubWorkspace = process.env.GITHUB_WORKSPACE as string
+    const parts = githubWorkspace.split('/').slice(-2)
+    if (parts.length !== 2) {
+      throw new Error(`Invalid github workspace directory: ${githubWorkspace}`)
+    }
+    const relativeWorkspace = parts.join('/')
+    core.debug(
+      `Copying files from pod ${getJobPodName()} to ${runnerWorkspace}/${relativeWorkspace}`
+    )
+    await execCpFromPod(getJobPodName(), `/__w`, `${runnerWorkspace}`)
+    const { containerPath, runnerPath } = writeContainerStepScript(
+      `${runnerWorkspace}/__w/_temp`,
+      githubWorkspace,
+      stepContainer.entryPoint,
+      stepContainer.entryPointArgs,
+      envs
+    )
+    await execCpToPod(podName, `${runnerWorkspace}/__w`, '/__w')
+    fs.rmSync(`${runnerWorkspace}/__w`, { recursive: true, force: true })
+    try {
+      core.debug(`Executing container step script in pod ${podName}`)
+      return await execPodStep(
+        ['/__e/sh', '-e', containerPath],
+        pod.metadata.name,
+        JOB_CONTAINER_NAME
+      )
+    } catch (err) {
+      core.debug(`execPodStep failed: ${JSON.stringify(err)}`)
+      const message = (err as any)?.response?.body?.message || err
+      throw new Error(`failed to run script step: ${message}`)
+    } finally {
+      fs.rmSync(runnerPath, { force: true })
+    }
+  } catch (error) {
+    core.error(`Failed to run container step: ${error}`)
+    throw error
+  } finally {
+    await deletePod(podName).catch(err => {
+      core.error(`Failed to delete step pod ${podName}: ${err}`)
+    })
+  }
+}

-function createPodSpec(container): k8s.V1Container {
+function createContainerSpec(
+  container: RunContainerStepArgs,
+  extension?: k8s.V1PodTemplateSpec
+): k8s.V1Container {
   const podContainer = new k8s.V1Container()
   podContainer.name = JOB_CONTAINER_NAME
   podContainer.image = container.image
-  if (container.entryPoint) {
-    podContainer.command = [container.entryPoint, ...container.entryPointArgs]
-  }
-  podContainer.env = []
-  for (const [key, value] of Object.entries(
-    container['environmentVariables']
-  )) {
-    if (value && key !== 'HOME') {
-      podContainer.env.push({ name: key, value: value as string })
-    }
-  }
-  podContainer.volumeMounts = containerVolumes()
+  podContainer.workingDir = '/__w'
+  podContainer.command = ['/__e/tail']
+  podContainer.args = DEFAULT_CONTAINER_ENTRY_POINT_ARGS
+  podContainer.volumeMounts = CONTAINER_VOLUMES
+
+  if (!extension) {
+    return podContainer
+  }
+
+  const from = extension.spec?.containers?.find(
+    c => c.name === JOB_CONTAINER_EXTENSION_NAME
+  )
+  if (from) {
+    mergeContainerWithOptions(podContainer, from)
+  }

   return podContainer
 }


@@ -1,38 +1,58 @@
 /* eslint-disable @typescript-eslint/no-unused-vars */
+import * as fs from 'fs'
+import * as core from '@actions/core'
 import { RunScriptStepArgs } from 'hooklib'
-import { execPodStep } from '../k8s'
+import { execCpFromPod, execCpToPod, execPodStep } from '../k8s'
+import { writeRunScript, sleep, listDirAllCommand } from '../k8s/utils'
 import { JOB_CONTAINER_NAME } from './constants'
+import { dirname } from 'path'
 export async function runScriptStep(
   args: RunScriptStepArgs,
-  state,
-  responseFile
+  state
 ): Promise<void> {
-  const cb = new CommandsBuilder(
-    args.entryPoint,
-    args.entryPointArgs,
-    args.environmentVariables
-  )
-  await execPodStep(cb.command, state.jobPod, JOB_CONTAINER_NAME)
-}
-class CommandsBuilder {
-  constructor(
-    private entryPoint: string,
-    private entryPointArgs: string[],
-    private environmentVariables: { [key: string]: string }
-  ) {}
-  get command(): string[] {
-    const envCommands: string[] = []
-    if (
-      this.environmentVariables &&
-      Object.entries(this.environmentVariables).length
-    ) {
-      for (const [key, value] of Object.entries(this.environmentVariables)) {
-        envCommands.push(`${key}=${value}`)
-      }
-    }
-    return ['env', ...envCommands, this.entryPoint, ...this.entryPointArgs]
-  }
-}
+  // Write the entrypoint first. This will be later coppied to the workflow pod
+  const { entryPoint, entryPointArgs, environmentVariables } = args
+  const { containerPath, runnerPath } = writeRunScript(
+    args.workingDirectory,
+    entryPoint,
+    entryPointArgs,
+    args.prependPath,
+    environmentVariables
+  )
+  const workdir = dirname(process.env.RUNNER_WORKSPACE as string)
+  const containerTemp = '/__w/_temp'
+  const runnerTemp = `${workdir}/_temp`
+  await execCpToPod(state.jobPod, runnerTemp, containerTemp)
+  // Execute the entrypoint script
+  args.entryPoint = 'sh'
+  args.entryPointArgs = ['-e', containerPath]
+  try {
+    await execPodStep(
+      [args.entryPoint, ...args.entryPointArgs],
+      state.jobPod,
+      JOB_CONTAINER_NAME
+    )
+  } catch (err) {
+    core.debug(`execPodStep failed: ${JSON.stringify(err)}`)
+    const message = (err as any)?.response?.body?.message || err
+    throw new Error(`failed to run script step: ${message}`)
+  } finally {
+    try {
+      fs.rmSync(runnerPath, { force: true })
+    } catch (removeErr) {
+      core.debug(`Failed to remove file ${runnerPath}: ${removeErr}`)
+    }
+  }
+  try {
+    core.debug(
+      `Copying from job pod '${state.jobPod}' ${containerTemp} to ${runnerTemp}`
+    )
+    await execCpFromPod(state.jobPod, containerTemp, workdir)
+  } catch (error) {
+    core.warning('Failed to copy _temp from pod')
+  }
+}
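
For context on what the removed CommandsBuilder did: it wrapped a step invocation as `env KEY=VAL ... <entrypoint> <args>`. A minimal standalone sketch of that argv construction, assuming illustrative names (`buildEnvCommand` is not part of this PR):

```typescript
// Build an `env`-prefixed argv, mirroring the removed CommandsBuilder logic.
function buildEnvCommand(
  entryPoint: string,
  entryPointArgs: string[],
  environmentVariables: { [key: string]: string }
): string[] {
  const envCommands: string[] = []
  // each variable becomes a KEY=VALUE argument to `env`
  for (const [key, value] of Object.entries(environmentVariables ?? {})) {
    envCommands.push(`${key}=${value}`)
  }
  return ['env', ...envCommands, entryPoint, ...entryPointArgs]
}
```

The new code instead writes the environment into a script and runs `sh -e <script>`, which avoids argv quoting pitfalls.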


@@ -1,44 +1,56 @@
-import { Command, getInputFromStdin, prepareJobArgs } from 'hooklib'
+import * as core from '@actions/core'
+import {
+  Command,
+  getInputFromStdin,
+  PrepareJobArgs,
+  RunContainerStepArgs,
+  RunScriptStepArgs
+} from 'hooklib'
 import {
   cleanupJob,
   prepareJob,
   runContainerStep,
   runScriptStep
 } from './hooks'
+import { isAuthPermissionsOK, namespace, requiredPermissions } from './k8s'
 async function run(): Promise<void> {
+  try {
     const input = await getInputFromStdin()
     const args = input['args']
     const command = input['command']
     const responseFile = input['responseFile']
     const state = input['state']
+    if (!(await isAuthPermissionsOK())) {
+      throw new Error(
+        `The Service account needs the following permissions ${JSON.stringify(
+          requiredPermissions
+        )} on the pod resource in the '${namespace()}' namespace. Please contact your self hosted runner administrator.`
+      )
+    }
     let exitCode = 0
-  try {
     switch (command) {
       case Command.PrepareJob:
-        await prepareJob(args as prepareJobArgs, responseFile)
-        break
+        await prepareJob(args as PrepareJobArgs, responseFile)
+        return process.exit(0)
       case Command.CleanupJob:
         await cleanupJob()
-        break
+        return process.exit(0)
       case Command.RunScriptStep:
-        await runScriptStep(args, state, null)
-        break
+        await runScriptStep(args as RunScriptStepArgs, state)
+        return process.exit(0)
       case Command.RunContainerStep:
-        exitCode = await runContainerStep(args)
-        break
-      case Command.runContainerStep:
+        exitCode = await runContainerStep(args as RunContainerStepArgs)
+        return process.exit(exitCode)
       default:
         throw new Error(`Command not recognized: ${command}`)
     }
   } catch (error) {
-    // eslint-disable-next-line no-console
-    console.log(error)
-    exitCode = 1
+    core.error(error as Error)
+    process.exit(1)
   }
-  process.exitCode = exitCode
 }
 void run()


@@ -1,13 +1,27 @@
+import * as core from '@actions/core'
+import * as path from 'path'
+import { spawn } from 'child_process'
 import * as k8s from '@kubernetes/client-node'
-import { ContainerInfo, PodPhase, Registry } from 'hooklib'
+import tar from 'tar-fs'
 import * as stream from 'stream'
-import { v4 as uuidv4 } from 'uuid'
+import { WritableStreamBuffer } from 'stream-buffers'
+import { createHash } from 'crypto'
+import type { ContainerInfo, Registry } from 'hooklib'
 import {
-  getJobPodName,
-  getRunnerPodName,
-  getVolumeClaimName,
+  getSecretName,
+  JOB_CONTAINER_NAME,
   RunnerInstanceLabel
 } from '../hooks/constants'
+import {
+  PodPhase,
+  mergePodSpecWithOptions,
+  mergeObjectMeta,
+  fixArgs,
+  listDirAllCommand,
+  sleep,
+  EXTERNALS_VOLUME_NAME,
+  GITHUB_VOLUME_NAME
+} from './utils'
 const kc = new k8s.KubeConfig()
@@ -17,7 +31,7 @@ const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
 const k8sBatchV1Api = kc.makeApiClient(k8s.BatchV1Api)
 const k8sAuthorizationV1Api = kc.makeApiClient(k8s.AuthorizationV1Api)
-export const POD_VOLUME_NAME = 'work'
+const DEFAULT_WAIT_FOR_POD_TIME_SECONDS = 10 * 60 // 10 min
 export const requiredPermissions = [
   {
@@ -39,24 +53,19 @@ export const requiredPermissions = [
     subresource: 'log'
   },
   {
-    group: 'batch',
-    verbs: ['get', 'list', 'create', 'delete'],
-    resource: 'jobs',
+    group: '',
+    verbs: ['create', 'delete', 'get', 'list'],
+    resource: 'secrets',
     subresource: ''
   }
 ]
-const secretPermission = {
-  group: '',
-  verbs: ['get', 'list', 'create', 'delete'],
-  resource: 'secrets',
-  subresource: ''
-}
-export async function createPod(
+export async function createJobPod(
+  name: string,
   jobContainer?: k8s.V1Container,
   services?: k8s.V1Container[],
-  registry?: Registry
+  registry?: Registry,
+  extension?: k8s.V1PodTemplateSpec
 ): Promise<k8s.V1Pod> {
   const containers: k8s.V1Container[] = []
   if (jobContainer) {
@@ -72,27 +81,50 @@ export async function createPod(
   appPod.kind = 'Pod'
   appPod.metadata = new k8s.V1ObjectMeta()
-  appPod.metadata.name = getJobPodName()
+  appPod.metadata.name = name
   const instanceLabel = new RunnerInstanceLabel()
   appPod.metadata.labels = {
     [instanceLabel.key]: instanceLabel.value
   }
+  appPod.metadata.annotations = {}
   appPod.spec = new k8s.V1PodSpec()
   appPod.spec.containers = containers
+  appPod.spec.initContainers = [
+    {
+      name: 'fs-init',
+      image:
+        process.env.ACTIONS_RUNNER_IMAGE ||
+        'ghcr.io/actions/actions-runner:latest',
+      command: ['sh', '-c', 'sudo mv /home/runner/externals/* /mnt/externals'],
+      securityContext: {
+        runAsGroup: 1001,
+        runAsUser: 1001
+      },
+      volumeMounts: [
+        {
+          name: EXTERNALS_VOLUME_NAME,
+          mountPath: '/mnt/externals'
+        }
+      ]
+    }
+  ]
   appPod.spec.restartPolicy = 'Never'
-  appPod.spec.nodeName = await getCurrentNodeName()
-  const claimName = getVolumeClaimName()
   appPod.spec.volumes = [
     {
-      name: 'work',
-      persistentVolumeClaim: { claimName }
+      name: EXTERNALS_VOLUME_NAME,
+      emptyDir: {}
+    },
+    {
+      name: GITHUB_VOLUME_NAME,
+      emptyDir: {}
     }
   ]
   if (registry) {
-    if (await isSecretsAuthOK()) {
     const secret = await createDockerSecret(registry)
     if (!secret?.metadata?.name) {
       throw new Error(`created secret does not have secret.metadata.name`)
@@ -100,80 +132,104 @@ export async function createPod(
     const secretReference = new k8s.V1LocalObjectReference()
     secretReference.name = secret.metadata.name
     appPod.spec.imagePullSecrets = [secretReference]
-    } else {
-      throw new Error(
-        `Pulls from private registry is not allowed. Please contact your self hosted runner administrator. Service account needs permissions for ${secretPermission.verbs} in resource ${secretPermission.resource}`
-      )
-    }
   }
-  const { body } = await k8sApi.createNamespacedPod(namespace(), appPod)
-  return body
-}
-export async function createJob(
-  container: k8s.V1Container
-): Promise<k8s.V1Job> {
-  const job = new k8s.V1Job()
-  job.apiVersion = 'batch/v1'
-  job.kind = 'Job'
-  job.metadata = new k8s.V1ObjectMeta()
-  job.metadata.name = getJobPodName()
-  job.metadata.labels = { 'runner-pod': getRunnerPodName() }
-  job.spec = new k8s.V1JobSpec()
-  job.spec.ttlSecondsAfterFinished = 300
-  job.spec.backoffLimit = 0
-  job.spec.template = new k8s.V1PodTemplateSpec()
-  job.spec.template.spec = new k8s.V1PodSpec()
-  job.spec.template.spec.containers = [container]
-  job.spec.template.spec.restartPolicy = 'Never'
-  job.spec.template.spec.nodeName = await getCurrentNodeName()
-  const claimName = `${runnerName()}-work`
-  job.spec.template.spec.volumes = [
-    {
-      name: 'work',
-      persistentVolumeClaim: { claimName }
-    }
-  ]
-  const { body } = await k8sBatchV1Api.createNamespacedJob(namespace(), job)
-  return body
-}
-export async function getContainerJobPodName(jobName: string): Promise<string> {
-  const selector = `job-name=${jobName}`
-  const backOffManager = new BackOffManager(60)
-  while (true) {
-    const podList = await k8sApi.listNamespacedPod(
-      namespace(),
-      undefined,
-      undefined,
-      undefined,
-      undefined,
-      selector,
-      1
-    )
-    if (!podList.body.items?.length) {
-      await backOffManager.backOff()
-      continue
-    }
-    if (!podList.body.items[0].metadata?.name) {
-      throw new Error(
-        `Failed to determine the name of the pod for job ${jobName}`
-      )
-    }
-    return podList.body.items[0].metadata.name
-  }
-}
+  if (extension?.metadata) {
+    mergeObjectMeta(appPod, extension.metadata)
+  }
+  if (extension?.spec) {
+    mergePodSpecWithOptions(appPod.spec, extension.spec)
+  }
+  return await k8sApi.createNamespacedPod({
+    namespace: namespace(),
+    body: appPod
+  })
+}
+export async function createContainerStepPod(
+  name: string,
+  container: k8s.V1Container,
+  extension?: k8s.V1PodTemplateSpec
+): Promise<k8s.V1Pod> {
+  const appPod = new k8s.V1Pod()
+  appPod.apiVersion = 'v1'
+  appPod.kind = 'Pod'
+  appPod.metadata = new k8s.V1ObjectMeta()
+  appPod.metadata.name = name
+  const instanceLabel = new RunnerInstanceLabel()
+  appPod.metadata.labels = {
+    [instanceLabel.key]: instanceLabel.value
+  }
+  appPod.metadata.annotations = {}
+  appPod.spec = new k8s.V1PodSpec()
+  appPod.spec.containers = [container]
+  appPod.spec.initContainers = [
+    {
+      name: 'fs-init',
+      image:
+        process.env.ACTIONS_RUNNER_IMAGE ||
+        'ghcr.io/actions/actions-runner:latest',
+      command: [
+        'bash',
+        '-c',
+        `sudo cp $(which sh) /mnt/externals/sh \
+          && sudo cp $(which tail) /mnt/externals/tail \
+          && sudo cp $(which env) /mnt/externals/env \
+          && sudo chmod -R 777 /mnt/externals`
+      ],
+      securityContext: {
+        runAsGroup: 1001,
+        runAsUser: 1001,
+        privileged: true
+      },
+      volumeMounts: [
+        {
+          name: EXTERNALS_VOLUME_NAME,
+          mountPath: '/mnt/externals'
+        }
+      ]
+    }
+  ]
+  appPod.spec.restartPolicy = 'Never'
+  appPod.spec.volumes = [
+    {
+      name: EXTERNALS_VOLUME_NAME,
+      emptyDir: {}
+    },
+    {
+      name: GITHUB_VOLUME_NAME,
+      emptyDir: {}
+    }
+  ]
+  if (extension?.metadata) {
+    mergeObjectMeta(appPod, extension.metadata)
+  }
+  if (extension?.spec) {
+    mergePodSpecWithOptions(appPod.spec, extension.spec)
+  }
+  return await k8sApi.createNamespacedPod({
+    namespace: namespace(),
+    body: appPod
+  })
+}
-export async function deletePod(podName: string): Promise<void> {
-  await k8sApi.deleteNamespacedPod(podName, namespace())
+export async function deletePod(name: string): Promise<void> {
+  await k8sApi.deleteNamespacedPod({
+    name,
+    namespace: namespace(),
+    gracePeriodSeconds: 0
+  })
 }
 export async function execPodStep(
@@ -181,15 +237,13 @@ export async function execPodStep(
   podName: string,
   containerName: string,
   stdin?: stream.Readable
-): Promise<void> {
-  // TODO, we need to add the path from `prependPath` to the PATH variable. How can we do that? Maybe another exec before running this one?
-  // Maybe something like, get the current path, if these entries aren't in it, add them, then set the current path to that?
-  // TODO: how do we set working directory? There doesn't seem to be an easy way to do it. Should we cd then execute our bash script?
+): Promise<number> {
   const exec = new k8s.Exec(kc)
-  return new Promise(async function (resolve, reject) {
-    try {
-      await exec.exec(
+  command = fixArgs(command)
+  return await new Promise(function (resolve, reject) {
+    exec
+      .exec(
         namespace(),
         podName,
         containerName,
@@ -199,20 +253,279 @@ export async function execPodStep(
         stdin ?? null,
         false /* tty */,
         resp => {
-          // kube.exec returns an error if exit code is not 0, but we can't actually get the exit code
-          if (resp.status === 'Success') {
-            resolve()
-          } else {
-            reject(
-              JSON.stringify({ message: resp?.message, details: resp?.details })
-            )
-          }
-        }
-      )
-    } catch (error) {
-      reject(error)
-    }
-  })
+          core.debug(`execPodStep response: ${JSON.stringify(resp)}`)
+          if (resp.status === 'Success') {
+            resolve(resp.code || 0)
+          } else {
+            core.debug(
+              JSON.stringify({
+                message: resp?.message,
+                details: resp?.details
+              })
+            )
+            reject(new Error(resp?.message || 'execPodStep failed'))
+          }
+        }
+      )
+      .catch(e => reject(e))
+  })
+}
+export async function execCalculateOutputHash(
+  podName: string,
+  containerName: string,
+  command: string[]
+): Promise<string> {
+  const exec = new k8s.Exec(kc)
+  // Create a writable stream that updates a SHA-256 hash with stdout data
+  const hash = createHash('sha256')
+  const hashWriter = new stream.Writable({
+    write(chunk, _enc, cb) {
+      try {
+        hash.update(chunk.toString('utf8') as Buffer)
+        cb()
+      } catch (e) {
+        cb(e as Error)
+      }
+    }
+  })
+  await new Promise<void>((resolve, reject) => {
+    exec
+      .exec(
+        namespace(),
+        podName,
+        containerName,
+        command,
+        hashWriter, // capture stdout for hashing
+        process.stderr,
+        null,
+        false /* tty */,
+        resp => {
+          core.debug(`internalExecOutput response: ${JSON.stringify(resp)}`)
+          if (resp.status === 'Success') {
+            resolve()
+          } else {
+            core.debug(
+              JSON.stringify({
+                message: resp?.message,
+                details: resp?.details
+              })
+            )
+            reject(new Error(resp?.message || 'internalExecOutput failed'))
+          }
+        }
+      )
+      .catch(e => reject(e))
+  })
+  // finalize hash and return digest
+  hashWriter.end()
+  return hash.digest('hex')
+}
+export async function localCalculateOutputHash(
+  commands: string[]
+): Promise<string> {
+  return await new Promise<string>((resolve, reject) => {
+    const hash = createHash('sha256')
+    const child = spawn(commands[0], commands.slice(1), {
+      stdio: ['ignore', 'pipe', 'ignore']
+    })
+    child.stdout.on('data', chunk => {
+      hash.update(chunk)
+    })
+    child.on('error', reject)
+    child.on('close', (code: number) => {
+      if (code === 0) {
+        resolve(hash.digest('hex'))
+      } else {
+        reject(new Error(`child process exited with code ${code}`))
+      }
+    })
+  })
+}
export async function execCpToPod(
podName: string,
runnerPath: string,
containerPath: string
): Promise<void> {
core.debug(`Copying ${runnerPath} to pod ${podName} at ${containerPath}`)
let attempt = 0
while (true) {
try {
const exec = new k8s.Exec(kc)
const command = ['tar', 'xf', '-', '-C', containerPath]
const readStream = tar.pack(runnerPath)
const errStream = new WritableStreamBuffer()
await new Promise((resolve, reject) => {
exec
.exec(
namespace(),
podName,
JOB_CONTAINER_NAME,
command,
null,
errStream,
readStream,
false,
async status => {
if (errStream.size()) {
reject(
new Error(
`Error from cpFromPod - details: \n ${errStream.getContentsAsString()}`
)
)
}
resolve(status)
}
)
.catch(e => reject(e))
})
break
} catch (error) {
core.debug(`cpToPod: Attempt ${attempt + 1} failed: ${error}`)
attempt++
if (attempt >= 30) {
throw new Error(
`cpToPod failed after ${attempt} attempts: ${JSON.stringify(error)}`
)
}
await sleep(1000)
}
}
const want = await localCalculateOutputHash([
'sh',
'-c',
listDirAllCommand(runnerPath)
])
let attempts = 15
const delay = 1000
for (let i = 0; i < attempts; i++) {
try {
const got = await execCalculateOutputHash(podName, JOB_CONTAINER_NAME, [
'sh',
'-c',
listDirAllCommand(containerPath)
])
if (got !== want) {
core.debug(
`The hash of the directory does not match the expected value; want='${want}' got='${got}'`
)
await sleep(delay)
continue
}
break
} catch (error) {
core.debug(`Attempt ${i + 1} failed: ${error}`)
await sleep(delay)
}
}
}
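
The copy helpers above verify a transfer by hashing a recursive directory listing (`listDirAllCommand`) on both sides and comparing digests. This standalone sketch shows the idea on plain listing strings; `listingDigest` and `transferLooksComplete` are illustrative names, not part of the PR:

```typescript
import { createHash } from 'node:crypto'

// SHA-256 digest of a directory-listing string.
function listingDigest(listing: string): string {
  return createHash('sha256').update(listing, 'utf8').digest('hex')
}

// Equal listings on both sides suggest the copy completed; unequal listings
// trigger a retry in the real code.
function transferLooksComplete(want: string, got: string): boolean {
  return listingDigest(want) === listingDigest(got)
}
```

Comparing digests rather than the raw listings keeps the exec output small when the tree is large.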
export async function execCpFromPod(
podName: string,
containerPath: string,
parentRunnerPath: string
): Promise<void> {
const targetRunnerPath = `${parentRunnerPath}/${path.basename(containerPath)}`
core.debug(
`Copying from pod ${podName} ${containerPath} to ${targetRunnerPath}`
)
const want = await execCalculateOutputHash(podName, JOB_CONTAINER_NAME, [
'sh',
'-c',
listDirAllCommand(containerPath)
])
let attempt = 0
while (true) {
try {
// make temporary directory
const exec = new k8s.Exec(kc)
const containerPaths = containerPath.split('/')
const dirname = containerPaths.pop() as string
const command = [
'tar',
'cf',
'-',
'-C',
containerPaths.join('/') || '/',
dirname
]
const writerStream = tar.extract(parentRunnerPath)
const errStream = new WritableStreamBuffer()
await new Promise((resolve, reject) => {
exec
.exec(
namespace(),
podName,
JOB_CONTAINER_NAME,
command,
writerStream,
errStream,
null,
false,
async status => {
if (errStream.size()) {
reject(
new Error(
`Error from cpFromPod - details: \n ${errStream.getContentsAsString()}`
)
)
}
resolve(status)
}
)
.catch(e => reject(e))
})
break
} catch (error) {
core.debug(`Attempt ${attempt + 1} failed: ${error}`)
attempt++
if (attempt >= 30) {
throw new Error(
`execCpFromPod failed after ${attempt} attempts: ${JSON.stringify(error)}`
)
}
await sleep(1000)
}
}
let attempts = 15
const delay = 1000
for (let i = 0; i < attempts; i++) {
try {
const got = await localCalculateOutputHash([
'sh',
'-c',
listDirAllCommand(targetRunnerPath)
])
if (got !== want) {
core.debug(
`The hash of the directory does not match the expected value; want='${want}' got='${got}'`
)
await sleep(delay)
continue
}
break
} catch (error) {
core.debug(`Attempt ${i + 1} failed: ${error}`)
await sleep(delay)
}
}
 }
 export async function waitForJobToComplete(jobName: string): Promise<void> {
@@ -223,7 +536,7 @@ export async function waitForJobToComplete(jobName: string): Promise<void> {
       return
     }
   } catch (error) {
-    throw new Error(`job ${jobName} has failed`)
+    throw new Error(`job ${jobName} has failed: ${JSON.stringify(error)}`)
   }
   await backOffManager.backOff()
 }
@@ -234,46 +547,105 @@ export async function createDockerSecret(
 ): Promise<k8s.V1Secret> {
   const authContent = {
     auths: {
-      [registry.serverUrl]: {
+      [registry.serverUrl || 'https://index.docker.io/v1/']: {
         username: registry.username,
         password: registry.password,
-        auth: Buffer.from(
-          `${registry.username}:${registry.password}`,
-          'base64'
-        ).toString()
+        auth: Buffer.from(`${registry.username}:${registry.password}`).toString(
+          'base64'
+        )
       }
     }
   }
-  const secretName = generateSecretName()
+  const runnerInstanceLabel = new RunnerInstanceLabel()
+  const secretName = getSecretName()
   const secret = new k8s.V1Secret()
   secret.immutable = true
   secret.apiVersion = 'v1'
   secret.metadata = new k8s.V1ObjectMeta()
   secret.metadata.name = secretName
+  secret.metadata.namespace = namespace()
+  secret.metadata.labels = {
+    [runnerInstanceLabel.key]: runnerInstanceLabel.value
+  }
+  secret.type = 'kubernetes.io/dockerconfigjson'
   secret.kind = 'Secret'
   secret.data = {
-    '.dockerconfigjson': Buffer.from(
-      JSON.stringify(authContent),
-      'base64'
-    ).toString()
+    '.dockerconfigjson': Buffer.from(JSON.stringify(authContent)).toString(
+      'base64'
+    )
   }
-  const { body } = await k8sApi.createNamespacedSecret(namespace(), secret)
-  return body
+  return await k8sApi.createNamespacedSecret({
+    namespace: namespace(),
+    body: secret
+  })
+}
+export async function createSecretForEnvs(envs: {
+  [key: string]: string
+}): Promise<string> {
+  const runnerInstanceLabel = new RunnerInstanceLabel()
+  const secret = new k8s.V1Secret()
+  const secretName = getSecretName()
+  secret.immutable = true
+  secret.apiVersion = 'v1'
+  secret.metadata = new k8s.V1ObjectMeta()
+  secret.metadata.name = secretName
+  secret.metadata.labels = {
+    [runnerInstanceLabel.key]: runnerInstanceLabel.value
+  }
+  secret.kind = 'Secret'
+  secret.data = {}
+  for (const [key, value] of Object.entries(envs)) {
+    secret.data[key] = Buffer.from(value).toString('base64')
+  }
+  await k8sApi.createNamespacedSecret({
+    namespace: namespace(),
+    body: secret
+  })
+  return secretName
+}
+export async function deleteSecret(name: string): Promise<void> {
+  await k8sApi.deleteNamespacedSecret({
+    name,
+    namespace: namespace()
+  })
+}
+export async function pruneSecrets(): Promise<void> {
+  const secretList = await k8sApi.listNamespacedSecret({
+    namespace: namespace(),
+    labelSelector: new RunnerInstanceLabel().toString()
+  })
+  if (!secretList.items.length) {
+    return
+  }
+  await Promise.all(
+    secretList.items.map(
+      async secret =>
+        secret.metadata?.name && (await deleteSecret(secret.metadata.name))
+    )
+  )
 }
 export async function waitForPodPhases(
   podName: string,
   awaitingPhases: Set<PodPhase>,
   backOffPhases: Set<PodPhase>,
-  maxTimeSeconds = 45 * 60 // 45 min
+  maxTimeSeconds = DEFAULT_WAIT_FOR_POD_TIME_SECONDS
 ): Promise<void> {
   const backOffManager = new BackOffManager(maxTimeSeconds)
   let phase: PodPhase = PodPhase.UNKNOWN
   try {
     while (true) {
       phase = await getPodPhase(podName)
       if (awaitingPhases.has(phase)) {
         return
       }
@@ -286,11 +658,32 @@ export async function waitForPodPhases(
       await backOffManager.backOff()
     }
   } catch (error) {
-    throw new Error(`Pod ${podName} is unhealthy with phase status ${phase}`)
+    throw new Error(
+      `Pod ${podName} is unhealthy with phase status ${phase}: ${JSON.stringify(error)}`
+    )
   }
 }
-async function getPodPhase(podName: string): Promise<PodPhase> {
+export function getPrepareJobTimeoutSeconds(): number {
+  const envTimeoutSeconds =
+    process.env['ACTIONS_RUNNER_PREPARE_JOB_TIMEOUT_SECONDS']
+  if (!envTimeoutSeconds) {
+    return DEFAULT_WAIT_FOR_POD_TIME_SECONDS
+  }
+  const timeoutSeconds = parseInt(envTimeoutSeconds, 10)
+  if (!timeoutSeconds || timeoutSeconds <= 0) {
+    core.warning(
+      `Prepare job timeout is invalid ("${timeoutSeconds}"): use an int > 0`
+    )
+    return DEFAULT_WAIT_FOR_POD_TIME_SECONDS
+  }
+  return timeoutSeconds
+}
+async function getPodPhase(name: string): Promise<PodPhase> {
   const podPhaseLookup = new Set<string>([
     PodPhase.PENDING,
     PodPhase.RUNNING,
@@ -298,20 +691,24 @@ async function getPodPhase(podName: string): Promise<PodPhase> {
     PodPhase.FAILED,
     PodPhase.UNKNOWN
   ])
-  const { body } = await k8sApi.readNamespacedPod(podName, namespace())
-  const pod = body
+  const pod = await k8sApi.readNamespacedPod({
+    name,
+    namespace: namespace()
+  })
   if (!pod.status?.phase || !podPhaseLookup.has(pod.status.phase)) {
     return PodPhase.UNKNOWN
   }
-  return pod.status?.phase
+  return pod.status?.phase as PodPhase
 }
-async function isJobSucceeded(jobName: string): Promise<boolean> {
-  const { body } = await k8sBatchV1Api.readNamespacedJob(jobName, namespace())
-  const job = body
+async function isJobSucceeded(name: string): Promise<boolean> {
+  const job = await k8sBatchV1Api.readNamespacedJob({
+    name,
+    namespace: namespace()
+  })
   if (job.status?.failed) {
-    throw new Error(`job ${jobName} has failed`)
+    throw new Error(`job ${name} has failed`)
   }
   return !!job.status?.succeeded
 }
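
The timeout-from-environment pattern used by `getPrepareJobTimeoutSeconds` above can be sketched standalone; `parseTimeoutSeconds` and its parameters are illustrative names, not part of the PR:

```typescript
// Parse a timeout from an environment-variable string, falling back on
// missing, non-numeric, or non-positive input.
function parseTimeoutSeconds(
  raw: string | undefined,
  fallbackSeconds: number
): number {
  if (!raw) {
    return fallbackSeconds
  }
  const timeoutSeconds = parseInt(raw, 10)
  // parseInt yields NaN for non-numeric input; NaN and values <= 0 fall back
  if (!timeoutSeconds || timeoutSeconds <= 0) {
    return fallbackSeconds
  }
  return timeoutSeconds
}
```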
@@ -328,34 +725,29 @@ export async function getPodLogs(
   })
   logStream.on('error', err => {
-    process.stderr.write(JSON.stringify(err))
+    process.stderr.write(err.message)
   })
-  const r = await log.log(namespace(), podName, containerName, logStream, {
+  await log.log(namespace(), podName, containerName, logStream, {
     follow: true,
-    tailLines: 50,
     pretty: false,
     timestamps: false
   })
-  await new Promise(resolve => r.on('close', () => resolve(null)))
+  await new Promise(resolve => logStream.on('end', () => resolve(null)))
 }
-export async function podPrune(): Promise<void> {
-  const podList = await k8sApi.listNamespacedPod(
-    namespace(),
-    undefined,
-    undefined,
-    undefined,
-    undefined,
-    new RunnerInstanceLabel().toString()
-  )
-  if (!podList.body.items.length) {
+export async function prunePods(): Promise<void> {
+  const podList = await k8sApi.listNamespacedPod({
+    namespace: namespace(),
+    labelSelector: new RunnerInstanceLabel().toString()
+  })
+  if (!podList.items.length) {
     return
   }
   await Promise.all(
-    podList.body.items.map(
-      pod => pod.metadata?.name && deletePod(pod.metadata.name)
+    podList.items.map(
+      async pod => pod.metadata?.name && (await deletePod(pod.metadata.name))
     )
   )
 }
@@ -363,16 +755,16 @@ export async function podPrune(): Promise<void> {
 export async function getPodStatus(
   name: string
 ): Promise<k8s.V1PodStatus | undefined> {
-  const { body } = await k8sApi.readNamespacedPod(name, namespace())
-  return body.status
+  const pod = await k8sApi.readNamespacedPod({
+    name,
+    namespace: namespace()
+  })
+  return pod.status
 }
 export async function isAuthPermissionsOK(): Promise<boolean> {
   const sar = new k8s.V1SelfSubjectAccessReview()
-  const asyncs: Promise<{
-    response: unknown
-    body: k8s.V1SelfSubjectAccessReview
-  }>[] = []
+  const asyncs: Promise<k8s.V1SelfSubjectAccessReview>[] = []
   for (const resource of requiredPermissions) {
     for (const verb of resource.verbs) {
       sar.spec = new k8s.V1SelfSubjectAccessReviewSpec()
@@ -382,31 +774,13 @@ export async function isAuthPermissionsOK(): Promise<boolean> {
       sar.spec.resourceAttributes.group = resource.group
       sar.spec.resourceAttributes.resource = resource.resource
       sar.spec.resourceAttributes.subresource = resource.subresource
-      asyncs.push(k8sAuthorizationV1Api.createSelfSubjectAccessReview(sar))
+      asyncs.push(
+        k8sAuthorizationV1Api.createSelfSubjectAccessReview({ body: sar })
+      )
     }
   }
   const responses = await Promise.all(asyncs)
-  return responses.every(resp => resp.body.status?.allowed)
+  return responses.every(resp => resp.status?.allowed)
 }
-export async function isSecretsAuthOK(): Promise<boolean> {
-  const sar = new k8s.V1SelfSubjectAccessReview()
-  const asyncs: Promise<{
-    response: unknown
-    body: k8s.V1SelfSubjectAccessReview
-  }>[] = []
-  for (const verb of secretPermission.verbs) {
-    sar.spec = new k8s.V1SelfSubjectAccessReviewSpec()
-    sar.spec.resourceAttributes = new k8s.V1ResourceAttributes()
-    sar.spec.resourceAttributes.verb = verb
-    sar.spec.resourceAttributes.namespace = namespace()
-    sar.spec.resourceAttributes.group = secretPermission.group
-    sar.spec.resourceAttributes.resource = secretPermission.resource
-    sar.spec.resourceAttributes.subresource = secretPermission.subresource
-    asyncs.push(k8sAuthorizationV1Api.createSelfSubjectAccessReview(sar))
-  }
-  const responses = await Promise.all(asyncs)
-  return responses.every(resp => resp.body.status?.allowed)
-}
 export async function isPodContainerAlpine(
@@ -419,27 +793,18 @@ export async function isPodContainerAlpine(
     [
       'sh',
       '-c',
-      "[ $(cat /etc/*release* | grep -i -e '^ID=*alpine*' -c) != 0 ] || exit 1"
+      `'[ $(cat /etc/*release* | grep -i -e "^ID=*alpine*" -c) != 0 ] || exit 1'`
     ],
     podName,
     containerName
   )
-  } catch (err) {
+  } catch {
     isAlpine = false
   }
   return isAlpine
 }
-async function getCurrentNodeName(): Promise<string> {
-  const resp = await k8sApi.readNamespacedPod(getRunnerPodName(), namespace())
-  const nodeName = resp.body.spec?.nodeName
-  if (!nodeName) {
-    throw new Error('Failed to determine node name')
-  }
-  return nodeName
-}
 export function namespace(): string {
   if (process.env['ACTIONS_RUNNER_KUBERNETES_NAMESPACE']) {
     return process.env['ACTIONS_RUNNER_KUBERNETES_NAMESPACE']
@@ -454,20 +819,6 @@ export function namespace(): string {
   return context.namespace
 }
-function generateSecretName(): string {
-  return `github-secret-${uuidv4()}`
-}
-function runnerName(): string {
-  const name = process.env.ACTIONS_RUNNER_POD_NAME
-  if (!name) {
-    throw new Error(
-      'Failed to determine runner name. "ACTIONS_RUNNER_POD_NAME" env variables should be set.'
-    )
-  }
-  return name
-}
 class BackOffManager {
   private backOffSeconds = 1
   totalTime = 0
@@ -497,28 +848,48 @@ class BackOffManager {
 export function containerPorts(
   container: ContainerInfo
 ): k8s.V1ContainerPort[] {
-  // 8080:8080/tcp
-  const portFormat = /(\d{1,5})(:(\d{1,5}))?(\/(tcp|udp))?/
   const ports: k8s.V1ContainerPort[] = []
+  if (!container.portMappings?.length) {
+    return ports
+  }
   for (const portDefinition of container.portMappings) {
-    const submatches = portFormat.exec(portDefinition)
-    if (!submatches) {
-      throw new Error(
-        `Port definition "${portDefinition}" is in incorrect format`
-      )
-    }
+    const portProtoSplit = portDefinition.split('/')
+    if (portProtoSplit.length > 2) {
+      throw new Error(`Unexpected port format: ${portDefinition}`)
+    }
     const port = new k8s.V1ContainerPort()
-    port.hostPort = Number(submatches[1])
-    if (submatches[3]) {
-      port.containerPort = Number(submatches[3])
-    }
-    if (submatches[5]) {
-      port.protocol = submatches[5].toUpperCase()
+    port.protocol =
+      portProtoSplit.length === 2 ? portProtoSplit[1].toUpperCase() : 'TCP'
+    const portSplit = portProtoSplit[0].split(':')
+    if (portSplit.length > 2) {
+      throw new Error('ports should have at most one ":" separator')
+    }
+    const parsePort = (p: string): number => {
+      const num = Number(p)
+      if (!Number.isInteger(num) || num < 1 || num > 65535) {
+        throw new Error(`invalid container port: ${p}`)
+      }
+      return num
+    }
+    if (portSplit.length === 1) {
+      port.containerPort = parsePort(portSplit[0])
     } else {
-      port.protocol = 'TCP'
+      port.hostPort = parsePort(portSplit[0])
+      port.containerPort = parsePort(portSplit[1])
     }
     ports.push(port)
   }
   return ports
 }
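
The port-mapping grammar the reworked containerPorts accepts is `<container>` or `<host>:<container>`, with an optional `/tcp` or `/udp` suffix. A standalone sketch of that parser, mirroring the logic in the diff (the `parsePortMapping` name and result shape are illustrative, not from the PR):

```typescript
interface ParsedPortMapping {
  hostPort?: number
  containerPort: number
  protocol: string
}

// Parse "80", "8080:80", "8080:80/udp", etc., validating the 1-65535 range.
function parsePortMapping(definition: string): ParsedPortMapping {
  const portProtoSplit = definition.split('/')
  if (portProtoSplit.length > 2) {
    throw new Error(`Unexpected port format: ${definition}`)
  }
  const protocol =
    portProtoSplit.length === 2 ? portProtoSplit[1].toUpperCase() : 'TCP'
  const portSplit = portProtoSplit[0].split(':')
  if (portSplit.length > 2) {
    throw new Error('ports should have at most one ":" separator')
  }
  const parsePort = (p: string): number => {
    const num = Number(p)
    if (!Number.isInteger(num) || num < 1 || num > 65535) {
      throw new Error(`invalid container port: ${p}`)
    }
    return num
  }
  if (portSplit.length === 1) {
    // single value: container port only
    return { containerPort: parsePort(portSplit[0]), protocol }
  }
  return {
    hostPort: parsePort(portSplit[0]),
    containerPort: parsePort(portSplit[1]),
    protocol
  }
}
```

Unlike the regex it replaces, this split-based version rejects out-of-range ports and malformed separators instead of silently matching a prefix.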
export async function getPodByName(name): Promise<k8s.V1Pod> {
return await k8sApi.readNamespacedPod({
name,
namespace: namespace()
})
}


@@ -1,65 +1,295 @@
import * as k8s from '@kubernetes/client-node'
import * as fs from 'fs'
import * as yaml from 'js-yaml'
import * as core from '@actions/core'
import { v1 as uuidv4 } from 'uuid'
import { CONTAINER_EXTENSION_PREFIX } from '../hooks/constants'
import * as shlex from 'shlex'
import { Mount } from 'hooklib'

export const DEFAULT_CONTAINER_ENTRY_POINT_ARGS = [`-f`, `/dev/null`]
export const DEFAULT_CONTAINER_ENTRY_POINT = 'tail'
export const ENV_HOOK_TEMPLATE_PATH = 'ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE'
export const ENV_USE_KUBE_SCHEDULER = 'ACTIONS_RUNNER_USE_KUBE_SCHEDULER'

export const EXTERNALS_VOLUME_NAME = 'externals'
export const GITHUB_VOLUME_NAME = 'github'

export const CONTAINER_VOLUMES: k8s.V1VolumeMount[] = [
  {
    name: EXTERNALS_VOLUME_NAME,
    mountPath: '/__e'
  },
  {
    name: GITHUB_VOLUME_NAME,
    mountPath: '/github'
  }
]

export function prepareJobScript(userVolumeMounts: Mount[]): {
  containerPath: string
  runnerPath: string
} {
  let mountDirs = userVolumeMounts.map(m => m.targetVolumePath).join(' ')
  const content = `#!/bin/sh -l
set -e
cp -R /__w/_temp/_github_home /github/home
cp -R /__w/_temp/_github_workflow /github/workflow
mkdir -p ${mountDirs}
`
  const filename = `${uuidv4()}.sh`
  const entryPointPath = `${process.env.RUNNER_TEMP}/${filename}`
  fs.writeFileSync(entryPointPath, content)
  return {
    containerPath: `/__w/_temp/${filename}`,
    runnerPath: entryPointPath
  }
}

export function writeRunScript(
  workingDirectory: string,
  entryPoint: string,
  entryPointArgs?: string[],
  prependPath?: string[],
  environmentVariables?: { [key: string]: string }
): { containerPath: string; runnerPath: string } {
  let exportPath = ''
  if (prependPath?.length) {
    // TODO: remove compatibility with typeof prependPath === 'string' as we bump to next major version, the hooks will lose PrependPath compat with runners 2.293.0 and older
    const prepend =
      typeof prependPath === 'string' ? prependPath : prependPath.join(':')
    exportPath = `export PATH=${prepend}:$PATH`
  }
  let environmentPrefix = scriptEnv(environmentVariables)

  const content = `#!/bin/sh -l
set -e
rm "$0" # remove script after running
${exportPath}
cd ${workingDirectory} && \
exec ${environmentPrefix} ${entryPoint} ${
    entryPointArgs?.length ? entryPointArgs.join(' ') : ''
  }
`
  const filename = `${uuidv4()}.sh`
  const entryPointPath = `${process.env.RUNNER_TEMP}/${filename}`
  fs.writeFileSync(entryPointPath, content)
  return {
    containerPath: `/__w/_temp/${filename}`,
    runnerPath: entryPointPath
  }
}
export function writeContainerStepScript(
dst: string,
workingDirectory: string,
entryPoint: string,
entryPointArgs?: string[],
environmentVariables?: { [key: string]: string }
): { containerPath: string; runnerPath: string } {
let environmentPrefix = scriptEnv(environmentVariables)
const parts = workingDirectory.split('/').slice(-2)
if (parts.length !== 2) {
throw new Error(`Invalid working directory: ${workingDirectory}`)
}
const content = `#!/bin/sh -l
rm "$0" # remove script after running
mv /__w/_temp/_github_home /github/home && \
mv /__w/_temp/_github_workflow /github/workflow && \
mv /__w/_temp/_runner_file_commands /github/file_commands && \
mv /__w/${parts.join('/')}/ /github/workspace && \
cd /github/workspace && \
exec ${environmentPrefix} ${entryPoint} ${
entryPointArgs?.length ? entryPointArgs.join(' ') : ''
}
`
const filename = `${uuidv4()}.sh`
  const entryPointPath = `${dst}/${filename}`
core.debug(`Writing container step script to ${entryPointPath}`)
fs.writeFileSync(entryPointPath, content)
return {
    containerPath: `/__w/_temp/${filename}`,
runnerPath: entryPointPath
}
}
function scriptEnv(envs?: { [key: string]: string }): string {
if (!envs || !Object.entries(envs).length) {
return ''
}
const envBuffer: string[] = []
for (const [key, value] of Object.entries(envs)) {
if (
key.includes(`=`) ||
key.includes(`'`) ||
key.includes(`"`) ||
key.includes(`$`)
) {
throw new Error(
`environment key ${key} is invalid - the key must not contain =, $, ', or "`
      )
    }
    envBuffer.push(
      `"${key}=${value
        .replace(/\\/g, '\\\\')
        .replace(/"/g, '\\"')
        .replace(/\$/g, '\\$')
        .replace(/`/g, '\\`')}"`
    )
  }
  if (!envBuffer?.length) {
    return ''
  }
  return `env ${envBuffer.join(' ')} `
}
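The value escaping performed by `scriptEnv` can be seen in isolation with this small sketch (same replace chain, no file I/O; `scriptEnvSketch` is a hypothetical helper name, not part of the hook):

```typescript
// Mirrors scriptEnv's value escaping: backslash, double quote, dollar, and
// backtick are escaped so the generated shell script sees them literally.
function scriptEnvSketch(envs: { [key: string]: string }): string {
  const envBuffer: string[] = []
  for (const [key, value] of Object.entries(envs)) {
    envBuffer.push(
      `"${key}=${value
        .replace(/\\/g, '\\\\')
        .replace(/"/g, '\\"')
        .replace(/\$/g, '\\$')
        .replace(/`/g, '\\`')}"`
    )
  }
  return envBuffer.length ? `env ${envBuffer.join(' ')} ` : ''
}

// A value containing quotes and a dollar sign survives the round trip:
console.log(scriptEnvSketch({ GREETING: 'say "hi" for $1' }))
```

Note that only values are escaped; keys containing `=`, `$`, `'`, or `"` are rejected outright by `scriptEnv`, which keeps the `env KEY="VALUE"` prefix unambiguous.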
export function generateContainerName(image: string): string {
const nameWithTag = image.split('/').pop()
const name = nameWithTag?.split(':')[0]
if (!name) {
throw new Error(`Image definition '${image}' is invalid`)
}
return name
}
// Overwrite or append based on container options
//
// Keep in mind, envs and volumes could be passed as fields in container definition
// so default volume mounts and envs are appended first, and then create options are used
// to append more values
//
// Rest of the fields are just applied
// For example, container.createOptions.container.image is going to overwrite container.image field
export function mergeContainerWithOptions(
base: k8s.V1Container,
from: k8s.V1Container
): void {
for (const [key, value] of Object.entries(from)) {
if (key === 'name') {
if (value !== CONTAINER_EXTENSION_PREFIX + base.name) {
core.warning("Skipping name override: name can't be overwritten")
}
continue
} else if (key === 'image') {
core.warning("Skipping image override: image can't be overwritten")
continue
} else if (key === 'env') {
const envs = value as k8s.V1EnvVar[]
base.env = mergeLists(base.env, envs)
} else if (key === 'volumeMounts' && value) {
const volumeMounts = value as k8s.V1VolumeMount[]
base.volumeMounts = mergeLists(base.volumeMounts, volumeMounts)
} else if (key === 'ports' && value) {
const ports = value as k8s.V1ContainerPort[]
base.ports = mergeLists(base.ports, ports)
} else {
base[key] = value
}
}
}
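The precedence rules spelled out in the comment above (append list fields, protect `name`/`image`, overwrite the rest) can be sketched with plain objects standing in for `k8s.V1Container`; `mergeSketch` and `SimpleContainer` are illustrative names only:

```typescript
type SimpleContainer = {
  name: string
  image: string
  env?: { name: string; value: string }[]
  [key: string]: unknown
}

// Append list fields, skip protected fields, overwrite everything else.
function mergeSketch(base: SimpleContainer, from: Partial<SimpleContainer>): void {
  for (const [key, value] of Object.entries(from)) {
    if (key === 'name' || key === 'image') {
      continue // protected fields are never overwritten
    } else if (key === 'env') {
      base.env = [...(base.env ?? []), ...(value as { name: string; value: string }[])]
    } else {
      base[key] = value
    }
  }
}

const base: SimpleContainer = {
  name: 'job',
  image: 'node:22',
  env: [{ name: 'A', value: '1' }]
}
mergeSketch(base, {
  image: 'ubuntu:latest', // ignored: image can't be overwritten
  env: [{ name: 'B', value: '2' }], // appended after defaults
  workingDir: '/tmp' // plain field: overwrites
})
console.log(base.image, base.env!.length, base.workingDir)
```

The real `mergeContainerWithOptions` additionally warns via `core.warning` when a skipped override is attempted, and accepts a `name` only when it matches the extension-prefixed container name.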
export function mergePodSpecWithOptions(
base: k8s.V1PodSpec,
from: k8s.V1PodSpec
): void {
for (const [key, value] of Object.entries(from)) {
if (key === 'containers') {
base.containers.push(
...from.containers.filter(
e => !e.name?.startsWith(CONTAINER_EXTENSION_PREFIX)
)
)
} else if (key === 'volumes' && value) {
const volumes = value as k8s.V1Volume[]
base.volumes = mergeLists(base.volumes, volumes)
} else {
base[key] = value
}
}
}
export function mergeObjectMeta(
base: { metadata?: k8s.V1ObjectMeta },
from: k8s.V1ObjectMeta
): void {
if (!base.metadata?.labels || !base.metadata?.annotations) {
throw new Error(
"Can't merge metadata: base.metadata or base.annotations field is undefined"
)
}
if (from?.labels) {
for (const [key, value] of Object.entries(from.labels)) {
if (base.metadata?.labels?.[key]) {
core.warning(`Label ${key} is already defined and will be overwritten`)
}
base.metadata.labels[key] = value
}
}
if (from?.annotations) {
for (const [key, value] of Object.entries(from.annotations)) {
if (base.metadata?.annotations?.[key]) {
core.warning(
`Annotation ${key} is already defined and will be overwritten`
)
}
base.metadata.annotations[key] = value
}
}
}
export function readExtensionFromFile(): k8s.V1PodTemplateSpec | undefined {
const filePath = process.env[ENV_HOOK_TEMPLATE_PATH]
if (!filePath) {
return undefined
}
const doc = yaml.load(fs.readFileSync(filePath, 'utf8'))
if (!doc || typeof doc !== 'object') {
throw new Error(`Failed to parse ${filePath}`)
}
return doc as k8s.V1PodTemplateSpec
}
export function useKubeScheduler(): boolean {
return process.env[ENV_USE_KUBE_SCHEDULER] === 'true'
}
export enum PodPhase {
PENDING = 'Pending',
RUNNING = 'Running',
SUCCEEDED = 'Succeeded',
FAILED = 'Failed',
UNKNOWN = 'Unknown',
COMPLETED = 'Completed'
}
function mergeLists<T>(base?: T[], from?: T[]): T[] {
const b: T[] = base || []
if (!from?.length) {
return b
}
b.push(...from)
return b
}
export function fixArgs(args: string[]): string[] {
return shlex.split(args.join(' '))
}
export async function sleep(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms))
}
export function listDirAllCommand(dir: string): string {
return `cd ${shlex.quote(dir)} && find . -not -path '*/_runner_hook_responses*' -exec stat -c '%b %n' {} \\;`
}

View File

@@ -1,31 +1,61 @@
import * as k8s from '@kubernetes/client-node'
import { cleanupJob, prepareJob } from '../src/hooks'
import { RunnerInstanceLabel } from '../src/hooks/constants'
import { namespace } from '../src/k8s'
import { TestHelper } from './test-setup'
import { PrepareJobArgs } from 'hooklib'

let testHelper: TestHelper

describe('Cleanup Job', () => {
  beforeEach(async () => {
    testHelper = new TestHelper()
    await testHelper.initialize()
    let prepareJobData = testHelper.getPrepareJobDefinition()
    const prepareJobOutputFilePath = testHelper.createFile(
      'prepare-job-output.json'
    )
    await prepareJob(
      prepareJobData.args as PrepareJobArgs,
      prepareJobOutputFilePath
    )
  })

  afterEach(async () => {
    await testHelper.cleanup()
  })

  it('should not throw', async () => {
    await expect(cleanupJob()).resolves.not.toThrow()
  })
it('should have no runner linked pods running', async () => {
await cleanupJob()
const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
const podList = await k8sApi.listNamespacedPod({
namespace: namespace(),
labelSelector: new RunnerInstanceLabel().toString()
})
expect(podList.items.length).toBe(0)
})
it('should have no runner linked secrets', async () => {
await cleanupJob()
const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
const secretList = await k8sApi.listNamespacedSecret({
namespace: namespace(),
labelSelector: new RunnerInstanceLabel().toString()
})
expect(secretList.items.length).toBe(0)
})
})

View File

@@ -0,0 +1,182 @@
import {
getJobPodName,
getRunnerPodName,
getSecretName,
getStepPodName,
getVolumeClaimName,
JOB_CONTAINER_NAME,
MAX_POD_NAME_LENGTH,
RunnerInstanceLabel,
STEP_POD_NAME_SUFFIX_LENGTH
} from '../src/hooks/constants'
describe('constants', () => {
describe('runner instance label', () => {
beforeEach(() => {
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
})
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => new RunnerInstanceLabel()).toThrow()
})
it('should have key truthy', () => {
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(typeof runnerInstanceLabel.key).toBe('string')
expect(runnerInstanceLabel.key).toBeTruthy()
expect(runnerInstanceLabel.key.length).toBeGreaterThan(0)
})
it('should have value as runner pod name', () => {
const name = process.env.ACTIONS_RUNNER_POD_NAME as string
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(typeof runnerInstanceLabel.value).toBe('string')
expect(runnerInstanceLabel.value).toBe(name)
})
it('should have toString combination of key and value', () => {
const runnerInstanceLabel = new RunnerInstanceLabel()
expect(runnerInstanceLabel.toString()).toBe(
`${runnerInstanceLabel.key}=${runnerInstanceLabel.value}`
)
})
})
describe('getRunnerPodName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getRunnerPodName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getRunnerPodName()).toThrow()
})
    it('should return correct ACTIONS_RUNNER_POD_NAME name', () => {
const name = 'example'
process.env.ACTIONS_RUNNER_POD_NAME = name
expect(getRunnerPodName()).toBe(name)
})
})
describe('getJobPodName', () => {
it('should throw on getJobPodName if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getJobPodName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
      expect(() => getJobPodName()).toThrow()
})
it('should contain suffix -workflow', () => {
const tableTests = [
{
podName: 'test',
expect: 'test-workflow'
},
{
// podName.length == 63
podName:
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
expect:
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-workflow'
}
]
for (const tt of tableTests) {
process.env.ACTIONS_RUNNER_POD_NAME = tt.podName
const actual = getJobPodName()
expect(actual).toBe(tt.expect)
}
})
})
describe('getVolumeClaimName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_CLAIM_NAME
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getVolumeClaimName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getVolumeClaimName()).toThrow()
})
it('should return ACTIONS_RUNNER_CLAIM_NAME env if set', () => {
const claimName = 'testclaim'
process.env.ACTIONS_RUNNER_CLAIM_NAME = claimName
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
expect(getVolumeClaimName()).toBe(claimName)
})
it('should contain suffix -work if ACTIONS_RUNNER_CLAIM_NAME is not set', () => {
delete process.env.ACTIONS_RUNNER_CLAIM_NAME
process.env.ACTIONS_RUNNER_POD_NAME = 'example'
expect(getVolumeClaimName()).toBe('example-work')
})
})
describe('getSecretName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getSecretName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getSecretName()).toThrow()
})
it('should contain suffix -secret- and name trimmed', () => {
const podNames = [
'test',
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
]
for (const podName of podNames) {
process.env.ACTIONS_RUNNER_POD_NAME = podName
const actual = getSecretName()
const re = new RegExp(
`${podName.substring(
MAX_POD_NAME_LENGTH -
'-secret-'.length -
STEP_POD_NAME_SUFFIX_LENGTH
)}-secret-[a-z0-9]{8,}`
)
expect(actual).toMatch(re)
}
})
})
describe('getStepPodName', () => {
it('should throw if ACTIONS_RUNNER_POD_NAME env is not set', () => {
delete process.env.ACTIONS_RUNNER_POD_NAME
expect(() => getStepPodName()).toThrow()
process.env.ACTIONS_RUNNER_POD_NAME = ''
expect(() => getStepPodName()).toThrow()
})
it('should contain suffix -step- and name trimmed', () => {
const podNames = [
'test',
'abcdaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
]
for (const podName of podNames) {
process.env.ACTIONS_RUNNER_POD_NAME = podName
const actual = getStepPodName()
const re = new RegExp(
`${podName.substring(
MAX_POD_NAME_LENGTH - '-step-'.length - STEP_POD_NAME_SUFFIX_LENGTH
)}-step-[a-z0-9]{8,}`
)
expect(actual).toMatch(re)
}
})
})
describe('const values', () => {
it('should have constants set', () => {
expect(JOB_CONTAINER_NAME).toBeTruthy()
expect(MAX_POD_NAME_LENGTH).toBeGreaterThan(0)
expect(STEP_POD_NAME_SUFFIX_LENGTH).toBeGreaterThan(0)
})
})
})

View File

@@ -1,64 +1,52 @@
import * as fs from 'fs'
import {
  cleanupJob,
  prepareJob,
  runContainerStep,
  runScriptStep
} from '../src/hooks'
import { TestHelper } from './test-setup'
import { RunContainerStepArgs, RunScriptStepArgs } from 'hooklib'

jest.useRealTimers()

let testHelper: TestHelper
let prepareJobData: any
let prepareJobOutputFilePath: string

describe('e2e', () => {
  beforeEach(async () => {
    testHelper = new TestHelper()
    await testHelper.initialize()
    prepareJobData = testHelper.getPrepareJobDefinition()
    prepareJobOutputFilePath = testHelper.createFile('prepare-job-output.json')
  })

  afterEach(async () => {
    await testHelper.cleanup()
  })

  it('should prepare job, run script step, run container step then cleanup without errors', async () => {
    await expect(
      prepareJob(prepareJobData.args, prepareJobOutputFilePath)
    ).resolves.not.toThrow()

    const scriptStepData = testHelper.getRunScriptStepDefinition()
    const prepareJobOutputJson = fs.readFileSync(prepareJobOutputFilePath)
    const prepareJobOutputData = JSON.parse(prepareJobOutputJson.toString())
    await expect(
      runScriptStep(
        scriptStepData.args as RunScriptStepArgs,
        prepareJobOutputData.state
      )
    ).resolves.not.toThrow()

    const runContainerStepData = testHelper.getRunContainerStepDefinition()
    await expect(
      runContainerStep(runContainerStepData.args as RunContainerStepArgs)
    ).resolves.not.toThrow()

    await expect(cleanupJob()).resolves.not.toThrow()

View File

@@ -0,0 +1,409 @@
import * as fs from 'fs'
import { containerPorts } from '../src/k8s'
import {
generateContainerName,
writeRunScript,
mergePodSpecWithOptions,
mergeContainerWithOptions,
readExtensionFromFile,
ENV_HOOK_TEMPLATE_PATH
} from '../src/k8s/utils'
import * as k8s from '@kubernetes/client-node'
import { TestHelper } from './test-setup'
let testHelper: TestHelper
describe('k8s utils', () => {
describe('write entrypoint', () => {
beforeEach(async () => {
testHelper = new TestHelper()
await testHelper.initialize()
})
afterEach(async () => {
await testHelper.cleanup()
})
it('should not throw', () => {
expect(() =>
writeRunScript('/test', 'sh', ['-e', 'script.sh'], ['/prepend/path'], {
SOME_ENV: 'SOME_VALUE'
})
).not.toThrow()
})
it('should throw if RUNNER_TEMP is not set', () => {
delete process.env.RUNNER_TEMP
expect(() =>
writeRunScript('/test', 'sh', ['-e', 'script.sh'], ['/prepend/path'], {
SOME_ENV: 'SOME_VALUE'
})
).toThrow()
})
it('should throw if environment variable name contains double quote', () => {
expect(() =>
writeRunScript('/test', 'sh', ['-e', 'script.sh'], ['/prepend/path'], {
'SOME"_ENV': 'SOME_VALUE'
})
).toThrow()
})
it('should throw if environment variable name contains =', () => {
expect(() =>
writeRunScript('/test', 'sh', ['-e', 'script.sh'], ['/prepend/path'], {
'SOME=ENV': 'SOME_VALUE'
})
).toThrow()
})
it('should throw if environment variable name contains single quote', () => {
expect(() =>
writeRunScript('/test', 'sh', ['-e', 'script.sh'], ['/prepend/path'], {
"SOME'_ENV": 'SOME_VALUE'
})
).toThrow()
})
it('should throw if environment variable name contains dollar', () => {
expect(() =>
writeRunScript('/test', 'sh', ['-e', 'script.sh'], ['/prepend/path'], {
SOME_$_ENV: 'SOME_VALUE'
})
).toThrow()
})
it('should escape double quote, dollar and backslash in environment variable values', () => {
const { runnerPath } = writeRunScript(
'/test',
'sh',
['-e', 'script.sh'],
['/prepend/path'],
{
DQUOTE: '"',
BACK_SLASH: '\\',
DOLLAR: '$'
}
)
expect(fs.existsSync(runnerPath)).toBe(true)
const script = fs.readFileSync(runnerPath, 'utf8')
expect(script).toContain('"DQUOTE=\\"')
expect(script).toContain('"BACK_SLASH=\\\\"')
expect(script).toContain('"DOLLAR=\\$"')
})
it('should return object with containerPath and runnerPath', () => {
const { containerPath, runnerPath } = writeRunScript(
'/test',
'sh',
['-e', 'script.sh'],
['/prepend/path'],
{
SOME_ENV: 'SOME_VALUE'
}
)
expect(containerPath).toMatch(/\/__w\/_temp\/.*\.sh/)
const re = new RegExp(`${process.env.RUNNER_TEMP}/.*\\.sh`)
expect(runnerPath).toMatch(re)
})
it('should write entrypoint path and the file should exist', () => {
const { runnerPath } = writeRunScript(
'/test',
'sh',
['-e', 'script.sh'],
['/prepend/path'],
{
SOME_ENV: 'SOME_VALUE'
}
)
expect(fs.existsSync(runnerPath)).toBe(true)
})
})
  describe('container ports', () => {
beforeEach(async () => {
testHelper = new TestHelper()
await testHelper.initialize()
})
afterEach(async () => {
await testHelper.cleanup()
})
it('should parse container ports', () => {
const tt = [
{
spec: '8080:80',
want: {
containerPort: 80,
hostPort: 8080,
protocol: 'TCP'
}
},
{
spec: '8080:80/udp',
want: {
containerPort: 80,
hostPort: 8080,
protocol: 'UDP'
}
},
{
spec: '8080/udp',
want: {
containerPort: 8080,
hostPort: undefined,
protocol: 'UDP'
}
},
{
spec: '8080',
want: {
containerPort: 8080,
hostPort: undefined,
protocol: 'TCP'
}
}
]
for (const tc of tt) {
const got = containerPorts({ portMappings: [tc.spec] })
for (const [key, value] of Object.entries(tc.want)) {
expect(got[0][key]).toBe(value)
}
}
})
it('should throw when ports are out of range (0, 65536)', () => {
expect(() => containerPorts({ portMappings: ['65536'] })).toThrow()
expect(() => containerPorts({ portMappings: ['0'] })).toThrow()
expect(() => containerPorts({ portMappings: ['65536/udp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['0/udp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:65536'] })).toThrow()
expect(() => containerPorts({ portMappings: ['65536:1'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:65536/tcp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['65536:1/tcp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:'] })).toThrow()
expect(() => containerPorts({ portMappings: [':1'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1:/tcp'] })).toThrow()
expect(() => containerPorts({ portMappings: [':1/tcp'] })).toThrow()
})
it('should throw on multi ":" splits', () => {
expect(() => containerPorts({ portMappings: ['1:1:1'] })).toThrow()
})
it('should throw on multi "/" splits', () => {
expect(() => containerPorts({ portMappings: ['1:1/tcp/udp'] })).toThrow()
expect(() => containerPorts({ portMappings: ['1/tcp/udp'] })).toThrow()
})
})
describe('generate container name', () => {
it('should return the container name from image string', () => {
expect(
generateContainerName('public.ecr.aws/localstack/localstack')
).toEqual('localstack')
expect(
generateContainerName(
'public.ecr.aws/url/with/multiple/slashes/postgres:latest'
)
).toEqual('postgres')
expect(generateContainerName('postgres')).toEqual('postgres')
expect(generateContainerName('postgres:latest')).toEqual('postgres')
expect(generateContainerName('localstack/localstack')).toEqual(
'localstack'
)
expect(generateContainerName('localstack/localstack:latest')).toEqual(
'localstack'
)
})
it('should throw on invalid image string', () => {
expect(() =>
generateContainerName('localstack/localstack/:latest')
).toThrow()
expect(() => generateContainerName(':latest')).toThrow()
})
})
describe('read extension', () => {
beforeEach(async () => {
testHelper = new TestHelper()
await testHelper.initialize()
})
afterEach(async () => {
await testHelper.cleanup()
})
it('should throw if env variable is set but file does not exist', () => {
process.env[ENV_HOOK_TEMPLATE_PATH] =
'/path/that/does/not/exist/data.yaml'
expect(() => readExtensionFromFile()).toThrow()
})
it('should return undefined if env variable is not set', () => {
delete process.env[ENV_HOOK_TEMPLATE_PATH]
expect(readExtensionFromFile()).toBeUndefined()
})
it('should throw if file is empty', () => {
let filePath = testHelper.createFile('data.yaml')
process.env[ENV_HOOK_TEMPLATE_PATH] = filePath
expect(() => readExtensionFromFile()).toThrow()
})
it('should throw if file is not valid yaml', () => {
let filePath = testHelper.createFile('data.yaml')
fs.writeFileSync(filePath, 'invalid yaml')
process.env[ENV_HOOK_TEMPLATE_PATH] = filePath
expect(() => readExtensionFromFile()).toThrow()
})
it('should return object if file is valid', () => {
let filePath = testHelper.createFile('data.yaml')
fs.writeFileSync(
filePath,
`
metadata:
labels:
label-name: label-value
annotations:
annotation-name: annotation-value
spec:
containers:
- name: test
image: node:22
- name: job
image: ubuntu:latest`
)
process.env[ENV_HOOK_TEMPLATE_PATH] = filePath
const extension = readExtensionFromFile()
expect(extension).toBeDefined()
})
})
it('should merge container spec', () => {
const base = {
image: 'node:22',
name: 'test',
env: [
{
name: 'TEST',
value: 'TEST'
}
],
ports: [
{
containerPort: 8080,
hostPort: 8080,
protocol: 'TCP'
}
]
} as k8s.V1Container
const from = {
ports: [
{
containerPort: 9090,
hostPort: 9090,
protocol: 'TCP'
}
],
env: [
{
name: 'TEST_TWO',
value: 'TEST_TWO'
}
],
image: 'ubuntu:latest',
name: 'overwrite'
} as k8s.V1Container
const expectContainer = {
name: base.name,
image: base.image,
ports: [
...(base.ports as k8s.V1ContainerPort[]),
...(from.ports as k8s.V1ContainerPort[])
],
env: [...(base.env as k8s.V1EnvVar[]), ...(from.env as k8s.V1EnvVar[])]
}
const expectJobContainer = JSON.parse(JSON.stringify(expectContainer))
expectJobContainer.name = base.name
mergeContainerWithOptions(base, from)
expect(base).toStrictEqual(expectContainer)
})
it('should merge pod spec', () => {
const base = {
containers: [
{
image: 'node:22',
name: 'test',
env: [
{
name: 'TEST',
value: 'TEST'
}
],
ports: [
{
containerPort: 8080,
hostPort: 8080,
protocol: 'TCP'
}
]
}
],
restartPolicy: 'Never'
} as k8s.V1PodSpec
const from = {
securityContext: {
runAsUser: 1000,
fsGroup: 2000
},
restartPolicy: 'Always',
volumes: [
{
name: 'work',
emptyDir: {}
}
],
containers: [
{
image: 'ubuntu:latest',
name: 'side-car',
env: [
{
name: 'TEST',
value: 'TEST'
}
],
ports: [
{
containerPort: 8080,
hostPort: 8080,
protocol: 'TCP'
}
]
}
]
} as k8s.V1PodSpec
const expected = JSON.parse(JSON.stringify(base))
expected.securityContext = from.securityContext
expected.restartPolicy = from.restartPolicy
expected.volumes = from.volumes
expected.containers.push(from.containers[0])
mergePodSpecWithOptions(base, from)
expect(base).toStrictEqual(expected)
})
})

View File

@@ -1,36 +1,31 @@
import * as fs from 'fs'
import * as path from 'path'
import { cleanupJob } from '../src/hooks'
import { createContainerSpec, prepareJob } from '../src/hooks/prepare-job'
import { TestHelper } from './test-setup'
import { ENV_HOOK_TEMPLATE_PATH, generateContainerName } from '../src/k8s/utils'
import { execPodStep, getPodByName } from '../src/k8s'
import { V1Container } from '@kubernetes/client-node'
import { JOB_CONTAINER_NAME } from '../src/hooks/constants'

jest.useRealTimers()

let testHelper: TestHelper
let prepareJobData: any
let prepareJobOutputFilePath: string

describe('Prepare job', () => {
  beforeEach(async () => {
    testHelper = new TestHelper()
    await testHelper.initialize()
    prepareJobData = testHelper.getPrepareJobDefinition()
    prepareJobOutputFilePath = testHelper.createFile('prepare-job-output.json')
  })

  afterEach(async () => {
    await cleanupJob()
    await testHelper.cleanup()
  })

  it('should not throw exception', async () => {
@@ -44,4 +39,196 @@ describe('Prepare job', () => {
    const content = fs.readFileSync(prepareJobOutputFilePath)
    expect(() => JSON.parse(content.toString())).not.toThrow()
  })
it('should prepare job with absolute path for userVolumeMount', async () => {
const userVolumeMount = path.join(
process.env.GITHUB_WORKSPACE as string,
'myvolume'
)
fs.mkdirSync(userVolumeMount)
fs.writeFileSync(path.join(userVolumeMount, 'file.txt'), 'hello')
prepareJobData.args.container.userMountVolumes = [
{
sourceVolumePath: userVolumeMount,
targetVolumePath: '/__w/myvolume',
readOnly: false
}
]
await expect(
prepareJob(prepareJobData.args, prepareJobOutputFilePath)
).resolves.not.toThrow()
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
await execPodStep(
[
'sh',
'-c',
'\'[ "$(cat /__w/myvolume/file.txt)" = "hello" ] || exit 5\''
],
content!.state!.jobPod,
JOB_CONTAINER_NAME
).then(output => {
expect(output).toBe(0)
})
})
it('should prepare job with envs CI and GITHUB_ACTIONS', async () => {
await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
const got = await getPodByName(content.state.jobPod)
expect(got.spec?.containers[0].env).toEqual(
expect.arrayContaining([
{ name: 'CI', value: 'true' },
{ name: 'GITHUB_ACTIONS', value: 'true' }
])
)
expect(got.spec?.containers[1].env).toEqual(
expect.arrayContaining([
{ name: 'CI', value: 'true' },
{ name: 'GITHUB_ACTIONS', value: 'true' }
])
)
})
it('should not override CI env var if already set', async () => {
prepareJobData.args.container.environmentVariables = {
CI: 'false'
}
await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
const got = await getPodByName(content.state.jobPod)
expect(got.spec?.containers[0].env).toEqual(
expect.arrayContaining([
{ name: 'CI', value: 'false' },
{ name: 'GITHUB_ACTIONS', value: 'true' }
])
)
expect(got.spec?.containers[1].env).toEqual(
expect.arrayContaining([
{ name: 'CI', value: 'true' },
{ name: 'GITHUB_ACTIONS', value: 'true' }
])
)
})
it('should not run prepare job without the job container', async () => {
prepareJobData.args.container = undefined
await expect(
prepareJob(prepareJobData.args, prepareJobOutputFilePath)
).rejects.toThrow()
})
it('should not set command + args for service container if not passed in args', async () => {
const services = prepareJobData.args.services.map(service => {
return createContainerSpec(service, generateContainerName(service.image))
}) as [V1Container]
expect(services[0].command).toBe(undefined)
expect(services[0].args).toBe(undefined)
})
it('should determine alpine correctly', async () => {
prepareJobData.args.container.image = 'alpine:latest'
await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
expect(content.isAlpine).toBe(true)
})
it('should run pod with extensions applied', async () => {
process.env[ENV_HOOK_TEMPLATE_PATH] = path.join(
__dirname,
'../../../examples/extension.yaml'
)
await expect(
prepareJob(prepareJobData.args, prepareJobOutputFilePath)
).resolves.not.toThrow()
delete process.env[ENV_HOOK_TEMPLATE_PATH]
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
const got = await getPodByName(content.state.jobPod)
expect(got.metadata?.annotations?.['annotated-by']).toBe('extension')
expect(got.metadata?.labels?.['labeled-by']).toBe('extension')
expect(got.spec?.restartPolicy).toBe('Never')
// job container
expect(got.spec?.containers[0].name).toBe(JOB_CONTAINER_NAME)
expect(got.spec?.containers[0].image).toBe('node:22')
expect(got.spec?.containers[0].command).toEqual(['sh'])
expect(got.spec?.containers[0].args).toEqual(['-c', 'sleep 50'])
// service container
expect(got.spec?.containers[1].image).toBe('redis')
expect(got.spec?.containers[1].command).toBeFalsy()
expect(got.spec?.containers[1].args).toBeFalsy()
expect(got.spec?.containers[1].env).toEqual(
expect.arrayContaining([
{ name: 'CI', value: 'true' },
{ name: 'GITHUB_ACTIONS', value: 'true' },
{ name: 'ENV2', value: 'value2' }
])
)
expect(got.spec?.containers[1].resources).toEqual({
requests: { memory: '1Mi', cpu: '1' },
limits: { memory: '1Gi', cpu: '2' }
})
// side-car
expect(got.spec?.containers[2].name).toBe('side-car')
expect(got.spec?.containers[2].image).toBe('ubuntu:latest')
expect(got.spec?.containers[2].command).toEqual(['sh'])
expect(got.spec?.containers[2].args).toEqual(['-c', 'sleep 60'])
})
it('should put only job and services in output context file', async () => {
process.env[ENV_HOOK_TEMPLATE_PATH] = path.join(
__dirname,
'../../../examples/extension.yaml'
)
await expect(
prepareJob(prepareJobData.args, prepareJobOutputFilePath)
).resolves.not.toThrow()
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
expect(content.state.jobPod).toBeTruthy()
expect(content.context.container).toBeTruthy()
expect(content.context.services).toBeTruthy()
expect(content.context.services.length).toBe(1)
})
test.each([undefined, null, []])(
'should not throw exception when portMapping=%p',
async pm => {
prepareJobData.args.services.forEach(s => {
s.portMappings = pm
})
await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
const content = JSON.parse(
fs.readFileSync(prepareJobOutputFilePath).toString()
)
expect(() => content.context.services[0].image).not.toThrow()
}
)
})


@@ -1,25 +1,86 @@
import { prepareJob, runContainerStep } from '../src/hooks'
import { TestHelper } from './test-setup'
import { ENV_HOOK_TEMPLATE_PATH } from '../src/k8s/utils'
import * as fs from 'fs'
import * as yaml from 'js-yaml'
import { JOB_CONTAINER_EXTENSION_NAME } from '../src/hooks/constants'
jest.useRealTimers()
let testHelper: TestHelper
let runContainerStepData: any
let prepareJobData: any
let prepareJobOutputFilePath: string
describe('Run container step', () => {
beforeEach(async () => {
testHelper = new TestHelper()
await testHelper.initialize()
prepareJobData = testHelper.getPrepareJobDefinition()
prepareJobOutputFilePath = testHelper.createFile('prepare-job-output.json')
await prepareJob(prepareJobData.args, prepareJobOutputFilePath)
runContainerStepData = testHelper.getRunContainerStepDefinition()
})
afterEach(async () => {
await testHelper.cleanup()
})
it('should run pod with extensions applied', async () => {
const extension = {
metadata: {
annotations: {
foo: 'bar'
},
labels: {
bar: 'baz'
}
},
spec: {
containers: [
{
name: JOB_CONTAINER_EXTENSION_NAME,
command: ['sh'],
args: ['-c', 'sleep 10000']
},
{
name: 'side-container',
image: 'ubuntu:latest',
command: ['sh'],
args: ['-c', 'echo test']
}
],
restartPolicy: 'Never'
}
}
let filePath = testHelper.createFile()
fs.writeFileSync(filePath, yaml.dump(extension))
process.env[ENV_HOOK_TEMPLATE_PATH] = filePath
await expect(
runContainerStep(runContainerStepData.args)
).resolves.not.toThrow()
delete process.env[ENV_HOOK_TEMPLATE_PATH]
})
it('should have env variables available', async () => {
runContainerStepData.args.entryPoint = 'bash'
runContainerStepData.args.entryPointArgs = [
'-c',
"'if [[ -z $NODE_ENV ]]; then exit 1; fi'"
]
await expect(
runContainerStep(runContainerStepData.args)
).resolves.not.toThrow()
})
it('should run container step with envs CI and GITHUB_ACTIONS', async () => {
runContainerStepData.args.entryPoint = 'bash'
runContainerStepData.args.entryPointArgs = [
'-c',
"'if [[ -z $GITHUB_ACTIONS ]] || [[ -z $CI ]]; then exit 1; fi'"
]
await expect(
runContainerStep(runContainerStepData.args)
).resolves.not.toThrow()


@@ -1,39 +1,42 @@
import * as fs from 'fs'
import { cleanupJob, prepareJob, runScriptStep } from '../src/hooks'
import { TestHelper } from './test-setup'
import { PrepareJobArgs, RunScriptStepArgs } from 'hooklib'
jest.useRealTimers()
let testHelper: TestHelper
let prepareJobOutputData: any
let runScriptStepDefinition: {
args: RunScriptStepArgs
}
describe('Run script step', () => {
beforeEach(async () => {
testHelper = new TestHelper()
await testHelper.initialize()
const prepareJobOutputFilePath = testHelper.createFile(
'prepare-job-output.json'
)
const prepareJobData = testHelper.getPrepareJobDefinition()
runScriptStepDefinition = testHelper.getRunScriptStepDefinition() as {
args: RunScriptStepArgs
}
await prepareJob(
prepareJobData.args as PrepareJobArgs,
prepareJobOutputFilePath
)
const outputContent = fs.readFileSync(prepareJobOutputFilePath)
prepareJobOutputData = JSON.parse(outputContent.toString())
})
afterEach(async () => {
await cleanupJob()
await testHelper.cleanup()
})
// NOTE: To use this test, do kubectl apply -f podspec.yaml (from podspec examples)
@@ -41,21 +44,73 @@ describe('Run script step', () => {
// npm run test run-script-step
it('should not throw an exception', async () => {
await expect(
runScriptStep(runScriptStepDefinition.args, prepareJobOutputData.state)
).resolves.not.toThrow()
})
it('should fail if the working directory does not exist', async () => {
runScriptStepDefinition.args.workingDirectory = '/foo/bar'
await expect(
runScriptStep(runScriptStepDefinition.args, prepareJobOutputData.state)
).rejects.toThrow()
})
it('should have env variables available', async () => {
runScriptStepDefinition.args.entryPoint = 'bash'
runScriptStepDefinition.args.entryPointArgs = [
'-c',
"'if [[ -z $NODE_ENV ]]; then exit 1; fi'"
]
await expect(
runScriptStep(runScriptStepDefinition.args, prepareJobOutputData.state)
).resolves.not.toThrow()
})
it('Should have path variable changed in container with prepend path string', async () => {
runScriptStepDefinition.args.prependPath = ['/some/path']
runScriptStepDefinition.args.entryPoint = '/bin/bash'
runScriptStepDefinition.args.entryPointArgs = [
'-c',
`'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath}:"* ]]; then exit 1; fi'`
]
await expect(
runScriptStep(runScriptStepDefinition.args, prepareJobOutputData.state)
).resolves.not.toThrow()
})
it('Dollar symbols in environment variables should not be expanded', async () => {
runScriptStepDefinition.args.environmentVariables = {
VARIABLE1: '$VAR',
VARIABLE2: '${VAR}',
VARIABLE3: '$(VAR)'
}
runScriptStepDefinition.args.entryPointArgs = [
'-c',
'\'if [[ -z "$VARIABLE1" ]]; then exit 1; fi\'',
'\'if [[ -z "$VARIABLE2" ]]; then exit 2; fi\'',
'\'if [[ -z "$VARIABLE3" ]]; then exit 3; fi\''
]
await expect(
runScriptStep(runScriptStepDefinition.args, prepareJobOutputData.state)
).resolves.not.toThrow()
})
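The test above depends on the hook passing environment values through verbatim rather than letting the shell expand them. The underlying quoting behavior can be sketched locally (this is only the shell rule the assertion relies on, not the hook's implementation):

```shell
# Single quotes keep dollar expressions literal; double quotes expand them.
VAR=expanded
literal='$VAR'
expanded_val="$VAR"
[ "$literal" = '$VAR' ] || exit 1
[ "$expanded_val" = 'expanded' ] || exit 2
echo "dollar symbols preserved"
```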
it('Should have path variable changed in container with prepend path string array', async () => {
runScriptStepDefinition.args.prependPath = ['/some/other/path']
runScriptStepDefinition.args.entryPoint = '/bin/bash'
runScriptStepDefinition.args.entryPointArgs = [
'-c',
`'if [[ ! $(env | grep "^PATH=") = "PATH=${runScriptStepDefinition.args.prependPath.join(
':'
)}:"* ]]; then exit 1; fi'`
]
await expect(
runScriptStep(runScriptStepDefinition.args, prepareJobOutputData.state)
).resolves.not.toThrow()
})
})


@@ -0,0 +1,18 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
# add a mount from /path/to/my/files on the host to /files on the node
extraMounts:
- hostPath: {{PATHTOREPO}}
containerPath: {{PATHTOREPO}}
# optional: if set, the mount is read-only.
# default false
readOnly: false
# optional: if set, the mount needs SELinux relabeling.
# default false
selinuxRelabel: false
# optional: set propagation mode (None, HostToContainer or Bidirectional)
# see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
# default None
propagation: None
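Before the config above can be used, the `{{PATHTOREPO}}` placeholder has to be replaced with the absolute path of the checked-out repository so kind can mount it into the node. A minimal sketch of that rendering step (the file names `kind-template.yaml` and `kind-config.yaml` are assumptions):

```shell
# Write a trimmed copy of the template shown above, then substitute the
# {{PATHTOREPO}} placeholder with the repository's absolute path.
cat > kind-template.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: {{PATHTOREPO}}
    containerPath: {{PATHTOREPO}}
EOF
sed "s|{{PATHTOREPO}}|$(pwd)|g" kind-template.yaml > kind-config.yaml
grep 'hostPath:' kind-config.yaml
```

The rendered file can then be passed to `kind create cluster --config kind-config.yaml`.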


@@ -1,28 +1,165 @@
import * as k8s from '@kubernetes/client-node'
import * as fs from 'fs'
import { HookData } from 'hooklib/lib'
import * as path from 'path'
import { v4 as uuidv4 } from 'uuid'
const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CoreV1Api)
export class TestHelper {
private tempDirPath: string
private podName: string
private runnerWorkdir: string
private runnerTemp: string
constructor() {
this.tempDirPath = `${__dirname}/_temp/runner`
this.runnerWorkdir = `${this.tempDirPath}/_work`
this.runnerTemp = `${this.tempDirPath}/_work/_temp`
this.podName = uuidv4().replace(/-/g, '')
}
async initialize(): Promise<void> {
process.env['ACTIONS_RUNNER_POD_NAME'] = `${this.podName}`
process.env['RUNNER_WORKSPACE'] = `${this.runnerWorkdir}/repo`
process.env['RUNNER_TEMP'] = `${this.runnerTemp}`
process.env['GITHUB_WORKSPACE'] = `${this.runnerWorkdir}/repo/repo`
process.env['ACTIONS_RUNNER_KUBERNETES_NAMESPACE'] = 'default'
fs.mkdirSync(`${this.runnerWorkdir}/repo/repo`, { recursive: true })
fs.mkdirSync(`${this.tempDirPath}/externals`, { recursive: true })
fs.mkdirSync(this.runnerTemp, { recursive: true })
fs.mkdirSync(`${this.runnerTemp}/_github_workflow`, { recursive: true })
fs.mkdirSync(`${this.runnerTemp}/_github_home`, { recursive: true })
fs.mkdirSync(`${this.runnerTemp}/_runner_file_commands`, {
recursive: true
})
fs.copyFileSync(
path.resolve(`${__dirname}/../../../examples/example-script.sh`),
`${this.runnerTemp}/example-script.sh`
)
await this.cleanupK8sResources()
try {
await this.createTestJobPod()
} catch (e) {
console.log(e)
}
}
async cleanup(): Promise<void> {
try {
await this.cleanupK8sResources()
fs.rmSync(this.tempDirPath, { recursive: true })
} catch {
// Ignore errors during cleanup
}
}
async cleanupK8sResources(): Promise<void> {
await k8sApi
.deleteNamespacedPod({
name: this.podName,
namespace: 'default',
gracePeriodSeconds: 0
})
.catch((e: k8s.ApiException<any>) => {
if (e.code !== 404) {
console.error(JSON.stringify(e))
}
})
await k8sApi
.deleteNamespacedPod({
name: `${this.podName}-workflow`,
namespace: 'default',
gracePeriodSeconds: 0
})
.catch((e: k8s.ApiException<any>) => {
if (e.code !== 404) {
console.error(JSON.stringify(e))
}
})
}
createFile(fileName?: string): string {
const filePath = `${this.tempDirPath}/${fileName || uuidv4()}`
fs.writeFileSync(filePath, '')
return filePath
}
removeFile(fileName: string): void {
const filePath = `${this.tempDirPath}/${fileName}`
fs.rmSync(filePath)
}
async createTestJobPod(): Promise<void> {
const container = {
name: 'runner',
image: 'ghcr.io/actions/actions-runner:latest',
imagePullPolicy: 'IfNotPresent'
} as k8s.V1Container
const pod: k8s.V1Pod = {
metadata: {
name: this.podName
},
spec: {
restartPolicy: 'Never',
containers: [container],
securityContext: {
runAsUser: 1001,
runAsGroup: 1001,
fsGroup: 1001
}
}
} as k8s.V1Pod
await k8sApi.createNamespacedPod({ namespace: 'default', body: pod })
}
getPrepareJobDefinition(): HookData {
const prepareJob = JSON.parse(
fs.readFileSync(
path.resolve(__dirname + '/../../../examples/prepare-job.json'),
'utf8'
)
)
prepareJob.args.container.userMountVolumes = undefined
prepareJob.args.container.registry = null
prepareJob.args.services.forEach(s => {
s.registry = null
})
return prepareJob
}
getRunScriptStepDefinition(): HookData {
const runScriptStep = JSON.parse(
fs.readFileSync(
path.resolve(__dirname + '/../../../examples/run-script-step.json'),
'utf8'
)
)
runScriptStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
return runScriptStep
}
getRunContainerStepDefinition(): HookData {
const runContainerStep = JSON.parse(
fs.readFileSync(
path.resolve(__dirname + '/../../../examples/run-container-step.json'),
'utf8'
)
)
runContainerStep.args.entryPointArgs[1] = `/__w/_temp/example-script.sh`
runContainerStep.args.userMountVolumes = undefined
runContainerStep.args.registry = null
return runContainerStep
}
}


@@ -5,7 +5,8 @@
"outDir": "./lib", "outDir": "./lib",
"rootDir": "./src" "rootDir": "./src"
}, },
"esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
"include": [ "include": [
"./src" "src/**/*",
] ]
} }


@@ -0,0 +1,6 @@
{
"compilerOptions": {
"allowJs": true
},
"extends": "./tsconfig.json"
}


@@ -1,7 +1,19 @@
## Features
- k8s: remove dependency on the runner's volume [#244]
## Bugs
- docker: fix readOnly volumes in createContainer [#236]
## Misc
- bump all dependencies [#234] [#240] [#239] [#238]
- bump actions [#254]
## SHA-256 Checksums
The SHA-256 checksums for the packages included in this build are shown below:
- actions-runner-hooks-docker-<HOOK_VERSION>.zip <DOCKER_SHA>
- actions-runner-hooks-k8s-<HOOK_VERSION>.zip <K8S_SHA>