Compare commits


1 Commit

Author SHA1 Message Date
Thomas Boop
3f71edd2af port hooklib to runner 2022-05-25 16:02:51 -04:00
92 changed files with 9434 additions and 2130 deletions

View File

@@ -1,596 +0,0 @@
# ADR 0000: Container Hooks
**Date**: 2022-05-12
**Status**: Accepted
# Background
[Job Hooks](https://github.com/actions/runner/blob/main/docs/adrs/1751-runner-job-hooks.md) have given users the ability to customize how their self-hosted runners run a job.
Users also want the ability to customize how they run containers during the scope of the job, rather than being locked into the Docker implementation we have in the runner. They may want to use Podman, Kubernetes, or even change the Docker commands we run.
We should give them that option, and publish examples of how they can create their own hooks.
# Guiding Principles
- **Extensibility** is the focus; we need to make sure we are flexible enough to cover current and future scenarios, even at the cost of making it harder to utilize these hooks
- Args should map **directly** to YAML values provided by the user.
- For example, the current runner overrides `HOME`. We can do that in the hook, but we shouldn't pass it as an env alongside the other envs the user has set, because it is not user input; it is how the runner invokes containers
## Interface
- You will set the variable `ACTIONS_RUNNER_CONTAINER_HOOK=/Users/foo/runner/hooks.js`, which is the entrypoint to your hook handler.
- There is no partial opt-in; you must handle every hook
- We will pass a command and some args via `stdin`
- An exit code of 0 is a success, every other exit code is a failure
- We will support the same runner commands we support in [Job Hooks](https://github.com/actions/runner/blob/main/docs/adrs/1751-runner-job-hooks.md)
- On timeout, we will send a SIGINT to your process. If you fail to terminate within a reasonable amount of time, we will send a SIGKILL, and eventually kill the process tree.
An example input looks like:
```json
{
"command": "job_cleanup",
"responseFile": "/users/thboop/runner/_work/{guid}.json",
"args": {},
"state":
{
"id": "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480"
}
}
```
`command` is the command we expect you to invoke
`responseFile` is the file you need to write your output to, if the command has output
`args` are the specific arguments the command needs
`state` is a JSON blob you can pass around to maintain your state; this is covered in more detail below.
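As an illustration only, a hook entrypoint might look roughly like the sketch below: read the payload from `stdin`, dispatch on `command`, and report success or failure through the exit code. The type and function names here are hypothetical, not part of the runner or this change.
```typescript
import * as readline from 'readline'

interface HookInput {
  command: string
  responseFile?: string
  args?: unknown
  state?: object
}

// Collect the single JSON line the runner writes to stdin
async function readStdin(): Promise<HookInput> {
  let raw = ''
  const rl = readline.createInterface({input: process.stdin})
  for await (const line of rl) {
    raw += line
  }
  return JSON.parse(raw) as HookInput
}

async function main(): Promise<void> {
  const input = await readStdin()
  switch (input.command) {
    case 'prepare_job':
      // pull/start containers, then write the response file
      break
    case 'cleanup_job':
    case 'run_container_step':
    case 'run_script_step':
      // there is no partial opt-in: every command must be handled
      break
    default:
      throw new Error(`Unknown command: ${input.command}`)
  }
}

main().catch(err => {
  console.error(err) // stderr surfaces in the job/step logs
  process.exit(1) // any non-zero exit code marks the hook as failed
})
```
Because the runner only inspects the exit code and the response file, the same entrypoint shape works whether the backend is Docker, Podman, or Kubernetes.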
### Writing responses to a file
All text written to stdout or stderr should appear in the job or step logs. With that in mind, there are a few ways we could actually return data:
1. Wrapping the json in some unique tag and processing it like we do commands
2. Writing to a file
For 1, users typically view logging information as a safe action, so we worry about someone accidentally logging unsanitized information and causing unexpected or insecure behavior. We eventually plan to move off of stdout/stderr style commands in favor of a runner CLI.
Investing in this area doesn't make a lot of sense at this time.
While writing to a file to communicate isn't the most ideal pattern, it's an existing pattern in the runner and serves us well, so let's reuse it.
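For example, a hook could write its output with a small helper along these lines. This is a sketch: the helper name is illustrative, and it assumes the response file already exists, mirroring the `writeToResponseFile` utility in the hooklib added later in this change.
```typescript
import * as fs from 'fs'
import * as os from 'os'

// Hypothetical helper: append the command's JSON output to the response file
// path the runner passed in the hook input
export function writeResponse(responseFile: string, response: object): void {
  if (!fs.existsSync(responseFile)) {
    throw new Error(`Missing response file: ${responseFile}`)
  }
  fs.appendFileSync(responseFile, `${JSON.stringify(response)}${os.EOL}`, {
    encoding: 'utf8'
  })
}
```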
### Output
Your output must be correctly formatted JSON. An example output looks like:
```
{
"state": {},
"context":
{
"container":
{
"id": "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480",
"network": "github_network_53269bd575974817b43f4733536b200c"
},
"services": {
"redis": {
"id": "60972d9aa486605e66b0dad4abb638dc3d9116f566579e418166eedb8abb9105",
"ports": {
"8080": "8080"
},
"network": "github_network_53269bd575974817b43f4733536b200c"
}
}
},
"alpine": true
}
```
`state` is a unique field any command can return. If it is not empty, we will store the state for you and pass it into all future commands. You can overwrite it by having the next hook invocation return a different state.
Other fields are dependent upon the command being run.
### Versioning
We will not version these hooks at launch. If needed, we can always major version split these hooks in the future. We will ship in Beta to allow for breaking changes for a few months.
### The Job Context
The [job context](https://docs.github.com/en/actions/learn-github-actions/contexts#example-contents-of-the-job-context) currently has a variety of fields that correspond to containers. We should consider allowing hooks to populate new fields in the job context. That is out of scope for this original release however.
## Hooks
Hooks are to be implemented at a very high level, and map to actions the runner does, rather than specific Docker actions like `docker build` or `docker create`. By mapping to runner actions, we create a very extensible framework that is flexible enough to solve any user concerns in the future. By providing first party implementations, we give users easy starting points to customize specific hooks (like `docker build`) without having to write full-blown solutions.
The other option would be to provide hooks that mirror every Docker call we make, and expose more hooks to help support k8s users, with the expectation that users may have to no-op multiple hooks that don't apply to their scenario.
Why we don't want to go that way:
- It feels clunky: users need to understand which hooks they need to implement and which they can ignore, which isn't a great UX
- It doesn't scale well: we don't want a solution where we may need to keep adding more hooks, because updating hooks is a painful experience for users; mapping to runner actions keeps the set small and stable
- It's overwhelming: it's easier to tell users to build 4 hooks and track data themselves, rather than 16 hooks where the runner needs certain information and then needs to provide that information back into each hook. If we expose `Container Create`, you need to return the container you created, then we do `container run`, which uses that container. If we just give you an image and say create and run this container, you don't need to store the container id in the runner, and it maps better to k8s scenarios where we don't really have container ids.
### Prepare_job hook
The `prepare_job` hook is called when a job is started. We pass in any job or service containers the job has. We expect that you:
- Prune anything from previous jobs if needed
- Create a network if needed
- Pull the job and service containers
- Start the job container
- Start the service containers
- Write some information we need to the response file
- Required: whether the container is Alpine based (otherwise x64 is assumed)
- Optional: any context fields you want to set on the job context, otherwise they will be unavailable for users to use
- Return 0 when the health checks have succeeded and the job/service containers are started
This hook will **always** be called if you have container hooks enabled, even if no service or job containers exist in the job. This allows you to fail the job, or to implement a default job container, when no job container has been provided. A hedged sketch of a handler that satisfies these expectations follows the examples below.
<details>
<summary>Example Input</summary>
<br>
```
{
"command": "prepare_job",
"responseFile": "/users/thboop/runner/_work/{guid}.json",
"state": {},
"args":
{
"jobContainer": {
"image": "node:14.16",
"workingDirectory": "/__w/thboop-test2/thboop-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"userMountVolumes:[
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"mountVolumes": [
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": {
"username": "foo",
"password": "bar",
"serverUrl": "https://index.docker.io/v1"
},
"portMappings": [ "8080:80/tcp", "8080:80/udp" ]
},
"services": [
{
"contextName": "redis",
"image": "redis",
"createOptions": "--cpus 1",
"environmentVariables": {},
"mountVolumes": [],
"portMappings": [ "8080:80/tcp", "8080:80/udp" ]
"registry": {
"username": "foo",
"password": "bar",
"serverUrl": "https://index.docker.io/v1"
}
}
]
}
}
```
</details>
<details>
<summary>Field Descriptions</summary>
<br>
```
Arg Fields:
jobContainer: **Optional** An Object containing information about the specified job container
"image": **Required** A string containing the docker image
"workingDirectory": **Required** A string containing the absolute path of the working directory
"createOptions": **Optional** The optional create options specified in the [YAML](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)
"environmentVariables": **Optional** A map of key value env's to set
"userMountVolumes: ** Optional** an array of user mount volumes set in the [YAML](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)
"sourceVolumePath": **Required** The source path to the volume to be mounted into the docker container
"targetVolumePath": **Required** The target path to the volume to be mounted into the docker container
"readOnly": false **Required** whether or not the mount should be read only
"mountVolumes": **Required** an array of mounts to mount into the container, same fields as above
"sourceVolumePath": **Required** The source path to the volume to be mounted into the docker container
"targetVolumePath": **Required** The target path to the volume to be mounted into the docker container
"readOnly": false **Required** whether or not the mount should be read only
"registry" **Optional** docker registry credentials to use when using a private container registry
"username": **Optional** the username
"password": **Optional** the password
"serverUrl": **Optional** the registry url
"portMappings": **Optional** an array of source:target ports to map into the container
"services": an array of service containers to spin up
"contextName": **Required** the name of the service in the Job context
"image": **Required** A string containing the docker image
"createOptions": **Optional** The optional create options specified in the [YAML](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)
"environmentVariables": **Optional** A map of key value env's to set
"mountVolumes": **Required** an array of mounts to mount into the container, same fields as above
"sourceVolumePath": **Required** The source path to the volume to be mounted into the docker container
"targetVolumePath": **Required** The target path to the volume to be mounted into the docker container
"readOnly": false **Required** whether or not the mount should be read only
"registry" **Optional** docker registry credentials to use when using a private container registry
"username": **Optional** the username
"password": **Optional** the password
"serverUrl": **Optional** the registry url
"portMappings": **Optional** an array of source:target ports to map into the container
```
</details>
<details>
<summary>Example Output</summary>
<br>
```
{
"state":
{
"network": "github_network_53269bd575974817b43f4733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480",
"serviceContainers":
{
"redis": "60972d9aa486605e66b0dad4abb638dc3d9116f566579e418166eedb8abb9105"
}
},
"context":
{
"container":
{
"id": "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480",
"network": "github_network_53269bd575974817b43f4733536b200c"
},
"services": {
"redis": {
"id": "60972d9aa486605e66b0dad4abb638dc3d9116f566579e418166eedb8abb9105",
"ports": {
"8080": "8080"
},
"network": "github_network_53269bd575974817b43f4733536b200c"
}
}
},
"alpine": true
}
```
</details>
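Putting the examples above together, a hedged sketch of a `prepare_job` handler that shells out to the Docker CLI might look like the following. The network naming, the `docker` flags, and the omission of volume mounts, registry login, service containers, and health checks are simplifications, not the runner's canonical behavior.
```typescript
import {execFileSync} from 'child_process'
import * as fs from 'fs'

// Run a docker CLI command and return its trimmed stdout
function docker(...args: string[]): string {
  return execFileSync('docker', args, {encoding: 'utf8'}).trim()
}

export function prepareJob(args: any, responseFile: string): void {
  // Create a per-job network (naming scheme is illustrative)
  const network = `github_network_${Date.now()}`
  docker('network', 'create', network)

  // Pull and start the job container, if the job requested one
  let jobContainerId: string | undefined
  if (args.jobContainer) {
    docker('pull', args.jobContainer.image)
    jobContainerId = docker(
      'create',
      '--network',
      network,
      '--workdir',
      args.jobContainer.workingDirectory,
      args.jobContainer.image,
      'tail',
      '-f',
      '/dev/null'
    )
    docker('start', jobContainerId)
  }

  // Service containers would be pulled and started the same way (omitted)

  // Report state for later hooks plus the fields the runner needs
  const response = {
    state: {network, jobContainer: jobContainerId},
    context: {container: {id: jobContainerId, network}},
    alpine: false // required: whether the job container is Alpine based
  }
  fs.writeFileSync(responseFile, JSON.stringify(response))
}
```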
### Cleanup Job
The `cleanup_job` hook is called at the end of a job and expects you to:
- Stop any running service or job containers (or the equivalent pod)
- Stop the network (if one exists)
- Delete any job or service containers (or the equivalent pod)
- Delete the network (if one exists)
- Cleanup anything else that was created for the run
Its input looks like:
<details>
<summary>Example Input</summary>
<br>
```
"command": "cleanup_job",
"responseFile": null,
"state":
{
"network": "github_network_53269bd575974817b43f4733536b200c",
"jobContainer" : "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480",
"serviceContainers":
{
"redis": "60972d9aa486605e66b0dad4abb638dc3d9116f566579e418166eedb8abb9105"
}
}
"args": {}
```
</details>
No args are provided.
No output is expected.
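A hedged sketch of a matching `cleanup_job` handler, reusing the `state` fields shown above (the Docker CLI flags are assumptions):
```typescript
import {execFileSync} from 'child_process'

// Run a docker CLI command, streaming its output into the job log
function docker(...args: string[]): void {
  execFileSync('docker', args, {stdio: 'inherit'})
}

export function cleanupJob(state: any): void {
  const containers = [
    state.jobContainer,
    ...Object.values(state.serviceContainers ?? {})
  ].filter(Boolean) as string[]

  for (const id of containers) {
    docker('rm', '--force', id) // stops and deletes in one step
  }
  if (state.network) {
    docker('network', 'rm', state.network)
  }
}
```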
### Run Container Step
The `run_container_step` hook is called once per container action in your job and expects you to:
- Pull or build the required container (or fail if you cannot)
- Run the container action and return the exit code of the container
- Stream any step log output to stdout and stderr
- Cleanup the container after it executes
<details>
<summary>Example Input for Image</summary>
<br>
```
"command": "run_container_step",
"responseFile": null,
"state":
{
"network": "github_network_53269bd575974817b43f4733536b200c",
"jobContainer" : "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480",
"serviceContainers":
{
"redis": "60972d9aa486605e66b0dad4abb638dc3d9116f566579e418166eedb8abb9105"
}
},
"args":
{
"image": "node:14.16",
"dockerfile": null,
"entryPointArgs": ["-f", "/dev/null"],
"entryPoint": "tail",
"workingDirectory": "/__w/thboop-test2/thboop-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath":["/foo/bar", "bar/foo"]
"userMountVolumes:[
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"mountVolumes": [
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": null,
"portMappings": { "80": "801" }
}
}
```
</details>
<details>
<summary>Example Input for dockerfile</summary>
<br>
```
"command": "run_container_step",
"responseFile": null,
"state":
{
"network": "github_network_53269bd575974817b43f4733536b200c",
"jobContainer" : "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480",
"services":
{
"redis": "60972d9aa486605e66b0dad4abb638dc3d9116f566579e418166eedb8abb9105"
}
},
"args":
{
"image": null,
"dockerfile": /__w/_actions/foo/dockerfile,
"entryPointArgs": ["hello world"],
"entryPoint": "echo",
"workingDirectory": "/__w/thboop-test2/thboop-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath":["/foo/bar", "bar/foo"]
"userMountVolumes:[
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"mountVolumes": [
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/thomas/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": null,
"portMappings": [ "8080:80/tcp", "8080:80/udp" ]
}
}
```
</details>
<details>
<summary>Field Descriptions</summary>
<br>
```
Arg Fields:
"image": **Optional** A string containing the docker image. Otherwise a dockerfile must be provided
"dockerfile": **Optional** A string containing the path to the dockerfile, otherwise an image must be provided
"entryPointArgs": **Optional** A list containing the entry point args
"entryPoint": **Optional** The container entry point to use if the default image entrypoint should be overwritten
"workingDirectory": **Required** A string containing the absolute path of the working directory
"createOptions": **Optional** The optional create options specified in the [YAML](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)
"environmentVariables": **Optional** A map of key value env's to set
"prependPath": **Optional** an array of additional paths to prepend to the $PATH variable
"userMountVolumes: ** Optional** an array of user mount volumes set in the [YAML](https://docs.github.com/en/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)
"sourceVolumePath": **Required** The source path to the volume to be mounted into the docker container
"targetVolumePath": **Required** The target path to the volume to be mounted into the docker container
"readOnly": false **Required** whether or not the mount should be read only
"mountVolumes": **Required** an array of mounts to mount into the container, same fields as above
"sourceVolumePath": **Required** The source path to the volume to be mounted into the docker container
"targetVolumePath": **Required** The target path to the volume to be mounted into the docker container
"readOnly": false **Required** whether or not the mount should be read only
"registry" **Optional** docker registry credentials to use when using a private container registry
"username": **Optional** the username
"password": **Optional** the password
"serverUrl": **Optional** the registry url
"portMappings": **Optional** an array of source:target ports to map into the container
```
</details>
No output is expected.
Currently we build all container actions at the start of the job. By doing it during the hook, we move to just-in-time building for container actions. We could expose a hook to build/pull a container action and have it called at the start of a job, but doing so would require hook authors to track the built containers in the state, which could be painful.
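As a sketch, a `run_container_step` handler might build or pull the image, run it on the job network, stream its output, and propagate the exit code. The build context, image tag, and flags below are assumptions:
```typescript
import {execFileSync, spawnSync} from 'child_process'

export function runContainerStep(args: any, state: any): number {
  // Build from the dockerfile when no image is given; a real implementation
  // would pick a proper build context rather than the current directory
  let image = args.image
  if (!image) {
    image = `ephemeral-action:${Date.now()}`
    execFileSync('docker', ['build', '-t', image, '-f', args.dockerfile, '.'])
  } else {
    execFileSync('docker', ['pull', image])
  }

  const runArgs = ['run', '--rm', '--workdir', args.workingDirectory]
  if (state.network) {
    runArgs.push('--network', state.network)
  }
  if (args.entryPoint) {
    runArgs.push('--entrypoint', args.entryPoint)
  }
  runArgs.push(image, ...(args.entryPointArgs ?? []))

  // stdio: 'inherit' streams the container's output into the step log,
  // and --rm cleans the container up after it exits
  const result = spawnSync('docker', runArgs, {stdio: 'inherit'})
  return result.status ?? 1
}
```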
### Run Script Step
The `run_script_step` hook expects you to:
- Invoke the provided script inside the job container and return the exit code
- Stream any step log output to stdout and stderr
<details>
<summary>Example Input</summary>
<br>
```
"command": "run_script_step",
"responseFile": null,
"state":
{
"network": "github_network_53269bd575974817b43f4733536b200c",
"jobContainer" : "82e8219701fe096a35941d869cf8d71af1d943b5d3bdd718850fb87ac3042480",
"serviceContainers":
{
"redis": "60972d9aa486605e66b0dad4abb638dc3d9116f566579e418166eedb8abb9105"
}
},
"args":
{
"entryPointArgs": ["-e", "/runner/temp/abc123.sh"],
"entryPoint": "bash",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath": ["/foo/bar", "bar/foo"],
"workingDirectory": "/__w/thboop-test2/thboop-test2"
}
}
```
</details>
<details>
<summary>Field Descriptions</summary>
<br>
```
Arg Fields:
"entryPointArgs": **Optional** A list containing the entry point args
"entryPoint": **Optional** The container entry point to use if the default image entrypoint should be overwritten
"prependPath": **Optional** an array of additional paths to prepend to the $PATH variable
"workingDirectory": **Required** A string containing the absolute path of the working directory
"environmentVariables": **Optional** A map of key value env's to set
```
</details>
No output is expected.
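A hedged sketch of a `run_script_step` handler that execs the script inside the job container recorded in `state`; the flags and field access are illustrative:
```typescript
import {spawnSync} from 'child_process'

export function runScriptStep(args: any, state: any): number {
  // Translate the env map into repeated --env flags
  const envFlags: string[] = []
  for (const [key, value] of Object.entries(args.environmentVariables ?? {})) {
    envFlags.push('--env', `${key}=${value}`)
  }

  const result = spawnSync(
    'docker',
    [
      'exec',
      '--workdir',
      args.workingDirectory,
      ...envFlags,
      state.jobContainer,
      args.entryPoint,
      ...(args.entryPointArgs ?? [])
    ],
    {stdio: 'inherit'} // stdout/stderr flow straight into the step log
  )
  return result.status ?? 1
}
```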
## Limitations
- We will only support Linux at launch
- Hooks are set by the runner admin, and thus are only supported on self-hosted runners
## Consequences
- We support non-Docker scenarios for self-hosted runners and allow customers to customize their Docker invocations
- We ship/maintain docs on Docker hooks and an open source repo with examples
- We support these hooks and add enough telemetry to be able to troubleshoot support issues as they come in.

View File

@@ -15,7 +15,7 @@ Make sure the runner has access to actions service for GitHub.com or GitHub Ente
```
curl -v https://api.github.com/api/v3/zen
curl -v https://vstoken.actions.githubusercontent.com/_apis/health
curl -v https://pipelines.actions.githubusercontent.com/_apis/health
curl -v https://pipelines.actions.githubusercontent/_apis/health
```
- For GitHub Enterprise Server

View File

@@ -20,30 +20,11 @@ The test also set environment variable `GIT_TRACE=1` and `GIT_CURL_VERBOSE=1` be
## How to fix the issue?
### 1. Check global and system git config
If you are having issues connecting to the server, check your global and system git config for any unexpected authentication headers. You might be seeing an error like:
```
fatal: unable to access 'https://github.com/actions/checkout/': The requested URL returned error: 400
```
The following commands can be used to check for unexpected authentication headers:
```
$ git config --global --list | grep extraheader
http.extraheader=AUTHORIZATION: unexpected_auth_header
$ git config --system --list | grep extraheader
```
The following command can be used to remove the above value: `git config --global --unset http.extraheader`
### 2. Check the common network issue
### 1. Check the common network issue
> Please check the [network doc](./network.md)
### 3. SSL certificate related issue
### 2. SSL certificate related issue
If you are seeing `SSL Certificate problem:` in the log, it means the `git` can't connect to the GitHub server due to SSL handshake failure.
> Please check the [SSL cert doc](./sslcert.md)

View File

@@ -34,7 +34,7 @@ The `installdependencies.sh` script should install all required dependencies on
Debian based OS (Debian, Ubuntu, Linux Mint)
- liblttng-ust1 or liblttng-ust0
- liblttng-ust0
- libkrb5-3
- zlib1g
- libssl1.1, libssl1.0.2 or libssl1.0.0

View File

@@ -0,0 +1,4 @@
dist/
lib/
node_modules/
**/__tests__/**

View File

@@ -0,0 +1,56 @@
{
"plugins": ["@typescript-eslint"],
"extends": ["plugin:github/recommended"],
"parser": "@typescript-eslint/parser",
"parserOptions": {
"ecmaVersion": 9,
"sourceType": "module",
"project": "./tsconfig.json"
},
"rules": {
"eslint-comments/no-use": "off",
"import/no-namespace": "off",
"no-constant-condition": "off",
"no-unused-vars": "off",
"i18n-text/no-en": "off",
"@typescript-eslint/no-unused-vars": "error",
"@typescript-eslint/explicit-member-accessibility": ["error", {"accessibility": "no-public"}],
"@typescript-eslint/no-require-imports": "error",
"@typescript-eslint/array-type": "error",
"@typescript-eslint/await-thenable": "error",
"camelcase": "off",
"@typescript-eslint/explicit-function-return-type": ["error", {"allowExpressions": true}],
"@typescript-eslint/func-call-spacing": ["error", "never"],
"@typescript-eslint/no-array-constructor": "error",
"@typescript-eslint/no-empty-interface": "error",
"@typescript-eslint/no-explicit-any": "warn",
"@typescript-eslint/no-extraneous-class": "error",
"@typescript-eslint/no-floating-promises": "error",
"@typescript-eslint/no-for-in-array": "error",
"@typescript-eslint/no-inferrable-types": "error",
"@typescript-eslint/no-misused-new": "error",
"@typescript-eslint/no-namespace": "error",
"@typescript-eslint/no-non-null-assertion": "warn",
"@typescript-eslint/no-unnecessary-qualifier": "error",
"@typescript-eslint/no-unnecessary-type-assertion": "error",
"@typescript-eslint/no-useless-constructor": "error",
"@typescript-eslint/no-var-requires": "error",
"@typescript-eslint/prefer-for-of": "warn",
"@typescript-eslint/prefer-function-type": "warn",
"@typescript-eslint/prefer-includes": "error",
"@typescript-eslint/prefer-string-starts-ends-with": "error",
"@typescript-eslint/promise-function-async": "error",
"@typescript-eslint/require-array-sort-compare": "error",
"@typescript-eslint/restrict-plus-operands": "error",
"semi": "off",
"@typescript-eslint/semi": ["error", "never"],
"@typescript-eslint/type-annotation-spacing": "error",
"@typescript-eslint/unbound-method": "error",
"no-shadow": "off",
"@typescript-eslint/no-shadow": ["error"]
},
"env": {
"node": true,
"es6": true
}
}

3
hooks/container/.gitignore vendored Normal file
View File

@@ -0,0 +1,3 @@
node_modules/
lib/
dist/

View File

@@ -0,0 +1,3 @@
dist/
lib/
node_modules/

View File

@@ -0,0 +1,11 @@
{
"printWidth": 80,
"tabWidth": 2,
"useTabs": false,
"semi": false,
"singleQuote": true,
"trailingComma": "none",
"bracketSpacing": false,
"arrowParens": "avoid",
"parser": "typescript"
}

View File

@@ -0,0 +1,34 @@
# Basic setup
You'll need a runner compatible with hooks, a repository with container workflows to which you can register the runner, and the hooks from this repository.
## Getting Started
- Run `npm install && npm run bootstrap` to set up your environment and install all the needed packages
- Run `npm run lint` and `npm run format` to ensure your changes will pass CI
- Run `npm run build-all` to build and test end to end.
## E2E
- You'll need a runner compatible with hooks, a repository with container workflows to which you can register the runner, and the hooks from this repository.
- See [the runner contributing.md](../../github/CONTRIBUTING.MD) for how to get started with runner development.
- Build your hook using `npm run build`
- Enable the hooks by setting `ACTIONS_RUNNER_CONTAINER_HOOK=./src/{libraryname}/dist/index.js`, pointing at the file generated by [ncc](https://github.com/vercel/ncc)
- Configure your self-hosted runner against a repository you have admin access to
- Run a workflow with a container job, for example:
```
name: myjob
on:
workflow_dispatch:
jobs:
my_job:
runs-on: self-hosted
services:
redis:
image: redis
container:
image: alpine:3.15
options: --cpus 1
steps:
- run: pwd
```

10
hooks/container/README.md Normal file
View File

@@ -0,0 +1,10 @@
# Container Hooks
This repo contains example implementations of the container hook feature across various container providers. More information on how to implement your own hooks can be found in the [GitHub docs]().
Three projects are included in the `src` folder:
- k8s: A Kubernetes hook implementation that spins up pods dynamically to run a job
- docker: A hook implementation of the runner's existing Docker logic
- hooklib: A shared library which contains TypeScript definitions and utilities that the other projects consume
### Want to contribute?
We welcome contributions. See [how to contribute](CONTRIBUTING.md).

View File

@@ -0,0 +1,3 @@
dist/
lib/
node_modules/

4350
hooks/container/hooklib/package-lock.json generated Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,28 @@
{
"name": "hooklib",
"version": "1.0.0",
"description": "",
"main": "lib/index.js",
"types": "index.d.ts",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"build": "tsc && ncc build",
"format": "prettier --write '**/*.ts'",
"format-check": "prettier --check '**/*.ts'",
"lint": "eslint src/**/*.ts"
},
"author": "",
"license": "MIT",
"devDependencies": {
"@types/node": "^17.0.23",
"@typescript-eslint/parser": "^5.18.0",
"@zeit/ncc": "^0.22.3",
"eslint": "^8.12.0",
"eslint-plugin-github": "^4.3.6",
"prettier": "^2.6.2",
"typescript": "^4.6.3"
},
"dependencies": {
"@actions/core": "^1.6.0"
}
}

View File

@@ -0,0 +1,2 @@
export * from './interfaces'
export * from './utils'

View File

@@ -0,0 +1,87 @@
export enum Command {
PrepareJob = 'prepare_job',
CleanupJob = 'cleanup_job',
RunContainerStep = 'run_container_step',
RunScriptStep = 'run_script_step'
}
export interface HookData {
command: Command
responseFile: string
args?: PrepareJobArgs | RunContainerStepArgs | RunScriptStepArgs
state?: object
}
export interface PrepareJobArgs {
container?: JobContainerInfo
services?: ServiceContainerInfo[]
}
export type RunContainerStepArgs = StepContainerInfo
export interface RunScriptStepArgs {
entrypoint: string
entrypointArgs: string[]
environmentVariables?: {[key: string]: string}
prependPath?: string[]
workingDirectory: string
}
export interface ContainerInfo {
entrypoint?: string
entrypointArgs?: string[]
createOptions?: string
environmentVariables?: {[key: string]: string}
userMountVolumes?: Mount[]
registry?: Registry
portMappings?: string[]
}
export interface ServiceContainerInfo extends ContainerInfo {
contextName: string
image: string
}
export interface JobContainerInfo extends ContainerInfo {
image: string
workingDirectory: string
systemMountVolumes: Mount[]
}
export interface StepContainerInfo extends ContainerInfo {
prependPath?: string[]
workingDirectory: string
dockerfile?: string
image?: string
systemMountVolumes: Mount[]
}
export interface Mount {
sourceVolumePath: string
targetVolumePath: string
readOnly: boolean
}
export interface Registry {
username?: string
password?: string
serverUrl: string
}
export enum Protocol {
TCP = 'tcp',
UDP = 'udp'
}
export interface PrepareJobResponse {
state?: object
context?: ContainerContext
services?: {[key: string]: ContainerContext}
alpine: boolean
}
export interface ContainerContext {
id?: string
network?: string
ports?: {[key: string]: string}
}

View File

@@ -0,0 +1,44 @@
import * as core from '@actions/core'
import * as events from 'events'
import * as fs from 'fs'
import * as os from 'os'
import * as readline from 'readline'
import {HookData} from './interfaces'
export async function getInputFromStdin(): Promise<HookData> {
let input = ''
const rl = readline.createInterface({
input: process.stdin
})
rl.on('line', line => {
core.debug(`Line from STDIN: ${line}`)
input = line
})
await events.default.once(rl, 'close')
const inputJson = JSON.parse(input)
return inputJson as HookData
}
export function writeToResponseFile(filePath: string, message: any): void {
if (!filePath) {
throw new Error(`Expected file path`)
}
if (!fs.existsSync(filePath)) {
throw new Error(`Missing file at path: ${filePath}`)
}
fs.appendFileSync(filePath, `${toCommandValue(message)}${os.EOL}`, {
encoding: 'utf8'
})
}
function toCommandValue(input: any): string {
if (input === null || input === undefined) {
return ''
} else if (typeof input === 'string' || input instanceof String) {
return input as string
}
return JSON.stringify(input)
}

View File

@@ -0,0 +1,11 @@
{
"extends": "../../tsconfig.json",
"compilerOptions": {
"baseUrl": "./",
"outDir": "./lib",
"rootDir": "./src"
},
"include": [
"./src"
]
}

4368
hooks/container/package-lock.json generated Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,37 @@
{
"name": "hooks",
"version": "1.0.0",
"description": "Three projects are included - k8s: a kubernetes hook implementation that spins up pods dynamically to run a job - docker: A hook implementation of the runner's docker implementation - A hook lib, which contains shared typescript definitions and utilities that the other packages consume",
"main": "",
"directories": {
"doc": "docs"
},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"bootstrap": "npm install --prefix src/hooklib && npm install --prefix src/k8s && npm install --prefix src/docker",
"format": "prettier --write '**/*.ts'",
"format-check": "prettier --check '**/*.ts'",
"lint": "eslint src/**/*.ts",
"build-all": "npm run build --prefix src/hooklib && npm run build --prefix src/k8s && npm run build --prefix src/docker"
},
"repository": {
"type": "git",
"url": "git+https://github.com/actions/runner.git"
},
"author": "",
"license": "MIT",
"bugs": {
"url": "https://github.com/actions/runner/issues"
},
"homepage": "https://github.com/actions/runner#readme",
"dependencies": {
},
"devDependencies": {
"@types/node": "^17.0.23",
"@typescript-eslint/parser": "^5.18.0",
"eslint": "^8.12.0",
"eslint-plugin-github": "^4.3.6",
"prettier": "^2.6.2",
"typescript": "^4.6.3"
}
}

View File

@@ -0,0 +1,70 @@
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig.json to read more about this file */
"outDir": "./lib",
"rootDir": "./src",
/* Basic Options */
// "incremental": true, /* Enable incremental compilation */
"target": "es5", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'. */
"module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', 'es2020', or 'ESNext'. */
// "lib": [], /* Specify library files to be included in the compilation. */
// "allowJs": true, /* Allow javascript files to be compiled. */
// "checkJs": true, /* Report errors in .js files. */
// "jsx": "preserve", /* Specify JSX code generation: 'preserve', 'react-native', or 'react'. */
"declaration": true, /* Generates corresponding '.d.ts' file. */
// "declarationMap": true, /* Generates a sourcemap for each corresponding '.d.ts' file. */
// "sourceMap": true, /* Generates corresponding '.map' file. */
// "outFile": "./", /* Concatenate and emit output to single file. */
// "outDir": "./", /* Redirect output structure to the directory. */
// "rootDir": "./", /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
// "composite": true, /* Enable project compilation */
// "tsBuildInfoFile": "./", /* Specify file to store incremental compilation information */
// "removeComments": true, /* Do not emit comments to output. */
// "noEmit": true, /* Do not emit outputs. */
// "importHelpers": true, /* Import emit helpers from 'tslib'. */
// "downlevelIteration": true, /* Provide full support for iterables in 'for-of', spread, and destructuring when targeting 'ES5' or 'ES3'. */
// "isolatedModules": true, /* Transpile each file as a separate module (similar to 'ts.transpileModule'). */
/* Strict Type-Checking Options */
"strict": true, /* Enable all strict type-checking options. */
"noImplicitAny": false, /* Raise error on expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* Enable strict null checks. */
// "strictFunctionTypes": true, /* Enable strict checking of function types. */
// "strictBindCallApply": true, /* Enable strict 'bind', 'call', and 'apply' methods on functions. */
// "strictPropertyInitialization": true, /* Enable strict checking of property initialization in classes. */
// "noImplicitThis": true, /* Raise error on 'this' expressions with an implied 'any' type. */
// "alwaysStrict": true, /* Parse in strict mode and emit "use strict" for each source file. */
/* Additional Checks */
// "noUnusedLocals": true, /* Report errors on unused locals. */
// "noUnusedParameters": true, /* Report errors on unused parameters. */
// "noImplicitReturns": true, /* Report error when not all code paths in function return a value. */
// "noFallthroughCasesInSwitch": true, /* Report errors for fallthrough cases in switch statement. */
/* Module Resolution Options */
// "moduleResolution": "node", /* Specify module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6). */
// "baseUrl": "./", /* Base directory to resolve non-absolute module names. */
// "paths": {}, /* A series of entries which re-map imports to lookup locations relative to the 'baseUrl'. */
// "rootDirs": [], /* List of root folders whose combined content represents the structure of the project at runtime. */
// "typeRoots": [], /* List of folders to include type definitions from. */
// "types": [], /* Type declaration files to be included in compilation. */
// "allowSyntheticDefaultImports": true, /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */
"esModuleInterop": true, /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
// "preserveSymlinks": true, /* Do not resolve the real path of symlinks. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
/* Source Map Options */
// "sourceRoot": "", /* Specify the location where debugger should locate TypeScript files instead of source locations. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSourceMap": true, /* Emit a single file with source maps instead of having a separate file. */
// "inlineSources": true, /* Emit the source alongside the sourcemaps within a single file; requires '--inlineSourceMap' or '--sourceMap' to be set. */
/* Experimental Options */
// "experimentalDecorators": true, /* Enables experimental support for ES7 decorators. */
// "emitDecoratorMetadata": true, /* Enables experimental support for emitting type metadata for decorators. */
/* Advanced Options */
"skipLibCheck": true, /* Skip type checking of declaration files. */
"forceConsistentCasingInFileNames": true /* Disallow inconsistently-cased references to the same file. */
}
}

View File

@@ -1,5 +1,10 @@
## Features
- Added a pre-release package for the `macOS-arm64` architecture
- Note that this package is in pre-release status and may not work with all existing actions
## Bugs
- Fixed an issue where self hosted environments had their docker env's overwritten (#2107)
- Fixed an issue where live console logs would fail to close (#1903)
## Misc
## Windows x64
@@ -27,7 +32,7 @@ curl -O -L https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>
tar xzf ./actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz
```
## OSX arm64 (Apple silicon)
## [Pre-release] OSX arm64 (Apple silicon)
``` bash
# Create a folder

View File

@@ -1 +1 @@
2.296.2
<Update to ./src/runnerversion when creating release>

View File

@@ -66,7 +66,7 @@ then
fi
fi
$apt_get update && $apt_get install -y libkrb5-3 zlib1g
$apt_get update && $apt_get install -y liblttng-ust0 libkrb5-3 zlib1g
if [ $? -ne 0 ]
then
echo "'$apt_get' failed with exit code '$?'"
@@ -94,14 +94,6 @@ then
fi
}
apt_get_with_fallbacks liblttng-ust1 liblttng-ust0
if [ $? -ne 0 ]
then
echo "'$apt_get' failed with exit code '$?'"
print_errormessage
exit 1
fi
apt_get_with_fallbacks libssl1.1$ libssl1.0.2$ libssl1.0.0$
if [ $? -ne 0 ]
then

View File

@@ -120,9 +120,6 @@ if ERRORLEVEL 1 (
echo [%date% %time%] Update succeed >> "%logfile%" 2>&1
type nul > update.finished
echo [%date% %time%] update.finished file creation succeed >> "%logfile%" 2>&1
rem rename the update log file with %logfile%.succeed/.failed/succeedneedrestart
rem runner service host can base on the log file name determin the result of the runner update
echo [%date% %time%] Rename "%logfile%" to be "%logfile%.succeed" >> "%logfile%" 2>&1

View File

@@ -180,9 +180,6 @@ fi
date "+[%F %T-%4N] Update succeed" >> "$logfile"
touch update.finished
date "+[%F %T-%4N] update.finished file creation succeed" >> "$logfile"
# rename the update log file with %logfile%.succeed/.failed/succeedneedrestart
# runner service host can base on the log file name determin the result of the runner update
date "+[%F %T-%4N] Rename $logfile to be $logfile.succeed" >> "$logfile" 2>&1

View File

@@ -1,5 +1,5 @@
@echo off
SET UPDATEFILE=update.finished
"%~dp0\bin\Runner.Listener.exe" run %*
rem using `if %ERRORLEVEL% EQU N` insterad of `if ERRORLEVEL N`
@@ -22,30 +22,16 @@ if %ERRORLEVEL% EQU 2 (
)
if %ERRORLEVEL% EQU 3 (
rem Wait for 30 seconds or for flag file to exists for the ephemeral runner update process finish
echo "Runner listener exit because of updating, re-launch runner after successful update"
FOR /L %%G IN (1,1,30) DO (
IF EXIST %UPDATEFILE% (
echo "Update finished successfully."
del %FILE%
exit /b 1
)
ping 127.0.0.1 -n 2 -w 1000 >NUL
)
rem Sleep 5 seconds to wait for the runner update process finish
echo "Runner listener exit because of updating, re-launch runner in 5 seconds"
ping 127.0.0.1 -n 6 -w 1000 >NUL
exit /b 1
)
if %ERRORLEVEL% EQU 4 (
rem Wait for 30 seconds or for flag file to exists for the runner update process finish
echo "Runner listener exit because of updating, re-launch runner after successful update"
FOR /L %%G IN (1,1,30) DO (
IF EXIST %UPDATEFILE% (
echo "Update finished successfully."
del %FILE%
exit /b 1
)
ping 127.0.0.1 -n 2 -w 1000 >NUL
)
rem Sleep 5 seconds to wait for the ephemeral runner update process finish
echo "Runner listener exit because of updating, re-launch ephemeral runner in 5 seconds"
ping 127.0.0.1 -n 6 -w 1000 >NUL
exit /b 1
)

View File

@@ -17,8 +17,6 @@ while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symli
[[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
updateFile="update.finished"
"$DIR"/bin/Runner.Listener run $*
returnCode=$?
@@ -33,28 +31,14 @@ elif [[ $returnCode == 2 ]]; then
"$DIR"/safe_sleep.sh 5
exit 2
elif [[ $returnCode == 3 ]]; then
# Wait for 30 seconds or for flag file to exists for the runner update process finish
echo "Runner listener exit because of updating, re-launch runner after successful update"
for i in {0..30}; do
if test -f "$updateFile"; then
echo "Update finished successfully."
rm "$updateFile"
break
fi
"$DIR"/safe_sleep.sh 1
done
# Sleep 5 seconds to wait for the runner update process finish
echo "Runner listener exit because of updating, re-launch runner in 5 seconds"
"$DIR"/safe_sleep.sh 5
exit 2
elif [[ $returnCode == 4 ]]; then
# Wait for 30 seconds or for flag file to exists for the ephemeral runner update process finish
echo "Runner listener exit because of updating, re-launch runner after successful update"
for i in {0..30}; do
if test -f "$updateFile"; then
echo "Update finished successfully."
rm "$updateFile"
break
fi
"$DIR"/safe_sleep.sh 1
done
# Sleep 5 seconds to wait for the ephemeral runner update process finish
echo "Runner listener exit because of updating, re-launch ephemeral runner in 5 seconds"
"$DIR"/safe_sleep.sh 5
exit 2
else
echo "Exiting with unknown error code: ${returnCode}"

View File

@@ -9,10 +9,10 @@ while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symli
[[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
cp -f "$DIR"/run-helper.sh.template "$DIR"/run-helper.sh
# run the helper process which keep the listener alive
while :;
do
cp -f "$DIR"/run-helper.sh.template "$DIR"/run-helper.sh
"$DIR"/run-helper.sh $*
returnCode=$?
if [[ $returnCode -eq 2 ]]; then

View File

@@ -90,7 +90,6 @@ namespace GitHub.Runner.Common
public static class Args
{
public static readonly string Auth = "auth";
public static readonly string JitConfig = "jitconfig";
public static readonly string Labels = "labels";
public static readonly string MonitorSocketAddress = "monitorsocketaddress";
public static readonly string Name = "name";
@@ -152,7 +151,6 @@ namespace GitHub.Runner.Common
public static readonly string DiskSpaceWarning = "runner.diskspace.warning";
public static readonly string Node12Warning = "DistributedTask.AddWarningToNode12Action";
public static readonly string UseContainerPathForTemplate = "DistributedTask.UseContainerPathForTemplate";
public static readonly string AllowRunnerContainerHooks = "DistributedTask.AllowRunnerContainerHooks";
}
public static readonly string InternalTelemetryIssueDataKey = "_internal_telemetry";
@@ -198,7 +196,6 @@ namespace GitHub.Runner.Common
{
public static readonly string JobStartedStepName = "Set up runner";
public static readonly string JobCompletedStepName = "Complete runner";
public static readonly string ContainerHooksPath = "ACTIONS_RUNNER_CONTAINER_HOOKS";
}
public static class Path
@@ -230,7 +227,6 @@ namespace GitHub.Runner.Common
//
public static readonly string AllowUnsupportedCommands = "ACTIONS_ALLOW_UNSECURE_COMMANDS";
public static readonly string AllowUnsupportedStopCommandTokens = "ACTIONS_ALLOW_UNSECURE_STOPCOMMAND_TOKENS";
public static readonly string RequireJobContainer = "ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER";
public static readonly string RunnerDebug = "ACTIONS_RUNNER_DEBUG";
public static readonly string StepDebug = "ACTIONS_STEP_DEBUG";
public static readonly string AllowActionsUseUnsecureNodeVersion = "ACTIONS_ALLOW_USE_UNSECURE_NODE_VERSION";
@@ -242,7 +238,6 @@ namespace GitHub.Runner.Common
// Set this env var to "node12" to downgrade the node version for internal functions (e.g hashfiles). This does NOT affect the version of node actions.
public static readonly string ForcedInternalNodeVersion = "ACTIONS_RUNNER_FORCED_INTERNAL_NODE_VERSION";
public static readonly string ForcedActionsNodeVersion = "ACTIONS_RUNNER_FORCE_ACTIONS_NODE_VERSION";
}
public static class System

View File

@@ -13,7 +13,6 @@ using System.Runtime.Loader;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.Logging;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common
@@ -642,31 +641,6 @@ namespace GitHub.Runner.Common
var handlerFactory = context.GetService<IHttpClientHandlerFactory>();
return handlerFactory.CreateClientHandler(context.WebProxy);
}
public static string GetDefaultShellForScript(this IHostContext hostContext, string path, string prependPath)
{
var trace = hostContext.GetTrace(nameof(GetDefaultShellForScript));
switch (Path.GetExtension(path))
{
case ".sh":
// use 'sh' args but prefer bash
if (WhichUtil.Which("bash", false, trace, prependPath) != null)
{
return "bash";
}
return "sh";
case ".ps1":
if (WhichUtil.Which("pwsh", false, trace, prependPath) != null)
{
return "pwsh";
}
return "powershell";
case ".js":
return Path.Combine(hostContext.GetDirectory(WellKnownDirectory.Externals), NodeUtil.GetInternalNodeVersion(), "bin", $"node{IOUtil.ExeExtension}") + " {0}";
default:
throw new ArgumentException($"{path} is not a valid path to a script. Make sure it ends in '.sh', '.ps1' or '.js'.");
}
}
}
public enum ShutdownReason

View File

@@ -1,14 +0,0 @@
using System;
using GitHub.DistributedTask.WebApi;
namespace GitHub.Runner.Common
{
public class JobStatusEventArgs : EventArgs
{
public JobStatusEventArgs(TaskAgentStatus status)
{
this.Status = status;
}
public TaskAgentStatus Status { get; private set; }
}
}

View File

@@ -1,106 +0,0 @@
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.Pipelines;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Services.WebApi;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(RunServer))]
public interface IRunServer : IRunnerService
{
Task ConnectAsync(Uri serverUrl, VssCredentials credentials);
Task<AgentJobRequestMessage> GetJobMessageAsync(string id, CancellationToken token);
}
public sealed class RunServer : RunnerService, IRunServer
{
private bool _hasConnection;
private VssConnection _connection;
private TaskAgentHttpClient _taskAgentClient;
public async Task ConnectAsync(Uri serverUrl, VssCredentials credentials)
{
_connection = await EstablishVssConnection(serverUrl, credentials, TimeSpan.FromSeconds(100));
_taskAgentClient = _connection.GetClient<TaskAgentHttpClient>();
_hasConnection = true;
}
private async Task<VssConnection> EstablishVssConnection(Uri serverUrl, VssCredentials credentials, TimeSpan timeout)
{
Trace.Info($"EstablishVssConnection");
Trace.Info($"Establish connection with {timeout.TotalSeconds} seconds timeout.");
int attemptCount = 5;
while (attemptCount-- > 0)
{
var connection = VssUtil.CreateConnection(serverUrl, credentials, timeout: timeout);
try
{
await connection.ConnectAsync();
return connection;
}
catch (Exception ex) when (attemptCount > 0)
{
Trace.Info($"Catch exception during connect. {attemptCount} attempt left.");
Trace.Error(ex);
await HostContext.Delay(TimeSpan.FromMilliseconds(100), CancellationToken.None);
}
}
// should never reach here.
throw new InvalidOperationException(nameof(EstablishVssConnection));
}
private void CheckConnection()
{
if (!_hasConnection)
{
throw new InvalidOperationException($"SetConnection");
}
}
public Task<AgentJobRequestMessage> GetJobMessageAsync(string id, CancellationToken cancellationToken)
{
CheckConnection();
var jobMessage = RetryRequest<AgentJobRequestMessage>(async () =>
{
return await _taskAgentClient.GetJobMessageAsync(id, cancellationToken);
}, cancellationToken);
return jobMessage;
}
private async Task<T> RetryRequest<T>(Func<Task<T>> func,
CancellationToken cancellationToken,
int maxRetryAttemptsCount = 5
)
{
var retryCount = 0;
while (true)
{
retryCount++;
cancellationToken.ThrowIfCancellationRequested();
try
{
return await func();
}
// TODO: Add handling of non-retriable exceptions: https://github.com/github/actions-broker/issues/122
catch (Exception ex) when (retryCount < maxRetryAttemptsCount)
{
Trace.Error("Catch exception during get full job message");
Trace.Error(ex);
var backOff = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(15));
Trace.Warning($"Back off {backOff.TotalSeconds} seconds before next retry. {maxRetryAttemptsCount - retryCount} attempt left.");
await Task.Delay(backOff, cancellationToken);
}
}
}
}
}

View File

@@ -16,7 +16,7 @@
<ItemGroup>
<PackageReference Include="Microsoft.Win32.Registry" Version="4.4.0" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
<PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
<PackageReference Include="System.Security.Cryptography.ProtectedData" Version="4.4.0" />
<PackageReference Include="System.Text.Encoding.CodePages" Version="4.4.0" />
<PackageReference Include="System.Threading.Channels" Version="4.4.0" />

View File

@@ -39,7 +39,7 @@ namespace GitHub.Runner.Common
Task<TaskAgentSession> CreateAgentSessionAsync(Int32 poolId, TaskAgentSession session, CancellationToken cancellationToken);
Task DeleteAgentMessageAsync(Int32 poolId, Int64 messageId, Guid sessionId, CancellationToken cancellationToken);
Task DeleteAgentSessionAsync(Int32 poolId, Guid sessionId, CancellationToken cancellationToken);
Task<TaskAgentMessage> GetAgentMessageAsync(Int32 poolId, Guid sessionId, Int64? lastMessageId, TaskAgentStatus status, CancellationToken cancellationToken);
Task<TaskAgentMessage> GetAgentMessageAsync(Int32 poolId, Guid sessionId, Int64? lastMessageId, CancellationToken cancellationToken);
// job request
Task<TaskAgentJobRequest> GetAgentRequestAsync(int poolId, long requestId, CancellationToken cancellationToken);
@@ -298,10 +298,10 @@ namespace GitHub.Runner.Common
return _messageTaskAgentClient.DeleteAgentSessionAsync(poolId, sessionId, cancellationToken: cancellationToken);
}
public Task<TaskAgentMessage> GetAgentMessageAsync(Int32 poolId, Guid sessionId, Int64? lastMessageId, TaskAgentStatus status, CancellationToken cancellationToken)
public Task<TaskAgentMessage> GetAgentMessageAsync(Int32 poolId, Guid sessionId, Int64? lastMessageId, CancellationToken cancellationToken)
{
CheckConnection(RunnerConnectionType.MessageQueue);
return _messageTaskAgentClient.GetMessageAsync(poolId, sessionId, lastMessageId, status, cancellationToken: cancellationToken);
return _messageTaskAgentClient.GetMessageAsync(poolId, sessionId, lastMessageId, cancellationToken: cancellationToken);
}
//-----------------------------------------------------------------

View File

@@ -15,14 +15,8 @@ namespace GitHub.Runner.Common.Util
public static string GetInternalNodeVersion()
{
var forcedInternalNodeVersion = Environment.GetEnvironmentVariable(Constants.Variables.Agent.ForcedInternalNodeVersion);
var isForcedInternalNodeVersion = !string.IsNullOrEmpty(forcedInternalNodeVersion) && BuiltInNodeVersions.Contains(forcedInternalNodeVersion);
if (isForcedInternalNodeVersion)
{
return forcedInternalNodeVersion;
}
return _defaultNodeVersion;
var forcedNodeVersion = Environment.GetEnvironmentVariable(Constants.Variables.Agent.ForcedInternalNodeVersion);
return !string.IsNullOrEmpty(forcedNodeVersion) && BuiltInNodeVersions.Contains(forcedNodeVersion) ? forcedNodeVersion : _defaultNodeVersion;
}
}
}

View File

@@ -63,7 +63,6 @@ namespace GitHub.Runner.Listener
new string[]
{
Constants.Runner.CommandLine.Flags.Once,
Constants.Runner.CommandLine.Args.JitConfig,
Constants.Runner.CommandLine.Args.StartupType
},
// valid warmup flags and args
@@ -214,12 +213,6 @@ namespace GitHub.Runner.Listener
validator: Validators.AuthSchemeValidator);
}
public string GetJitConfig()
{
return GetArg(
name: Constants.Runner.CommandLine.Args.JitConfig);
}
public string GetRunnerName()
{
return GetArgOrPrompt(

View File

@@ -3,7 +3,6 @@ using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Services.Common.Internal;
using GitHub.Services.OAuth;
using System;
using System.Collections.Generic;
@@ -129,7 +128,7 @@ namespace GitHub.Runner.Listener.Configuration
// Example githubServerUrl is https://my-ghes
var actionsServerUrl = new Uri(runnerSettings.ServerUrl);
var githubServerUrl = new Uri(runnerSettings.GitHubUrl);
if (!UriUtility.IsSubdomainOf(actionsServerUrl.Authority, githubServerUrl.Authority))
if (!string.Equals(actionsServerUrl.Authority, githubServerUrl.Authority, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"GitHub Actions is not properly configured in GHES. GHES url: {runnerSettings.GitHubUrl}, Actions url: {runnerSettings.ServerUrl}.");
}

View File

@@ -27,7 +27,6 @@ namespace GitHub.Runner.Listener
bool Cancel(JobCancelMessage message);
Task WaitAsync(CancellationToken token);
Task ShutdownAsync();
event EventHandler<JobStatusEventArgs> JobStatus;
}
// This implementation of IJobDispatcher is not thread safe.
@@ -56,8 +55,6 @@ namespace GitHub.Runner.Listener
private TaskCompletionSource<bool> _runOnceJobCompleted = new TaskCompletionSource<bool>();
public event EventHandler<JobStatusEventArgs> JobStatus;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
@@ -338,11 +335,6 @@ namespace GitHub.Runner.Listener
Busy = true;
try
{
if (JobStatus != null)
{
JobStatus(this, new JobStatusEventArgs(TaskAgentStatus.Busy));
}
if (previousJobDispatch != null)
{
Trace.Verbose($"Make sure the previous job request {previousJobDispatch.JobId} has successfully finished on worker.");
@@ -658,11 +650,6 @@ namespace GitHub.Runner.Listener
finally
{
Busy = false;
if (JobStatus != null)
{
JobStatus(this, new JobStatusEventArgs(TaskAgentStatus.Online));
}
}
}

View File

@@ -23,7 +23,6 @@ namespace GitHub.Runner.Listener
Task DeleteSessionAsync();
Task<TaskAgentMessage> GetNextMessageAsync(CancellationToken token);
Task DeleteMessageAsync(TaskAgentMessage message);
void OnJobStatus(object sender, JobStatusEventArgs e);
}
public sealed class MessageListener : RunnerService, IMessageListener
@@ -39,8 +38,6 @@ namespace GitHub.Runner.Listener
private readonly TimeSpan _sessionConflictRetryLimit = TimeSpan.FromMinutes(4);
private readonly TimeSpan _clockSkewRetryLimit = TimeSpan.FromMinutes(30);
private readonly Dictionary<string, int> _sessionCreationExceptionTracker = new Dictionary<string, int>();
private TaskAgentStatus runnerStatus = TaskAgentStatus.Online;
private CancellationTokenSource _getMessagesTokenSource;
public override void Initialize(IHostContext hostContext)
{
@@ -173,23 +170,6 @@ namespace GitHub.Runner.Listener
}
}
public void OnJobStatus(object sender, JobStatusEventArgs e)
{
if (StringUtil.ConvertToBoolean(Environment.GetEnvironmentVariable("USE_BROKER_FLOW")))
{
Trace.Info("Received job status event. JobState: {0}", e.Status);
runnerStatus = e.Status;
try
{
_getMessagesTokenSource?.Cancel();
}
catch (ObjectDisposedException)
{
Trace.Info("_getMessagesTokenSource is already disposed.");
}
}
}
public async Task<TaskAgentMessage> GetNextMessageAsync(CancellationToken token)
{
Trace.Entering();
@@ -204,14 +184,12 @@ namespace GitHub.Runner.Listener
{
token.ThrowIfCancellationRequested();
TaskAgentMessage message = null;
_getMessagesTokenSource = CancellationTokenSource.CreateLinkedTokenSource(token);
try
{
message = await _runnerServer.GetAgentMessageAsync(_settings.PoolId,
_session.SessionId,
_lastMessageId,
runnerStatus,
_getMessagesTokenSource.Token);
token);
// Decrypt the message body if the session is using encryption
message = DecryptMessage(message);
@@ -228,11 +206,6 @@ namespace GitHub.Runner.Listener
continuousError = 0;
}
}
catch (OperationCanceledException) when (_getMessagesTokenSource.Token.IsCancellationRequested && !token.IsCancellationRequested)
{
Trace.Info("Get messages has been cancelled using local token source. Continue to get messages with new status.");
continue;
}
catch (OperationCanceledException) when (token.IsCancellationRequested)
{
Trace.Info("Get next message has been cancelled.");
@@ -288,10 +261,6 @@ namespace GitHub.Runner.Listener
await HostContext.Delay(_getNextMessageRetryInterval, token);
}
}
finally
{
_getMessagesTokenSource.Dispose();
}
if (message == null)
{

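The removed hunks above rely on a linked-cancellation long-poll pattern. Here is a hedged, self-contained sketch of that pattern (the `PollLoopAsync` and `getMessage` names are hypothetical, not runner APIs): a local `CancellationTokenSource` linked to the listener token lets a runner-status change interrupt a single poll without cancelling the whole loop.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical helper, for illustration only.
static async Task PollLoopAsync(Func<CancellationToken, Task> getMessage, CancellationToken listenerToken)
{
    while (!listenerToken.IsCancellationRequested)
    {
        // Local source linked to the caller's token; cancelling it aborts only this poll.
        using var localCts = CancellationTokenSource.CreateLinkedTokenSource(listenerToken);
        try
        {
            await getMessage(localCts.Token); // long poll, e.g. GetAgentMessageAsync
        }
        catch (OperationCanceledException) when (localCts.IsCancellationRequested && !listenerToken.IsCancellationRequested)
        {
            continue; // cancelled locally (status changed): retry with the new status
        }
        catch (OperationCanceledException) when (listenerToken.IsCancellationRequested)
        {
            break; // listener shutdown requested
        }
    }
}
```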
View File

@@ -19,7 +19,7 @@
<ItemGroup>
<PackageReference Include="Microsoft.Win32.Registry" Version="4.4.0" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
<PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
<PackageReference Include="System.IO.FileSystem.AccessControl" Version="4.4.0" />
<PackageReference Include="System.Security.Cryptography.ProtectedData" Version="4.4.0" />
<PackageReference Include="System.ServiceProcess.ServiceController" Version="4.4.0" />

View File

@@ -1,20 +1,18 @@
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Listener.Configuration;
using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Listener.Check;
using GitHub.Runner.Listener.Configuration;
using GitHub.Runner.Sdk;
using GitHub.Services.WebApi;
using Pipelines = GitHub.DistributedTask.Pipelines;
using System.IO;
using System.Reflection;
using System.Runtime.CompilerServices;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;
using System.Linq;
using GitHub.Runner.Listener.Check;
using System.Collections.Generic;
namespace GitHub.Runner.Listener
{
@@ -194,30 +192,6 @@ namespace GitHub.Runner.Listener
return Constants.Runner.ReturnCode.Success;
}
var base64JitConfig = command.GetJitConfig();
if (!string.IsNullOrEmpty(base64JitConfig))
{
try
{
var decodedJitConfig = Encoding.UTF8.GetString(Convert.FromBase64String(base64JitConfig));
var jitConfig = StringUtil.ConvertFromJson<Dictionary<string, string>>(decodedJitConfig);
foreach (var config in jitConfig)
{
var configFile = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Root), config.Key);
var configContent = Encoding.UTF8.GetString(Convert.FromBase64String(config.Value));
File.WriteAllText(configFile, configContent, Encoding.UTF8);
File.SetAttributes(configFile, File.GetAttributes(configFile) | FileAttributes.Hidden);
Trace.Info($"Save {configContent.Length} chars to '{configFile}'.");
}
}
catch (Exception ex)
{
Trace.Error(ex);
_term.WriteError(ex.Message);
return Constants.Runner.ReturnCode.TerminatedError;
}
}
RunnerSettings settings = configManager.LoadSettings();
var store = HostContext.GetService<IConfigurationStore>();
@@ -348,7 +322,6 @@ namespace GitHub.Runner.Listener
// Should we try to cleanup ephemeral runners
bool runOnceJobCompleted = false;
bool skipSessionDeletion = false;
try
{
var notification = HostContext.GetService<IJobNotification>();
@@ -360,8 +333,6 @@ namespace GitHub.Runner.Listener
bool runOnceJobReceived = false;
jobDispatcher = HostContext.CreateService<IJobDispatcher>();
jobDispatcher.JobStatus += _listener.OnJobStatus;
while (!HostContext.RunnerShutdownToken.IsCancellationRequested)
{
TaskAgentMessage message = null;
@@ -486,34 +457,6 @@ namespace GitHub.Runner.Listener
}
}
}
// Broker flow
else if (string.Equals(message.MessageType, JobRequestMessageTypes.RunnerJobRequest, StringComparison.OrdinalIgnoreCase))
{
if (autoUpdateInProgress || runOnceJobReceived)
{
skipMessageDeletion = true;
Trace.Info($"Skip message deletion for job request message '{message.MessageId}'.");
}
else
{
var messageRef = StringUtil.ConvertFromJson<RunnerJobRequestRef>(message.Body);
// Create connection
var credMgr = HostContext.GetService<ICredentialManager>();
var creds = credMgr.LoadCredentials();
var runServer = HostContext.CreateService<IRunServer>();
await runServer.ConnectAsync(new Uri(settings.ServerUrl), creds);
var jobMessage = await runServer.GetJobMessageAsync(messageRef.RunnerRequestId, messageQueueLoopTokenSource.Token);
jobDispatcher.Run(jobMessage, runOnce);
if (runOnce)
{
Trace.Info("One time used runner received job message.");
runOnceJobReceived = true;
}
}
}
else if (string.Equals(message.MessageType, JobCancelMessage.MessageType, StringComparison.OrdinalIgnoreCase))
{
var cancelJobMessage = JsonUtility.FromString<JobCancelMessage>(message.Body);
@@ -525,14 +468,6 @@ namespace GitHub.Runner.Listener
Trace.Info($"Skip message deletion for cancellation message '{message.MessageId}'.");
}
}
else if (string.Equals(message.MessageType, Pipelines.HostedRunnerShutdownMessage.MessageType, StringComparison.OrdinalIgnoreCase))
{
var HostedRunnerShutdownMessage = JsonUtility.FromString<Pipelines.HostedRunnerShutdownMessage>(message.Body);
skipMessageDeletion = true;
skipSessionDeletion = true;
Trace.Info($"Service requests the hosted runner to shutdown. Reason: '{HostedRunnerShutdownMessage.Reason}'.");
return Constants.Runner.ReturnCode.Success;
}
else
{
Trace.Error($"Received message {message.MessageId} with unsupported message type {message.MessageType}.");
@@ -563,22 +498,18 @@ namespace GitHub.Runner.Listener
{
if (jobDispatcher != null)
{
jobDispatcher.JobStatus -= _listener.OnJobStatus;
await jobDispatcher.ShutdownAsync();
}
if (!skipSessionDeletion)
try
{
try
{
await _listener.DeleteSessionAsync();
}
catch (Exception ex) when (runOnce)
{
// ignore exception during delete session for ephemeral runner since the runner might already be deleted from the server side
// and the delete session call will end up with a 401.
Trace.Info($"Ignore any exception during DeleteSession for an ephemeral runner. {ex}");
}
await _listener.DeleteSessionAsync();
}
catch (Exception ex) when (runOnce)
{
// ignore exception during delete session for ephemeral runner since the runner might already be deleted from the server side
// and the delete session call will end up with a 401.
Trace.Info($"Ignore any exception during DeleteSession for an ephemeral runner. {ex}");
}
messageQueueLoopTokenSource.Dispose();
@@ -630,7 +561,7 @@ Config Options:
--labels string Extra labels in addition to the default: 'self-hosted,{Constants.Runner.Platform},{Constants.Runner.PlatformArchitecture}'
--work string Relative runner work directory (default {Constants.Path.WorkDirectory})
--replace Replace any existing runner with the same name (default false)
--pat GitHub personal access token with repo scope. Used for checking network connectivity when executing `.{separator}run.{ext} --check`
--pat GitHub personal access token used for checking network connectivity when executing `.{separator}run.{ext} --check`
--disableupdate Disable self-hosted runner automatic update to the latest released version`
--ephemeral Configure the runner to only take one job and then let the service un-configure the runner after the job finishes (default false)");

View File

@@ -1,13 +0,0 @@
using System.Runtime.Serialization;
namespace GitHub.Runner.Listener
{
[DataContract]
public sealed class RunnerJobRequestRef
{
[DataMember(Name = "id")]
public string Id { get; set; }
[DataMember(Name = "runner_request_id")]
public string RunnerRequestId { get; set; }
}
}
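For reference, a hedged sketch of how this (now removed) type is populated from a broker job-request message body, as done in the Runner.cs hunk above; the JSON values are placeholders, not real request data.

```csharp
using GitHub.Runner.Listener;
using GitHub.Runner.Sdk;

// Placeholder body for illustration; real bodies come from the service.
var body = "{ \"id\": \"example-id\", \"runner_request_id\": \"example-request-id\" }";
var messageRef = StringUtil.ConvertFromJson<RunnerJobRequestRef>(body);
// messageRef.Id              == "example-id"
// messageRef.RunnerRequestId == "example-request-id"
```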

View File

@@ -131,8 +131,6 @@ namespace GitHub.Runner.Listener
// For L0, we will skip executing the update script.
if (string.IsNullOrEmpty(Environment.GetEnvironmentVariable("_GITHUB_ACTION_EXECUTE_UPDATE_SCRIPT")))
{
string flagFile = "update.finished";
IOUtil.DeleteFile(flagFile);
// kick off update script
Process invokeScript = new Process();
#if OS_WINDOWS
@@ -296,12 +294,12 @@ namespace GitHub.Runner.Listener
archiveFile = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Root), $"runner{targetVersion}.tar.gz");
}
if (File.Exists(archiveFile))
if (File.Exists(archiveFile))
{
_updateTrace.Enqueue($"Mocking update with file: '{archiveFile}' and targetVersion: '{targetVersion}', nothing is downloaded");
_terminal.WriteLine($"Mocking update with file: '{archiveFile}' and targetVersion: '{targetVersion}', nothing is downloaded");
}
else
else
{
archiveFile = null;
_terminal.WriteLine($"Mock runner archive not found at {archiveFile} for target version {targetVersion}, proceeding with download instead");

View File

@@ -424,12 +424,6 @@ namespace GitHub.Runner.Sdk
throw new NotSupportedException($"Unable to validate execute permissions for directory '{directory}'. Exceeded maximum iterations.");
}
public static void CreateEmptyFile(string path)
{
Directory.CreateDirectory(Path.GetDirectoryName(path));
File.WriteAllText(path, null);
}
/// <summary>
/// Recursively enumerates a directory without following directory reparse points.
/// </summary>

View File

@@ -57,10 +57,6 @@ namespace GitHub.Runner.Sdk
settings.SendTimeout = TimeSpan.FromSeconds(Math.Min(Math.Max(httpRequestTimeoutSeconds, 100), 1200));
}
if (StringUtil.ConvertToBoolean(Environment.GetEnvironmentVariable("USE_BROKER_FLOW")))
{
settings.AllowAutoRedirect = true;
}
// Remove Invariant from the list of accepted languages.
//

View File

@@ -1,6 +1,7 @@
using System;
using System.IO;
using System.Linq;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Sdk
{

View File

@@ -101,41 +101,38 @@ namespace GitHub.Runner.Worker
IEnumerable<Pipelines.ActionStep> actions = steps.OfType<Pipelines.ActionStep>();
executionContext.Output("Prepare all required actions");
var result = await PrepareActionsRecursiveAsync(executionContext, state, actions, depth, rootStepId);
if (!FeatureManager.IsContainerHooksEnabled(executionContext.Global.Variables))
if (state.ImagesToPull.Count > 0)
{
if (state.ImagesToPull.Count > 0)
foreach (var imageToPull in result.ImagesToPull)
{
foreach (var imageToPull in result.ImagesToPull)
{
Trace.Info($"{imageToPull.Value.Count} steps need to pull image '{imageToPull.Key}'");
containerSetupSteps.Add(new JobExtensionRunner(runAsync: this.PullActionContainerAsync,
condition: $"{PipelineTemplateConstants.Success}()",
displayName: $"Pull {imageToPull.Key}",
data: new ContainerSetupInfo(imageToPull.Value, imageToPull.Key)));
}
Trace.Info($"{imageToPull.Value.Count} steps need to pull image '{imageToPull.Key}'");
containerSetupSteps.Add(new JobExtensionRunner(runAsync: this.PullActionContainerAsync,
condition: $"{PipelineTemplateConstants.Success}()",
displayName: $"Pull {imageToPull.Key}",
data: new ContainerSetupInfo(imageToPull.Value, imageToPull.Key)));
}
}
if (result.ImagesToBuild.Count > 0)
if (result.ImagesToBuild.Count > 0)
{
foreach (var imageToBuild in result.ImagesToBuild)
{
foreach (var imageToBuild in result.ImagesToBuild)
{
var setupInfo = result.ImagesToBuildInfo[imageToBuild.Key];
Trace.Info($"{imageToBuild.Value.Count} steps need to build image from '{setupInfo.Dockerfile}'");
containerSetupSteps.Add(new JobExtensionRunner(runAsync: this.BuildActionContainerAsync,
condition: $"{PipelineTemplateConstants.Success}()",
displayName: $"Build {setupInfo.ActionRepository}",
data: new ContainerSetupInfo(imageToBuild.Value, setupInfo.Dockerfile, setupInfo.WorkingDirectory)));
}
var setupInfo = result.ImagesToBuildInfo[imageToBuild.Key];
Trace.Info($"{imageToBuild.Value.Count} steps need to build image from '{setupInfo.Dockerfile}'");
containerSetupSteps.Add(new JobExtensionRunner(runAsync: this.BuildActionContainerAsync,
condition: $"{PipelineTemplateConstants.Success}()",
displayName: $"Build {setupInfo.ActionRepository}",
data: new ContainerSetupInfo(imageToBuild.Value, setupInfo.Dockerfile, setupInfo.WorkingDirectory)));
}
}
#if !OS_LINUX
if (containerSetupSteps.Count > 0)
{
executionContext.Output("Container action is only supported on Linux, skip pull and build docker images.");
containerSetupSteps.Clear();
}
#endif
if (containerSetupSteps.Count > 0)
{
executionContext.Output("Container action is only supported on Linux, skip pull and build docker images.");
containerSetupSteps.Clear();
}
#endif
return new PrepareResult(containerSetupSteps, result.PreStepTracker);
}

View File

@@ -503,7 +503,7 @@ namespace GitHub.Runner.Worker
};
}
throw new NotSupportedException("Missing 'using' value. 'using' requires 'composite', 'docker', 'node12' or 'node16'.");
throw new NotSupportedException(nameof(ConvertRuns));
}
private void ConvertInputs(

View File

@@ -158,12 +158,8 @@ namespace GitHub.Runner.Worker
// Setup container stephost for running inside the container.
if (ExecutionContext.Global.Container != null)
{
// Make sure the required container is already created
// Container hooks do not necessarily set 'ContainerId'
if (!FeatureManager.IsContainerHooksEnabled(ExecutionContext.Global.Variables))
{
ArgUtil.NotNullOrEmpty(ExecutionContext.Global.Container.ContainerId, nameof(ExecutionContext.Global.Container.ContainerId));
}
// Make sure required container is already created.
ArgUtil.NotNullOrEmpty(ExecutionContext.Global.Container.ContainerId, nameof(ExecutionContext.Global.Container.ContainerId));
var containerStepHost = HostContext.CreateService<IContainerStepHost>();
containerStepHost.Container = ExecutionContext.Global.Container;
stepHost = containerStepHost;

View File

@@ -1,279 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using GitHub.DistributedTask.Pipelines.ContextData;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Runner.Worker.Handlers;
using GitHub.Services.WebApi;
using Newtonsoft.Json.Linq;
namespace GitHub.Runner.Worker.Container.ContainerHooks
{
[ServiceLocator(Default = typeof(ContainerHookManager))]
public interface IContainerHookManager : IRunnerService
{
Task PrepareJobAsync(IExecutionContext context, List<ContainerInfo> containers);
Task RunContainerStepAsync(IExecutionContext context, ContainerInfo container, string dockerFile);
Task RunScriptStepAsync(IExecutionContext context, ContainerInfo container, string workingDirectory, string fileName, string arguments, IDictionary<string, string> environment, string prependPath);
Task CleanupJobAsync(IExecutionContext context, List<ContainerInfo> containers);
string GetContainerHookData();
}
public class ContainerHookManager : RunnerService, IContainerHookManager
{
private const string ResponseFolderName = "_runner_hook_responses";
private string HookScriptPath;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
HookScriptPath = $"{Environment.GetEnvironmentVariable(Constants.Hooks.ContainerHooksPath)}";
}
public async Task PrepareJobAsync(IExecutionContext context, List<ContainerInfo> containers)
{
Trace.Entering();
var jobContainer = containers.Where(c => c.IsJobContainer).SingleOrDefault();
var serviceContainers = containers.Where(c => !c.IsJobContainer).ToList();
var input = new HookInput
{
Command = HookCommand.PrepareJob,
ResponseFile = GenerateResponsePath(),
Args = new PrepareJobArgs
{
Container = jobContainer?.GetHookContainer(),
Services = serviceContainers.Select(c => c.GetHookContainer()).ToList(),
}
};
var prependPath = GetPrependPath(context);
var response = await ExecuteHookScript<PrepareJobResponse>(context, input, ActionRunStage.Pre, prependPath);
if (jobContainer != null)
{
jobContainer.IsAlpine = response.IsAlpine.Value;
}
SaveHookState(context, response.State, input);
UpdateJobContext(context, jobContainer, serviceContainers, response);
}
public async Task RunContainerStepAsync(IExecutionContext context, ContainerInfo container, string dockerFile)
{
Trace.Entering();
var hookState = context.Global.ContainerHookState;
var containerStepArgs = new ContainerStepArgs(container);
if (!string.IsNullOrEmpty(dockerFile))
{
containerStepArgs.Dockerfile = dockerFile;
containerStepArgs.Image = null;
}
var input = new HookInput
{
Args = containerStepArgs,
Command = HookCommand.RunContainerStep,
ResponseFile = GenerateResponsePath(),
State = hookState
};
var prependPath = GetPrependPath(context);
var response = await ExecuteHookScript<HookResponse>(context, input, ActionRunStage.Pre, prependPath);
if (response == null)
{
return;
}
SaveHookState(context, response.State, input);
}
public async Task RunScriptStepAsync(IExecutionContext context, ContainerInfo container, string workingDirectory, string entryPoint, string entryPointArgs, IDictionary<string, string> environmentVariables, string prependPath)
{
Trace.Entering();
var input = new HookInput
{
Command = HookCommand.RunScriptStep,
ResponseFile = GenerateResponsePath(),
Args = new ScriptStepArgs
{
EntryPointArgs = entryPointArgs.Split(' ').Select(arg => arg.Trim()),
EntryPoint = entryPoint,
EnvironmentVariables = environmentVariables,
PrependPath = context.Global.PrependPath.Reverse<string>(),
WorkingDirectory = workingDirectory,
},
State = context.Global.ContainerHookState
};
var response = await ExecuteHookScript<HookResponse>(context, input, ActionRunStage.Pre, prependPath);
if (response == null)
{
return;
}
SaveHookState(context, response.State, input);
}
public async Task CleanupJobAsync(IExecutionContext context, List<ContainerInfo> containers)
{
Trace.Entering();
var input = new HookInput
{
Command = HookCommand.CleanupJob,
ResponseFile = GenerateResponsePath(),
Args = new CleanupJobArgs(),
State = context.Global.ContainerHookState
};
var prependPath = GetPrependPath(context);
await ExecuteHookScript<HookResponse>(context, input, ActionRunStage.Pre, prependPath);
}
public string GetContainerHookData()
{
return JsonUtility.ToString(new { HookScriptPath });
}
private async Task<T> ExecuteHookScript<T>(IExecutionContext context, HookInput input, ActionRunStage stage, string prependPath) where T : HookResponse
{
try
{
ValidateHookExecutable();
context.StepTelemetry.ContainerHookData = GetContainerHookData();
var scriptDirectory = Path.GetDirectoryName(HookScriptPath);
var stepHost = HostContext.CreateService<IDefaultStepHost>();
Dictionary<string, string> inputs = new()
{
["standardInInput"] = JsonUtility.ToString(input),
["path"] = HookScriptPath,
["shell"] = HostContext.GetDefaultShellForScript(HookScriptPath, prependPath)
};
var handlerFactory = HostContext.GetService<IHandlerFactory>();
var handler = handlerFactory.Create(
context,
null,
stepHost,
new ScriptActionExecutionData(),
inputs,
environment: new Dictionary<string, string>(VarUtil.EnvironmentVariableKeyComparer),
context.Global.Variables,
actionDirectory: scriptDirectory,
localActionContainerSetupSteps: null) as ScriptHandler;
handler.PrepareExecution(stage);
IOUtil.CreateEmptyFile(input.ResponseFile);
await handler.RunAsync(stage);
if (handler.ExecutionContext.Result == TaskResult.Failed)
{
throw new Exception($"The hook script at '{HookScriptPath}' running command '{input.Command}' did not execute successfully");
}
var response = GetResponse<T>(input);
return response;
}
catch (Exception ex)
{
throw new Exception($"Executing the custom container implementation failed. Please contact your self hosted runner administrator.", ex);
}
}
private string GenerateResponsePath() => Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Temp), ResponseFolderName, $"{Guid.NewGuid()}.json");
private static string GetPrependPath(IExecutionContext context) => string.Join(Path.PathSeparator.ToString(), context.Global.PrependPath.Reverse<string>());
private void ValidateHookExecutable()
{
if (!string.IsNullOrEmpty(HookScriptPath) && !File.Exists(HookScriptPath))
{
throw new FileNotFoundException($"File not found at '{HookScriptPath}'. Set {Constants.Hooks.ContainerHooksPath} to the path of an existing file.");
}
var supportedHookExtensions = new string[] { ".js", ".sh", ".ps1" };
if (!supportedHookExtensions.Any(extension => HookScriptPath.EndsWith(extension)))
{
throw new ArgumentOutOfRangeException($"Invalid file extension at '{HookScriptPath}'. {Constants.Hooks.ContainerHooksPath} must be a path to a file with one of the following extensions: {string.Join(", ", supportedHookExtensions)}");
}
}
private T GetResponse<T>(HookInput input) where T : HookResponse
{
if (!File.Exists(input.ResponseFile))
{
Trace.Info($"Response file for the hook script at '{HookScriptPath}' running command '{input.Command}' not found.");
if (input.Args.IsRequireAlpineInResponse())
{
throw new Exception($"Response file is required but not found for the hook script at '{HookScriptPath}' running command '{input.Command}'");
}
return null;
}
T response = IOUtil.LoadObject<T>(input.ResponseFile);
Trace.Info($"Response file for the hook script at '{HookScriptPath}' running command '{input.Command}' was processed successfully");
IOUtil.DeleteFile(input.ResponseFile);
Trace.Info($"Response file for the hook script at '{HookScriptPath}' running command '{input.Command}' was deleted successfully");
if (response == null && input.Args.IsRequireAlpineInResponse())
{
throw new Exception($"Response file could not be read at '{HookScriptPath}' running command '{input.Command}'");
}
response?.Validate(input);
return response;
}
private void SaveHookState(IExecutionContext context, JObject hookState, HookInput input)
{
if (hookState == null)
{
Trace.Info($"No 'state' property found in response file for '{input.Command}'. Global variable for 'ContainerHookState' will not be updated.");
return;
}
context.Global.ContainerHookState = hookState;
Trace.Info($"Global variable 'ContainerHookState' updated successfully for '{input.Command}' with data found in 'state' property of the response file.");
}
private void UpdateJobContext(IExecutionContext context, ContainerInfo jobContainer, List<ContainerInfo> serviceContainers, PrepareJobResponse response)
{
if (response.Context == null)
{
Trace.Info($"The response file does not contain a context. The fields 'jobContext.Container' and 'jobContext.Services' will not be set.");
return;
}
var containerId = response.Context.Container?.Id;
if (containerId != null)
{
context.JobContext.Container["id"] = new StringContextData(containerId);
jobContainer.ContainerId = containerId;
}
var containerNetwork = response.Context.Container?.Network;
if (containerNetwork != null)
{
context.JobContext.Container["network"] = new StringContextData(containerNetwork);
jobContainer.ContainerNetwork = containerNetwork;
}
for (var i = 0; i < response.Context.Services.Count; i++)
{
var responseContainerInfo = response.Context.Services[i];
var globalContainerInfo = serviceContainers[i];
globalContainerInfo.ContainerId = responseContainerInfo.Id;
globalContainerInfo.ContainerNetwork = responseContainerInfo.Network;
var service = new DictionaryContextData()
{
["id"] = new StringContextData(responseContainerInfo.Id),
["ports"] = new DictionaryContextData(),
["network"] = new StringContextData(responseContainerInfo.Network)
};
globalContainerInfo.AddPortMappings(responseContainerInfo.Ports);
foreach (var portMapping in responseContainerInfo.Ports)
{
(service["ports"] as DictionaryContextData)[portMapping.Key] = new StringContextData(portMapping.Value);
}
context.JobContext.Services[globalContainerInfo.ContainerNetworkAlias] = service;
}
}
}
}
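To make the state handling above easier to follow, here is a hedged sketch of the round-trip: the `state` JObject returned by one hook command is stored via SaveHookState and handed back verbatim as `HookInput.State` on the next command. The literal values below are made up for illustration.

```csharp
using Newtonsoft.Json.Linq;
using GitHub.Runner.Worker.Container.ContainerHooks;

// Pretend this came from the previous response file's "state" property.
JObject previousState = JObject.Parse("{ \"network\": \"example_network\" }");

// The next command sends the same blob back so the hook can recover its context.
var cleanupInput = new HookInput
{
    Command = HookCommand.CleanupJob,
    ResponseFile = "/tmp/_runner_hook_responses/example.json", // normally GenerateResponsePath()
    Args = new CleanupJobArgs(),
    State = previousState
};
```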

View File

@@ -1,113 +0,0 @@
using System.Collections.Generic;
using System.Runtime.Serialization;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using Newtonsoft.Json.Linq;
using System.Linq;
namespace GitHub.Runner.Worker.Container.ContainerHooks
{
public class HookInput
{
public HookCommand Command { get; set; }
public string ResponseFile { get; set; }
public IHookArgs Args { get; set; }
public JObject State { get; set; }
}
[JsonConverter(typeof(StringEnumConverter))]
public enum HookCommand
{
[EnumMember(Value = "prepare_job")]
PrepareJob,
[EnumMember(Value = "cleanup_job")]
CleanupJob,
[EnumMember(Value = "run_script_step")]
RunScriptStep,
[EnumMember(Value = "run_container_step")]
RunContainerStep,
}
public interface IHookArgs
{
bool IsRequireAlpineInResponse();
}
public class PrepareJobArgs : IHookArgs
{
public HookContainer Container { get; set; }
public IList<HookContainer> Services { get; set; }
public bool IsRequireAlpineInResponse() => Container != null;
}
public class ScriptStepArgs : IHookArgs
{
public IEnumerable<string> EntryPointArgs { get; set; }
public string EntryPoint { get; set; }
public IDictionary<string, string> EnvironmentVariables { get; set; }
public IEnumerable<string> PrependPath { get; set; }
public string WorkingDirectory { get; set; }
public bool IsRequireAlpineInResponse() => false;
}
public class ContainerStepArgs : HookContainer, IHookArgs
{
public bool IsRequireAlpineInResponse() => false;
public ContainerStepArgs(ContainerInfo container) : base(container) { }
}
public class CleanupJobArgs : IHookArgs
{
public bool IsRequireAlpineInResponse() => false;
}
public class ContainerRegistry
{
public string Username { get; set; }
public string Password { get; set; }
public string ServerUrl { get; set; }
}
public class HookContainer
{
public string Image { get; set; }
public string Dockerfile { get; set; }
public IEnumerable<string> EntryPointArgs { get; set; } = new List<string>();
public string EntryPoint { get; set; }
public string WorkingDirectory { get; set; }
public string CreateOptions { get; private set; }
public ContainerRegistry Registry { get; set; }
public IDictionary<string, string> EnvironmentVariables { get; set; } = new Dictionary<string, string>();
public IEnumerable<string> PortMappings { get; set; } = new List<string>();
public IEnumerable<MountVolume> SystemMountVolumes { get; set; } = new List<MountVolume>();
public IEnumerable<MountVolume> UserMountVolumes { get; set; } = new List<MountVolume>();
public HookContainer() { } // For Json deserializer
public HookContainer(ContainerInfo container)
{
Image = container.ContainerImage;
EntryPointArgs = container.ContainerEntryPointArgs?.Split(' ').Select(arg => arg.Trim()) ?? new List<string>();
EntryPoint = container.ContainerEntryPoint;
WorkingDirectory = container.ContainerWorkDirectory;
CreateOptions = container.ContainerCreateOptions;
if (!string.IsNullOrEmpty(container.RegistryAuthUsername))
{
Registry = new ContainerRegistry
{
Username = container.RegistryAuthUsername,
Password = container.RegistryAuthPassword,
ServerUrl = container.RegistryServer,
};
}
EnvironmentVariables = container.ContainerEnvironmentVariables;
PortMappings = container.UserPortMappings.Select(p => p.Value).ToList();
SystemMountVolumes = container.SystemMountVolumes;
UserMountVolumes = container.UserMountVolumes;
}
}
public static class ContainerInfoExtensions
{
public static HookContainer GetHookContainer(this ContainerInfo containerInfo)
{
return new HookContainer(containerInfo);
}
}
}

View File

@@ -1,37 +0,0 @@
using System;
using System.Collections.Generic;
using Newtonsoft.Json.Linq;
namespace GitHub.Runner.Worker.Container.ContainerHooks
{
public class HookResponse
{
public JObject State { get; set; }
public virtual void Validate(HookInput input) { }
}
public class PrepareJobResponse : HookResponse
{
public ResponseContext Context { get; set; }
public bool? IsAlpine { get; set; }
public override void Validate(HookInput input)
{
bool hasJobContainer = ((PrepareJobArgs)input.Args).Container != null;
if (hasJobContainer && IsAlpine == null)
{
throw new Exception("The property 'isAlpine' is required but was not found in the response file.");
}
}
}
public class ResponseContext
{
public ResponseContainer Container { get; set; }
public IList<ResponseContainer> Services { get; set; } = new List<ResponseContainer>();
}
public class ResponseContainer
{
public string Id { get; set; }
public string Network { get; set; }
public IDictionary<string, string> Ports { get; set; }
}
}
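A hedged sketch of the validation rule above: `isAlpine` must be present in the response whenever the prepare_job input declared a job container.

```csharp
using System;
using GitHub.Runner.Worker.Container.ContainerHooks;

var input = new HookInput
{
    Command = HookCommand.PrepareJob,
    Args = new PrepareJobArgs { Container = new HookContainer { Image = "ubuntu:20.04" } }
};

var response = new PrepareJobResponse { IsAlpine = null };
try
{
    response.Validate(input); // throws: 'isAlpine' is required when a job container is present
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}

response.IsAlpine = false;
response.Validate(input); // passes once the hook reported whether the container is Alpine-based
```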

View File

@@ -90,7 +90,6 @@ namespace GitHub.Runner.Worker.Container
public string RegistryAuthUsername { get; set; }
public string RegistryAuthPassword { get; set; }
public bool IsJobContainer { get; set; }
public bool IsAlpine { get; set; }
public IDictionary<string, string> ContainerEnvironmentVariables
{
@@ -233,14 +232,6 @@ namespace GitHub.Runner.Worker.Container
}
}
public void AddPortMappings(IDictionary<string, string> portMappings)
{
foreach (var pair in portMappings)
{
PortMappings.Add(new PortMapping(pair.Key, pair.Value));
}
}
public void AddPathTranslateMapping(string hostCommonPath, string containerCommonPath)
{
_pathMappings.Insert(0, new PathMapping(hostCommonPath, containerCommonPath));
@@ -331,12 +322,6 @@ namespace GitHub.Runner.Worker.Container
public class PortMapping
{
public PortMapping(string hostPort, string containerPort)
{
this.HostPort = hostPort;
this.ContainerPort = containerPort;
}
public PortMapping(string hostPort, string containerPort, string protocol)
{
this.HostPort = hostPort;

View File

@@ -131,11 +131,11 @@ namespace GitHub.Runner.Worker.Container
{
if (String.IsNullOrEmpty(env.Value))
{
dockerOptions.Add(DockerUtil.CreateEscapedOption("-e", env.Key));
dockerOptions.Add($"-e \"{env.Key}\"");
}
else
{
dockerOptions.Add(DockerUtil.CreateEscapedOption("-e", env.Key, env.Value));
dockerOptions.Add($"-e \"{env.Key}={env.Value.Replace("\"", "\\\"")}\"");
}
}
@@ -202,7 +202,7 @@ namespace GitHub.Runner.Worker.Container
{
// e.g. -e MY_SECRET maps the value into the exec'ed process without exposing
// the value directly in the command
dockerOptions.Add(DockerUtil.CreateEscapedOption("-e", env.Key));
dockerOptions.Add($"-e {env.Key}");
}
// Watermark for GitHub Action environment

View File

@@ -6,9 +6,6 @@ namespace GitHub.Runner.Worker.Container
{
public class DockerUtil
{
private static readonly Regex QuoteEscape = new Regex(@"(\\*)" + "\"", RegexOptions.Compiled);
private static readonly Regex EndOfStringEscape = new Regex(@"(\\+)$", RegexOptions.Compiled);
public static List<PortMapping> ParseDockerPort(IList<string> portMappingLines)
{
const string targetPort = "targetPort";
@@ -20,7 +17,7 @@ namespace GitHub.Runner.Worker.Container
string pattern = $"^(?<{targetPort}>\\d+)/(?<{proto}>\\w+) -> (?<{host}>.+):(?<{hostPort}>\\d+)$";
List<PortMapping> portMappings = new List<PortMapping>();
foreach (var line in portMappingLines)
foreach(var line in portMappingLines)
{
Match m = Regex.Match(line, pattern, RegexOptions.None, TimeSpan.FromSeconds(1));
if (m.Success)
@@ -64,44 +61,5 @@ namespace GitHub.Runner.Worker.Container
}
return "";
}
public static string CreateEscapedOption(string flag, string key)
{
if (String.IsNullOrEmpty(key))
{
return "";
}
return $"{flag} {EscapeString(key)}";
}
public static string CreateEscapedOption(string flag, string key, string value)
{
if (String.IsNullOrEmpty(key))
{
return "";
}
var escapedString = EscapeString($"{key}={value}");
return $"{flag} {escapedString}";
}
private static string EscapeString(string value)
{
if (String.IsNullOrEmpty(value))
{
return "";
}
// Dotnet escaping rules are weird here, we can only escape \ if it precedes a "
// If a double quotation mark follows two or an even number of backslashes, each preceding backslash pair is replaced with one backslash and the double quotation mark is removed.
// If a double quotation mark follows an odd number of backslashes, including just one, each preceding pair is replaced with one backslash and the remaining backslash is removed; however, in this case the double quotation mark is not removed.
// https://docs.microsoft.com/en-us/dotnet/api/system.environment.getcommandlineargs?redirectedfrom=MSDN&view=net-6.0#remarks
// First, find any \ followed by a " and double the number of \ + 1.
value = QuoteEscape.Replace(value, @"$1$1\" + "\"");
// Next, if the value ends in `\`, it would escape the closing quote, so we need to detect that at the end of the string and perform the same escape
// Luckily, we can just use the $ character, which detects the end of the string in regex
value = EndOfStringEscape.Replace(value, @"$1$1");
// Finally, wrap it in quotes
return $"\"{value}\"";
}
}
}
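A hedged sketch of what the escaping helpers shown above produce for a few representative values (outputs derived by tracing the regexes in this hunk; treat them as illustrative, not authoritative):

```csharp
using System;
using GitHub.Runner.Worker.Container;

Console.WriteLine(DockerUtil.CreateEscapedOption("-e", "MY_SECRET"));
// -e "MY_SECRET"

Console.WriteLine(DockerUtil.CreateEscapedOption("-e", "GREETING", "say \"hi\""));
// -e "GREETING=say \"hi\""   (embedded quote escaped)

Console.WriteLine(DockerUtil.CreateEscapedOption("-e", "WIN_PATH", @"C:\tools\"));
// -e "WIN_PATH=C:\tools\\"   (trailing backslash doubled so it cannot escape the closing quote)
```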

View File

@@ -1,6 +1,7 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.ServiceProcess;
using System.Threading.Tasks;
using System.Linq;
using System.Threading;
@@ -9,12 +10,8 @@ using GitHub.Services.Common;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;
using GitHub.DistributedTask.Pipelines.ContextData;
using GitHub.DistributedTask.Pipelines.ObjectTemplating;
using GitHub.Runner.Worker.Container.ContainerHooks;
#if OS_WINDOWS // keep Windows-specific imports around even though we don't support containers on Windows at the moment
using System.ServiceProcess;
using Microsoft.Win32;
#endif
using GitHub.DistributedTask.Pipelines.ObjectTemplating;
namespace GitHub.Runner.Worker
{
@@ -28,13 +25,11 @@ namespace GitHub.Runner.Worker
public class ContainerOperationProvider : RunnerService, IContainerOperationProvider
{
private IDockerCommandManager _dockerManager;
private IContainerHookManager _containerHookManager;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
_dockerManager = HostContext.GetService<IDockerCommandManager>();
_containerHookManager = HostContext.GetService<IContainerHookManager>();
}
public async Task StartContainersAsync(IExecutionContext executionContext, object data)
@@ -55,15 +50,72 @@ namespace GitHub.Runner.Worker
executionContext.Debug($"Register post job cleanup for stopping/deleting containers.");
executionContext.RegisterPostJobStep(postJobStep);
if (FeatureManager.IsContainerHooksEnabled(executionContext.Global.Variables))
// Check whether we are inside a container.
// Our container feature requires mapping the working directory from the host to the container.
// If we are already inside a container, we will not be able to find out the real working directory path on the host.
#if OS_WINDOWS
#pragma warning disable CA1416
// service CExecSvc is Container Execution Agent.
ServiceController[] scServices = ServiceController.GetServices();
if (scServices.Any(x => String.Equals(x.ServiceName, "cexecsvc", StringComparison.OrdinalIgnoreCase) && x.Status == ServiceControllerStatus.Running))
{
// Initialize the containers
containers.ForEach(container => UpdateRegistryAuthForGitHubToken(executionContext, container));
containers.Where(container => container.IsJobContainer).ForEach(container => MountWellKnownDirectories(executionContext, container));
await _containerHookManager.PrepareJobAsync(executionContext, containers);
return;
throw new NotSupportedException("Container feature is not supported when runner is already running inside container.");
}
#pragma warning restore CA1416
#else
var initProcessCgroup = File.ReadLines("/proc/1/cgroup");
if (initProcessCgroup.Any(x => x.IndexOf(":/docker/", StringComparison.OrdinalIgnoreCase) >= 0))
{
throw new NotSupportedException("Container feature is not supported when runner is already running inside container.");
}
#endif
#if OS_WINDOWS
#pragma warning disable CA1416
// Check OS version (Windows server 1803 is required)
object windowsInstallationType = Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion", "InstallationType", defaultValue: null);
ArgUtil.NotNull(windowsInstallationType, nameof(windowsInstallationType));
object windowsReleaseId = Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion", "ReleaseId", defaultValue: null);
ArgUtil.NotNull(windowsReleaseId, nameof(windowsReleaseId));
executionContext.Debug($"Current Windows version: '{windowsReleaseId} ({windowsInstallationType})'");
if (int.TryParse(windowsReleaseId.ToString(), out int releaseId))
{
if (!windowsInstallationType.ToString().StartsWith("Server", StringComparison.OrdinalIgnoreCase) || releaseId < 1803)
{
throw new NotSupportedException("Container feature requires Windows Server 1803 or higher.");
}
}
else
{
throw new ArgumentOutOfRangeException(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ReleaseId");
}
#pragma warning restore CA1416
#endif
// Check docker client/server version
executionContext.Output("##[group]Checking docker version");
DockerVersion dockerVersion = await _dockerManager.DockerVersion(executionContext);
executionContext.Output("##[endgroup]");
ArgUtil.NotNull(dockerVersion.ServerVersion, nameof(dockerVersion.ServerVersion));
ArgUtil.NotNull(dockerVersion.ClientVersion, nameof(dockerVersion.ClientVersion));
#if OS_WINDOWS
Version requiredDockerEngineAPIVersion = new Version(1, 30); // Docker-EE version 17.6
#else
Version requiredDockerEngineAPIVersion = new Version(1, 35); // Docker-CE version 17.12
#endif
if (dockerVersion.ServerVersion < requiredDockerEngineAPIVersion)
{
throw new NotSupportedException($"Min required docker engine API server version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') server version is '{dockerVersion.ServerVersion}'");
}
if (dockerVersion.ClientVersion < requiredDockerEngineAPIVersion)
{
throw new NotSupportedException($"Min required docker engine API client version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') client version is '{dockerVersion.ClientVersion}'");
}
await AssertCompatibleOS(executionContext);
// Clean up containers left by previous runs
executionContext.Output("##[group]Clean up resources from previous jobs");
@@ -114,12 +166,6 @@ namespace GitHub.Runner.Worker
List<ContainerInfo> containers = data as List<ContainerInfo>;
ArgUtil.NotNull(containers, nameof(containers));
if (FeatureManager.IsContainerHooksEnabled(executionContext.Global.Variables))
{
await _containerHookManager.CleanupJobAsync(executionContext, containers);
return;
}
foreach (var container in containers)
{
await StopContainerAsync(executionContext, container);
@@ -192,7 +238,35 @@ namespace GitHub.Runner.Worker
if (container.IsJobContainer)
{
MountWellKnownDirectories(executionContext, container);
// Configure job container - Mount workspace and tools, set up environment, and start long running process
var githubContext = executionContext.ExpressionValues["github"] as GitHubContext;
ArgUtil.NotNull(githubContext, nameof(githubContext));
var workingDirectory = githubContext["workspace"] as StringContextData;
ArgUtil.NotNullOrEmpty(workingDirectory, nameof(workingDirectory));
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Work), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Work))));
#if OS_WINDOWS
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Externals), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Externals))));
#else
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Externals), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Externals)), true));
#endif
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Temp), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Temp))));
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Actions), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Actions))));
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Tools), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Tools))));
var tempHomeDirectory = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Temp), "_github_home");
Directory.CreateDirectory(tempHomeDirectory);
container.MountVolumes.Add(new MountVolume(tempHomeDirectory, "/github/home"));
container.AddPathTranslateMapping(tempHomeDirectory, "/github/home");
container.ContainerEnvironmentVariables["HOME"] = container.TranslateToContainerPath(tempHomeDirectory);
var tempWorkflowDirectory = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Temp), "_github_workflow");
Directory.CreateDirectory(tempWorkflowDirectory);
container.MountVolumes.Add(new MountVolume(tempWorkflowDirectory, "/github/workflow"));
container.AddPathTranslateMapping(tempWorkflowDirectory, "/github/workflow");
container.ContainerWorkDirectory = container.TranslateToContainerPath(workingDirectory);
container.ContainerEntryPoint = "tail";
container.ContainerEntryPointArgs = "\"-f\" \"/dev/null\"";
}
container.ContainerId = await _dockerManager.DockerCreate(executionContext, container);
@@ -255,42 +329,6 @@ namespace GitHub.Runner.Worker
executionContext.Output("##[endgroup]");
}
private void MountWellKnownDirectories(IExecutionContext executionContext, ContainerInfo container)
{
// Configure job container - Mount workspace and tools, set up environment, and start long running process
var githubContext = executionContext.ExpressionValues["github"] as GitHubContext;
ArgUtil.NotNull(githubContext, nameof(githubContext));
var workingDirectory = githubContext["workspace"] as StringContextData;
ArgUtil.NotNullOrEmpty(workingDirectory, nameof(workingDirectory));
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Work), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Work))));
#if OS_WINDOWS
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Externals), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Externals))));
#else
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Externals), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Externals)), true));
#endif
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Temp), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Temp))));
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Actions), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Actions))));
container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Tools), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Tools))));
var tempHomeDirectory = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Temp), "_github_home");
Directory.CreateDirectory(tempHomeDirectory);
container.MountVolumes.Add(new MountVolume(tempHomeDirectory, "/github/home"));
container.AddPathTranslateMapping(tempHomeDirectory, "/github/home");
container.ContainerEnvironmentVariables["HOME"] = container.TranslateToContainerPath(tempHomeDirectory);
var tempWorkflowDirectory = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Temp), "_github_workflow");
Directory.CreateDirectory(tempWorkflowDirectory);
container.MountVolumes.Add(new MountVolume(tempWorkflowDirectory, "/github/workflow"));
container.AddPathTranslateMapping(tempWorkflowDirectory, "/github/workflow");
container.ContainerWorkDirectory = container.TranslateToContainerPath(workingDirectory);
if (!FeatureManager.IsContainerHooksEnabled(executionContext.Global.Variables))
{
container.ContainerEntryPoint = "tail";
container.ContainerEntryPointArgs = "\"-f\" \"/dev/null\"";
}
}
private async Task StopContainerAsync(IExecutionContext executionContext, ContainerInfo container)
{
Trace.Entering();
@@ -299,11 +337,11 @@ namespace GitHub.Runner.Worker
if (!string.IsNullOrEmpty(container.ContainerId))
{
if (!container.IsJobContainer)
if(!container.IsJobContainer)
{
// Print logs for service container jobs (not the "action" job itself b/c that's already logged).
executionContext.Output($"Print service container logs: {container.ContainerDisplayName}");
int logsExitCode = await _dockerManager.DockerLogs(executionContext, container.ContainerId);
if (logsExitCode != 0)
{
@@ -484,74 +522,5 @@ namespace GitHub.Runner.Worker
container.RegistryAuthPassword = executionContext.GetGitHubContext("token");
}
}
private async Task AssertCompatibleOS(IExecutionContext executionContext)
{
// Check whether we are inside a container.
// Our container feature requires mapping the working directory from the host to the container.
// If we are already inside a container, we will not be able to find out the real working directory path on the host.
#if OS_WINDOWS
#pragma warning disable CA1416
// service CExecSvc is Container Execution Agent.
ServiceController[] scServices = ServiceController.GetServices();
if (scServices.Any(x => String.Equals(x.ServiceName, "cexecsvc", StringComparison.OrdinalIgnoreCase) && x.Status == ServiceControllerStatus.Running))
{
throw new NotSupportedException("Container feature is not supported when runner is already running inside container.");
}
#pragma warning restore CA1416
#else
var initProcessCgroup = File.ReadLines("/proc/1/cgroup");
if (initProcessCgroup.Any(x => x.IndexOf(":/docker/", StringComparison.OrdinalIgnoreCase) >= 0))
{
throw new NotSupportedException("Container feature is not supported when runner is already running inside container.");
}
#endif
#if OS_WINDOWS
#pragma warning disable CA1416
// Check OS version (Windows server 1803 is required)
object windowsInstallationType = Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion", "InstallationType", defaultValue: null);
ArgUtil.NotNull(windowsInstallationType, nameof(windowsInstallationType));
object windowsReleaseId = Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion", "ReleaseId", defaultValue: null);
ArgUtil.NotNull(windowsReleaseId, nameof(windowsReleaseId));
executionContext.Debug($"Current Windows version: '{windowsReleaseId} ({windowsInstallationType})'");
if (int.TryParse(windowsReleaseId.ToString(), out int releaseId))
{
if (!windowsInstallationType.ToString().StartsWith("Server", StringComparison.OrdinalIgnoreCase) || releaseId < 1803)
{
throw new NotSupportedException("Container feature requires Windows Server 1803 or higher.");
}
}
else
{
throw new ArgumentOutOfRangeException(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ReleaseId");
}
#pragma warning restore CA1416
#endif
// Check docker client/server version
executionContext.Output("##[group]Checking docker version");
DockerVersion dockerVersion = await _dockerManager.DockerVersion(executionContext);
executionContext.Output("##[endgroup]");
ArgUtil.NotNull(dockerVersion.ServerVersion, nameof(dockerVersion.ServerVersion));
ArgUtil.NotNull(dockerVersion.ClientVersion, nameof(dockerVersion.ClientVersion));
#if OS_WINDOWS
Version requiredDockerEngineAPIVersion = new Version(1, 30); // Docker-EE version 17.6
#else
Version requiredDockerEngineAPIVersion = new Version(1, 35); // Docker-CE version 17.12
#endif
if (dockerVersion.ServerVersion < requiredDockerEngineAPIVersion)
{
throw new NotSupportedException($"Min required docker engine API server version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') server version is '{dockerVersion.ServerVersion}'");
}
if (dockerVersion.ClientVersion < requiredDockerEngineAPIVersion)
{
throw new NotSupportedException($"Min required docker engine API client version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') client version is '{dockerVersion.ClientVersion}'");
}
}
}
}

View File

@@ -67,8 +67,6 @@ namespace GitHub.Runner.Worker
bool IsEmbedded { get; }
List<string> StepEnvironmentOverrides { get; }
ExecutionContext Root { get; }
// Initialize
@@ -239,8 +237,6 @@ namespace GitHub.Runner.Worker
}
}
public List<string> StepEnvironmentOverrides { get; } = new List<string>();
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
@@ -369,7 +365,6 @@ namespace GitHub.Runner.Worker
child.StepTelemetry.StepId = recordId;
child.StepTelemetry.Stage = stage.ToString();
child.StepTelemetry.IsEmbedded = isEmbedded;
child.StepTelemetry.StepContextName = child.GetFullyQualifiedContextName(); ;
return child;
}
@@ -960,8 +955,6 @@ namespace GitHub.Runner.Worker
_record.StartTime != null)
{
StepTelemetry.ExecutionTimeInSeconds = (int)Math.Ceiling((_record.FinishTime - _record.StartTime)?.TotalSeconds ?? 0);
StepTelemetry.StartTime = _record.StartTime;
StepTelemetry.FinishTime = _record.FinishTime;
}
if (!IsEmbedded &&
@@ -1233,7 +1226,7 @@ namespace GitHub.Runner.Worker
var value = dict[key].ToString();
if (!string.IsNullOrEmpty(value))
{
dict[key] = new StringContextData(stepHost.ResolvePathForStepHost(context, value));
dict[key] = new StringContextData(stepHost.ResolvePathForStepHost(value));
}
}
else if (dict[key] is DictionaryContextData)

View File

@@ -1,15 +0,0 @@
using System;
using GitHub.Runner.Common;
namespace GitHub.Runner.Worker
{
public class FeatureManager
{
public static bool IsContainerHooksEnabled(Variables variables)
{
var isContainerHookFeatureFlagSet = variables?.GetBoolean(Constants.Runner.Features.AllowRunnerContainerHooks) ?? false;
var isContainerHooksPathSet = !string.IsNullOrEmpty(Environment.GetEnvironmentVariable(Constants.Hooks.ContainerHooksPath));
return isContainerHookFeatureFlagSet && isContainerHooksPathSet;
}
}
}

View File

@@ -275,6 +275,12 @@ namespace GitHub.Runner.Worker
public void ProcessCommand(IExecutionContext context, string filePath, ContainerInfo container)
{
if (!context.Global.Variables.GetBoolean("DistributedTask.UploadStepSummary") ?? true)
{
Trace.Info("Step Summary is disabled; skipping attachment upload");
return;
}
if (String.IsNullOrEmpty(filePath) || !File.Exists(filePath))
{
Trace.Info($"Step Summary file ({filePath}) does not exist; skipping attachment upload");

View File

@@ -3,7 +3,6 @@ using System.Collections.Generic;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Worker.Container;
using Newtonsoft.Json.Linq;
namespace GitHub.Runner.Worker
{
@@ -23,6 +22,5 @@ namespace GitHub.Runner.Worker
public StepsContext StepsContext { get; set; }
public Variables Variables { get; set; }
public bool WriteDebug { get; set; }
public JObject ContainerHookState { get; set; }
}
}

View File

@@ -11,8 +11,6 @@ using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Runner.Worker.Container;
using GitHub.Runner.Worker.Container.ContainerHooks;
using GitHub.Runner.Worker.Expressions;
using Pipelines = GitHub.DistributedTask.Pipelines;
@@ -266,11 +264,7 @@ namespace GitHub.Runner.Worker.Handlers
#endif
foreach (var pair in dict)
{
// Skip global env, otherwise we merge an outdated global env
if (ExecutionContext.StepEnvironmentOverrides.Contains(pair.Key))
{
envContext[pair.Key] = pair.Value;
}
envContext[pair.Key] = pair.Value;
}
}
@@ -279,13 +273,11 @@ namespace GitHub.Runner.Worker.Handlers
if (step is IActionRunner actionStep)
{
// Evaluate and merge embedded-step env
step.ExecutionContext.StepEnvironmentOverrides.AddRange(ExecutionContext.StepEnvironmentOverrides);
var templateEvaluator = step.ExecutionContext.ToPipelineTemplateEvaluator();
var actionEnvironment = templateEvaluator.EvaluateStepEnvironment(actionStep.Action.Environment, step.ExecutionContext.ExpressionValues, step.ExecutionContext.ExpressionFunctions, Common.Util.VarUtil.EnvironmentVariableKeyComparer);
foreach (var env in actionEnvironment)
{
envContext[env.Key] = new StringContextData(env.Value ?? string.Empty);
step.ExecutionContext.StepEnvironmentOverrides.Add(env.Key);
}
}
}

View File

@@ -8,7 +8,6 @@ using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;
using GitHub.Runner.Worker.Container;
using GitHub.Runner.Worker.Container.ContainerHooks;
using Pipelines = GitHub.DistributedTask.Pipelines;
namespace GitHub.Runner.Worker.Handlers
@@ -39,8 +38,6 @@ namespace GitHub.Runner.Worker.Handlers
AddInputsToEnvironment();
var dockerManager = HostContext.GetService<IDockerCommandManager>();
var containerHookManager = HostContext.GetService<IContainerHookManager>();
string dockerFile = null;
// container image hasn't been built/pulled yet
if (Data.Image.StartsWith("docker://", StringComparison.OrdinalIgnoreCase))
@@ -50,28 +47,26 @@ namespace GitHub.Runner.Worker.Handlers
else if (Data.Image.EndsWith("Dockerfile") || Data.Image.EndsWith("dockerfile"))
{
// ensure the Dockerfile exists
dockerFile = Path.Combine(ActionDirectory, Data.Image);
var dockerFile = Path.Combine(ActionDirectory, Data.Image);
ArgUtil.File(dockerFile, nameof(Data.Image));
if (!FeatureManager.IsContainerHooksEnabled(ExecutionContext.Global.Variables))
ExecutionContext.Output($"##[group]Building docker image");
ExecutionContext.Output($"Dockerfile for action: '{dockerFile}'.");
var imageName = $"{dockerManager.DockerInstanceLabel}:{ExecutionContext.Id.ToString("N")}";
var buildExitCode = await dockerManager.DockerBuild(
ExecutionContext,
ExecutionContext.GetGitHubContext("workspace"),
dockerFile,
Directory.GetParent(dockerFile).FullName,
imageName);
ExecutionContext.Output("##[endgroup]");
if (buildExitCode != 0)
{
ExecutionContext.Output($"##[group]Building docker image");
ExecutionContext.Output($"Dockerfile for action: '{dockerFile}'.");
var imageName = $"{dockerManager.DockerInstanceLabel}:{ExecutionContext.Id.ToString("N")}";
var buildExitCode = await dockerManager.DockerBuild(
ExecutionContext,
ExecutionContext.GetGitHubContext("workspace"),
dockerFile,
Directory.GetParent(dockerFile).FullName,
imageName);
ExecutionContext.Output("##[endgroup]");
if (buildExitCode != 0)
{
throw new InvalidOperationException($"Docker build failed with exit code {buildExitCode}");
}
Data.Image = imageName;
throw new InvalidOperationException($"Docker build failed with exit code {buildExitCode}");
}
Data.Image = imageName;
}
string type = Action.Type == Pipelines.ActionSourceType.Repository ? "Dockerfile" : "DockerHub";
@@ -225,21 +220,14 @@ namespace GitHub.Runner.Worker.Handlers
container.ContainerEnvironmentVariables[variable.Key] = container.TranslateToContainerPath(variable.Value);
}
if (FeatureManager.IsContainerHooksEnabled(ExecutionContext.Global.Variables))
using (var stdoutManager = new OutputManager(ExecutionContext, ActionCommandManager, container))
using (var stderrManager = new OutputManager(ExecutionContext, ActionCommandManager, container))
{
await containerHookManager.RunContainerStepAsync(ExecutionContext, container, dockerFile);
}
else
{
using (var stdoutManager = new OutputManager(ExecutionContext, ActionCommandManager, container))
using (var stderrManager = new OutputManager(ExecutionContext, ActionCommandManager, container))
var runExitCode = await dockerManager.DockerRun(ExecutionContext, container, stdoutManager.OnDataReceived, stderrManager.OnDataReceived);
ExecutionContext.Debug($"Docker Action run completed with exit code {runExitCode}");
if (runExitCode != 0)
{
var runExitCode = await dockerManager.DockerRun(ExecutionContext, container, stdoutManager.OnDataReceived, stderrManager.OnDataReceived);
ExecutionContext.Debug($"Docker Action run completed with exit code {runExitCode}");
if (runExitCode != 0)
{
ExecutionContext.Result = TaskResult.Failed;
}
ExecutionContext.Result = TaskResult.Failed;
}
}
#endif

View File

@@ -8,9 +8,6 @@ using GitHub.DistributedTask.Pipelines.ContextData;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Worker.Container;
using GitHub.Runner.Worker.Container.ContainerHooks;
namespace GitHub.Runner.Worker.Handlers
{
@@ -105,12 +102,6 @@ namespace GitHub.Runner.Worker.Handlers
Data.NodeVersion = "node16";
}
#endif
string forcedNodeVersion = System.Environment.GetEnvironmentVariable(Constants.Variables.Agent.ForcedActionsNodeVersion);
if (forcedNodeVersion == "node16" && Data.NodeVersion != "node16")
{
Data.NodeVersion = "node16";
}
var nodeRuntimeVersion = await StepHost.DetermineNodeRuntimeVersion(ExecutionContext, Data.NodeVersion);
string file = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Externals), nodeRuntimeVersion, "bin", $"node{IOUtil.ExeExtension}");
@@ -118,7 +109,7 @@ namespace GitHub.Runner.Worker.Handlers
// 1) Wrap the script file path in double quotes.
// 2) Escape double quotes within the script file path. Double-quote is a valid
// file name character on Linux.
-            string arguments = StepHost.ResolvePathForStepHost(ExecutionContext, StringUtil.Format(@"""{0}""", target.Replace(@"""", @"\""")));
+            string arguments = StepHost.ResolvePathForStepHost(StringUtil.Format(@"""{0}""", target.Replace(@"""", @"\""")));
#if OS_WINDOWS
// It appears that node.exe outputs UTF8 when not in TTY mode.
@@ -151,16 +142,14 @@ namespace GitHub.Runner.Worker.Handlers
// Execute the process. Exit code 0 should always be returned.
// A non-zero exit code indicates infrastructural failure.
// Task failure should be communicated over STDOUT using ## commands.
-            Task<int> step = StepHost.ExecuteAsync(ExecutionContext,
-                                                   workingDirectory: StepHost.ResolvePathForStepHost(ExecutionContext, workingDirectory),
-                                                   fileName: StepHost.ResolvePathForStepHost(ExecutionContext, file),
+            Task<int> step = StepHost.ExecuteAsync(workingDirectory: StepHost.ResolvePathForStepHost(workingDirectory),
+                                                   fileName: StepHost.ResolvePathForStepHost(file),
arguments: arguments,
environment: Environment,
requireExitCodeZero: false,
outputEncoding: outputEncoding,
killProcessOnCancel: false,
inheritConsoleHandler: !ExecutionContext.Global.Variables.Retain_Default_Encoding,
standardInInput: null,
cancellationToken: ExecutionContext.CancellationToken);
// Wait for either the node exit or force finish through ##vso command

View File

@@ -8,8 +8,6 @@ using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Runner.Worker.Container;
using GitHub.Runner.Worker.Container.ContainerHooks;
using Pipelines = GitHub.DistributedTask.Pipelines;
namespace GitHub.Runner.Worker.Handlers
@@ -204,32 +202,14 @@ namespace GitHub.Runner.Worker.Handlers
}
else
{
-                // For these shells, we want to use system binaries
-                var systemShells = new string[] { "bash", "sh", "powershell", "pwsh" };
-                if (!IsActionStep && systemShells.Contains(shell))
+                var parsed = ScriptHandlerHelpers.ParseShellOptionString(shell);
+                shellCommand = parsed.shellCommand;
+                // For non-ContainerStepHost, the command must be located on the host by Which
+                commandPath = WhichUtil.Which(parsed.shellCommand, !isContainerStepHost, Trace, prependPath);
+                argFormat = $"{parsed.shellArgs}".TrimStart();
+                if (string.IsNullOrEmpty(argFormat))
                 {
-                    shellCommand = shell;
-                    commandPath = WhichUtil.Which(shell, !isContainerStepHost, Trace, prependPath);
-                    if (shell == "bash")
-                    {
-                        argFormat = ScriptHandlerHelpers.GetScriptArgumentsFormat("sh");
-                    }
-                    else
-                    {
-                        argFormat = ScriptHandlerHelpers.GetScriptArgumentsFormat(shell);
-                    }
-                }
-                else
-                {
-                    var parsed = ScriptHandlerHelpers.ParseShellOptionString(shell);
-                    shellCommand = parsed.shellCommand;
-                    // For non-ContainerStepHost, the command must be located on the host by Which
-                    commandPath = WhichUtil.Which(parsed.shellCommand, !isContainerStepHost, Trace, prependPath);
-                    argFormat = $"{parsed.shellArgs}".TrimStart();
-                    if (string.IsNullOrEmpty(argFormat))
-                    {
-                        argFormat = ScriptHandlerHelpers.GetScriptArgumentsFormat(shellCommand);
-                    }
+                    argFormat = ScriptHandlerHelpers.GetScriptArgumentsFormat(shellCommand);
}
}
@@ -249,7 +229,7 @@ namespace GitHub.Runner.Worker.Handlers
{
// We do not know the full path until we know what shell is being used, so that we can determine the file extension
scriptFilePath = Path.Combine(tempDirectory, $"{Guid.NewGuid()}{ScriptHandlerHelpers.GetScriptFileExtension(shellCommand)}");
-                resolvedScriptPath = StepHost.ResolvePathForStepHost(ExecutionContext, scriptFilePath).Replace("\"", "\\\"");
+                resolvedScriptPath = $"{StepHost.ResolvePathForStepHost(scriptFilePath).Replace("\"", "\\\"")}";
}
else
{
@@ -320,7 +300,6 @@ namespace GitHub.Runner.Worker.Handlers
ExecutionContext.Debug($"{fileName} {arguments}");
Inputs.TryGetValue("standardInInput", out var standardInInput);
using (var stdoutManager = new OutputManager(ExecutionContext, ActionCommandManager))
using (var stderrManager = new OutputManager(ExecutionContext, ActionCommandManager))
{
@@ -328,8 +307,7 @@ namespace GitHub.Runner.Worker.Handlers
StepHost.ErrorDataReceived += stderrManager.OnDataReceived;
// Execute
-                int exitCode = await StepHost.ExecuteAsync(ExecutionContext,
-                                                           workingDirectory: StepHost.ResolvePathForStepHost(ExecutionContext, workingDirectory),
+                int exitCode = await StepHost.ExecuteAsync(workingDirectory: StepHost.ResolvePathForStepHost(workingDirectory),
fileName: fileName,
arguments: arguments,
environment: Environment,
@@ -337,7 +315,6 @@ namespace GitHub.Runner.Worker.Handlers
outputEncoding: null,
killProcessOnCancel: false,
inheritConsoleHandler: !ExecutionContext.Global.Variables.Retain_Default_Encoding,
standardInInput: standardInInput,
cancellationToken: ExecutionContext.CancellationToken);
// Error

View File

@@ -3,12 +3,10 @@ using System;
using System.Collections.Generic;
using System.IO;
using GitHub.Runner.Sdk;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
namespace GitHub.Runner.Worker.Handlers
{
-    internal static class ScriptHandlerHelpers
+    internal class ScriptHandlerHelpers
{
private static readonly Dictionary<string, string> _defaultArguments = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
@@ -83,5 +81,22 @@ namespace GitHub.Runner.Worker.Handlers
throw new ArgumentException($"Failed to parse COMMAND [..ARGS] from {shellOption}");
}
}
internal static string GetDefaultShellForScript(string path, Common.Tracing trace, string prependPath)
{
var format = "{0} {1}";
switch (Path.GetExtension(path))
{
case ".sh":
// use 'sh' args but prefer bash
var pathToShell = WhichUtil.Which("bash", false, trace, prependPath) ?? WhichUtil.Which("sh", true, trace, prependPath);
return string.Format(format, pathToShell, _defaultArguments["sh"]);
case ".ps1":
var pathToPowershell = WhichUtil.Which("pwsh", false, trace, prependPath) ?? WhichUtil.Which("powershell", true, trace, prependPath);
return string.Format(format, pathToPowershell, _defaultArguments["powershell"]);
default:
throw new ArgumentException($"{path} is not a valid path to a script. Make sure it ends in '.sh' or '.ps1'.");
}
}
}
}
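
For orientation, the helper added above is what lets a job hook be a plain script: it maps the hook script's file extension to a shell and a default argument format, and JobHookProvider calls it further down in this diff. Below is a reduced, self-contained sketch of that dispatch; the Which-based fallback and argument formats are omitted, and the HookShellSketch type and example path are hypothetical, not part of the runner.

```csharp
using System;
using System.IO;

static class HookShellSketch
{
    // Mirrors the extension dispatch in GetDefaultShellForScript above; the real helper
    // additionally resolves the binary with WhichUtil.Which and appends the shell's
    // default argument format string.
    public static string PickInterpreter(string hookScriptPath) =>
        Path.GetExtension(hookScriptPath) switch
        {
            ".sh"  => "bash",  // prefer bash, fall back to sh
            ".ps1" => "pwsh",  // prefer pwsh, fall back to powershell
            _ => throw new ArgumentException(
                $"{hookScriptPath} is not a valid path to a script. Make sure it ends in '.sh' or '.ps1'."),
        };

    static void Main() => Console.WriteLine(PickInterpreter("/opt/hooks/cleanup.sh")); // bash
}
```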

View File

@@ -1,6 +1,5 @@
using System;
using System.Collections.Generic;
using GitHub.DistributedTask.Pipelines.ContextData;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
@@ -8,9 +7,7 @@ using GitHub.Runner.Worker.Container;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;
using System.Linq;
using GitHub.Runner.Worker.Container.ContainerHooks;
using System.IO;
using System.Threading.Channels;
using GitHub.DistributedTask.Pipelines.ContextData;
namespace GitHub.Runner.Worker.Handlers
{
@@ -19,12 +16,11 @@ namespace GitHub.Runner.Worker.Handlers
event EventHandler<ProcessDataReceivedEventArgs> OutputDataReceived;
event EventHandler<ProcessDataReceivedEventArgs> ErrorDataReceived;
-        string ResolvePathForStepHost(IExecutionContext executionContext, string path);
+        string ResolvePathForStepHost(string path);
        Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext, string preferredVersion);
-        Task<int> ExecuteAsync(IExecutionContext context,
-                               string workingDirectory,
+        Task<int> ExecuteAsync(string workingDirectory,
string fileName,
string arguments,
IDictionary<string, string> environment,
@@ -32,7 +28,6 @@ namespace GitHub.Runner.Worker.Handlers
Encoding outputEncoding,
bool killProcessOnCancel,
bool inheritConsoleHandler,
string standardInInput,
CancellationToken cancellationToken);
}
@@ -53,7 +48,7 @@ namespace GitHub.Runner.Worker.Handlers
public event EventHandler<ProcessDataReceivedEventArgs> OutputDataReceived;
public event EventHandler<ProcessDataReceivedEventArgs> ErrorDataReceived;
-        public string ResolvePathForStepHost(IExecutionContext executionContext, string path)
+        public string ResolvePathForStepHost(string path)
{
return path;
}
@@ -63,8 +58,7 @@ namespace GitHub.Runner.Worker.Handlers
return Task.FromResult<string>(preferredVersion);
}
-        public async Task<int> ExecuteAsync(IExecutionContext context,
-                                            string workingDirectory,
+        public async Task<int> ExecuteAsync(string workingDirectory,
string fileName,
string arguments,
IDictionary<string, string> environment,
@@ -72,17 +66,10 @@ namespace GitHub.Runner.Worker.Handlers
Encoding outputEncoding,
bool killProcessOnCancel,
bool inheritConsoleHandler,
string standardInInput,
CancellationToken cancellationToken)
{
using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
{
Channel<string> redirectStandardIn = null;
if (standardInInput != null)
{
redirectStandardIn = Channel.CreateUnbounded<string>(new UnboundedChannelOptions() { SingleReader = true, SingleWriter = true });
redirectStandardIn.Writer.TryWrite(standardInInput);
}
processInvoker.OutputDataReceived += OutputDataReceived;
processInvoker.ErrorDataReceived += ErrorDataReceived;
@@ -93,7 +80,7 @@ namespace GitHub.Runner.Worker.Handlers
requireExitCodeZero: requireExitCodeZero,
outputEncoding: outputEncoding,
killProcessOnCancel: killProcessOnCancel,
-                    redirectStandardIn: redirectStandardIn,
+                    redirectStandardIn: null,
inheritConsoleHandler: inheritConsoleHandler,
cancellationToken: cancellationToken);
}
@@ -107,15 +94,11 @@ namespace GitHub.Runner.Worker.Handlers
public event EventHandler<ProcessDataReceivedEventArgs> OutputDataReceived;
public event EventHandler<ProcessDataReceivedEventArgs> ErrorDataReceived;
-        public string ResolvePathForStepHost(IExecutionContext executionContext, string path)
+        public string ResolvePathForStepHost(string path)
        {
            // make sure container exist.
            ArgUtil.NotNull(Container, nameof(Container));
-            if (!FeatureManager.IsContainerHooksEnabled(executionContext.Global?.Variables))
-            {
-                // TODO: Remove nullcheck with executionContext.Global? by setting up ExecutionContext.Global at GitHub.Runner.Common.Tests.Worker.ExecutionContextL0.GetExpressionValues_ContainerStepHost
-                ArgUtil.NotNullOrEmpty(Container.ContainerId, nameof(Container.ContainerId));
-            }
+            ArgUtil.NotNullOrEmpty(Container.ContainerId, nameof(Container.ContainerId));
// remove double quotes around the path
path = path.Trim('\"');
@@ -137,19 +120,6 @@ namespace GitHub.Runner.Worker.Handlers
public async Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext, string preferredVersion)
{
// Optimistically use the default
string nodeExternal = preferredVersion;
if (FeatureManager.IsContainerHooksEnabled(executionContext.Global.Variables))
{
if (Container.IsAlpine)
{
nodeExternal = CheckPlatformForAlpineContainer(executionContext, preferredVersion);
}
executionContext.Debug($"Running JavaScript Action with default external tool: {nodeExternal}");
return nodeExternal;
}
// Best effort to determine a compatible node runtime
// There may be more variation in which libraries are linked than just musl/glibc,
// so determine based on known distributions instead
@@ -158,6 +128,7 @@ namespace GitHub.Runner.Worker.Handlers
var output = new List<string>();
var execExitCode = await dockerManager.DockerExec(executionContext, Container.ContainerId, string.Empty, osReleaseIdCmd, output);
string nodeExternal;
if (execExitCode == 0)
{
foreach (var line in output)
@@ -165,17 +136,26 @@ namespace GitHub.Runner.Worker.Handlers
executionContext.Debug(line);
if (line.ToLower().Contains("alpine"))
{
-                        nodeExternal = CheckPlatformForAlpineContainer(executionContext, preferredVersion);
+                        if (!Constants.Runner.PlatformArchitecture.Equals(Constants.Architecture.X64))
+                        {
+                            var os = Constants.Runner.Platform.ToString();
+                            var arch = Constants.Runner.PlatformArchitecture.ToString();
+                            var msg = $"JavaScript Actions in Alpine containers are only supported on x64 Linux runners. Detected {os} {arch}";
+                            throw new NotSupportedException(msg);
+                        }
+                        nodeExternal = $"{preferredVersion}_alpine";
                        executionContext.Debug($"Container distribution is alpine. Running JavaScript Action with external tool: {nodeExternal}");
                        return nodeExternal;
                    }
                }
            }
+            // Optimistically use the default
+            nodeExternal = preferredVersion;
executionContext.Debug($"Running JavaScript Action with default external tool: {nodeExternal}");
return nodeExternal;
}
-        public async Task<int> ExecuteAsync(IExecutionContext context,
-                                            string workingDirectory,
+        public async Task<int> ExecuteAsync(string workingDirectory,
string fileName,
string arguments,
IDictionary<string, string> environment,
@@ -183,25 +163,12 @@ namespace GitHub.Runner.Worker.Handlers
Encoding outputEncoding,
bool killProcessOnCancel,
bool inheritConsoleHandler,
string standardInInput,
CancellationToken cancellationToken)
{
// make sure container exist.
ArgUtil.NotNull(Container, nameof(Container));
var containerHookManager = HostContext.GetService<IContainerHookManager>();
if (FeatureManager.IsContainerHooksEnabled(context.Global.Variables))
{
TranslateToContainerPath(environment);
await containerHookManager.RunScriptStepAsync(context,
Container,
workingDirectory,
fileName,
arguments,
environment,
PrependPath);
return (int)(context.Result ?? 0);
}
ArgUtil.NotNullOrEmpty(Container.ContainerId, nameof(Container.ContainerId));
var dockerManager = HostContext.GetService<IDockerCommandManager>();
string dockerClientPath = dockerManager.DockerPath;
@@ -216,7 +183,7 @@ namespace GitHub.Runner.Worker.Handlers
{
// e.g. -e MY_SECRET maps the value into the exec'ed process without exposing
// the value directly in the command
-                dockerCommandArgs.Add(DockerUtil.CreateEscapedOption("-e", env.Key));
+                dockerCommandArgs.Add($"-e {env.Key}");
}
if (!string.IsNullOrEmpty(PrependPath))
{
@@ -235,7 +202,12 @@ namespace GitHub.Runner.Worker.Handlers
dockerCommandArgs.Add(arguments);
string dockerCommandArgstring = string.Join(" ", dockerCommandArgs);
-            TranslateToContainerPath(environment);
+            // make sure all env are using container path
+            foreach (var envKey in environment.Keys.ToList())
+            {
+                environment[envKey] = this.Container.TranslateToContainerPath(environment[envKey]);
+            }
using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
{
@@ -249,6 +221,7 @@ namespace GitHub.Runner.Worker.Handlers
// Let .NET choose the default.
outputEncoding = null;
#endif
return await processInvoker.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Work),
fileName: dockerClientPath,
arguments: dockerCommandArgstring,
@@ -261,28 +234,5 @@ namespace GitHub.Runner.Worker.Handlers
cancellationToken: cancellationToken);
}
}
private string CheckPlatformForAlpineContainer(IExecutionContext executionContext, string preferredVersion)
{
string nodeExternal = preferredVersion;
if (!Constants.Runner.PlatformArchitecture.Equals(Constants.Architecture.X64))
{
var os = Constants.Runner.Platform.ToString();
var arch = Constants.Runner.PlatformArchitecture.ToString();
var msg = $"JavaScript Actions in Alpine containers are only supported on x64 Linux runners. Detected {os} {arch}";
throw new NotSupportedException(msg);
}
nodeExternal = $"{preferredVersion}_alpine";
executionContext.Debug($"Container distribution is alpine. Running JavaScript Action with external tool: {nodeExternal}");
return nodeExternal;
}
private void TranslateToContainerPath(IDictionary<string, string> environment)
{
foreach (var envKey in environment.Keys.ToList())
{
environment[envKey] = this.Container.TranslateToContainerPath(environment[envKey]);
}
}
}
}
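
One detail worth calling out from the ExecuteAsync hunk above: environment variable keys are forwarded to the docker CLI as bare `-e NAME` flags, so the values are resolved from the worker process's own environment rather than being embedded in the command line, exactly as the inline comment in the diff says. The following self-contained sketch illustrates that pattern; the container id, script path, and variable name are made-up example values, not taken from this change.

```csharp
using System;
using System.Collections.Generic;

class DockerExecArgsSketch
{
    static void Main()
    {
        var environment = new Dictionary<string, string> { ["MY_SECRET"] = "s3cr3t" };
        var args = new List<string> { "exec" };
        foreach (var env in environment)
        {
            // Bare "-e NAME": the docker CLI reads the value from this process's
            // environment, so the secret never appears in `ps` output or the step log.
            args.Add($"-e {env.Key}");
            Environment.SetEnvironmentVariable(env.Key, env.Value);
        }
        args.Add("0123456789ab");              // container id (illustrative)
        args.Add("/bin/bash -e /tmp/step.sh"); // interpreter and script (illustrative)
        Console.WriteLine("docker " + string.Join(" ", args));
        // -> docker exec -e MY_SECRET 0123456789ab /bin/bash -e /tmp/step.sh
    }
}
```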

View File

@@ -9,7 +9,6 @@ using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.Expressions2;
using GitHub.DistributedTask.ObjectTemplating.Tokens;
using GitHub.DistributedTask.Pipelines;
using GitHub.DistributedTask.Pipelines.ContextData;
using GitHub.DistributedTask.Pipelines.ObjectTemplating;
using GitHub.DistributedTask.WebApi;
@@ -207,7 +206,6 @@ namespace GitHub.Runner.Worker
// Evaluate the job container
context.Debug("Evaluating job container");
var container = templateEvaluator.EvaluateJobContainer(message.JobContainer, jobContext.ExpressionValues, jobContext.ExpressionFunctions);
ValidateJobContainer(container);
if (container != null)
{
jobContext.Global.Container = new Container.ContainerInfo(HostContext, container);
@@ -316,29 +314,6 @@ namespace GitHub.Runner.Worker
}
}
if (message.Variables.TryGetValue("system.workflowFileFullPath", out VariableValue workflowFileFullPath))
{
context.Output($"Uses: {workflowFileFullPath.Value}");
if (message.ContextData.TryGetValue("inputs", out var pipelineContextData))
{
var inputs = pipelineContextData.AssertDictionary("inputs");
if (inputs.Any())
{
context.Output($"##[group] Inputs");
foreach (var input in inputs)
{
context.Output($" {input.Key}: {input.Value}");
}
context.Output("##[endgroup]");
}
}
if (!string.IsNullOrWhiteSpace(message.JobDisplayName))
{
context.Output($"Complete job name: {message.JobDisplayName}");
}
}
var intraActionStates = new Dictionary<Guid, Dictionary<string, string>>();
foreach (var preStep in prepareResult.PreStepTracker)
{
@@ -697,13 +672,5 @@ namespace GitHub.Runner.Worker
Trace.Info($"Total accessible running process: {snapshot.Count}.");
return snapshot;
}
private static void ValidateJobContainer(JobContainer container)
{
if (StringUtil.ConvertToBoolean(Environment.GetEnvironmentVariable(Constants.Variables.Actions.RequireJobContainer)) && container == null)
{
throw new ArgumentException("Jobs without a job container are forbidden on this runner, please add a 'container:' to your job or contact your self-hosted runner administrator.");
}
}
}
}

View File

@@ -18,8 +18,8 @@ namespace GitHub.Runner.Worker
public class JobHookData
{
-        public string Path { get; private set; }
-        public ActionRunStage Stage { get; private set; }
+        public string Path {get; private set;}
+        public ActionRunStage Stage {get; private set;}
public JobHookData(ActionRunStage stage, string path)
{
@@ -60,7 +60,7 @@ namespace GitHub.Runner.Worker
Dictionary<string, string> inputs = new()
{
["path"] = hookData.Path,
["shell"] = HostContext.GetDefaultShellForScript(hookData.Path, prependPath)
["shell"] = ScriptHandlerHelpers.GetDefaultShellForScript(hookData.Path, Trace, prependPath)
};
// Create the handler

View File

@@ -112,7 +112,6 @@ namespace GitHub.Runner.Worker
foreach (var env in actionEnvironment)
{
envContext[env.Key] = new StringContextData(env.Value ?? string.Empty);
step.ExecutionContext.StepEnvironmentOverrides.Add(env.Key);
}
}
catch (Exception ex)

View File

@@ -351,18 +351,6 @@ namespace GitHub.Services.Common.Diagnostics
}
}
[NonEvent]
public void AuthenticationFailedOnFirstRequest(
VssTraceActivity activity,
HttpResponseMessage response)
{
if (IsEnabled())
{
SetActivityId(activity);
WriteMessageEvent((Int32)response.StatusCode, response.Headers.ToString(), this.AuthenticationFailedOnFirstRequest);
}
}
[NonEvent]
public void IssuedTokenProviderCreated(
VssTraceActivity activity,
@@ -463,7 +451,7 @@ namespace GitHub.Services.Common.Diagnostics
[NonEvent]
public void IssuedTokenInvalidated(
VssTraceActivity activity,
IssuedTokenProvider provider,
IssuedTokenProvider provider,
IssuedToken token)
{
if (IsEnabled())
@@ -825,7 +813,7 @@ namespace GitHub.Services.Common.Diagnostics
[Event(31, Keywords = Keywords.Authentication, Level = EventLevel.Warning, Task = Tasks.Authentication, Opcode = EventOpcode.Info, Message = "Retrieving an AAD auth token took a long time ({0} seconds)")]
public void AuthorizationDelayed(string timespan)
{
-            if (IsEnabled(EventLevel.Warning, Keywords.Authentication))
+            if(IsEnabled(EventLevel.Warning, Keywords.Authentication))
{
WriteEvent(31, timespan);
}
@@ -840,17 +828,6 @@ namespace GitHub.Services.Common.Diagnostics
}
}
[Event(33, Keywords = Keywords.Authentication, Level = EventLevel.Verbose, Task = Tasks.HttpRequest, Message = "Authentication failed on first request with status code {0}.%n{1}")]
private void AuthenticationFailedOnFirstRequest(
Int32 statusCode,
String headers)
{
if (IsEnabled(EventLevel.Verbose, Keywords.Authentication))
{
WriteEvent(33, statusCode, headers);
}
}
/// <summary>
/// Sets the activity ID of the current thread.
/// </summary>

View File

@@ -251,14 +251,7 @@ namespace GitHub.Services.Common
// Invalidate the token and ensure that we have the correct token provider for the challenge
// which we just received
if (retries < m_maxAuthRetries)
{
VssHttpEventSource.Log.AuthenticationFailed(traceActivity, response);
}
else
{
VssHttpEventSource.Log.AuthenticationFailedOnFirstRequest(traceActivity, response);
}
VssHttpEventSource.Log.AuthenticationFailed(traceActivity, response);
if (provider != null)
{

View File

@@ -457,7 +457,6 @@ namespace GitHub.DistributedTask.WebApi
int poolId,
Guid sessionId,
long? lastMessageId = null,
TaskAgentStatus? status = null,
object userState = null,
CancellationToken cancellationToken = default)
{
@@ -471,10 +470,6 @@ namespace GitHub.DistributedTask.WebApi
{
queryParams.Add("lastMessageId", lastMessageId.Value.ToString(CultureInfo.InvariantCulture));
}
if (status != null)
{
queryParams.Add("status", status.Value.ToString());
}
return SendAsync<TaskAgentMessage>(
httpMethod,

View File

@@ -1,39 +0,0 @@
using System;
using System.Runtime.Serialization;
using GitHub.Services.WebApi;
using Newtonsoft.Json;
namespace GitHub.DistributedTask.Pipelines
{
[DataContract]
public sealed class HostedRunnerShutdownMessage
{
public static readonly String MessageType = "RunnerShutdown";
[JsonConstructor]
internal HostedRunnerShutdownMessage()
{
}
public HostedRunnerShutdownMessage(String reason)
{
this.Reason = reason;
}
[DataMember]
public String Reason
{
get;
private set;
}
public WebApi.TaskAgentMessage GetAgentMessage()
{
return new WebApi.TaskAgentMessage
{
Body = JsonUtility.ToString(this),
MessageType = HostedRunnerShutdownMessage.MessageType,
};
}
}
}
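
The deleted type above is a thin wrapper that serializes a shutdown reason into a generic agent message. A short usage sketch follows, assuming the surrounding GitHub.DistributedTask and GitHub.Services.WebApi assemblies are referenced; the reason text is illustrative.

```csharp
using GitHub.DistributedTask.Pipelines;

// Illustrative only: wrap a reason string for delivery over the agent message channel.
var shutdown = new HostedRunnerShutdownMessage("Hosted runner is being reclaimed.");
var agentMessage = shutdown.GetAgentMessage();
// agentMessage.MessageType == "RunnerShutdown"
// agentMessage.Body is the JSON-serialized HostedRunnerShutdownMessage, including Reason.
```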

View File

@@ -30,9 +30,6 @@ namespace GitHub.DistributedTask.WebApi
[DataMember(EmitDefaultValue = false)]
public Guid StepId { get; set; }
[DataMember(EmitDefaultValue = false)]
public string StepContextName { get; set; }
[DataMember(EmitDefaultValue = false)]
public bool? HasRunsStep { get; set; }
@@ -59,14 +56,5 @@ namespace GitHub.DistributedTask.WebApi
[DataMember(EmitDefaultValue = false)]
public int? ExecutionTimeInSeconds { get; set; }
[DataMember(EmitDefaultValue = false)]
public DateTime? StartTime { get; set; }
[DataMember(EmitDefaultValue = false)]
public DateTime? FinishTime { get; set; }
[DataMember(EmitDefaultValue = false)]
public string ContainerHookData { get; set; }
}
}

View File

@@ -5,6 +5,5 @@ namespace GitHub.DistributedTask.WebApi
public static class JobRequestMessageTypes
{
public const String PipelineAgentJobRequest = "PipelineAgentJobRequest";
public const String RunnerJobRequest = "RunnerJobRequest";
}
}

View File

@@ -141,24 +141,6 @@ namespace GitHub.DistributedTask.WebApi
return ReplaceAgentAsync(poolId, agent.Id, agent, userState, cancellationToken);
}
public Task<Pipelines.AgentJobRequestMessage> GetJobMessageAsync(
string messageId,
object userState = null,
CancellationToken cancellationToken = default)
{
HttpMethod httpMethod = new HttpMethod("GET");
Guid locationId = new Guid("25adab70-1379-4186-be8e-b643061ebe3a");
object routeValues = new { messageId = messageId };
return SendAsync<Pipelines.AgentJobRequestMessage>(
httpMethod,
locationId,
routeValues: routeValues,
version: new ApiResourceVersion(6.0, 1),
userState: userState,
cancellationToken: cancellationToken);
}
protected Task<T> SendAsync<T>(
HttpMethod method,
Guid locationId,

View File

@@ -10,8 +10,5 @@ namespace GitHub.DistributedTask.WebApi
[EnumMember]
Online = 2,
[EnumMember]
Busy = 3,
}
}

View File

@@ -2,7 +2,6 @@
using System.Linq;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using Newtonsoft.Json.Serialization;
namespace GitHub.Actions.Pipelines.WebApi
{
@@ -10,7 +9,7 @@ namespace GitHub.Actions.Pipelines.WebApi
{
public UnknownEnumJsonConverter()
{
-            this.NamingStrategy = new CamelCaseNamingStrategy();
+            this.CamelCaseText = true;
}
public override bool CanConvert(Type objectType)

View File

@@ -14,7 +14,7 @@
<ItemGroup>
<PackageReference Include="Microsoft.Win32.Registry" Version="4.4.0" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
<PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
<PackageReference Include="Microsoft.AspNet.WebApi.Client" Version="5.2.4" />
<PackageReference Include="System.IdentityModel.Tokens.Jwt" Version="5.2.1" />
<PackageReference Include="System.Security.Cryptography.Cng" Version="4.4.0" />

View File

@@ -84,7 +84,7 @@ namespace GitHub.Services.WebApi
if (!enumsAsNumbers)
{
// Serialize enums as camelCased string values
-                this.SerializerSettings.Converters.Add(new StringEnumConverter { NamingStrategy = new CamelCaseNamingStrategy() });
+                this.SerializerSettings.Converters.Add(new StringEnumConverter { CamelCaseText = true });
}
if (useMsDateFormat)
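
The two converter configurations in the hunks above are the old and new Json.NET spellings of the same camelCasing behavior, which lines up with the Newtonsoft.Json pin moving between 13.0.1 and 11.0.2 a few hunks earlier: `CamelCaseText` is the pre-12.x property, `NamingStrategy` its replacement. A minimal sketch of the equivalence; the sample enum and class are hypothetical.

```csharp
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using Newtonsoft.Json.Serialization;

enum SampleStatus { OnlineRunner }   // sample enum for illustration

class EnumSerializationSketch
{
    static void Main()
    {
        // Newer Json.NET (12.x and later): camelCasing via a naming strategy.
        var newer = new StringEnumConverter { NamingStrategy = new CamelCaseNamingStrategy() };
        // Older Json.NET (11.x, as pinned in this diff): the CamelCaseText flag, later marked obsolete.
        var older = new StringEnumConverter { CamelCaseText = true };

        Console.WriteLine(JsonConvert.SerializeObject(SampleStatus.OnlineRunner, newer)); // "onlineRunner"
        Console.WriteLine(JsonConvert.SerializeObject(SampleStatus.OnlineRunner, older)); // "onlineRunner"
    }
}
```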

View File

@@ -144,54 +144,5 @@ namespace GitHub.Runner.Common.Tests.Worker.Container
var actual = DockerUtil.ParseRegistryHostnameFromImageName(input);
Assert.Equal(expected, actual);
}
[Theory]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]
[InlineData("", "")]
[InlineData("foo", "foo")]
[InlineData("foo \\ bar", "foo \\ bar")]
[InlineData("foo \\", "foo \\\\")]
[InlineData("foo \\\\", "foo \\\\\\\\")]
[InlineData("foo \\\" bar", "foo \\\\\\\" bar")]
[InlineData("foo \\\\\" bar", "foo \\\\\\\\\\\" bar")]
public void CreateEscapedOption_keyOnly(string input, string escaped)
{
var flag = "--example";
var actual = DockerUtil.CreateEscapedOption(flag, input);
string expected;
if (String.IsNullOrEmpty(input))
{
expected = "";
}
else
{
expected = $"{flag} \"{escaped}\"";
}
Assert.Equal(expected, actual);
}
[Theory]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]
[InlineData("foo", "bar", "foo=bar")]
[InlineData("foo\\", "bar", "foo\\=bar")]
[InlineData("foo\\", "bar\\", "foo\\=bar\\\\")]
[InlineData("foo \\","bar \\", "foo \\=bar \\\\")]
public void CreateEscapedOption_keyValue(string keyInput, string valueInput, string escapedString)
{
var flag = "--example";
var actual = DockerUtil.CreateEscapedOption(flag, keyInput, valueInput);
string expected;
if (String.IsNullOrEmpty(keyInput))
{
expected = "";
}
else
{
expected = $"{flag} \"{escapedString}\"";
}
Assert.Equal(expected, actual);
}
}
}
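
The removed theories above pin down the quoting rules for DockerUtil.CreateEscapedOption: an empty key yields an empty option, double quotes are backslash-escaped, and backslashes are doubled only when they sit in front of a quote or at the end of the value. Below is a self-contained sketch that satisfies those expectations; it is a reconstruction from the test data, and the real DockerUtil implementation may differ in details.

```csharp
using System;
using System.Text;

static class EscapedOptionSketch
{
    // Escape a value for placement inside double quotes on a command line:
    // backslashes are doubled when they precede a '"' or end the string, and quotes become \".
    static string Escape(string value)
    {
        var sb = new StringBuilder();
        int pendingBackslashes = 0;
        foreach (var c in value)
        {
            if (c == '\\') { pendingBackslashes++; continue; }
            if (c == '"')
            {
                sb.Append('\\', pendingBackslashes * 2);
                sb.Append("\\\"");
            }
            else
            {
                sb.Append('\\', pendingBackslashes);
                sb.Append(c);
            }
            pendingBackslashes = 0;
        }
        sb.Append('\\', pendingBackslashes * 2);
        return sb.ToString();
    }

    public static string CreateEscapedOption(string flag, string key)
        => string.IsNullOrEmpty(key) ? string.Empty : $"{flag} \"{Escape(key)}\"";

    public static string CreateEscapedOption(string flag, string key, string value)
        => string.IsNullOrEmpty(key) ? string.Empty : $"{flag} \"{Escape($"{key}={value}")}\"";

    static void Main()
    {
        Console.WriteLine(CreateEscapedOption("--example", "foo \\\" bar")); // --example "foo \\\" bar"
        Console.WriteLine(CreateEscapedOption("--example", "foo", "bar"));   // --example "foo=bar"
    }
}
```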

View File

@@ -192,8 +192,8 @@ namespace GitHub.Runner.Common.Tests.Listener
_runnerServer
.Setup(x => x.GetAgentMessageAsync(
-                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), TaskAgentStatus.Online, It.IsAny<CancellationToken>()))
-                .Returns(async (Int32 poolId, Guid sessionId, Int64? lastMessageId, TaskAgentStatus status, CancellationToken cancellationToken) =>
+                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), tokenSource.Token))
+                .Returns(async (Int32 poolId, Guid sessionId, Int64? lastMessageId, CancellationToken cancellationToken) =>
{
await Task.Yield();
return messages.Dequeue();
@@ -208,7 +208,7 @@ namespace GitHub.Runner.Common.Tests.Listener
//Assert
_runnerServer
.Verify(x => x.GetAgentMessageAsync(
-                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), TaskAgentStatus.Online, It.IsAny<CancellationToken>()), Times.Exactly(arMessages.Length));
+                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), tokenSource.Token), Times.Exactly(arMessages.Length));
}
}
@@ -293,7 +293,7 @@ namespace GitHub.Runner.Common.Tests.Listener
_runnerServer
.Setup(x => x.GetAgentMessageAsync(
-                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), TaskAgentStatus.Online, It.IsAny<CancellationToken>()))
+                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), tokenSource.Token))
.Throws(new TaskAgentAccessTokenExpiredException("test"));
try
{
@@ -311,7 +311,7 @@ namespace GitHub.Runner.Common.Tests.Listener
//Assert
_runnerServer
.Verify(x => x.GetAgentMessageAsync(
-                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), TaskAgentStatus.Online, It.IsAny<CancellationToken>()), Times.Once);
+                    _settings.PoolId, expectedSession.SessionId, It.IsAny<long?>(), tokenSource.Token), Times.Once);
_runnerServer
.Verify(x => x.DeleteAgentSessionAsync(

View File

@@ -2,7 +2,6 @@
using GitHub.Runner.Listener.Check;
using GitHub.Runner.Listener.Configuration;
using GitHub.Runner.Worker;
using GitHub.Runner.Worker.Container.ContainerHooks;
using GitHub.Runner.Worker.Handlers;
using System;
using System.Collections.Generic;
@@ -69,8 +68,7 @@ namespace GitHub.Runner.Common.Tests
typeof(IStep),
typeof(IStepHost),
typeof(IDiagnosticLogManager),
-                typeof(IEnvironmentContextData),
-                typeof(IHookArgs),
+                typeof(IEnvironmentContextData)
};
Validate(
assembly: typeof(IStepsRunner).GetTypeInfo().Assembly,

View File

@@ -698,31 +698,6 @@ namespace GitHub.Runner.Common.Tests.Worker
}
}
[Fact]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]
public void Load_CompositeActionNoUsing()
{
try
{
//Arrange
Setup();
var actionManifest = new ActionManifestManager();
actionManifest.Initialize(_hc);
var action_path = Path.Combine(TestUtil.GetTestDataPath(), "composite_action_without_using_token.yml");
//Assert
var err = Assert.Throws<ArgumentException>(() => actionManifest.Load(_ec.Object, action_path));
Assert.Contains($"Fail to load {action_path}", err.Message);
_ec.Verify(x => x.AddIssue(It.Is<Issue>(s => s.Message.Contains("Missing 'using' value. 'using' requires 'composite', 'docker', 'node12' or 'node16'.")), It.IsAny<string>()), Times.Once);
}
finally
{
Teardown();
}
}
[Fact]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]

View File

@@ -25,6 +25,23 @@ namespace GitHub.Runner.Common.Tests.Worker
private CreateStepSummaryCommand _createStepCommand;
private ITraceWriter _trace;
[Fact]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]
public void CreateStepSummaryCommand_FeatureDisabled()
{
using (var hostContext = Setup(featureFlagState: "false"))
{
var stepSummaryFile = Path.Combine(_rootDirectory, "feature-off");
_createStepCommand.ProcessCommand(_executionContext.Object, stepSummaryFile, null);
_jobExecutionContext.Complete();
_jobServerQueue.Verify(x => x.QueueFileUpload(It.IsAny<Guid>(), It.IsAny<Guid>(), It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>(), It.IsAny<bool>()), Times.Never());
Assert.Equal(0, _issues.Count);
}
}
[Fact]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]
@@ -182,7 +199,7 @@ namespace GitHub.Runner.Common.Tests.Worker
File.WriteAllText(path, contentStr, encoding);
}
private TestHostContext Setup([CallerMemberName] string name = "")
private TestHostContext Setup([CallerMemberName] string name = "", string featureFlagState = "true")
{
var hostContext = new TestHostContext(this, name);
@@ -224,6 +241,7 @@ namespace GitHub.Runner.Common.Tests.Worker
_variables = new Variables(hostContext, new Dictionary<string, VariableValue>
{
{ "MySecretName", new VariableValue("My secret value", true) },
{ "DistributedTask.UploadStepSummary", featureFlagState },
});
// Directory for test data

View File

@@ -100,6 +100,7 @@ namespace GitHub.Runner.Common.Tests.Worker
{
using (TestHostContext hc = CreateTestContext())
{
// Arrange: Create a job request message.
TaskOrchestrationPlanReference plan = new TaskOrchestrationPlanReference();
TimelineReference timeline = new TimelineReference();

View File

@@ -211,24 +211,6 @@ namespace GitHub.Runner.Common.Tests.Worker
}
}
[Fact]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]
public async Task JobExtensionBuildFailsWithoutContainerIfRequired()
{
Environment.SetEnvironmentVariable(Constants.Variables.Actions.RequireJobContainer, "true");
using (TestHostContext hc = CreateTestContext())
{
var jobExtension = new JobExtension();
jobExtension.Initialize(hc);
_actionManager.Setup(x => x.PrepareActionsAsync(It.IsAny<IExecutionContext>(), It.IsAny<IEnumerable<Pipelines.JobStep>>(), It.IsAny<Guid>()))
.Returns(Task.FromResult(new PrepareResult(new List<JobExtensionRunner>() { new JobExtensionRunner(null, "", "prepare1", null), new JobExtensionRunner(null, "", "prepare2", null) }, new Dictionary<Guid, IActionRunner>())));
await Assert.ThrowsAsync<ArgumentException>(() => jobExtension.InitializeJob(_jobEc, _message));
}
}
[Fact]
[Trait("Level", "L0")]
[Trait("Category", "Worker")]

View File

@@ -10,7 +10,6 @@ using GitHub.Runner.Worker.Container;
using GitHub.DistributedTask.Pipelines.ContextData;
using System.Linq;
using GitHub.DistributedTask.Pipelines;
using GitHub.DistributedTask.WebApi;
namespace GitHub.Runner.Common.Tests.Worker
{
@@ -25,7 +24,6 @@ namespace GitHub.Runner.Common.Tests.Worker
_ec = new Mock<IExecutionContext>();
_ec.SetupAllProperties();
_ec.Setup(x => x.Global).Returns(new GlobalContext { WriteDebug = true });
_ec.Object.Global.Variables = new Variables(hc, new Dictionary<string, VariableValue>());
var trace = hc.GetTrace();
_ec.Setup(x => x.Write(It.IsAny<string>(), It.IsAny<string>())).Callback((string tag, string message) => { trace.Info($"[{tag}]{message}"); });

View File

@@ -622,7 +622,6 @@ namespace GitHub.Runner.Common.Tests.Worker
_stepContext.SetOutcome("", stepContext.Object.ContextName, (stepContext.Object.Outcome ?? stepContext.Object.Result ?? TaskResult.Succeeded).ToActionResult());
_stepContext.SetConclusion("", stepContext.Object.ContextName, (stepContext.Object.Result ?? TaskResult.Succeeded).ToActionResult());
});
stepContext.Setup(x => x.StepEnvironmentOverrides).Returns(new List<string>());
stepContext.Setup(x => x.UpdateGlobalStepsContext()).Callback(() =>
{

View File

@@ -1,9 +0,0 @@
name: "composite action"
description: "test composite action without value for the 'using' token in 'runs'"
runs:
steps:
- id: mystep
shell: bash
run: |
echo "hello world"

View File

@@ -1 +1 @@
-2.296.2
+2.292.0