Mirror of https://github.com/actions/runner.git (synced 2025-12-10 20:36:49 +00:00)

Compare commits: users/tihu (33 commits)
Commits (SHA1): c645de9aee, 0e4f76ec4e, af18df4621, 5215d95637, e750eb7e38, ca1f621077, 80d0b58f3c, 11ff2be7e9, 3ce763338d, a45c0278e6, 658d36c1bc, ca3b803237, 4fa691f73e, dfcfae49e5, 1235dc1cea, cc0d0bed90, 0fac863568, 42e7359f5c, 5639175ecb, 7128998d77, f37e9f80a6, 0fa08423d2, 029106a1dc, 493a2a0bf7, 43f983486e, f6053b616c, 4f4608b710, 28686c40d2, ce1679bb6f, 0a7611b0b5, b3fee33a92, d83ef5549e, fe6719d120
8 .github/workflows/build.yml (vendored)
@@ -18,7 +18,7 @@ jobs:
  build:
    strategy:
      matrix:
        runtime: [ linux-x64, linux-musl-x64, linux-arm64, linux-arm, win-x64, osx-x64 ]
        runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, osx-x64 ]
        include:
        - runtime: linux-x64
          os: ubuntu-latest
@@ -32,10 +32,6 @@ jobs:
          os: ubuntu-latest
          devScript: ./dev.sh

        - runtime: linux-musl-x64
          os: ubuntu-latest
          devScript: ./dev.sh

        - runtime: osx-x64
          os: macOS-latest
          devScript: ./dev.sh
@@ -59,7 +55,7 @@ jobs:
        run: |
          ${{ matrix.devScript }} test
        working-directory: src
        if: matrix.runtime != 'linux-arm64' && matrix.runtime != 'linux-arm' && matrix.runtime != 'linux-musl-x64'
        if: matrix.runtime != 'linux-arm64' && matrix.runtime != 'linux-arm'

      # Create runner package tar.gz/zip
      - name: Package Release

@@ -1,71 +0,0 @@
# ADR 1438: Support Conditionals In Composite Actions

**Date**: 2021-10-13

**Status**: Accepted

## Context

We recently shipped composite actions, which allow you to reuse individual steps inside an action.
However, one of the [most requested features](https://github.com/actions/runner/issues/834) has been a way to support the `if` keyword.

### Goals
- We want to stay consistent with current behavior.
- We want to support conditionals via the `if` keyword.
- Our built-in functions like `success` should be expressible without calling them; for example, today you can write `job.status == success` rather than `success()`.

### How does composite currently work?

Currently, we have limited conditional support in composite actions for `pre` and `post` steps.
These are based on the job status and support keywords like `always()`, `failure()`, `success()` and `cancelled()`.
However, generic (main) steps do **not** support conditionals.

By default, in a regular workflow, a step runs on the `success()` condition, which looks at the **job status** and runs only if the job is successful.

By default, in a composite action, main steps run until a single step in that composite action fails, at which point the composite action is halted early. It does **not** care about the job status.
Pre and post steps in composite actions use the job status to determine whether they should run.

### How do we go forward?

If we think about what composite actions currently do when invoking main steps, they are effectively checking whether the current composite action is successful.
Let's formalize that into a "real" concept:

- We will add an `action_status` field to the github context to mimic the [job context's status](https://docs.github.com/en/actions/learn-github-actions/contexts#job-context).
  - We have an existing precedent for this: `action_path`, which is only set on the github context for composite actions.
- In a composite action, during a main step, the `success()` function will check `action_status == success` rather than `job_status == success`. `failure()` will work the same way.
- Pre and post steps in composite actions will not change; they will continue to check the job status.
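To make the proposed semantics concrete, here is a minimal, hypothetical TypeScript sketch (not the runner's actual implementation; the type names are illustrative) of how `success()` could be resolved against `action_status` inside a composite main step, falling back to the job status everywhere else:

```typescript
// Hypothetical sketch only; shapes and names are illustrative, not the runner's real types.
interface ExpressionContext {
  jobStatus: 'success' | 'failure' | 'cancelled'
  actionStatus?: 'success' | 'failure' // set only while running a composite action's main steps
}

// Evaluate the built-in success() function under the proposed rules.
function evaluateSuccess(ctx: ExpressionContext): boolean {
  // Inside a composite main step, prefer the composite action's own status;
  // for regular steps and composite pre/post steps, use the job status.
  const status = ctx.actionStatus ?? ctx.jobStatus
  return status === 'success'
}
```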
### Nested Scenario
For nested composite actions, we will follow the existing behavior: you only care about your current composite action, not any parents.
For example, let's imagine a scenario with a simple nested composite action:

```
- Job
  - Regular Step
  - Composite Action
    - runs: exit 1
    - if: always()
      uses: A child composite action
      - if: success()
        runs: echo "this should print"
      - runs: echo "this should also print"
    - if: success()
      runs: echo "this will not print as the current composite action has failed already"
```

The child composite action's steps should run in this example: the child composite action has not yet failed, so it runs all of its steps until one fails. This is consistent with how a composite action currently works in production when the main job fails but the composite action is invoked with `if: always()` or `if: failure()`.

### Other options explored
We could add a `current_step_status` to the job context rather than a `__status` field on the steps context, but this comes with two major downsides:
- We would need to support the field for every type of step, because it is non-trivial to remove a field from the job context once it has been added (it is read-only).
  - For all actions besides composite it would only ever be `success`.
- It is odd to have a `current_step` value on the job context.

We also explored a `__status` field on the steps context:
- The `__` prefix would be required to prevent a collision with a step whose id is `status`.
- This felt wrong because the naming was not smooth and did not fit current conventions.

### Consequences
- The github context has a new field for the status of the current composite action.
- We support conditionals in composite actions.
- We keep the existing behavior for all users, but allow them to expand on it.
74 job.yml (new file)
@@ -0,0 +1,74 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-admin
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/attach", "pods/exec"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-pod-admin
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default

---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: default
  name: actions-runners
spec:
  template:
    spec:
      # hostNetwork: true
      volumes:
        - name: runner-working
          emptyDir: {}
      containers:
        - name: k8srunner
          image: huangtingluo/kube-runner:v0
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /actions-runner/_work
              name: runner-working
          env:
            - name: GITHUB_PAT
              value: ghp_
            - name: RUNNER_CONFIG_URL
              value: https://github.com/bbq-beets/ting-test
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: K8S_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: K8S_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: K8S_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: K8S_POD_SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
      restartPolicy: Never
  backoffLimit: 1
  completions: 1
  parallelism: 1
@@ -1,20 +1,11 @@
## Features

- Expose GITHUB_REF_* as environment variable (#1314)
- Add arch to runner context (#1372)
- Support Conditional Steps in Composite Actions (#1438)
- Log current runner version in terminal (#1441)

## Bugs

- Makes the user keychains available to the service (#847)
- Use Actions Service health and api.github.com endpoints after connection failure on Actions Server and Hosted (#1385)
- Fix an issue where nested local composite actions did not correctly register post steps (#1433)
- Fixed an issue where ephemeral runners did not restart after upgrading (#1396)

## Misc

- Cleanup Older versions on MacOS now that we recreate node versions as needed (#1410)

## Windows x64
We recommend configuring the runner in a root folder of the Windows drive (e.g. "C:\actions-runner"). This will help avoid issues related to service identity folder permissions and long file path restrictions on Windows.

@@ -29,7 +29,7 @@
    <DefineConstants>$(DefineConstants);X64</DefineConstants>
  </PropertyGroup>

  <PropertyGroup Condition="'$(BUILD_OS)' == 'Linux' AND ('$(PackageRuntime)' == 'linux-x64' OR '$(PackageRuntime)' == 'linux-musl-x64' OR '$(PackageRuntime)' == '')">
  <PropertyGroup Condition="'$(BUILD_OS)' == 'Linux' AND ('$(PackageRuntime)' == 'linux-x64' OR '$(PackageRuntime)' == '')">
    <DefineConstants>$(DefineConstants);X64</DefineConstants>
  </PropertyGroup>
  <PropertyGroup Condition="'$(BUILD_OS)' == 'Linux' AND '$(PackageRuntime)' == 'linux-arm'">

78 src/Dockerfile (new file)
@@ -0,0 +1,78 @@
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS Build

# ENV RUNNER_CONFIG_URL=""
# ENV GITHUB_PAT=""
# ENV RUNNER_NAME=""
# ENV RUNNER_GROUP=""
# ENV RUNNER_LABELS=""
# ENV GITHUB_RUNNER_SCOPE=""
# ENV GITHUB_SERVER_URL=""
# ENV GITHUB_API_URL=""
# ENV K8S_HOST_IP=""

RUN apt-get update --fix-missing \
    && apt-get install -y --no-install-recommends \
    curl \
    # jq \
    # git \
    apt-utils \
    apt-transport-https \
    unzip \
    net-tools \
    gnupg2 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install kubectl
# RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
#     echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && \
#     apt-get update && apt-get -y install --no-install-recommends kubectl

# Install docker
# RUN curl -fsSL https://get.docker.com -o get-docker.sh
# RUN sh get-docker.sh

# Allow runner to run as root
# ENV RUNNER_ALLOW_RUNASROOT=1

# Directory for runner to operate in
RUN mkdir /actions-runner
RUN mkdir /actions-runner/src
WORKDIR /actions-runner/src

COPY ./ /actions-runner/src

RUN /actions-runner/src/dev.sh l

FROM mcr.microsoft.com/dotnet/core/runtime-deps:3.1

ENV RUNNER_CONFIG_URL=""
ENV GITHUB_PAT=""

RUN apt-get update --fix-missing \
    && apt-get install -y --no-install-recommends \
    curl \
    # jq \
    # git \
    # apt-utils \
    # apt-transport-https \
    # unzip \
    # net-tools \
    gnupg2 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install kubectl
RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && \
    apt-get update && apt-get -y install --no-install-recommends kubectl

# Allow runner to run as root
ENV RUNNER_ALLOW_RUNASROOT=1

# Directory for runner to operate in
RUN mkdir /actions-runner
WORKDIR /actions-runner
COPY --from=Build /actions-runner/_layout /actions-runner
ENTRYPOINT ["./entrypoint.sh"]
@@ -0,0 +1,3 @@
dist/
lib/
node_modules/
@@ -0,0 +1,59 @@
{
  "plugins": ["jest", "@typescript-eslint"],
  "extends": ["plugin:github/es6"],
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 9,
    "sourceType": "module",
    "project": "./tsconfig.json"
  },
  "rules": {
    "eslint-comments/no-use": "off",
    "import/no-namespace": "off",
    "no-console": "off",
    "no-unused-vars": "off",
    "@typescript-eslint/no-unused-vars": "error",
    "@typescript-eslint/explicit-member-accessibility": ["error", {"accessibility": "no-public"}],
    "@typescript-eslint/no-require-imports": "error",
    "@typescript-eslint/array-type": "error",
    "@typescript-eslint/await-thenable": "error",
    "@typescript-eslint/ban-ts-ignore": "error",
    "camelcase": "off",
    "@typescript-eslint/camelcase": "error",
    "@typescript-eslint/class-name-casing": "error",
    "@typescript-eslint/explicit-function-return-type": ["error", {"allowExpressions": true}],
    "@typescript-eslint/func-call-spacing": ["error", "never"],
    "@typescript-eslint/generic-type-naming": ["error", "^[A-Z][A-Za-z]*$"],
    "@typescript-eslint/no-array-constructor": "error",
    "@typescript-eslint/no-empty-interface": "error",
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/no-extraneous-class": "error",
    "@typescript-eslint/no-for-in-array": "error",
    "@typescript-eslint/no-inferrable-types": "error",
    "@typescript-eslint/no-misused-new": "error",
    "@typescript-eslint/no-namespace": "error",
    "@typescript-eslint/no-non-null-assertion": "warn",
    "@typescript-eslint/no-object-literal-type-assertion": "error",
    "@typescript-eslint/no-unnecessary-qualifier": "error",
    "@typescript-eslint/no-unnecessary-type-assertion": "error",
    "@typescript-eslint/no-useless-constructor": "error",
    "@typescript-eslint/no-var-requires": "error",
    "@typescript-eslint/prefer-for-of": "warn",
    "@typescript-eslint/prefer-function-type": "warn",
    "@typescript-eslint/prefer-includes": "error",
    "@typescript-eslint/prefer-interface": "error",
    "@typescript-eslint/prefer-string-starts-ends-with": "error",
    "@typescript-eslint/promise-function-async": "error",
    "@typescript-eslint/require-array-sort-compare": "error",
    "@typescript-eslint/restrict-plus-operands": "error",
    "semi": "off",
    "@typescript-eslint/semi": ["error", "never"],
    "@typescript-eslint/type-annotation-spacing": "error",
    "@typescript-eslint/unbound-method": "error"
  },
  "env": {
    "node": true,
    "es6": true,
    "jest/globals": true
  }
}
@@ -0,0 +1,3 @@
dist/
lib/
node_modules/
@@ -0,0 +1,11 @@
{
  "printWidth": 80,
  "tabWidth": 2,
  "useTabs": false,
  "semi": false,
  "singleQuote": true,
  "trailingComma": "none",
  "bracketSpacing": false,
  "arrowParens": "avoid",
  "parser": "typescript"
}
@@ -0,0 +1 @@
To update kubeInnerHandler under `Misc/layoutbin` run `npm install && npm run all`
6034 src/Misc/containerEngineHandlers/kubeInnerHandler/package-lock.json (generated, new file)
File diff suppressed because it is too large.
@@ -0,0 +1,36 @@
{
  "name": "kubeInnerHandler",
  "version": "1.0.0",
  "description": "GitHub Actions",
  "main": "lib/kubeInnerHandler.js",
  "scripts": {
    "build": "tsc",
    "format": "prettier --write **/*.ts",
    "format-check": "prettier --check **/*.ts",
    "lint": "eslint src/**/*.ts",
    "pack": "ncc build -o ../../layoutbin/kubeInnerHandler",
    "all": "npm run build && npm run format && npm run lint && npm run pack"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/actions/runner.git"
  },
  "keywords": [
    "actions"
  ],
  "author": "GitHub Actions",
  "license": "MIT",
  "dependencies": {
    "@actions/exec": "^1.1.0",
    "@actions/core": "^1.6.0"
  },
  "devDependencies": {
    "@types/node": "^12.7.12",
    "@typescript-eslint/parser": "^2.8.0",
    "@zeit/ncc": "^0.20.5",
    "eslint": "^6.8.0",
    "eslint-plugin-github": "^2.0.0",
    "prettier": "^1.19.1",
    "typescript": "^3.6.4"
  }
}
@@ -0,0 +1,49 @@
import * as exec from '@actions/exec'
import * as core from '@actions/core'
import * as events from 'events'
import * as readline from 'readline'

async function run(): Promise<void> {
  let input = ''

  const rl = readline.createInterface({
    input: process.stdin
  })

  rl.on('line', line => {
    core.debug(`Line from STDIN: ${line}`)
    input = line
  })

  await events.once(rl, 'close')

  core.debug(input)

  const execInput = JSON.parse(input)
  core.debug(JSON.stringify(execInput))

  // podman exec -i --workdir /__w/canary/canary
  //   -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY
  //   -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER
  //   -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR
  //   -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME
  //   -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL
  //   -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY
  //   -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_DEBUG
  //   -e RUNNER_OS -e RUNNER_NAME -e RUNNER_TOOL_CACHE
  //   -e RUNNER_TEMP -e RUNNER_WORKSPACE
  //   eccdf520697a035599d6e8c8dc801f004fdd3797cdce88f590aba3669a88d9bc sh -e /__w/_temp/d3b30383-719c-4e76-a16f-8f85443352be.sh

  const execArgs = []
  const args = (<string>execInput.arguments).split(' ')
  core.debug(JSON.stringify(args))
  execArgs.push(...args)

  core.debug(JSON.stringify(execArgs))

  await exec.exec(execInput.fileName, execArgs, {
    env: execInput.environmentVariables
  })
}

run()
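For reference, a minimal sketch of how this inner handler might be driven: it reads a single-line JSON payload from stdin and the field names (`fileName`, `arguments`, `environmentVariables`) come from the code above, but the invocation itself — including the bundle path — is an illustrative assumption, not part of this change:

```typescript
// Illustrative only: spawn the packaged handler and pipe a one-line JSON payload
// matching the fields kubeInnerHandler reads (fileName, arguments, environmentVariables).
import {spawn} from 'child_process'

const payload = {
  fileName: 'sh',
  arguments: '-e /__w/_temp/step.sh', // hypothetical script path
  environmentVariables: {GITHUB_ACTIONS: 'true'}
}

// Hypothetical location of the ncc-built bundle inside the job container.
const child = spawn('node', ['/__runner_util/kubeInnerHandler/index.js'], {
  stdio: ['pipe', 'inherit', 'inherit']
})
child.stdin.end(`${JSON.stringify(payload)}\n`)
```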
@@ -0,0 +1,12 @@
{
  "compilerOptions": {
    "target": "es6",        /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */
    "module": "commonjs",   /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "outDir": "./lib",      /* Redirect output structure to the directory. */
    "rootDir": "./src",     /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
    "strict": true,         /* Enable all strict type-checking options. */
    "noImplicitAny": true,  /* Raise error on expressions and declarations with an implied 'any' type. */
    "esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
  },
  "exclude": ["node_modules", "**/*.test.ts"]
}
@@ -0,0 +1,3 @@
dist/
lib/
node_modules/
@@ -0,0 +1,59 @@
{
  "plugins": ["jest", "@typescript-eslint"],
  "extends": ["plugin:github/es6"],
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 9,
    "sourceType": "module",
    "project": "./tsconfig.json"
  },
  "rules": {
    "eslint-comments/no-use": "off",
    "import/no-namespace": "off",
    "no-console": "off",
    "no-unused-vars": "off",
    "@typescript-eslint/no-unused-vars": "error",
    "@typescript-eslint/explicit-member-accessibility": ["error", {"accessibility": "no-public"}],
    "@typescript-eslint/no-require-imports": "error",
    "@typescript-eslint/array-type": "error",
    "@typescript-eslint/await-thenable": "error",
    "@typescript-eslint/ban-ts-ignore": "error",
    "camelcase": "off",
    "@typescript-eslint/camelcase": "error",
    "@typescript-eslint/class-name-casing": "error",
    "@typescript-eslint/explicit-function-return-type": ["error", {"allowExpressions": true}],
    "@typescript-eslint/func-call-spacing": ["error", "never"],
    "@typescript-eslint/generic-type-naming": ["error", "^[A-Z][A-Za-z]*$"],
    "@typescript-eslint/no-array-constructor": "error",
    "@typescript-eslint/no-empty-interface": "error",
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/no-extraneous-class": "error",
    "@typescript-eslint/no-for-in-array": "error",
    "@typescript-eslint/no-inferrable-types": "error",
    "@typescript-eslint/no-misused-new": "error",
    "@typescript-eslint/no-namespace": "error",
    "@typescript-eslint/no-non-null-assertion": "warn",
    "@typescript-eslint/no-object-literal-type-assertion": "error",
    "@typescript-eslint/no-unnecessary-qualifier": "error",
    "@typescript-eslint/no-unnecessary-type-assertion": "error",
    "@typescript-eslint/no-useless-constructor": "error",
    "@typescript-eslint/no-var-requires": "error",
    "@typescript-eslint/prefer-for-of": "warn",
    "@typescript-eslint/prefer-function-type": "warn",
    "@typescript-eslint/prefer-includes": "error",
    "@typescript-eslint/prefer-interface": "error",
    "@typescript-eslint/prefer-string-starts-ends-with": "error",
    "@typescript-eslint/promise-function-async": "error",
    "@typescript-eslint/require-array-sort-compare": "error",
    "@typescript-eslint/restrict-plus-operands": "error",
    "semi": "off",
    "@typescript-eslint/semi": ["error", "never"],
    "@typescript-eslint/type-annotation-spacing": "error",
    "@typescript-eslint/unbound-method": "error"
  },
  "env": {
    "node": true,
    "es6": true,
    "jest/globals": true
  }
}
@@ -0,0 +1,3 @@
dist/
lib/
node_modules/
@@ -0,0 +1,11 @@
{
  "printWidth": 80,
  "tabWidth": 2,
  "useTabs": false,
  "semi": false,
  "singleQuote": true,
  "trailingComma": "none",
  "bracketSpacing": false,
  "arrowParens": "avoid",
  "parser": "typescript"
}
@@ -0,0 +1 @@
To update kubectlHandler under `Misc/layoutbin` run `npm install && npm run all`
6034 src/Misc/containerEngineHandlers/kubectlHandler/package-lock.json (generated, new file)
File diff suppressed because it is too large.
36 src/Misc/containerEngineHandlers/kubectlHandler/package.json (new file)
@@ -0,0 +1,36 @@
{
  "name": "kubectlHandler",
  "version": "1.0.0",
  "description": "GitHub Actions",
  "main": "lib/kubectlHandler.js",
  "scripts": {
    "build": "tsc",
    "format": "prettier --write **/*.ts",
    "format-check": "prettier --check **/*.ts",
    "lint": "eslint src/**/*.ts",
    "pack": "ncc build -o ../../layoutbin/kubectlHandler",
    "all": "npm run build && npm run format && npm run lint && npm run pack"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/actions/runner.git"
  },
  "keywords": [
    "actions"
  ],
  "author": "GitHub Actions",
  "license": "MIT",
  "dependencies": {
    "@actions/exec": "^1.1.0",
    "@actions/core": "^1.6.0"
  },
  "devDependencies": {
    "@types/node": "^12.7.12",
    "@typescript-eslint/parser": "^2.8.0",
    "@zeit/ncc": "^0.20.5",
    "eslint": "^6.8.0",
    "eslint-plugin-github": "^2.0.0",
    "prettier": "^1.19.1",
    "typescript": "^3.6.4"
  }
}
@@ -0,0 +1,156 @@
import * as exec from '@actions/exec'
import * as core from '@actions/core'
import * as events from 'events'
import * as readline from 'readline'

async function run(): Promise<void> {
  let input = ''

  const rl = readline.createInterface({
    input: process.stdin
  })

  rl.on('line', line => {
    core.debug(`Line from STDIN: ${line}`)
    input = line
  })

  await events.once(rl, 'close')

  core.debug(input)

  const inputJson = JSON.parse(input)
  core.debug(JSON.stringify(inputJson))

  const command = inputJson.command
  if (command === 'Create') {
    const creationInput = inputJson.creationInput
    core.debug(JSON.stringify(creationInput))
    const containers = creationInput.containers
    const jobContainer = containers[0]

    // const networkName = 'actions_podman_network'
    // // podman network create {network} -> track and return `network` for ${{job.container.network}}
    // await exec.exec('podman', ['network', 'create', networkName])

    const containerImage = `${jobContainer.containerImage}`
    // podman pull docker.io/library/{image}
    // await exec.exec('podman', ['pull', containerImage])

    // kubectl run e088c842be1f46b394212618408aaba0_node1016jessie_6196c9
    //   --image=node:10.16-jessie
    //   -- tail -f /dev/null
    const runArgs = ['run', 'job-container']
    // runArgs.push(`--workdir=${jobContainer.containerWorkDirectory}`)
    // runArgs.push(`--network=${networkName}`)

    // for (const mountVolume of jobContainer.mountVolumes) {
    //   runArgs.push(
    //     `-v=${mountVolume.sourceVolumePath}:${mountVolume.targetVolumePath}`
    //   )
    // }
    runArgs.push(`--image=${containerImage}`)
    runArgs.push(`--`)
    runArgs.push(`tail`)
    runArgs.push(`-f`)
    runArgs.push(`/dev/null`)

    core.debug(JSON.stringify(runArgs))

    // const containerId = await exec.getExecOutput('podman', [
    //   'create',
    //   // `--workdir ${jobContainer.containerWorkDirectory}`,
    //   `--network=${networkName}`,
    //   // `-v=/Users/ting/Desktop/runner/_layout/_work:/__w`,
    //   `--entrypoint=${jobContainer.containerEntryPoint}`,
    //   `${containerImage}`,
    //   `${jobContainer.containerEntryPointArgs}`
    // ])

    await exec.exec('kubectl', runArgs)

    // get PATH inside the container

    const waitArgs = ['wait', '--for=condition=Ready', 'pod/job-container']
    await exec.exec('kubectl', waitArgs)

    // output containerId for ${{job.container.id}}

    // copy over node.js
    const cpNodeArgs = [
      'cp',
      '/actions-runner/externals/node12/bin',
      'job-container:/__runner_util/'
    ]
    await exec.exec('kubectl', cpNodeArgs)

    // copy over innerhandler
    const cpKubeInnerArgs = [
      'cp',
      '/actions-runner/bin/kubeInnerHandler',
      'job-container:/__runner_util/kubeInnerHandler'
    ]
    await exec.exec('kubectl', cpKubeInnerArgs)

    // copy over _work
    const cpWorkArgs = ['cp', '/actions-runner/_work', 'job-container:/__w/']
    await exec.exec('kubectl', cpWorkArgs)

    const creationOutput = {
      JobContainerId: 'job-container',
      Network: 'job-container'
    }

    const output = JSON.stringify({CreationOutput: creationOutput})
    core.debug(output)

    process.stderr.write(
      `___CONTAINER_ENGINE_HANDLER_OUTPUT___${output}___CONTAINER_ENGINE_HANDLER_OUTPUT___`
    )
  } else if (command === 'Remove') {
    const removeInput = inputJson.removeInput
    core.debug(JSON.stringify(removeInput))
    // const jobContainerId = removeInput.jobContainerId

    // await exec.exec('kubectl', ['delete', 'pod', jobContainerId, '--force'])
    // await exec.exec('podman', ['network', 'rm', '-f', network])
  } else if (command === 'Exec') {
    const execInput = inputJson.execInput
    core.debug(JSON.stringify(execInput))

    // podman exec -i --workdir /__w/canary/canary
    //   -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY
    //   -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER
    //   -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR
    //   -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME
    //   -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL
    //   -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY
    //   -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_DEBUG
    //   -e RUNNER_OS -e RUNNER_NAME -e RUNNER_TOOL_CACHE
    //   -e RUNNER_TEMP -e RUNNER_WORKSPACE
    //   eccdf520697a035599d6e8c8dc801f004fdd3797cdce88f590aba3669a88d9bc sh -e /__w/_temp/d3b30383-719c-4e76-a16f-8f85443352be.sh

    const cpTempArgs = [
      'cp',
      '/actions-runner/_work/_temp',
      'job-container:/__w/'
    ]
    await exec.exec('kubectl', cpTempArgs)

    const execArgs = ['exec']
    execArgs.push(execInput.jobContainer.containerId)
    execArgs.push('-i')
    execArgs.push('-t')
    execArgs.push('--')
    execArgs.push('/__runner_util/node')
    execArgs.push('/__runner_util/kubeInnerHandler')

    core.debug(JSON.stringify(execArgs))

    await exec.exec('kubectl', execArgs, {
      input: Buffer.from(JSON.stringify(execInput))
    })
  }
}

run()
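The handler above reports its result by wrapping a JSON blob in `___CONTAINER_ENGINE_HANDLER_OUTPUT___` markers on stderr. A minimal sketch of how a caller could extract that payload follows; only the marker string and the `{CreationOutput: ...}` shape come from the handler code above, the parsing itself is an illustrative assumption:

```typescript
// Illustrative parser for the sentinel-wrapped output emitted on stderr by the handler.
const MARKER = '___CONTAINER_ENGINE_HANDLER_OUTPUT___'

function parseHandlerOutput(
  stderr: string
): {JobContainerId: string; Network: string} | undefined {
  const start = stderr.indexOf(MARKER)
  if (start < 0) return undefined
  const rest = stderr.slice(start + MARKER.length)
  const end = rest.indexOf(MARKER)
  if (end < 0) return undefined
  // The payload is {CreationOutput: {...}} as serialized by the Create branch above.
  return JSON.parse(rest.slice(0, end)).CreationOutput
}
```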
@@ -0,0 +1,12 @@
{
  "compilerOptions": {
    "target": "es6",        /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */
    "module": "commonjs",   /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "outDir": "./lib",      /* Redirect output structure to the directory. */
    "rootDir": "./src",     /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
    "strict": true,         /* Enable all strict type-checking options. */
    "noImplicitAny": true,  /* Raise error on expressions and declarations with an implied 'any' type. */
    "esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
  },
  "exclude": ["node_modules", "**/*.test.ts"]
}
@@ -0,0 +1,3 @@
dist/
lib/
node_modules/
@@ -0,0 +1,59 @@
{
  "plugins": ["jest", "@typescript-eslint"],
  "extends": ["plugin:github/es6"],
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": 9,
    "sourceType": "module",
    "project": "./tsconfig.json"
  },
  "rules": {
    "eslint-comments/no-use": "off",
    "import/no-namespace": "off",
    "no-console": "off",
    "no-unused-vars": "off",
    "@typescript-eslint/no-unused-vars": "error",
    "@typescript-eslint/explicit-member-accessibility": ["error", {"accessibility": "no-public"}],
    "@typescript-eslint/no-require-imports": "error",
    "@typescript-eslint/array-type": "error",
    "@typescript-eslint/await-thenable": "error",
    "@typescript-eslint/ban-ts-ignore": "error",
    "camelcase": "off",
    "@typescript-eslint/camelcase": "error",
    "@typescript-eslint/class-name-casing": "error",
    "@typescript-eslint/explicit-function-return-type": ["error", {"allowExpressions": true}],
    "@typescript-eslint/func-call-spacing": ["error", "never"],
    "@typescript-eslint/generic-type-naming": ["error", "^[A-Z][A-Za-z]*$"],
    "@typescript-eslint/no-array-constructor": "error",
    "@typescript-eslint/no-empty-interface": "error",
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/no-extraneous-class": "error",
    "@typescript-eslint/no-for-in-array": "error",
    "@typescript-eslint/no-inferrable-types": "error",
    "@typescript-eslint/no-misused-new": "error",
    "@typescript-eslint/no-namespace": "error",
    "@typescript-eslint/no-non-null-assertion": "warn",
    "@typescript-eslint/no-object-literal-type-assertion": "error",
    "@typescript-eslint/no-unnecessary-qualifier": "error",
    "@typescript-eslint/no-unnecessary-type-assertion": "error",
    "@typescript-eslint/no-useless-constructor": "error",
    "@typescript-eslint/no-var-requires": "error",
    "@typescript-eslint/prefer-for-of": "warn",
    "@typescript-eslint/prefer-function-type": "warn",
    "@typescript-eslint/prefer-includes": "error",
    "@typescript-eslint/prefer-interface": "error",
    "@typescript-eslint/prefer-string-starts-ends-with": "error",
    "@typescript-eslint/promise-function-async": "error",
    "@typescript-eslint/require-array-sort-compare": "error",
    "@typescript-eslint/restrict-plus-operands": "error",
    "semi": "off",
    "@typescript-eslint/semi": ["error", "never"],
    "@typescript-eslint/type-annotation-spacing": "error",
    "@typescript-eslint/unbound-method": "error"
  },
  "env": {
    "node": true,
    "es6": true,
    "jest/globals": true
  }
}
@@ -0,0 +1,3 @@
dist/
lib/
node_modules/
@@ -0,0 +1,11 @@
{
  "printWidth": 80,
  "tabWidth": 2,
  "useTabs": false,
  "semi": false,
  "singleQuote": true,
  "trailingComma": "none",
  "bracketSpacing": false,
  "arrowParens": "avoid",
  "parser": "typescript"
}
1 src/Misc/containerEngineHandlers/podmanHandler/README.md (new file)
@@ -0,0 +1 @@
To update podmanHandler under `Misc/layoutbin` run `npm install && npm run all`
6034 src/Misc/containerEngineHandlers/podmanHandler/package-lock.json (generated, new file)
File diff suppressed because it is too large.
36 src/Misc/containerEngineHandlers/podmanHandler/package.json (new file)
@@ -0,0 +1,36 @@
{
  "name": "podmanHandler",
  "version": "1.0.0",
  "description": "GitHub Actions",
  "main": "lib/podmanHandler.js",
  "scripts": {
    "build": "tsc",
    "format": "prettier --write **/*.ts",
    "format-check": "prettier --check **/*.ts",
    "lint": "eslint src/**/*.ts",
    "pack": "ncc build -o ../../layoutbin/podmanHandler",
    "all": "npm run build && npm run format && npm run lint && npm run pack"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/actions/runner.git"
  },
  "keywords": [
    "actions"
  ],
  "author": "GitHub Actions",
  "license": "MIT",
  "dependencies": {
    "@actions/exec": "^1.1.0",
    "@actions/core": "^1.6.0"
  },
  "devDependencies": {
    "@types/node": "^12.7.12",
    "@typescript-eslint/parser": "^2.8.0",
    "@zeit/ncc": "^0.20.5",
    "eslint": "^6.8.0",
    "eslint-plugin-github": "^2.0.0",
    "prettier": "^1.19.1",
    "typescript": "^3.6.4"
  }
}
@@ -0,0 +1,150 @@
import * as exec from '@actions/exec'
import * as core from '@actions/core'
import * as events from 'events'
import * as readline from 'readline'

async function run(): Promise<void> {
  let input = ''

  const rl = readline.createInterface({
    input: process.stdin
  })

  rl.on('line', line => {
    core.debug(`Line from STDIN: ${line}`)
    input = line
  })

  await events.once(rl, 'close')

  core.debug(input)

  const inputJson = JSON.parse(input)
  core.debug(JSON.stringify(inputJson))

  const command = inputJson.command
  if (command === 'Create') {
    const creationInput = inputJson.creationInput
    core.debug(JSON.stringify(creationInput))
    const containers = creationInput.containers
    const jobContainer = containers[0]

    const networkName = 'actions_podman_network'
    // podman network create {network} -> track and return `network` for ${{job.container.network}}
    await exec.exec('podman', ['network', 'create', networkName])

    const containerImage = `docker.io/library/${jobContainer.containerImage}`
    // podman pull docker.io/library/{image}
    await exec.exec('podman', ['pull', containerImage])

    // podman create --name e088c842be1f46b394212618408aaba0_node1016jessie_6196c9
    //   --label fa4e14
    //   --workdir /__w/canary/canary
    //   --network github_network_f98a6e1e96e74d919d814c165641cba3
    //   -e "HOME=/github/home" -e GITHUB_ACTIONS=true -e CI=true
    //   -v "/var/run/docker.sock":"/var/run/docker.sock"
    //   -v "/home/runner/work":"/__w"
    //   -v "/home/runner/runners/2.283.2/externals":"/__e":ro
    //   -v "/home/runner/work/_temp":"/__w/_temp"
    //   -v "/home/runner/work/_actions":"/__w/_actions"
    //   -v "/opt/hostedtoolcache":"/__t"
    //   -v "/home/runner/work/_temp/_github_home":"/github/home"
    //   -v "/home/runner/work/_temp/_github_workflow":"/github/workflow"
    //   --entrypoint "tail" node:10.16-jessie "-f" "/dev/null"
    const creatArgs = ['create']
    creatArgs.push(`--workdir=${jobContainer.containerWorkDirectory}`)
    creatArgs.push(`--network=${networkName}`)

    for (const mountVolume of jobContainer.mountVolumes) {
      creatArgs.push(
        `-v=${mountVolume.sourceVolumePath}:${mountVolume.targetVolumePath}`
      )
    }

    creatArgs.push(`--entrypoint=tail`)
    creatArgs.push(containerImage)
    creatArgs.push(`-f`)
    creatArgs.push(`/dev/null`)

    core.debug(JSON.stringify(creatArgs))

    // const containerId = await exec.getExecOutput('podman', [
    //   'create',
    //   // `--workdir ${jobContainer.containerWorkDirectory}`,
    //   `--network=${networkName}`,
    //   // `-v=/Users/ting/Desktop/runner/_layout/_work:/__w`,
    //   `--entrypoint=${jobContainer.containerEntryPoint}`,
    //   `${containerImage}`,
    //   `${jobContainer.containerEntryPointArgs}`
    // ])

    const containerId = await exec.getExecOutput('podman', creatArgs)

    core.debug(JSON.stringify(containerId))

    // podman start {containerId}
    await exec.exec('podman', ['start', containerId.stdout.trim()])

    // get PATH inside the container

    // output containerId for ${{job.container.id}}

    const creationOutput = {
      JobContainerId: containerId.stdout.trim(),
      Network: networkName
    }

    const output = JSON.stringify({CreationOutput: creationOutput})
    core.debug(output)

    process.stderr.write(
      `___CONTAINER_ENGINE_HANDLER_OUTPUT___${output}___CONTAINER_ENGINE_HANDLER_OUTPUT___`
    )
  } else if (command === 'Remove') {
    const removeInput = inputJson.removeInput
    core.debug(JSON.stringify(removeInput))
    const jobContainerId = removeInput.jobContainerId
    const network = removeInput.network

    await exec.exec('podman', ['rm', '-f', jobContainerId])
    await exec.exec('podman', ['network', 'rm', '-f', network])
  } else if (command === 'Exec') {
    const execInput = inputJson.execInput
    core.debug(JSON.stringify(execInput))

    // podman exec -i --workdir /__w/canary/canary
    //   -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY
    //   -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER
    //   -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR
    //   -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME
    //   -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL
    //   -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY
    //   -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_DEBUG
    //   -e RUNNER_OS -e RUNNER_NAME -e RUNNER_TOOL_CACHE
    //   -e RUNNER_TEMP -e RUNNER_WORKSPACE
    //   eccdf520697a035599d6e8c8dc801f004fdd3797cdce88f590aba3669a88d9bc sh -e /__w/_temp/d3b30383-719c-4e76-a16f-8f85443352be.sh

    const execArgs = ['exec']
    execArgs.push('-i')
    execArgs.push(`--workdir=${execInput.workingDirectory}`)
    for (const envKey of execInput.environmentKeys) {
      execArgs.push(`-e=${envKey}`)
    }
    execArgs.push(execInput.jobContainer.containerId)
    execArgs.push(execInput.fileName)

    const args = (<string>execInput.arguments).split(' ')
    core.debug(JSON.stringify(args))

    execArgs.push(...args)

    core.debug(JSON.stringify(execArgs))

    await exec.exec('podman', execArgs)
  }

  await exec.exec('podman', ['network', 'ls'])
  await exec.exec('podman', ['ps', '-a'])
}

run()
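For context, a sketch of the kind of `Create` payload this podman handler consumes. The field names (`command`, `creationInput.containers`, `containerImage`, `containerWorkDirectory`, `mountVolumes`) are taken from the property accesses in the code above; the concrete values are illustrative assumptions only:

```typescript
// Illustrative Create payload, piped to the handler as a single JSON line on stdin.
const createCommand = {
  command: 'Create',
  creationInput: {
    containers: [
      {
        containerImage: 'node:10.16-jessie', // hypothetical job container image
        containerWorkDirectory: '/__w/canary/canary',
        mountVolumes: [
          {sourceVolumePath: '/home/runner/work', targetVolumePath: '/__w'}
        ]
      }
    ]
  }
}

process.stdout.write(`${JSON.stringify(createCommand)}\n`)
```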
12 src/Misc/containerEngineHandlers/podmanHandler/tsconfig.json (new file)
@@ -0,0 +1,12 @@
{
  "compilerOptions": {
    "target": "es6",        /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */
    "module": "commonjs",   /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
    "outDir": "./lib",      /* Redirect output structure to the directory. */
    "rootDir": "./src",     /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
    "strict": true,         /* Enable all strict type-checking options. */
    "noImplicitAny": true,  /* Raise error on expressions and declarations with an implied 'any' type. */
    "esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
  },
  "exclude": ["node_modules", "**/*.test.ts"]
}
723 src/Misc/dotnet-install.ps1 (vendored)
@@ -16,27 +16,15 @@
- LTS - most current supported release
- 2-part version in a format A.B - represents a specific release
examples: 2.0, 1.0
- 3-part version in a format A.B.Cxx - represents a specific SDK release
examples: 5.0.1xx, 5.0.2xx
Supported since 5.0 release
Note: The version parameter overrides the channel parameter when any version other than 'latest' is used.
.PARAMETER Quality
Download the latest build of specified quality in the channel. The possible values are: daily, signed, validated, preview, GA.
Works only in combination with channel. Not applicable for current and LTS channels and will be ignored if those channels are used.
For SDK use channel in A.B.Cxx format: using quality together with channel in A.B format is not supported.
Supported since 5.0 release.
Note: The version parameter overrides the channel parameter when any version other than 'latest' is used, and therefore overrides the quality.
- Branch name
examples: release/2.0.0, Master
Note: The version parameter overrides the channel parameter.
.PARAMETER Version
Default: latest
Represents a build version on specific channel. Possible values:
- latest - most latest build on specific channel
- 3-part version in a format A.B.C - represents specific version of build
examples: 2.0.0-preview2-006120, 1.1.0
.PARAMETER Internal
Download internal builds. Requires providing credentials via -FeedCredential parameter.
.PARAMETER FeedCredential
Token to access Azure feed. Used as a query string to append to the Azure feed.
This parameter typically is not specified.
.PARAMETER InstallDir
Default: %LocalAppData%\Microsoft\dotnet
Path to where to install dotnet. Note that binaries will be placed directly in a given directory.
@@ -71,6 +59,9 @@
.PARAMETER UncachedFeed
This parameter typically is not changed by the user.
It allows changing the URL for the Uncached feed used by this installer.
.PARAMETER FeedCredential
Used as a query string to append to the Azure feed.
It allows changing the URL to use non-public blob storage accounts.
.PARAMETER ProxyAddress
If set, the installer will use the proxy when making web requests
.PARAMETER ProxyUseDefaultCredentials
@@ -86,19 +77,15 @@
.PARAMETER JSonFile
Determines the SDK version from a user specified global.json file
Note: global.json must have a value for 'SDK:Version'
.PARAMETER DownloadTimeout
Determines timeout duration in seconds for dowloading of the SDK file
Default: 1200 seconds (20 minutes)
#>
[cmdletbinding()]
param(
[string]$Channel="LTS",
[string]$Quality,
[string]$Version="Latest",
[switch]$Internal,
[string]$JSonFile,
[Alias('i')][string]$InstallDir="<auto>",
[string]$InstallDir="<auto>",
[string]$Architecture="<auto>",
[ValidateSet("dotnet", "aspnetcore", "windowsdesktop", IgnoreCase = $false)]
[string]$Runtime,
[Obsolete("This parameter may be removed in a future version of this script. The recommended alternative is '-Runtime dotnet'.")]
[switch]$SharedRuntime,
@@ -111,8 +98,7 @@ param(
[switch]$ProxyUseDefaultCredentials,
[string[]]$ProxyBypassList=@(),
[switch]$SkipNonVersionedFiles,
[switch]$NoCdn,
[int]$DownloadTimeout=1200
[switch]$NoCdn
)

Set-StrictMode -Version Latest
@@ -181,7 +167,7 @@ function Say-Invocation($Invocation) {
Say-Verbose "$command $args"
}

function Invoke-With-Retry([ScriptBlock]$ScriptBlock, [System.Threading.CancellationToken]$cancellationToken = [System.Threading.CancellationToken]::None, [int]$MaxAttempts = 3, [int]$SecondsBetweenAttempts = 1) {
function Invoke-With-Retry([ScriptBlock]$ScriptBlock, [int]$MaxAttempts = 3, [int]$SecondsBetweenAttempts = 1) {
$Attempts = 0

while ($true) {
@@ -190,7 +176,7 @@ function Invoke-With-Retry([ScriptBlock]$ScriptBlock, [System.Threading.Cancella
}
catch {
$Attempts++
if (($Attempts -lt $MaxAttempts) -and -not $cancellationToken.IsCancellationRequested) {
if ($Attempts -lt $MaxAttempts) {
Start-Sleep $SecondsBetweenAttempts
}
else {
@@ -219,7 +205,7 @@ function Get-Machine-Architecture() {
function Get-CLIArchitecture-From-Architecture([string]$Architecture) {
Say-Invocation $MyInvocation

switch ($Architecture.ToLowerInvariant()) {
switch ($Architecture.ToLower()) {
{ $_ -eq "<auto>" } { return Get-CLIArchitecture-From-Architecture $(Get-Machine-Architecture) }
{ ($_ -eq "amd64") -or ($_ -eq "x64") } { return "x64" }
{ $_ -eq "x86" } { return "x86" }
@@ -229,53 +215,6 @@ function Get-CLIArchitecture-From-Architecture([string]$Architecture) {
}
}


function Get-NormalizedQuality([string]$Quality) {
Say-Invocation $MyInvocation

if ([string]::IsNullOrEmpty($Quality)) {
return ""
}

switch ($Quality) {
{ @("daily", "signed", "validated", "preview") -contains $_ } { return $Quality.ToLowerInvariant() }
#ga quality is available without specifying quality, so normalizing it to empty
{ $_ -eq "ga" } { return "" }
default { throw "'$Quality' is not a supported value for -Quality option. Supported values are: daily, signed, validated, preview, ga. If you think this is a bug, report it at https://github.com/dotnet/install-scripts/issues." }
}
}

function Get-NormalizedChannel([string]$Channel) {
Say-Invocation $MyInvocation

if ([string]::IsNullOrEmpty($Channel)) {
return ""
}

if ($Channel.StartsWith('release/')) {
Say-Warning 'Using branch name with -Channel option is no longer supported with newer releases. Use -Quality option with a channel in X.Y format instead, such as "-Channel 5.0 -Quality Daily."'
}

switch ($Channel) {
{ $_ -eq "lts" } { return "LTS" }
{ $_ -eq "current" } { return "current" }
default { return $Channel.ToLowerInvariant() }
}
}

function Get-NormalizedProduct([string]$Runtime) {
Say-Invocation $MyInvocation

switch ($Runtime) {
{ $_ -eq "dotnet" } { return "dotnet-runtime" }
{ $_ -eq "aspnetcore" } { return "aspnetcore-runtime" }
{ $_ -eq "windowsdesktop" } { return "windowsdesktop-runtime" }
{ [string]::IsNullOrEmpty($_) } { return "dotnet-sdk" }
default { throw "'$Runtime' is not a supported value for -Runtime option, supported values are: dotnet, aspnetcore, windowsdesktop. If you think this is a bug, report it at https://github.com/dotnet/install-scripts/issues." }
}
}


# The version text returned from the feeds is a 1-line or 2-line string:
# For the SDK and the dotnet runtime (2 lines):
# Line 1: # commit_hash
@@ -304,11 +243,10 @@ function Load-Assembly([string] $Assembly) {
}
}

function GetHTTPResponse([Uri] $Uri, [bool]$HeaderOnly, [bool]$DisableRedirect, [bool]$DisableFeedCredential)
function GetHTTPResponse([Uri] $Uri)
{
$cts = New-Object System.Threading.CancellationTokenSource

$downloadScript = {
Invoke-With-Retry(
{

$HttpClient = $null

@@ -332,53 +270,32 @@ function GetHTTPResponse([Uri] $Uri, [bool]$HeaderOnly, [bool]$DisableRedirect,
}
}

$HttpClientHandler = New-Object System.Net.Http.HttpClientHandler
if($ProxyAddress) {
$HttpClientHandler = New-Object System.Net.Http.HttpClientHandler
$HttpClientHandler.Proxy = New-Object System.Net.WebProxy -Property @{
Address=$ProxyAddress;
UseDefaultCredentials=$ProxyUseDefaultCredentials;
BypassList = $ProxyBypassList;
}
}
if ($DisableRedirect)
{
$HttpClientHandler.AllowAutoRedirect = $false
$HttpClient = New-Object System.Net.Http.HttpClient -ArgumentList $HttpClientHandler
}
$HttpClient = New-Object System.Net.Http.HttpClient -ArgumentList $HttpClientHandler
else {

$HttpClient = New-Object System.Net.Http.HttpClient
}
# Default timeout for HttpClient is 100s. For a 50 MB download this assumes 500 KB/s average, any less will time out
# Defaulting to 20 minutes allows it to work over much slower connections.
$HttpClient.Timeout = New-TimeSpan -Seconds $DownloadTimeout

if ($HeaderOnly){
$completionOption = [System.Net.Http.HttpCompletionOption]::ResponseHeadersRead
}
else {
$completionOption = [System.Net.Http.HttpCompletionOption]::ResponseContentRead
}

if ($DisableFeedCredential) {
$UriWithCredential = $Uri
}
else {
$UriWithCredential = "${Uri}${FeedCredential}"
}

$Task = $HttpClient.GetAsync("$UriWithCredential", $completionOption).ConfigureAwait("false");
# 20 minutes allows it to work over much slower connections.
$HttpClient.Timeout = New-TimeSpan -Minutes 20
$Task = $HttpClient.GetAsync("${Uri}${FeedCredential}").ConfigureAwait("false");
$Response = $Task.GetAwaiter().GetResult();

if (($null -eq $Response) -or ((-not $HeaderOnly) -and (-not ($Response.IsSuccessStatusCode)))) {
if (($null -eq $Response) -or (-not ($Response.IsSuccessStatusCode))) {
# The feed credential is potentially sensitive info. Do not log FeedCredential to console output.
$DownloadException = [System.Exception] "Unable to download $Uri."

if ($null -ne $Response) {
$DownloadException.Data["StatusCode"] = [int] $Response.StatusCode
$DownloadException.Data["ErrorMessage"] = "Unable to download $Uri. Returned HTTP status code: " + $DownloadException.Data["StatusCode"]

if (404 -eq [int] $Response.StatusCode)
{
$cts.Cancel()
}
}

throw $DownloadException
@@ -406,22 +323,11 @@ function GetHTTPResponse([Uri] $Uri, [bool]$HeaderOnly, [bool]$DisableRedirect,
throw $DownloadException
}
finally {
if ($null -ne $HttpClient) {
if ($HttpClient -ne $null) {
$HttpClient.Dispose()
}
}
}

try {
return Invoke-With-Retry $downloadScript $cts.Token
}
finally
{
if ($null -ne $cts)
{
$cts.Dispose()
}
}
})
}

function Get-Latest-Version-Info([string]$AzureFeed, [string]$Channel) {
@@ -443,9 +349,6 @@ function Get-Latest-Version-Info([string]$AzureFeed, [string]$Channel) {
else {
throw "Invalid value for `$Runtime"
}

Say-Verbose "Constructed latest.version URL: $VersionFileUrl"

try {
$Response = GetHTTPResponse -Uri $VersionFileUrl
}
@@ -508,7 +411,7 @@ function Get-Specific-Version-From-Version([string]$AzureFeed, [string]$Channel,
Say-Invocation $MyInvocation

if (-not $JSonFile) {
if ($Version.ToLowerInvariant() -eq "latest") {
if ($Version.ToLower() -eq "latest") {
$LatestVersionInfo = Get-Latest-Version-Info -AzureFeed $AzureFeed -Channel $Channel
return $LatestVersionInfo.Version
}
@@ -575,116 +478,58 @@ function Get-LegacyDownload-Link([string]$AzureFeed, [string]$SpecificVersion, [
return $PayloadURL
}

function Get-Product-Version([string]$AzureFeed, [string]$SpecificVersion, [string]$PackageDownloadLink) {
function Get-Product-Version([string]$AzureFeed, [string]$SpecificVersion) {
Say-Invocation $MyInvocation

# Try to get the version number, using the productVersion.txt file located next to the installer file.
$ProductVersionTxtURLs = (Get-Product-Version-Url $AzureFeed $SpecificVersion $PackageDownloadLink -Flattened $true),
(Get-Product-Version-Url $AzureFeed $SpecificVersion $PackageDownloadLink -Flattened $false)

Foreach ($ProductVersionTxtURL in $ProductVersionTxtURLs) {
Say-Verbose "Checking for the existence of $ProductVersionTxtURL"

try {
$productVersionResponse = GetHTTPResponse($productVersionTxtUrl)

if ($productVersionResponse.StatusCode -eq 200) {
$productVersion = $productVersionResponse.Content.ReadAsStringAsync().Result.Trim()
if ($productVersion -ne $SpecificVersion)
{
Say "Using alternate version $productVersion found in $ProductVersionTxtURL"
}
return $productVersion
}
else {
Say-Verbose "Got StatusCode $($productVersionResponse.StatusCode) when trying to get productVersion.txt at $productVersionTxtUrl."
}
}
catch {
Say-Verbose "Could not read productVersion.txt at $productVersionTxtUrl (Exception: '$($_.Exception.Message)'. )"
}
if ($Runtime -eq "dotnet") {
$ProductVersionTxtURL = "$AzureFeed/Runtime/$SpecificVersion/productVersion.txt"
}

# Getting the version number with productVersion.txt has failed. Try parsing the download link for a version number.
if ([string]::IsNullOrEmpty($PackageDownloadLink))
{
Say-Verbose "Using the default value '$SpecificVersion' as the product version."
return $SpecificVersion
elseif ($Runtime -eq "aspnetcore") {
$ProductVersionTxtURL = "$AzureFeed/aspnetcore/Runtime/$SpecificVersion/productVersion.txt"
}

$productVersion = Get-ProductVersionFromDownloadLink $PackageDownloadLink $SpecificVersion
return $productVersion
}

function Get-Product-Version-Url([string]$AzureFeed, [string]$SpecificVersion, [string]$PackageDownloadLink, [bool]$Flattened) {
Say-Invocation $MyInvocation

$majorVersion=$null
if ($SpecificVersion -match '^(\d+)\.(.*)') {
$majorVersion = $Matches[1] -as[int]
}

$pvFileName='productVersion.txt'
if($Flattened) {
if(-not $Runtime) {
$pvFileName='sdk-productVersion.txt'
}
elseif($Runtime -eq "dotnet") {
$pvFileName='runtime-productVersion.txt'
}
else {
$pvFileName="$Runtime-productVersion.txt"
}
}

if ([string]::IsNullOrEmpty($PackageDownloadLink)) {
if ($Runtime -eq "dotnet") {
$ProductVersionTxtURL = "$AzureFeed/Runtime/$SpecificVersion/$pvFileName"
}
elseif ($Runtime -eq "aspnetcore") {
$ProductVersionTxtURL = "$AzureFeed/aspnetcore/Runtime/$SpecificVersion/$pvFileName"
}
elseif ($Runtime -eq "windowsdesktop") {
# The windows desktop runtime is part of the core runtime layout prior to 5.0
$ProductVersionTxtURL = "$AzureFeed/Runtime/$SpecificVersion/$pvFileName"
if ($majorVersion -ne $null -and $majorVersion -ge 5) {
$ProductVersionTxtURL = "$AzureFeed/WindowsDesktop/$SpecificVersion/$pvFileName"
elseif ($Runtime -eq "windowsdesktop") {
# The windows desktop runtime is part of the core runtime layout prior to 5.0
$ProductVersionTxtURL = "$AzureFeed/Runtime/$SpecificVersion/productVersion.txt"
if ($SpecificVersion -match '^(\d+)\.(.*)')
{
$majorVersion = [int]$Matches[1]
if ($majorVersion -ge 5)
{
$ProductVersionTxtURL = "$AzureFeed/WindowsDesktop/$SpecificVersion/productVersion.txt"
}
}
elseif (-not $Runtime) {
$ProductVersionTxtURL = "$AzureFeed/Sdk/$SpecificVersion/$pvFileName"
}
else {
throw "Invalid value '$Runtime' specified for `$Runtime"
}
}
elseif (-not $Runtime) {
$ProductVersionTxtURL = "$AzureFeed/Sdk/$SpecificVersion/productVersion.txt"
}
else {
$ProductVersionTxtURL = $PackageDownloadLink.Substring(0, $PackageDownloadLink.LastIndexOf("/")) + "/$pvFileName"
throw "Invalid value '$Runtime' specified for `$Runtime"
}

Say-Verbose "Constructed productVersion link: $ProductVersionTxtURL
|
||||
Say-Verbose "Checking for existence of $ProductVersionTxtURL"
|
||||
|
||||
return $ProductVersionTxtURL
|
||||
}
|
||||
try {
|
||||
$productVersionResponse = GetHTTPResponse($productVersionTxtUrl)
|
||||
|
||||
function Get-ProductVersionFromDownloadLink([string]$PackageDownloadLink, [string]$SpecificVersion)
|
||||
{
|
||||
Say-Invocation $MyInvocation
|
||||
if ($productVersionResponse.StatusCode -eq 200) {
|
||||
$productVersion = $productVersionResponse.Content.ReadAsStringAsync().Result.Trim()
|
||||
if ($productVersion -ne $SpecificVersion)
|
||||
{
|
||||
Say "Using alternate version $productVersion found in $ProductVersionTxtURL"
|
||||
}
|
||||
|
||||
#product specific version follows the product name
|
||||
#for filename 'dotnet-sdk-3.1.404-win-x64.zip': the product version is 3.1.404
|
||||
$filename = $PackageDownloadLink.Substring($PackageDownloadLink.LastIndexOf("/") + 1)
|
||||
$filenameParts = $filename.Split('-')
|
||||
if ($filenameParts.Length -gt 2)
|
||||
{
|
||||
$productVersion = $filenameParts[2]
|
||||
Say-Verbose "Extracted product version '$productVersion' from download link '$PackageDownloadLink'."
|
||||
}
|
||||
else {
|
||||
Say-Verbose "Using the default value '$SpecificVersion' as the product version."
|
||||
return $productVersion
|
||||
}
|
||||
else {
|
||||
Say-Verbose "Got StatusCode $($productVersionResponse.StatusCode) trying to get productVersion.txt at $productVersionTxtUrl, so using default value of $SpecificVersion"
|
||||
$productVersion = $SpecificVersion
|
||||
}
|
||||
} catch {
|
||||
Say-Verbose "Could not read productVersion.txt at $productVersionTxtUrl, so using default value of $SpecificVersion (Exception: '$($_.Exception.Message)' )"
|
||||
$productVersion = $SpecificVersion
|
||||
}
|
||||
return $productVersion
|
||||
|
||||
return $productVersion
|
||||
}
|
||||
|
||||
function Get-User-Share-Path() {
|
||||
@@ -857,150 +702,15 @@ function Prepend-Sdk-InstallRoot-To-Path([string]$InstallRoot, [string]$BinFolde
|
||||
}
|
||||
}
|
||||
|
||||
function Get-AkaMSDownloadLink([string]$Channel, [string]$Quality, [bool]$Internal, [string]$Product, [string]$Architecture) {
|
||||
Say-Invocation $MyInvocation
|
||||
|
||||
#quality is not supported for LTS or current channel
|
||||
if (![string]::IsNullOrEmpty($Quality) -and (@("LTS", "current") -contains $Channel)) {
|
||||
$Quality = ""
|
||||
Say-Warning "Specifying quality for current or LTS channel is not supported, the quality will be ignored."
|
||||
}
|
||||
Say-Verbose "Retrieving primary payload URL from aka.ms link for channel: '$Channel', quality: '$Quality' product: '$Product', os: 'win', architecture: '$Architecture'."
|
||||
|
||||
#construct aka.ms link
|
||||
$akaMsLink = "https://aka.ms/dotnet"
|
||||
if ($Internal) {
|
||||
$akaMsLink += "/internal"
|
||||
}
|
||||
$akaMsLink += "/$Channel"
|
||||
if (-not [string]::IsNullOrEmpty($Quality)) {
|
||||
$akaMsLink +="/$Quality"
|
||||
}
|
||||
$akaMsLink +="/$Product-win-$Architecture.zip"
|
||||
Say-Verbose "Constructed aka.ms link: '$akaMsLink'."
|
||||
$akaMsDownloadLink=$null
|
||||
|
||||
for ($maxRedirections = 9; $maxRedirections -ge 0; $maxRedirections--)
|
||||
{
|
||||
#get HTTP response
|
||||
#do not pass credentials as a part of the $akaMsLink and do not apply credentials in the GetHTTPResponse function
|
||||
#otherwise the redirect link would have credentials as well
|
||||
#it would result in applying credentials twice to the resulting link and thus breaking it, and in echoing credentials to the output as a part of redirect link
|
||||
$Response= GetHTTPResponse -Uri $akaMsLink -HeaderOnly $true -DisableRedirect $true -DisableFeedCredential $true
|
||||
Say-Verbose "Received response:`n$Response"
|
||||
|
||||
if ([string]::IsNullOrEmpty($Response)) {
|
||||
Say-Verbose "The link '$akaMsLink' is not valid: failed to get redirect location. The resource is not available."
|
||||
return $null
|
||||
}
|
||||
|
||||
#if HTTP code is 301 (Moved Permanently), the redirect link exists
|
||||
if ($Response.StatusCode -eq 301)
|
||||
{
|
||||
try {
|
||||
$akaMsDownloadLink = $Response.Headers.GetValues("Location")[0]
|
||||
|
||||
if ([string]::IsNullOrEmpty($akaMsDownloadLink)) {
|
||||
Say-Verbose "The link '$akaMsLink' is not valid: server returned 301 (Moved Permanently), but the headers do not contain the redirect location."
|
||||
return $null
|
||||
}
|
||||
|
||||
Say-Verbose "The redirect location retrieved: '$akaMsDownloadLink'."
|
||||
# This may yet be a link to another redirection. Attempt to retrieve the page again.
|
||||
$akaMsLink = $akaMsDownloadLink
|
||||
continue
|
||||
}
|
||||
catch {
|
||||
Say-Verbose "The link '$akaMsLink' is not valid: failed to get redirect location."
|
||||
return $null
|
||||
}
|
||||
}
|
||||
elseif ((($Response.StatusCode -lt 300) -or ($Response.StatusCode -ge 400)) -and (-not [string]::IsNullOrEmpty($akaMsDownloadLink)))
|
||||
{
|
||||
# Redirections have ended.
|
||||
return $akaMsDownloadLink
|
||||
}
|
||||
|
||||
Say-Verbose "The link '$akaMsLink' is not valid: failed to retrieve the redirection location."
|
||||
return $null
|
||||
}
|
||||
|
||||
Say-Verbose "Aka.ms links have redirected more than the maximum allowed redirections. This may be caused by a cyclic redirection of aka.ms links."
|
||||
return $null
|
||||
|
||||
}
|
||||
|
||||
Say "Note that the intended use of this script is for Continuous Integration (CI) scenarios, where:"
|
||||
Say "- The SDK needs to be installed without user interaction and without admin rights."
|
||||
Say "- The SDK installation doesn't need to persist across multiple CI runs."
|
||||
Say "To set up a development environment or to run apps, use installers rather than this script. Visit https://dotnet.microsoft.com/download to get the installer.`r`n"
|
||||
|
||||
if ($Internal -and [string]::IsNullOrWhitespace($FeedCredential)) {
|
||||
$message = "Provide credentials via -FeedCredential parameter."
|
||||
if ($DryRun) {
|
||||
Say-Warning "$message"
|
||||
} else {
|
||||
throw "$message"
|
||||
}
|
||||
}
|
||||
|
||||
#FeedCredential should start with "?", for it to be added to the end of the link.
|
||||
#adding "?" at the beginning of the FeedCredential if needed.
|
||||
if ((![string]::IsNullOrWhitespace($FeedCredential)) -and ($FeedCredential[0] -ne '?')) {
|
||||
$FeedCredential = "?" + $FeedCredential
|
||||
}
|
||||
|
||||
$CLIArchitecture = Get-CLIArchitecture-From-Architecture $Architecture
|
||||
$NormalizedQuality = Get-NormalizedQuality $Quality
|
||||
Say-Verbose "Normalized quality: '$NormalizedQuality'"
|
||||
$NormalizedChannel = Get-NormalizedChannel $Channel
|
||||
Say-Verbose "Normalized channel: '$NormalizedChannel'"
|
||||
$NormalizedProduct = Get-NormalizedProduct $Runtime
|
||||
Say-Verbose "Normalized product: '$NormalizedProduct'"
|
||||
$DownloadLink = $null
|
||||
|
||||
#try to get download location from aka.ms link
|
||||
#not applicable when exact version is specified via command or json file
|
||||
if ([string]::IsNullOrEmpty($JSonFile) -and ($Version -eq "latest")) {
|
||||
$AkaMsDownloadLink = Get-AkaMSDownloadLink -Channel $NormalizedChannel -Quality $NormalizedQuality -Internal $Internal -Product $NormalizedProduct -Architecture $CLIArchitecture
|
||||
|
||||
if ([string]::IsNullOrEmpty($AkaMsDownloadLink)){
|
||||
if (-not [string]::IsNullOrEmpty($NormalizedQuality)) {
|
||||
# if quality is specified - exit with error - there is no fallback approach
|
||||
Say-Error "Failed to locate the latest version in the channel '$NormalizedChannel' with '$NormalizedQuality' quality for '$NormalizedProduct', os: 'win', architecture: '$CLIArchitecture'."
|
||||
Say-Error "Refer to: https://aka.ms/dotnet-os-lifecycle for information on .NET Core support."
|
||||
throw "aka.ms link resolution failure"
|
||||
}
|
||||
Say-Verbose "Falling back to latest.version file approach."
|
||||
}
|
||||
else {
|
||||
Say-Verbose "Retrieved primary named payload URL from aka.ms link: '$AkaMsDownloadLink'."
|
||||
$DownloadLink = $AkaMsDownloadLink
|
||||
Say-Verbose "Downloading using legacy url will not be attempted."
|
||||
$LegacyDownloadLink = $null
|
||||
|
||||
#get version from the path
|
||||
$pathParts = $DownloadLink.Split('/')
|
||||
if ($pathParts.Length -ge 2) {
|
||||
$SpecificVersion = $pathParts[$pathParts.Length - 2]
|
||||
Say-Verbose "Version: '$SpecificVersion'."
|
||||
}
|
||||
else {
|
||||
Say-Error "Failed to extract the version from download link '$DownloadLink'."
|
||||
}
|
||||
|
||||
#retrieve effective (product) version
|
||||
$EffectiveVersion = Get-Product-Version -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -PackageDownloadLink $DownloadLink
|
||||
Say-Verbose "Product version: '$EffectiveVersion'."
|
||||
}
|
||||
}
|
||||
|
||||
if ([string]::IsNullOrEmpty($DownloadLink)) {
|
||||
$SpecificVersion = Get-Specific-Version-From-Version -AzureFeed $AzureFeed -Channel $Channel -Version $Version -JSonFile $JSonFile
|
||||
$DownloadLink, $EffectiveVersion = Get-Download-Link -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -CLIArchitecture $CLIArchitecture
|
||||
$LegacyDownloadLink = Get-LegacyDownload-Link -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -CLIArchitecture $CLIArchitecture
|
||||
}
|
||||
|
||||
$SpecificVersion = Get-Specific-Version-From-Version -AzureFeed $AzureFeed -Channel $Channel -Version $Version -JSonFile $JSonFile
|
||||
$DownloadLink, $EffectiveVersion = Get-Download-Link -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -CLIArchitecture $CLIArchitecture
|
||||
$LegacyDownloadLink = Get-LegacyDownload-Link -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -CLIArchitecture $CLIArchitecture
|
||||
|
||||
$InstallRoot = Resolve-Installation-Path $InstallDir
|
||||
Say-Verbose "InstallRoot: $InstallRoot"
|
||||
@@ -1008,9 +718,9 @@ $ScriptName = $MyInvocation.MyCommand.Name
|
||||
|
||||
if ($DryRun) {
|
||||
Say "Payload URLs:"
|
||||
Say "Primary named payload URL: ${DownloadLink}"
|
||||
Say "Primary named payload URL: $DownloadLink"
|
||||
if ($LegacyDownloadLink) {
|
||||
Say "Legacy named payload URL: ${LegacyDownloadLink}"
|
||||
Say "Legacy named payload URL: $LegacyDownloadLink"
|
||||
}
|
||||
$RepeatableCommand = ".\$ScriptName -Version `"$SpecificVersion`" -InstallDir `"$InstallRoot`" -Architecture `"$CLIArchitecture`""
|
||||
if ($Runtime -eq "dotnet") {
|
||||
@@ -1019,20 +729,11 @@ if ($DryRun) {
|
||||
elseif ($Runtime -eq "aspnetcore") {
|
||||
$RepeatableCommand+=" -Runtime `"aspnetcore`""
|
||||
}
|
||||
|
||||
if (-not [string]::IsNullOrEmpty($NormalizedQuality))
|
||||
{
|
||||
$RepeatableCommand+=" -Quality `"$NormalizedQuality`""
|
||||
}
|
||||
|
||||
foreach ($key in $MyInvocation.BoundParameters.Keys) {
|
||||
if (-not (@("Architecture","Channel","DryRun","InstallDir","Runtime","SharedRuntime","Version","Quality","FeedCredential") -contains $key)) {
|
||||
if (-not (@("Architecture","Channel","DryRun","InstallDir","Runtime","SharedRuntime","Version") -contains $key)) {
|
||||
$RepeatableCommand+=" -$key `"$($MyInvocation.BoundParameters[$key])`""
|
||||
}
|
||||
}
|
||||
if ($MyInvocation.BoundParameters.Keys -contains "FeedCredential") {
|
||||
$RepeatableCommand+=" -FeedCredential `"<feedCredential>`""
|
||||
}
|
||||
Say "Repeatable invocation: $RepeatableCommand"
|
||||
if ($SpecificVersion -ne $EffectiveVersion)
|
||||
{
|
||||
@@ -1078,16 +779,9 @@ if ($isAssetInstalled) {
|
||||
|
||||
New-Item -ItemType Directory -Force -Path $InstallRoot | Out-Null
|
||||
|
||||
$installDrive = $((Get-Item $InstallRoot -Force).PSDrive.Name);
|
||||
$diskInfo = $null
|
||||
try{
|
||||
$diskInfo = Get-PSDrive -Name $installDrive
|
||||
}
|
||||
catch{
|
||||
Say-Warning "Failed to check the disk space. Installation will continue, but it may fail if you do not have enough disk space."
|
||||
}
|
||||
|
||||
if ( ($diskInfo -ne $null) -and ($diskInfo.Free / 1MB -le 100)) {
|
||||
$installDrive = $((Get-Item $InstallRoot).PSDrive.Name);
|
||||
$diskInfo = Get-PSDrive -Name $installDrive
|
||||
if ($diskInfo.Free / 1MB -le 100) {
|
||||
throw "There is not enough disk space on drive ${installDrive}:"
|
||||
}
|
||||
|
||||
@@ -1207,44 +901,43 @@ Prepend-Sdk-InstallRoot-To-Path -InstallRoot $InstallRoot -BinFolderRelativePath
|
||||
Say "Note that the script does not resolve dependencies during installation."
|
||||
Say "To check the list of dependencies, go to https://docs.microsoft.com/dotnet/core/install/windows#dependencies"
|
||||
Say "Installation finished"
|
||||
|
||||
# SIG # Begin signature block
# [Authenticode signature block omitted: the diff replaces the embedded base64
#  signature of dotnet-install.ps1 with a regenerated signature; contents elided]
# SIG # End signature block
|
||||
|
||||
428
src/Misc/dotnet-install.sh
vendored
@@ -24,7 +24,7 @@ exec 3>&1
|
||||
# See if stdout is a terminal
|
||||
if [ -t 1 ] && command -v tput > /dev/null; then
|
||||
# see if it supports colors
|
||||
ncolors=$(tput colors || echo 0)
|
||||
ncolors=$(tput colors)
|
||||
if [ -n "$ncolors" ] && [ $ncolors -ge 8 ]; then
|
||||
bold="$(tput bold || echo)"
|
||||
normal="$(tput sgr0 || echo)"
|
||||
@@ -224,7 +224,7 @@ get_legacy_os_name() {
|
||||
machine_has() {
|
||||
eval $invocation
|
||||
|
||||
command -v "$1" > /dev/null 2>&1
|
||||
hash "$1" > /dev/null 2>&1
|
||||
return $?
|
||||
}
|
||||
|
||||
@@ -368,75 +368,6 @@ get_normalized_os() {
|
||||
return 0
|
||||
}
|
||||
|
||||
# args:
|
||||
# quality - $1
|
||||
get_normalized_quality() {
|
||||
eval $invocation
|
||||
|
||||
local quality="$(to_lowercase "$1")"
|
||||
if [ ! -z "$quality" ]; then
|
||||
case "$quality" in
|
||||
daily | signed | validated | preview)
|
||||
echo "$quality"
|
||||
return 0
|
||||
;;
|
||||
ga)
|
||||
#ga quality is available without specifying quality, so normalizing it to empty
|
||||
return 0
|
||||
;;
|
||||
*)
|
||||
say_err "'$quality' is not a supported value for --quality option. Supported values are: daily, signed, validated, preview, ga. If you think this is a bug, report it at https://github.com/dotnet/install-scripts/issues."
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
return 0
|
||||
}
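# A minimal illustration of the quality normalization above, assuming the
# function is sourced unchanged (the values below are examples, not exhaustive):
#   get_normalized_quality "Daily"  -> prints "daily", returns 0
#   get_normalized_quality "GA"     -> prints nothing (ga maps to the default), returns 0
#   get_normalized_quality ""       -> prints nothing, returns 0
#   get_normalized_quality "beta"   -> reports an unsupported value and returns 1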
|
||||
|
||||
# args:
|
||||
# channel - $1
|
||||
get_normalized_channel() {
|
||||
eval $invocation
|
||||
|
||||
local channel="$(to_lowercase "$1")"
|
||||
|
||||
if [[ $channel == release/* ]]; then
|
||||
say_warning 'Using branch name with -Channel option is no longer supported with newer releases. Use -Quality option with a channel in X.Y format instead.';
|
||||
fi
|
||||
|
||||
if [ ! -z "$channel" ]; then
|
||||
case "$channel" in
|
||||
lts)
|
||||
echo "LTS"
|
||||
return 0
|
||||
;;
|
||||
*)
|
||||
echo "$channel"
|
||||
return 0
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
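# A minimal illustration of the channel normalization above, assuming the
# function is sourced unchanged (inputs are lowercased first):
#   get_normalized_channel "LTS"            -> prints "LTS"
#   get_normalized_channel "6.0"            -> prints "6.0" (hypothetical channel value)
#   get_normalized_channel "release/2.0.0"  -> warns that branch names are no longer
#                                              supported, then prints the value as-is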
|
||||
|
||||
# args:
|
||||
# runtime - $1
|
||||
get_normalized_product() {
|
||||
eval $invocation
|
||||
|
||||
local runtime="$(to_lowercase "$1")"
|
||||
if [[ "$runtime" == "dotnet" ]]; then
|
||||
product="dotnet-runtime"
|
||||
elif [[ "$runtime" == "aspnetcore" ]]; then
|
||||
product="aspnetcore-runtime"
|
||||
elif [ -z "$runtime" ]; then
|
||||
product="dotnet-sdk"
|
||||
fi
|
||||
echo "$product"
|
||||
return 0
|
||||
}
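# A minimal illustration of the product mapping above, assuming the function is
# sourced unchanged:
#   get_normalized_product ""            -> "dotnet-sdk"
#   get_normalized_product "dotnet"      -> "dotnet-runtime"
#   get_normalized_product "aspnetcore"  -> "aspnetcore-runtime"
# Any other non-empty runtime value falls through and an empty product name is printed.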
|
||||
|
||||
# The version text returned from the feeds is a 1-line or 2-line string:
|
||||
# For the SDK and the dotnet runtime (2 lines):
|
||||
# Line 1: # commit_hash
|
||||
@@ -610,131 +541,43 @@ construct_download_link() {
|
||||
# args:
|
||||
# azure_feed - $1
|
||||
# specific_version - $2
|
||||
# download link - $3 (optional)
|
||||
get_specific_product_version() {
|
||||
# If we find a 'productVersion.txt' at the root of any folder, we'll use its contents
|
||||
# to resolve the version of what's in the folder, superseding the specified version.
|
||||
# if 'productVersion.txt' is missing but download link is already available, product version will be taken from download link
|
||||
eval $invocation
|
||||
|
||||
local azure_feed="$1"
|
||||
local specific_version="${2//[$'\t\r\n']}"
|
||||
local package_download_link=""
|
||||
if [ $# -gt 2 ]; then
|
||||
local package_download_link="$3"
|
||||
fi
|
||||
local specific_product_version=null
|
||||
|
||||
# Try to get the version number, using the productVersion.txt file located next to the installer file.
|
||||
local download_links=($(get_specific_product_version_url "$azure_feed" "$specific_version" true "$package_download_link")
|
||||
$(get_specific_product_version_url "$azure_feed" "$specific_version" false "$package_download_link"))
|
||||
|
||||
for download_link in "${download_links[@]}"
|
||||
do
|
||||
say_verbose "Checking for the existence of $download_link"
|
||||
|
||||
if machine_has "curl"
|
||||
then
|
||||
specific_product_version=$(curl -s --fail "${download_link}${feed_credential}")
|
||||
if [ $? = 0 ]; then
|
||||
echo "${specific_product_version//[$'\t\r\n']}"
|
||||
return 0
|
||||
fi
|
||||
elif machine_has "wget"
|
||||
then
|
||||
specific_product_version=$(wget -qO- "${download_link}${feed_credential}")
|
||||
if [ $? = 0 ]; then
|
||||
echo "${specific_product_version//[$'\t\r\n']}"
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
# Getting the version number with productVersion.txt has failed. Try parsing the download link for a version number.
|
||||
say_verbose "Failed to get the version using productVersion.txt file. Download link will be parsed instead."
|
||||
specific_product_version="$(get_product_specific_version_from_download_link "$package_download_link" "$specific_version")"
|
||||
echo "${specific_product_version//[$'\t\r\n']}"
|
||||
return 0
|
||||
}
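# Resolution order implemented above, shown as a sketch (the URLs are hypothetical):
#   1. try the flattened file, e.g.  $azure_feed/Sdk/<version>/sdk-productVersion.txt
#   2. try the plain file,     e.g.  $azure_feed/Sdk/<version>/productVersion.txt
#   3. if both downloads fail, fall back to parsing the package download link itself
# The first successful download wins and its contents (stripped of tab/CR/LF
# characters) are echoed as the product version.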
|
||||
|
||||
# args:
|
||||
# azure_feed - $1
|
||||
# specific_version - $2
|
||||
# is_flattened - $3
|
||||
# download link - $4 (optional)
|
||||
get_specific_product_version_url() {
|
||||
eval $invocation
|
||||
|
||||
local azure_feed="$1"
|
||||
local specific_version="$2"
|
||||
local is_flattened="$3"
|
||||
local package_download_link=""
|
||||
if [ $# -gt 3 ]; then
|
||||
local package_download_link="$4"
|
||||
fi
|
||||
|
||||
local pvFileName="productVersion.txt"
|
||||
if [ "$is_flattened" = true ]; then
|
||||
if [ -z "$runtime" ]; then
|
||||
pvFileName="sdk-productVersion.txt"
|
||||
elif [[ "$runtime" == "dotnet" ]]; then
|
||||
pvFileName="runtime-productVersion.txt"
|
||||
else
|
||||
pvFileName="$runtime-productVersion.txt"
|
||||
fi
|
||||
fi
|
||||
local specific_product_version=$specific_version
|
||||
|
||||
local download_link=null
|
||||
if [[ "$runtime" == "dotnet" ]]; then
|
||||
download_link="$azure_feed/Runtime/$specific_version/productVersion.txt${feed_credential}"
|
||||
elif [[ "$runtime" == "aspnetcore" ]]; then
|
||||
download_link="$azure_feed/aspnetcore/Runtime/$specific_version/productVersion.txt${feed_credential}"
|
||||
elif [ -z "$runtime" ]; then
|
||||
download_link="$azure_feed/Sdk/$specific_version/productVersion.txt${feed_credential}"
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [ -z "$package_download_link" ]; then
|
||||
if [[ "$runtime" == "dotnet" ]]; then
|
||||
download_link="$azure_feed/Runtime/$specific_version/${pvFileName}"
|
||||
elif [[ "$runtime" == "aspnetcore" ]]; then
|
||||
download_link="$azure_feed/aspnetcore/Runtime/$specific_version/${pvFileName}"
|
||||
elif [ -z "$runtime" ]; then
|
||||
download_link="$azure_feed/Sdk/$specific_version/${pvFileName}"
|
||||
else
|
||||
return 1
|
||||
if machine_has "curl"
|
||||
then
|
||||
specific_product_version=$(curl -s --fail "$download_link")
|
||||
if [ $? -ne 0 ]
|
||||
then
|
||||
specific_product_version=$specific_version
|
||||
fi
|
||||
elif machine_has "wget"
|
||||
then
|
||||
specific_product_version=$(wget -qO- "$download_link")
|
||||
if [ $? -ne 0 ]
|
||||
then
|
||||
specific_product_version=$specific_version
|
||||
fi
|
||||
else
|
||||
download_link="${package_download_link%/*}/${pvFileName}"
|
||||
fi
|
||||
specific_product_version="${specific_product_version//[$'\t\r\n']}"
|
||||
|
||||
say_verbose "Constructed productVersion link: $download_link"
|
||||
echo "$download_link"
|
||||
return 0
|
||||
}
|
||||
|
||||
# args:
|
||||
# download link - $1
|
||||
# specific version - $2
|
||||
get_product_specific_version_from_download_link()
|
||||
{
|
||||
eval $invocation
|
||||
|
||||
local download_link="$1"
|
||||
local specific_version="$2"
|
||||
local specific_product_version=""
|
||||
|
||||
if [ -z "$download_link" ]; then
|
||||
echo "$specific_version"
|
||||
return 0
|
||||
fi
|
||||
|
||||
#get filename
|
||||
filename="${download_link##*/}"
|
||||
|
||||
#product specific version follows the product name
|
||||
#for filename 'dotnet-sdk-3.1.404-linux-x64.tar.gz': the product version is 3.1.404
|
||||
IFS='-'
|
||||
read -ra filename_elems <<< "$filename"
|
||||
count=${#filename_elems[@]}
|
||||
if [[ "$count" -gt 2 ]]; then
|
||||
specific_product_version="${filename_elems[2]}"
|
||||
else
|
||||
specific_product_version=$specific_version
|
||||
fi
|
||||
unset IFS;
|
||||
echo "$specific_product_version"
|
||||
return 0
|
||||
}
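# Worked example for the parsing above, using a hypothetical download link:
#   link='https://example.invalid/Sdk/3.1.404/dotnet-sdk-3.1.404-linux-x64.tar.gz'
#   filename="${link##*/}"    -> dotnet-sdk-3.1.404-linux-x64.tar.gz
#   split on '-'              -> dotnet | sdk | 3.1.404 | linux | x64.tar.gz
#   element [2]               -> 3.1.404, reported as the product version
# With two or fewer '-' separated parts, or with an empty link, the passed-in
# specific_version is returned unchanged.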
|
||||
@@ -868,62 +711,19 @@ extract_dotnet_package() {
|
||||
return 0
|
||||
}
|
||||
|
||||
# args:
|
||||
# remote_path - $1
|
||||
# disable_feed_credential - $2
|
||||
get_http_header()
|
||||
{
|
||||
eval $invocation
|
||||
local remote_path="$1"
|
||||
local disable_feed_credential="$2"
|
||||
|
||||
local failed=false
|
||||
local response
|
||||
if machine_has "curl"; then
|
||||
get_http_header_curl $remote_path $disable_feed_credential || failed=true
|
||||
elif machine_has "wget"; then
|
||||
get_http_header_wget $remote_path $disable_feed_credential || failed=true
|
||||
else
|
||||
failed=true
|
||||
fi
|
||||
if [ "$failed" = true ]; then
|
||||
say_verbose "Failed to get HTTP header: '$remote_path'."
|
||||
return 1
|
||||
fi
|
||||
return 0
|
||||
}
|
||||
|
||||
# args:
|
||||
# remote_path - $1
|
||||
# disable_feed_credential - $2
|
||||
get_http_header_curl() {
|
||||
eval $invocation
|
||||
local remote_path="$1"
|
||||
local disable_feed_credential="$2"
|
||||
|
||||
remote_path_with_credential="$remote_path"
|
||||
if [ "$disable_feed_credential" = false ]; then
|
||||
remote_path_with_credential+="$feed_credential"
|
||||
fi
|
||||
|
||||
remote_path_with_credential="${remote_path}${feed_credential}"
|
||||
curl_options="-I -sSL --retry 5 --retry-delay 2 --connect-timeout 15 "
|
||||
curl $curl_options "$remote_path_with_credential" || return 1
|
||||
return 0
|
||||
}
|
||||
|
||||
# args:
|
||||
# remote_path - $1
|
||||
# disable_feed_credential - $2
|
||||
get_http_header_wget() {
|
||||
eval $invocation
|
||||
local remote_path="$1"
|
||||
local disable_feed_credential="$2"
|
||||
|
||||
remote_path_with_credential="$remote_path"
|
||||
if [ "$disable_feed_credential" = false ]; then
|
||||
remote_path_with_credential+="$feed_credential"
|
||||
fi
|
||||
|
||||
remote_path_with_credential="${remote_path}${feed_credential}"
|
||||
wget_options="-q -S --spider --tries 5 --waitretry 2 --connect-timeout 15 "
|
||||
wget $wget_options "$remote_path_with_credential" 2>&1 || return 1
|
||||
return 0
|
||||
@@ -963,7 +763,7 @@ download() {
|
||||
|
||||
say "Download attempt #$attempts has failed: $http_code $download_error_msg"
|
||||
say "Attempt #$((attempts+1)) will start in $((attempts*10)) seconds."
|
||||
sleep $((attempts*10))
|
||||
sleep $((attempts*20))
|
||||
done
|
||||
|
||||
|
||||
@@ -983,7 +783,6 @@ downloadcurl() {
|
||||
local remote_path="$1"
|
||||
local out_path="${2:-}"
|
||||
# Append feed_credential as late as possible before calling curl to avoid logging feed_credential
|
||||
# Avoid passing URI with credentials to functions: note, most of them echoing parameters of invocation in verbose output.
|
||||
local remote_path_with_credential="${remote_path}${feed_credential}"
|
||||
local curl_options="--retry 20 --retry-delay 2 --connect-timeout 15 -sSL -f --create-dirs "
|
||||
local failed=false
|
||||
@@ -993,8 +792,7 @@ downloadcurl() {
|
||||
curl $curl_options -o "$out_path" "$remote_path_with_credential" || failed=true
|
||||
fi
|
||||
if [ "$failed" = true ]; then
|
||||
local disable_feed_credential=false
|
||||
local response=$(get_http_header_curl $remote_path $disable_feed_credential)
|
||||
local response=$(get_http_header_curl $remote_path_with_credential)
|
||||
http_code=$( echo "$response" | awk '/^HTTP/{print $2}' | tail -1 )
|
||||
download_error_msg="Unable to download $remote_path."
|
||||
if [[ $http_code != 2* ]]; then
|
||||
@@ -1024,8 +822,7 @@ downloadwget() {
|
||||
wget $wget_options -O "$out_path" "$remote_path_with_credential" || failed=true
|
||||
fi
|
||||
if [ "$failed" = true ]; then
|
||||
local disable_feed_credential=false
|
||||
local response=$(get_http_header_wget $remote_path $disable_feed_credential)
|
||||
local response=$(get_http_header_wget $remote_path_with_credential)
|
||||
http_code=$( echo "$response" | awk '/^ HTTP/{print $2}' | tail -1 )
|
||||
download_error_msg="Unable to download $remote_path."
|
||||
if [[ $http_code != 2* ]]; then
|
||||
@@ -1037,115 +834,15 @@ downloadwget() {
|
||||
return 0
|
||||
}
|
||||
|
||||
get_download_link_from_aka_ms() {
|
||||
eval $invocation
|
||||
|
||||
#quality is not supported for LTS or current channel
|
||||
if [[ ! -z "$normalized_quality" && ("$normalized_channel" == "LTS" || "$normalized_channel" == "current") ]]; then
|
||||
normalized_quality=""
|
||||
say_warning "Specifying quality for current or LTS channel is not supported, the quality will be ignored."
|
||||
fi
|
||||
|
||||
say_verbose "Retrieving primary payload URL from aka.ms for channel: '$normalized_channel', quality: '$normalized_quality', product: '$normalized_product', os: '$normalized_os', architecture: '$normalized_architecture'."
|
||||
|
||||
#construct aka.ms link
|
||||
aka_ms_link="https://aka.ms/dotnet"
|
||||
if [ "$internal" = true ]; then
|
||||
aka_ms_link="$aka_ms_link/internal"
|
||||
fi
|
||||
aka_ms_link="$aka_ms_link/$normalized_channel"
|
||||
if [[ ! -z "$normalized_quality" ]]; then
|
||||
aka_ms_link="$aka_ms_link/$normalized_quality"
|
||||
fi
|
||||
aka_ms_link="$aka_ms_link/$normalized_product-$normalized_os-$normalized_architecture.tar.gz"
|
||||
say_verbose "Constructed aka.ms link: '$aka_ms_link'."
|
||||
|
||||
#get HTTP response
|
||||
#do not pass credentials as a part of the $aka_ms_link and do not apply credentials in the get_http_header function
|
||||
#otherwise the redirect link would have credentials as well
|
||||
#it would result in applying credentials twice to the resulting link and thus breaking it, and in echoing credentials to the output as a part of redirect link
|
||||
disable_feed_credential=true
|
||||
response="$(get_http_header $aka_ms_link $disable_feed_credential)"
|
||||
|
||||
say_verbose "Received response: $response"
|
||||
# Get results of all the redirects.
|
||||
http_codes=$( echo "$response" | awk '$1 ~ /^HTTP/ {print $2}' )
|
||||
# They all need to be 301, otherwise some links are broken (except for the last, which is not a redirect but 200 or 404).
|
||||
broken_redirects=$( echo "$http_codes" | sed '$d' | grep -v '301' )
|
||||
|
||||
# All HTTP codes are 301 (Moved Permanently), the redirect link exists.
|
||||
if [[ -z "$broken_redirects" ]]; then
|
||||
aka_ms_download_link=$( echo "$response" | awk '$1 ~ /^Location/{print $2}' | tail -1 | tr -d '\r')
|
||||
|
||||
if [[ -z "$aka_ms_download_link" ]]; then
|
||||
say_verbose "The aka.ms link '$aka_ms_link' is not valid: failed to get redirect location."
|
||||
return 1
|
||||
fi
|
||||
|
||||
say_verbose "The redirect location retrieved: '$aka_ms_download_link'."
|
||||
return 0
|
||||
else
|
||||
say_verbose "The aka.ms link '$aka_ms_link' is not valid: received HTTP code: $(echo "$broken_redirects" | paste -sd "," -)."
|
||||
return 1
|
||||
fi
|
||||
}
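# Sketch of the redirect validation above, assuming a hypothetical two-hop response:
#   HTTP/1.1 301 Moved Permanently
#   Location: https://redirect-1.invalid/...
#   HTTP/1.1 301 Moved Permanently
#   Location: https://download.invalid/dotnet-sdk-latest-linux-x64.tar.gz
#   HTTP/1.1 200 OK
# http_codes       -> 301, 301, 200; sed '$d' drops the final code before the check
# broken_redirects -> empty (every hop was 301), so the last Location header becomes
#                     aka_ms_download_link; any non-301 hop would make the link invalid.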
|
||||
|
||||
calculate_vars() {
|
||||
eval $invocation
|
||||
valid_legacy_download_link=true
|
||||
|
||||
#normalize input variables
|
||||
normalized_architecture="$(get_normalized_architecture_from_architecture "$architecture")"
|
||||
say_verbose "Normalized architecture: '$normalized_architecture'."
|
||||
say_verbose "normalized_architecture=$normalized_architecture"
|
||||
|
||||
normalized_os="$(get_normalized_os "$user_defined_os")"
|
||||
say_verbose "Normalized OS: '$normalized_os'."
|
||||
normalized_quality="$(get_normalized_quality "$quality")"
|
||||
say_verbose "Normalized quality: '$normalized_quality'."
|
||||
normalized_channel="$(get_normalized_channel "$channel")"
|
||||
say_verbose "Normalized channel: '$normalized_channel'."
|
||||
normalized_product="$(get_normalized_product "$runtime")"
|
||||
say_verbose "Normalized product: '$normalized_product'."
|
||||
|
||||
#try to get download location from aka.ms link
|
||||
#not applicable when exact version is specified via command or json file
|
||||
normalized_version="$(to_lowercase "$version")"
|
||||
if [[ -z "$json_file" && "$normalized_version" == "latest" ]]; then
|
||||
|
||||
valid_aka_ms_link=true;
|
||||
get_download_link_from_aka_ms || valid_aka_ms_link=false
|
||||
|
||||
if [ "$valid_aka_ms_link" == false ]; then
|
||||
# if quality is specified - exit with error - there is no fallback approach
|
||||
if [ ! -z "$normalized_quality" ]; then
|
||||
say_err "Failed to locate the latest version in the channel '$normalized_channel' with '$normalized_quality' quality for '$normalized_product', os: '$normalized_os', architecture: '$normalized_architecture'."
|
||||
say_err "Refer to: https://aka.ms/dotnet-os-lifecycle for information on .NET Core support."
|
||||
return 1
|
||||
fi
|
||||
say_verbose "Falling back to latest.version file approach."
|
||||
else
|
||||
say_verbose "Retrieved primary payload URL from aka.ms link: '$aka_ms_download_link'."
|
||||
download_link=$aka_ms_download_link
|
||||
|
||||
say_verbose "Downloading using legacy url will not be attempted."
|
||||
valid_legacy_download_link=false
|
||||
|
||||
#get version from the path
|
||||
IFS='/'
|
||||
read -ra pathElems <<< "$download_link"
|
||||
count=${#pathElems[@]}
|
||||
specific_version="${pathElems[count-2]}"
|
||||
unset IFS;
|
||||
say_verbose "Version: '$specific_version'."
|
||||
|
||||
#Retrieve product specific version
|
||||
specific_product_version="$(get_specific_product_version "$azure_feed" "$specific_version" "$download_link")"
|
||||
say_verbose "Product specific version: '$specific_product_version'."
|
||||
|
||||
install_root="$(resolve_installation_path "$install_dir")"
|
||||
say_verbose "InstallRoot: '$install_root'."
|
||||
return
|
||||
fi
|
||||
fi
|
||||
say_verbose "normalized_os=$normalized_os"
|
||||
|
||||
specific_version="$(get_specific_version_from_version "$azure_feed" "$channel" "$normalized_architecture" "$version" "$json_file")"
|
||||
specific_product_version="$(get_specific_product_version "$azure_feed" "$specific_version")"
|
||||
@@ -1314,8 +1011,6 @@ feed_credential=""
|
||||
verbose=false
|
||||
runtime=""
|
||||
runtime_id=""
|
||||
quality=""
|
||||
internal=false
|
||||
override_non_versioned_files=true
|
||||
non_dynamic_parameters=""
|
||||
user_defined_os=""
|
||||
@@ -1332,14 +1027,6 @@ do
|
||||
shift
|
||||
version="$1"
|
||||
;;
|
||||
-q|--quality|-[Qq]uality)
|
||||
shift
|
||||
quality="$1"
|
||||
;;
|
||||
--internal|-[Ii]nternal)
|
||||
internal=true
|
||||
non_dynamic_parameters+=" $name"
|
||||
;;
|
||||
-i|--install-dir|-[Ii]nstall[Dd]ir)
|
||||
shift
|
||||
install_dir="$1"
|
||||
@@ -1397,9 +1084,7 @@ do
|
||||
--feed-credential|-[Ff]eed[Cc]redential)
|
||||
shift
|
||||
feed_credential="$1"
|
||||
#feed_credential should start with "?", for it to be added to the end of the link.
|
||||
#adding "?" at the beginning of the feed_credential if needed.
|
||||
[[ -z "$(echo $feed_credential)" ]] || [[ $feed_credential == \?* ]] || feed_credential="?$feed_credential"
|
||||
non_dynamic_parameters+=" $name "\""$1"\"""
|
||||
;;
|
||||
--runtime-id|-[Rr]untime[Ii]d)
|
||||
shift
|
||||
@@ -1431,26 +1116,15 @@ do
|
||||
echo " - LTS - most current supported release"
|
||||
echo " - 2-part version in a format A.B - represents a specific release"
|
||||
echo " examples: 2.0; 1.0"
|
||||
echo " - 3-part version in a format A.B.Cxx - represents a specific SDK release"
|
||||
echo " examples: 5.0.1xx, 5.0.2xx."
|
||||
echo " Supported since 5.0 release"
|
||||
echo " Note: The version parameter overrides the channel parameter when any version other than `latest` is used."
|
||||
echo " - Branch name"
|
||||
echo " examples: release/2.0.0; Master"
|
||||
echo " Note: The version parameter overrides the channel parameter."
|
||||
echo " -v,--version <VERSION> Use specific VERSION, Defaults to \`$version\`."
|
||||
echo " -Version"
|
||||
echo " Possible values:"
|
||||
echo " - latest - most latest build on specific channel"
|
||||
echo " - 3-part version in a format A.B.C - represents specific version of build"
|
||||
echo " examples: 2.0.0-preview2-006120; 1.1.0"
|
||||
echo " -q,--quality <quality> Download the latest build of specified quality in the channel."
|
||||
echo " -Quality"
|
||||
echo " The possible values are: daily, signed, validated, preview, GA."
|
||||
echo " Works only in combination with channel. Not applicable for current and LTS channels and will be ignored if those channels are used."
|
||||
echo " For SDK use channel in A.B.Cxx format. Using quality for SDK together with channel in A.B format is not supported."
|
||||
echo " Supported since 5.0 release."
|
||||
echo " Note: The version parameter overrides the channel parameter when any version other than `latest` is used, and therefore overrides the quality."
|
||||
echo " --internal,-Internal Download internal builds. Requires providing credentials via --feed-credential parameter."
|
||||
echo " --feed-credential <FEEDCREDENTIAL> Token to access Azure feed. Used as a query string to append to the Azure feed."
|
||||
echo " -FeedCredential This parameter typically is not specified."
|
||||
echo " -i,--install-dir <DIR> Install under specified location (see Install Location below)"
|
||||
echo " -InstallDir"
|
||||
echo " --architecture <ARCHITECTURE> Architecture of dotnet binaries to be installed, Defaults to \`$architecture\`."
|
||||
@@ -1471,6 +1145,7 @@ do
|
||||
echo " --verbose,-Verbose Display diagnostics information."
|
||||
echo " --azure-feed,-AzureFeed Azure feed location. Defaults to $azure_feed, This parameter typically is not changed by the user."
|
||||
echo " --uncached-feed,-UncachedFeed Uncached feed location. This parameter typically is not changed by the user."
|
||||
echo " --feed-credential,-FeedCredential Azure feed shared access token. This parameter typically is not specified."
|
||||
echo " --skip-non-versioned-files Skips non-versioned files if they already exist, such as the dotnet executable."
|
||||
echo " -SkipNonVersionedFiles"
|
||||
echo " --no-cdn,-NoCdn Disable downloading from the Azure CDN, and use the uncached feed directly."
|
||||
@@ -1511,44 +1186,23 @@ say "- The SDK needs to be installed without user interaction and without admin
|
||||
say "- The SDK installation doesn't need to persist across multiple CI runs."
|
||||
say "To set up a development environment or to run apps, use installers rather than this script. Visit https://dotnet.microsoft.com/download to get the installer.\n"
|
||||
|
||||
if [ "$internal" = true ] && [ -z "$(echo $feed_credential)" ]; then
|
||||
message="Provide credentials via --feed-credential parameter."
|
||||
if [ "$dry_run" = true ]; then
|
||||
say_warning "$message"
|
||||
else
|
||||
say_err "$message"
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
check_min_reqs
|
||||
calculate_vars
|
||||
script_name=$(basename "$0")
|
||||
|
||||
if [ "$dry_run" = true ]; then
|
||||
say "Payload URLs:"
|
||||
say "Primary named payload URL: ${download_link}"
|
||||
say "Primary named payload URL: $download_link"
|
||||
if [ "$valid_legacy_download_link" = true ]; then
|
||||
say "Legacy named payload URL: ${legacy_download_link}"
|
||||
say "Legacy named payload URL: $legacy_download_link"
|
||||
fi
|
||||
repeatable_command="./$script_name --version "\""$specific_version"\"" --install-dir "\""$install_root"\"" --architecture "\""$normalized_architecture"\"" --os "\""$normalized_os"\"""
|
||||
|
||||
if [ ! -z "$normalized_quality" ]; then
|
||||
repeatable_command+=" --quality "\""$normalized_quality"\"""
|
||||
fi
|
||||
|
||||
if [[ "$runtime" == "dotnet" ]]; then
|
||||
repeatable_command+=" --runtime "\""dotnet"\"""
|
||||
elif [[ "$runtime" == "aspnetcore" ]]; then
|
||||
repeatable_command+=" --runtime "\""aspnetcore"\"""
|
||||
fi
|
||||
|
||||
repeatable_command+="$non_dynamic_parameters"
|
||||
|
||||
if [ -n "$feed_credential" ]; then
|
||||
repeatable_command+=" --feed-credential "\""<feed_credential>"\"""
|
||||
fi
|
||||
|
||||
say "Repeatable invocation: $repeatable_command"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
@@ -3,8 +3,7 @@ PACKAGERUNTIME=$1
PRECACHE=$2

NODE_URL=https://nodejs.org/dist
NODE12_VERSION="12.22.7"
NODE16_VERSION="16.13.0"
NODE12_VERSION="12.13.1"

get_abs_path() {
    # exploits the fact that pwd will print abs path when no args
@@ -127,8 +126,6 @@ function acquireExternalTool() {
if [[ "$PACKAGERUNTIME" == "win-x64" || "$PACKAGERUNTIME" == "win-x86" ]]; then
    acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/$PACKAGERUNTIME/node.exe" node12/bin
    acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/$PACKAGERUNTIME/node.lib" node12/bin
    acquireExternalTool "$NODE_URL/v${NODE16_VERSION}/$PACKAGERUNTIME/node.exe" node16/bin
    acquireExternalTool "$NODE_URL/v${NODE16_VERSION}/$PACKAGERUNTIME/node.lib" node16/bin
    if [[ "$PRECACHE" != "" ]]; then
        acquireExternalTool "https://github.com/microsoft/vswhere/releases/download/2.6.7/vswhere.exe" vswhere
    fi
@@ -137,30 +134,18 @@ fi

# Download the external tools only for OSX.
if [[ "$PACKAGERUNTIME" == "osx-x64" ]]; then
    acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-darwin-x64.tar.gz" node12 fix_nested_dir
    acquireExternalTool "$NODE_URL/v${NODE16_VERSION}/node-v${NODE16_VERSION}-darwin-x64.tar.gz" node16 fix_nested_dir
fi

# Download the external tools for Linux PACKAGERUNTIMEs.
if [[ "$PACKAGERUNTIME" == "linux-x64" ]]; then
    acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-linux-x64.tar.gz" node12 fix_nested_dir
    acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/${NODE12_VERSION}/alpine/x64/node-v${NODE12_VERSION}-alpine-x64.tar.gz" node12_alpine
    acquireExternalTool "$NODE_URL/v${NODE16_VERSION}/node-v${NODE16_VERSION}-linux-x64.tar.gz" node16 fix_nested_dir
    acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/${NODE16_VERSION}/alpine/x64/node-v${NODE16_VERSION}-alpine-x64.tar.gz" node16_alpine
    acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/${NODE12_VERSION}/alpine/x64/node-${NODE12_VERSION}-alpine-x64.tar.gz" node12_alpine
fi

if [[ "$PACKAGERUNTIME" == "linux-arm64" ]]; then
    acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-linux-arm64.tar.gz" node12 fix_nested_dir
    acquireExternalTool "$NODE_URL/v${NODE16_VERSION}/node-v${NODE16_VERSION}-linux-arm64.tar.gz" node16 fix_nested_dir
fi

if [[ "$PACKAGERUNTIME" == "linux-arm" ]]; then
    acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-linux-armv7l.tar.gz" node12 fix_nested_dir
    acquireExternalTool "$NODE_URL/v${NODE16_VERSION}/node-v${NODE16_VERSION}-linux-armv7l.tar.gz" node16 fix_nested_dir
fi

if [[ "$PACKAGERUNTIME" == "linux-musl-x64" ]]; then
    acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-linux-x64.tar.gz" node12_glib fix_nested_dir
    acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/${NODE12_VERSION}/alpine/x64/node-v${NODE12_VERSION}-alpine-x64.tar.gz" node12
    acquireExternalTool "$NODE_URL/v${NODE16_VERSION}/node-v${NODE16_VERSION}-linux-x64.tar.gz" node16_glib fix_nested_dir
    acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/${NODE16_VERSION}/alpine/x64/node-v${NODE16_VERSION}-alpine-x64.tar.gz" node16
fi
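For context, a minimal sketch of how this layout script is typically driven; the script only checks `$PACKAGERUNTIME` against the runtime IDs handled above and whether `$PRECACHE` is non-empty, and the exact argument values shown here are illustrative:

    # download node12/node16 (including the alpine builds) for an x64 Linux layout
    ./externals.sh linux-x64

    # on Windows runtimes, any non-empty second argument also pre-caches vswhere
    ./externals.sh win-x64 precache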
3031 src/Misc/layoutbin/kubeInnerHandler/index.js (new file; diff suppressed because it is too large)
3119 src/Misc/layoutbin/kubectlHandler/index.js (new file; diff suppressed because it is too large)
49 src/Misc/layoutbin/podman-handler.js (new file)
@@ -0,0 +1,49 @@
// Job container creation

// podman network create {network} -> track and return `network` for ${{job.container.network}}

// podman pull docker.io/library/{image}

// podman create --name e088c842be1f46b394212618408aaba0_node1016jessie_6196c9
//     --label fa4e14
//     --workdir /__w/canary/canary
//     --network github_network_f98a6e1e96e74d919d814c165641cba3
//     -e "HOME=/github/home" -e GITHUB_ACTIONS=true -e CI=true
//     -v "/var/run/docker.sock":"/var/run/docker.sock"
//     -v "/home/runner/work":"/__w"
//     -v "/home/runner/runners/2.283.2/externals":"/__e":ro
//     -v "/home/runner/work/_temp":"/__w/_temp"
//     -v "/home/runner/work/_actions":"/__w/_actions"
//     -v "/opt/hostedtoolcache":"/__t"
//     -v "/home/runner/work/_temp/_github_home":"/github/home"
//     -v "/home/runner/work/_temp/_github_workflow":"/github/workflow"
//     --entrypoint "tail" node:10.16-jessie "-f" "/dev/null"

// podman start {containerId}

// get PATH inside the container

// output containerId for ${{job.container.id}}


// Job container stop

// podman rm --force {containerId}

// podman network rm {network}


// Run step

// podman exec -i --workdir /__w/canary/canary
//     -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY
//     -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER
//     -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR
//     -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME
//     -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL
//     -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY
//     -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_DEBUG
//     -e RUNNER_OS -e RUNNER_NAME -e RUNNER_TOOL_CACHE
//     -e RUNNER_TEMP -e RUNNER_WORKSPACE
//     eccdf520697a035599d6e8c8dc801f004fdd3797cdce88f590aba3669a88d9bc sh -e /__w/_temp/d3b30383-719c-4e76-a16f-8f85443352be.sh
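Pulled together as a shell sketch, the lifecycle described by the comments above looks roughly like this. The network name, image, mount paths and the abbreviated list of `-e` pass-throughs are illustrative placeholders taken from those comments, not fixed values:

    # job container creation
    NETWORK="github_network_$(uuidgen | tr -d '-')"      # surfaced as ${{ job.container.network }}
    podman network create "$NETWORK"
    podman pull "docker.io/library/$IMAGE"
    CONTAINER_ID=$(podman create \
        --label fa4e14 \
        --workdir /__w/canary/canary \
        --network "$NETWORK" \
        -e "HOME=/github/home" -e GITHUB_ACTIONS=true -e CI=true \
        -v "/home/runner/work":"/__w" \
        -v "/home/runner/work/_temp":"/__w/_temp" \
        -v "/opt/hostedtoolcache":"/__t" \
        --entrypoint tail "$IMAGE" -f /dev/null)          # surfaced as ${{ job.container.id }}
    podman start "$CONTAINER_ID"

    # run step: execute the generated step script inside the job container
    podman exec -i --workdir /__w/canary/canary \
        -e GITHUB_JOB -e GITHUB_WORKSPACE -e RUNNER_TEMP \
        "$CONTAINER_ID" sh -e "/__w/_temp/<step>.sh"

    # job container stop
    podman rm --force "$CONTAINER_ID"
    podman network rm "$NETWORK"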
3110 src/Misc/layoutbin/podmanHandler/index.js (new file; diff suppressed because it is too large)
@@ -161,13 +161,59 @@ if [[ "$currentplatform" == 'darwin' && restartinteractiverunner -eq 0 ]]; then
        date "+[%F %T-%4N] DarwinRunnerUpgrade: Failed to find runner path. Path: $path, pgid: $procgroup, root: $rootfolder" >> "$telemetryfile" 2>&1
    fi
else
    runproc=$(ps x -o pgid,command | grep "run.sh" | grep -v grep | awk '{print $1}')
    if [[ $? -eq 0 && -n "$runproc" ]]
    then
        date "+[%F %T-%4N] Running as ephemeral using run.sh, no need to recreate node folder" >> "$logfile" 2>&1
    else
        date "+[%F %T-%4N] DarwinRunnerUpgrade: Failed to find runner pgid. pgid: $procgroup, root: $rootfolder" >> "$logfile" 2>&1
        date "+[%F %T-%4N] DarwinRunnerUpgrade: Failed to find runner pgid. pgid: $procgroup, root: $rootfolder" >> "$telemetryfile" 2>&1
    fi

    if [ $attemptedtargetedfix -eq 0 ]
    then
        date "+[%F %T-%4N] DarwinRunnerUpgrade: Defaulting to old macOS service fix" >> "$logfile" 2>&1
        date "+[%F %T-%4N] DarwinRunnerUpgrade: Defaulting to old macOS service fix" >> "$telemetryfile" 2>&1
        if [[ ! -e "$rootfolder/externals.2.280.3/node12/bin/node" ]]
        then
            mkdir -p "$rootfolder/externals.2.280.3/node12/bin"
            cp "$rootfolder/externals/node12/bin/node" "$rootfolder/externals.2.280.3/node12/bin/node"
        fi

        if [[ ! -e "$rootfolder/externals.2.280.2/node12/bin/node" ]]
        then
            mkdir -p "$rootfolder/externals.2.280.2/node12/bin"
            cp "$rootfolder/externals/node12/bin/node" "$rootfolder/externals.2.280.2/node12/bin/node"
        fi

        if [[ ! -e "$rootfolder/externals.2.280.1/node12/bin/node" ]]
        then
            mkdir -p "$rootfolder/externals.2.280.1/node12/bin"
            cp "$rootfolder/externals/node12/bin/node" "$rootfolder/externals.2.280.1/node12/bin/node"
        fi

        # GHES 3.2
        if [[ ! -e "$rootfolder/externals.2.279.0/node12/bin/node" ]]
        then
            mkdir -p "$rootfolder/externals.2.279.0/node12/bin"
            cp "$rootfolder/externals/node12/bin/node" "$rootfolder/externals.2.279.0/node12/bin/node"
        fi

        # GHES 3.1.2 or later
        if [[ ! -e "$rootfolder/externals.2.278.0/node12/bin/node" ]]
        then
            mkdir -p "$rootfolder/externals.2.278.0/node12/bin"
            cp "$rootfolder/externals/node12/bin/node" "$rootfolder/externals.2.278.0/node12/bin/node"
        fi

        # GHES 3.1.0
        if [[ ! -e "$rootfolder/externals.2.276.1/node12/bin/node" ]]
        then
            mkdir -p "$rootfolder/externals.2.276.1/node12/bin"
            cp "$rootfolder/externals/node12/bin/node" "$rootfolder/externals.2.276.1/node12/bin/node"
        fi

        # GHES 3.0
        if [[ ! -e "$rootfolder/externals.2.273.5/node12/bin/node" ]]
        then
            mkdir -p "$rootfolder/externals.2.273.5/node12/bin"
            cp "$rootfolder/externals/node12/bin/node" "$rootfolder/externals.2.273.5/node12/bin/node"
        fi
    fi
fi
68 src/Misc/layoutroot/entrypoint.sh (new executable file)
@@ -0,0 +1,68 @@
#!/bin/bash

set -euo pipefail

function fatal() {
    echo "error: $1" >&2
    exit 1
}

[ -n "${GITHUB_PAT:-""}" ] || fatal "GITHUB_PAT variable must be set"
[ -n "${RUNNER_CONFIG_URL:-""}" ] || fatal "RUNNER_CONFIG_URL variable must be set"
# [ -n "${RUNNER_NAME:-""}" ] || fatal "RUNNER_NAME variable must be set"

# if [ -n "${RUNNER_NAME}" ]; then
#     # Use container id to gen unique runner name if name not provide
#     CONTAINER_ID=$(cat /proc/self/cgroup | head -n 1 | tr '/' '\n' | tail -1 | cut -c1-12)
#     RUNNER_NAME="actions-runner-${CONTAINER_ID}"
# fi

# if the scope has a slash, it's a repo runner
# orgs_or_repos="orgs"
# if [[ "$GITHUB_RUNNER_SCOPE" == *\/* ]]; then
#     orgs_or_repos="repos"
# fi

# RUNNER_REG_URL="${GITHUB_SERVER_URL:=https://github.com}/${GITHUB_RUNNER_SCOPE}"

# echo "Runner Name : ${RUNNER_NAME}"
echo "Registration URL : ${RUNNER_CONFIG_URL}"
# echo "GitHub API URL : ${GITHUB_API_URL:=https://api.github.com}"
# echo "Runner Labels : ${RUNNER_LABELS:=""}"

# TODO: if api url is not default, validate it ends in /api/v3

# RUNNER_LABELS_ARG=""
# if [ -n "${RUNNER_LABELS}" ]; then
#     RUNNER_LABELS_ARG="--labels ${RUNNER_LABELS}"
# fi

# RUNNER_GROUP_ARG=""
# if [ -n "${RUNNER_GROUP}" ]; then
#     RUNNER_GROUP_ARG="--runnergroup ${RUNNER_GROUP}"
# fi

# if [ -n "${K8S_HOST_IP}" ]; then
#     export http_proxy=http://$K8S_HOST_IP:9090
# fi

# curl -v -s -X POST ${GITHUB_API_URL}/${orgs_or_repos}/${GITHUB_RUNNER_SCOPE}/actions/runners/registration-token -H "authorization: token $GITHUB_PAT" -H "accept: application/vnd.github.everest-preview+json"

# Generate registration token
# RUNNER_REG_TOKEN=$(curl -s -X POST ${GITHUB_API_URL}/${orgs_or_repos}/${GITHUB_RUNNER_SCOPE}/actions/runners/registration-token -H "authorization: token $GITHUB_PAT" -H "accept: application/vnd.github.everest-preview+json" | jq -r '.token')

# Create the runner and configure it
./config.sh --unattended --url $RUNNER_CONFIG_URL --pat $GITHUB_PAT --replace --ephemeral

# while (! docker version ); do
#     # Docker takes a few seconds to initialize
#     echo "Waiting for Docker to launch..."
#     sleep 1
# done

# unset env
unset RUNNER_CONFIG_URL
unset GITHUB_PAT

# Run it
./run.sh
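A minimal sketch of how this entrypoint would be exercised, assuming the runner layout (including this script) has been baked into a container image; the image tag and the repository URL are placeholders, and only the two environment variable names come from the script above:

    docker run --rm \
        -e GITHUB_PAT="<personal access token>" \
        -e RUNNER_CONFIG_URL="https://github.com/<owner>/<repo>" \
        runner-image

    # the entrypoint registers an ephemeral runner against RUNNER_CONFIG_URL,
    # unsets both variables, and then starts ./run.sh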
@@ -38,8 +38,7 @@ namespace GitHub.Runner.Common
        public async Task ConnectAsync(VssConnection jobConnection)
        {
            _connection = jobConnection;
            int totalAttempts = 5;
            int attemptCount = totalAttempts;
            int attemptCount = 5;
            var configurationStore = HostContext.GetService<IConfigurationStore>();
            var runnerSettings = configurationStore.GetSettings();

@@ -57,21 +56,18 @@ namespace GitHub.Runner.Common

                    if (runnerSettings.IsHostedServer)
                    {
                        await CheckNetworkEndpointsAsync(attemptCount);
                        await CheckNetworkEndpointsAsync();
                    }
                }

                int attempt = totalAttempts - attemptCount;
                TimeSpan backoff = BackoffTimerHelper.GetExponentialBackoff(attempt, TimeSpan.FromMilliseconds(100), TimeSpan.FromSeconds(3.2), TimeSpan.FromMilliseconds(100));

                await Task.Delay(backoff);
                await Task.Delay(100);
            }

            _taskClient = _connection.GetClient<TaskHttpClient>();
            _hasConnection = true;
        }

        private async Task CheckNetworkEndpointsAsync(int attemptsLeft)
        private async Task CheckNetworkEndpointsAsync()
        {
            try
            {
@@ -83,8 +79,8 @@ namespace GitHub.Runner.Common

                    actionsClient.DefaultRequestHeaders.UserAgent.AddRange(HostContext.UserAgents);

                    // Call the _apis/health endpoint, and include how many attempts are left as a URL query for easy tracking
                    var response = await actionsClient.GetAsync(new Uri(baseUri, $"_apis/health?_internalRunnerAttemptsLeft={attemptsLeft}"));
                    // Call the _apis/health endpoint
                    var response = await actionsClient.GetAsync(new Uri(baseUri, "_apis/health"));
                    Trace.Info($"Actions health status code: {response.StatusCode}");
                }
            }
@@ -104,8 +100,8 @@ namespace GitHub.Runner.Common
                {
                    gitHubClient.DefaultRequestHeaders.UserAgent.AddRange(HostContext.UserAgents);

                    // Call the api.github.com endpoint, and include how many attempts are left as a URL query for easy tracking
                    var response = await gitHubClient.GetAsync($"https://api.github.com?_internalRunnerAttemptsLeft={attemptsLeft}");
                    // Call the api.github.com endpoint
                    var response = await gitHubClient.GetAsync("https://api.github.com");
                    Trace.Info($"api.github.com status code: {response.StatusCode}");
                }
            }
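For manual troubleshooting, the two probes above amount to plain unauthenticated GET requests. A rough curl equivalent, where the Actions service base URL is a placeholder and the `_internalRunnerAttemptsLeft` query string exists only for server-side tracking:

    # Actions service health endpoint
    curl -s -o /dev/null -w '%{http_code}\n' \
        'https://<actions-service-host>/_apis/health?_internalRunnerAttemptsLeft=5'

    # api.github.com reachability
    curl -s -o /dev/null -w '%{http_code}\n' \
        'https://api.github.com?_internalRunnerAttemptsLeft=5'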
@@ -3,7 +3,7 @@
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <OutputType>Library</OutputType>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
    <TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
    <AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
    <NoWarn>NU1701;NU1603</NoWarn>

@@ -3,12 +3,13 @@
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <OutputType>Exe</OutputType>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
    <TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
    <AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
    <NoWarn>NU1701;NU1603</NoWarn>
    <Version>$(Version)</Version>
    <TieredCompilationQuickJit>true</TieredCompilationQuickJit>
    <PublishReadyToRun>true</PublishReadyToRun>
  </PropertyGroup>

  <ItemGroup>
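The `<RuntimeIdentifiers>` list above is what makes per-platform publishing possible; with `linux-musl-x64` removed, a publish targeting that RID is no longer supported by these projects. A hedged example of publishing one of the remaining RIDs (the project path is a placeholder):

    dotnet publish <Project>.csproj -c Release -r linux-x64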
@@ -312,8 +312,6 @@ namespace GitHub.Runner.Listener
            }

            HostContext.WritePerfCounter("SessionCreated");

            _term.WriteLine($"Current runner version: '{BuildConstants.RunnerPackage.Version}'");
            _term.WriteLine($"{DateTime.UtcNow:u}: Listening for Jobs");

            IJobDispatcher jobDispatcher = null;

@@ -75,9 +75,11 @@ namespace GitHub.Runner.Listener
            Trace.Info($"All running job has exited.");

            // We need to keep runner backup around for macOS until we fixed https://github.com/actions/runner/issues/743
#if !OS_OSX
            // delete runner backup
            DeletePreviousVersionRunnerBackup(token);
            Trace.Info($"Delete old version runner backup.");
#endif
            // generate update script from template
            await UpdateRunnerUpdateStateAsync("Generate and execute update script.");

@@ -3,12 +3,13 @@
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <OutputType>Exe</OutputType>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
    <TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
    <AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
    <NoWarn>NU1701;NU1603</NoWarn>
    <Version>$(Version)</Version>
    <TieredCompilationQuickJit>true</TieredCompilationQuickJit>
    <PublishReadyToRun>true</PublishReadyToRun>
  </PropertyGroup>

  <ItemGroup>

@@ -3,7 +3,7 @@
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <OutputType>Library</OutputType>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
    <TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
    <AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
    <NoWarn>NU1701;NU1603</NoWarn>

@@ -3,7 +3,7 @@
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <OutputType>Library</OutputType>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
    <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
    <TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
    <AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
    <NoWarn>NU1701;NU1603</NoWarn>
@@ -267,19 +267,6 @@ namespace GitHub.Runner.Worker
                        _cachedEmbeddedPostSteps[parentStepId].Push(clonedAction);
                    }
                }
                else if (depth > 0)
                {
                    // if we're in a composite action and haven't loaded the local action yet
                    // we assume it has a post step
                    if (!_cachedEmbeddedPostSteps.ContainsKey(parentStepId))
                    {
                        // If we haven't done so already, add the parent to the post steps
                        _cachedEmbeddedPostSteps[parentStepId] = new Stack<Pipelines.ActionStep>();
                    }
                    // Clone action so we can modify the condition without affecting the original
                    var clonedAction = action.Clone() as Pipelines.ActionStep;
                    _cachedEmbeddedPostSteps[parentStepId].Push(clonedAction);
                }
            }
        }

@@ -1047,6 +1034,7 @@ namespace GitHub.Runner.Worker
                }
            }

            // TODO: remove once we remove the DistributedTask.EnableCompositeActions FF
            foreach (var step in compositeAction.Steps)
            {
                if (string.IsNullOrEmpty(executionContext.Global.Variables.Get("DistributedTask.EnableCompositeActions")) && step.Reference.Type != Pipelines.ActionSourceType.Script)
@@ -1189,8 +1177,6 @@ namespace GitHub.Runner.Worker
        public string Pre { get; set; }

        public string Post { get; set; }

        public string NodeVersion { get; set; }
    }

    public sealed class PluginActionExecutionData : ActionExecutionData
@@ -451,8 +451,7 @@ namespace GitHub.Runner.Worker
                    };
                }
            }
            else if (string.Equals(usingToken.Value, "node12", StringComparison.OrdinalIgnoreCase) ||
                     string.Equals(usingToken.Value, "node16", StringComparison.OrdinalIgnoreCase))
            else if (string.Equals(usingToken.Value, "node12", StringComparison.OrdinalIgnoreCase))
            {
                if (string.IsNullOrEmpty(mainToken?.Value))
                {
@@ -462,7 +461,6 @@ namespace GitHub.Runner.Worker
                {
                    return new NodeJSActionExecutionData()
                    {
                        NodeVersion = usingToken.Value,
                        Script = mainToken.Value,
                        Pre = preToken?.Value,
                        InitCondition = preIfToken?.Value ?? "always()",
@@ -492,7 +490,7 @@ namespace GitHub.Runner.Worker
            }
            else
            {
                throw new ArgumentOutOfRangeException($"'using: {usingToken.Value}' is not supported, use 'docker', 'node12' or 'node16' instead.");
                throw new ArgumentOutOfRangeException($"'using: {usingToken.Value}' is not supported, use 'docker' or 'node12' instead.");
            }
        }
        else if (pluginToken != null)
@@ -54,7 +54,7 @@ namespace GitHub.Runner.Worker.Container
                _pathMappings.Add(new PathMapping(hostContext.GetDirectory(WellKnownDirectory.Externals), "/__e"));
                if (this.IsJobContainer)
                {
                    this.MountVolumes.Add(new MountVolume("/var/run/docker.sock", "/var/run/docker.sock"));
                    // this.MountVolumes.Add(new MountVolume("/var/run/docker.sock", "/var/run/docker.sock"));
                }
#endif
                if (container.Ports?.Count > 0)
@@ -12,9 +12,88 @@ using GitHub.Runner.Sdk;
using GitHub.DistributedTask.Pipelines.ContextData;
using Microsoft.Win32;
using GitHub.DistributedTask.Pipelines.ObjectTemplating;
using System.Threading.Channels;
using GitHub.Services.WebApi;
using System.Text;
using System.Runtime.Serialization;

namespace GitHub.Runner.Worker
{
    [DataContract]
    public class ContainerEngineHandlerInput
    {
        [DataMember]
        public string Command { get; set; }

        [DataMember]
        public ContainersCreationInput CreationInput { get; set; }

        [DataMember]
        public JobContainerExecInput ExecInput { get; set; }

        [DataMember]
        public ContainersRemoveInput RemoveInput { get; set; }
    }

    [DataContract]
    public class ContainersCreationInput
    {
        [DataMember]
        public List<ContainerInfo> Containers { get; set; }
    }

    [DataContract]
    public class JobContainerExecInput
    {
        [DataMember]
        public ContainerInfo JobContainer { get; set; }

        [DataMember]
        public string WorkingDirectory { get; set; }

        [DataMember]
        public string FileName { get; set; }

        [DataMember]
        public string Arguments { get; set; }

        [DataMember]
        public List<string> EnvironmentKeys { get; set; }

        [DataMember]
        public Dictionary<string, string> EnvironmentVariables { get; set; }
    }

    [DataContract]
    public class ContainersRemoveInput
    {
        [DataMember]
        public string Network { get; set; }
        [DataMember]
        public string JobContainerId { get; set; }
    }

    [DataContract]
    public class ContainersCreationOutput
    {
        [DataMember]
        public string Network { get; set; }
        [DataMember]
        public string JobContainerId { get; set; }
    }

    [DataContract]
    public class ContainerEngineHandlerOutput
    {
        [DataMember]
        public ContainersCreationOutput CreationOutput { get; set; }
    }

    [ServiceLocator(Default = typeof(ContainerOperationProvider))]
    public interface IContainerOperationProvider : IRunnerService
    {
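These data contracts define the JSON protocol between the worker and the Node-based container engine handler: the worker serializes a `ContainerEngineHandlerInput` onto the handler's stdin, and the handler reports results on a line prefixed with `___CONTAINER_ENGINE_HANDLER_OUTPUT___` that deserializes into `ContainerEngineHandlerOutput`. A rough sketch of exercising the handler by hand under those assumptions; the paths mirror the invocation later in this file, the empty `Containers` array is only a placeholder, and the handler's actual behaviour when run outside the worker is not guaranteed:

    echo '{"Command":"Create","CreationInput":{"Containers":[]}}' \
        | ./externals/node12/bin/node ./bin/kubectlHandler/index.js

    # expected somewhere in the handler's output, a single line such as:
    # ___CONTAINER_ENGINE_HANDLER_OUTPUT___{"CreationOutput":{"Network":"github_network_...","JobContainerId":"..."}}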
@@ -24,25 +103,57 @@ namespace GitHub.Runner.Worker

    public class ContainerOperationProvider : RunnerService, IContainerOperationProvider
    {
        private IDockerCommandManager _dockerManager;
        private IDockerCommandManager _dockerManager = null;

        public override void Initialize(IHostContext hostContext)
        {
            base.Initialize(hostContext);
            _dockerManager = HostContext.GetService<IDockerCommandManager>();
            // _dockerManager = HostContext.GetService<IDockerCommandManager>();
        }

        public async Task StartContainersAsync(IExecutionContext executionContext, object data)
        {
            Trace.Entering();
            if (!Constants.Runner.Platform.Equals(Constants.OSPlatform.Linux))
            {
                throw new NotSupportedException("Container operations are only supported on Linux runners");
            }
            // if (!Constants.Runner.Platform.Equals(Constants.OSPlatform.Linux))
            // {
            //     throw new NotSupportedException("Container operations are only supported on Linux runners");
            // }
            ArgUtil.NotNull(executionContext, nameof(executionContext));
            List<ContainerInfo> containers = data as List<ContainerInfo>;
            ArgUtil.NotNull(containers, nameof(containers));

            foreach (var container in containers)
            {
                if (container.IsJobContainer)
                {
                    // Configure job container - Mount workspace and tools, set up environment, and start long running process
                    var githubContext = executionContext.ExpressionValues["github"] as GitHubContext;
                    ArgUtil.NotNull(githubContext, nameof(githubContext));
                    var workingDirectory = githubContext["workspace"] as StringContextData;
                    ArgUtil.NotNullOrEmpty(workingDirectory, nameof(workingDirectory));
                    container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Work), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Work))));
                    container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Externals), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Externals)), true));
                    container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Temp), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Temp))));
                    // container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Actions), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Actions))));
                    container.MountVolumes.Add(new MountVolume(HostContext.GetDirectory(WellKnownDirectory.Tools), container.TranslateToContainerPath(HostContext.GetDirectory(WellKnownDirectory.Tools))));

                    var tempHomeDirectory = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Temp), "_github_home");
                    Directory.CreateDirectory(tempHomeDirectory);
                    container.MountVolumes.Add(new MountVolume(tempHomeDirectory, "/github/home"));
                    container.AddPathTranslateMapping(tempHomeDirectory, "/github/home");
                    container.ContainerEnvironmentVariables["HOME"] = container.TranslateToContainerPath(tempHomeDirectory);

                    var tempWorkflowDirectory = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Temp), "_github_workflow");
                    Directory.CreateDirectory(tempWorkflowDirectory);
                    container.MountVolumes.Add(new MountVolume(tempWorkflowDirectory, "/github/workflow"));
                    container.AddPathTranslateMapping(tempWorkflowDirectory, "/github/workflow");

                    container.ContainerWorkDirectory = container.TranslateToContainerPath(workingDirectory);
                    container.ContainerEntryPoint = "tail";
                    container.ContainerEntryPointArgs = "-f /dev/null";
                }
            }

            var postJobStep = new JobExtensionRunner(runAsync: this.StopContainersAsync,
                                                     condition: $"{PipelineTemplateConstants.Always}()",
                                                     displayName: "Stop containers",
@@ -51,9 +162,71 @@ namespace GitHub.Runner.Worker
            executionContext.Debug($"Register post job cleanup for stopping/deleting containers.");
            executionContext.RegisterPostJobStep(postJobStep);

            // Check whether we are inside a container.
            // Our container feature requires to map working directory from host to the container.
            // If we are already inside a container, we will not able to find out the real working direcotry path on the host.
            var podManHandler = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Bin), "kubectlHandler", "index.js");
            if (File.Exists(podManHandler))
            {
                var podmanInput = new ContainerEngineHandlerInput()
                {
                    Command = "Create",
                    CreationInput = new ContainersCreationInput()
                    {
                        Containers = containers
                    }
                };

                ContainerEngineHandlerOutput podmanOutput = null;
                using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
                {
                    var redirectStandardIn = Channel.CreateUnbounded<string>(new UnboundedChannelOptions() { SingleReader = true, SingleWriter = true });
                    redirectStandardIn.Writer.TryWrite(JsonUtility.ToString(podmanInput));

                    processInvoker.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs message)
                    {
                        executionContext.Output(message.Data);
                    };

                    processInvoker.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs message)
                    {
                        executionContext.Output(message.Data);
                        if (podmanOutput == null && message.Data.IndexOf("___CONTAINER_ENGINE_HANDLER_OUTPUT___") >= 0)
                        {
                            try
                            {
                                podmanOutput = JsonUtility.FromString<ContainerEngineHandlerOutput>(message.Data.Replace("___CONTAINER_ENGINE_HANDLER_OUTPUT___", ""));
                            }
                            catch (Exception ex)
                            {
                                executionContext.Error(ex);
                            }
                        }
                    };

                    // Execute the process. Exit code 0 should always be returned.
                    // A non-zero exit code indicates infrastructural failure.
                    // Task failure should be communicated over STDOUT using ## commands.
                    await processInvoker.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Bin),
                                                      fileName: Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Externals), "node12", "bin", $"node{IOUtil.ExeExtension}"),
                                                      arguments: podManHandler,
                                                      environment: null,
                                                      requireExitCodeZero: false,
                                                      outputEncoding: Encoding.UTF8,
                                                      killProcessOnCancel: false,
                                                      redirectStandardIn: redirectStandardIn,
                                                      cancellationToken: executionContext.CancellationToken);
                }

                if (podmanOutput != null)
                {
                    executionContext.JobContext.Container["network"] = new StringContextData(podmanOutput.CreationOutput.Network);
                    executionContext.JobContext.Container["id"] = new StringContextData(podmanOutput.CreationOutput.JobContainerId);
                    executionContext.Global.Container.ContainerId = podmanOutput.CreationOutput.JobContainerId;
                }
            }
else
|
||||
{
|
||||
// Check whether we are inside a container.
|
||||
// Our container feature requires to map working directory from host to the container.
|
||||
// If we are already inside a container, we will not able to find out the real working direcotry path on the host.
|
||||
#if OS_WINDOWS
|
||||
// service CExecSvc is Container Execution Agent.
|
||||
ServiceController[] scServices = ServiceController.GetServices();
|
||||
@@ -62,11 +235,11 @@ namespace GitHub.Runner.Worker
|
||||
throw new NotSupportedException("Container feature is not supported when runner is already running inside container.");
|
||||
}
|
||||
#else
|
||||
var initProcessCgroup = File.ReadLines("/proc/1/cgroup");
|
||||
if (initProcessCgroup.Any(x => x.IndexOf(":/docker/", StringComparison.OrdinalIgnoreCase) >= 0))
|
||||
{
|
||||
throw new NotSupportedException("Container feature is not supported when runner is already running inside container.");
|
||||
}
|
||||
var initProcessCgroup = File.ReadLines("/proc/1/cgroup");
|
||||
if (initProcessCgroup.Any(x => x.IndexOf(":/docker/", StringComparison.OrdinalIgnoreCase) >= 0))
|
||||
{
|
||||
throw new NotSupportedException("Container feature is not supported when runner is already running inside container.");
|
||||
}
|
||||
#endif
|
||||
|
||||
#if OS_WINDOWS
|
||||
@@ -90,68 +263,69 @@ namespace GitHub.Runner.Worker
|
||||
}
|
||||
#endif
|
||||
|
||||
// Check docker client/server version
|
||||
executionContext.Output("##[group]Checking docker version");
|
||||
DockerVersion dockerVersion = await _dockerManager.DockerVersion(executionContext);
|
||||
executionContext.Output("##[endgroup]");
|
||||
// Check docker client/server version
|
||||
executionContext.Output("##[group]Checking docker version");
|
||||
DockerVersion dockerVersion = await _dockerManager.DockerVersion(executionContext);
|
||||
executionContext.Output("##[endgroup]");
|
||||
|
||||
ArgUtil.NotNull(dockerVersion.ServerVersion, nameof(dockerVersion.ServerVersion));
|
||||
ArgUtil.NotNull(dockerVersion.ClientVersion, nameof(dockerVersion.ClientVersion));
|
||||
ArgUtil.NotNull(dockerVersion.ServerVersion, nameof(dockerVersion.ServerVersion));
|
||||
ArgUtil.NotNull(dockerVersion.ClientVersion, nameof(dockerVersion.ClientVersion));
|
||||
|
||||
#if OS_WINDOWS
|
||||
Version requiredDockerEngineAPIVersion = new Version(1, 30); // Docker-EE version 17.6
|
||||
#else
|
||||
Version requiredDockerEngineAPIVersion = new Version(1, 35); // Docker-CE version 17.12
|
||||
Version requiredDockerEngineAPIVersion = new Version(1, 35); // Docker-CE version 17.12
|
||||
#endif
|
||||
|
||||
if (dockerVersion.ServerVersion < requiredDockerEngineAPIVersion)
|
||||
{
|
||||
throw new NotSupportedException($"Min required docker engine API server version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') server version is '{dockerVersion.ServerVersion}'");
|
||||
}
|
||||
if (dockerVersion.ClientVersion < requiredDockerEngineAPIVersion)
|
||||
{
|
||||
throw new NotSupportedException($"Min required docker engine API client version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') client version is '{dockerVersion.ClientVersion}'");
|
||||
}
|
||||
|
||||
// Clean up containers left by previous runs
|
||||
executionContext.Output("##[group]Clean up resources from previous jobs");
|
||||
var staleContainers = await _dockerManager.DockerPS(executionContext, $"--all --quiet --no-trunc --filter \"label={_dockerManager.DockerInstanceLabel}\"");
|
||||
foreach (var staleContainer in staleContainers)
|
||||
{
|
||||
int containerRemoveExitCode = await _dockerManager.DockerRemove(executionContext, staleContainer);
|
||||
if (containerRemoveExitCode != 0)
|
||||
if (dockerVersion.ServerVersion < requiredDockerEngineAPIVersion)
|
||||
{
|
||||
executionContext.Warning($"Delete stale containers failed, docker rm fail with exit code {containerRemoveExitCode} for container {staleContainer}");
|
||||
throw new NotSupportedException($"Min required docker engine API server version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') server version is '{dockerVersion.ServerVersion}'");
|
||||
}
|
||||
if (dockerVersion.ClientVersion < requiredDockerEngineAPIVersion)
|
||||
{
|
||||
throw new NotSupportedException($"Min required docker engine API client version is '{requiredDockerEngineAPIVersion}', your docker ('{_dockerManager.DockerPath}') client version is '{dockerVersion.ClientVersion}'");
|
||||
}
|
||||
}
|
||||
|
||||
int networkPruneExitCode = await _dockerManager.DockerNetworkPrune(executionContext);
|
||||
if (networkPruneExitCode != 0)
|
||||
{
|
||||
executionContext.Warning($"Delete stale container networks failed, docker network prune fail with exit code {networkPruneExitCode}");
|
||||
}
|
||||
executionContext.Output("##[endgroup]");
|
||||
// Clean up containers left by previous runs
|
||||
executionContext.Output("##[group]Clean up resources from previous jobs");
|
||||
var staleContainers = await _dockerManager.DockerPS(executionContext, $"--all --quiet --no-trunc --filter \"label={_dockerManager.DockerInstanceLabel}\"");
|
||||
foreach (var staleContainer in staleContainers)
|
||||
{
|
||||
int containerRemoveExitCode = await _dockerManager.DockerRemove(executionContext, staleContainer);
|
||||
if (containerRemoveExitCode != 0)
|
||||
{
|
||||
executionContext.Warning($"Delete stale containers failed, docker rm fail with exit code {containerRemoveExitCode} for container {staleContainer}");
|
||||
}
|
||||
}
|
||||
|
||||
// Create local docker network for this job to avoid port conflict when multiple runners run on same machine.
|
||||
// All containers within a job join the same network
|
||||
executionContext.Output("##[group]Create local container network");
|
||||
var containerNetwork = $"github_network_{Guid.NewGuid().ToString("N")}";
|
||||
await CreateContainerNetworkAsync(executionContext, containerNetwork);
|
||||
executionContext.JobContext.Container["network"] = new StringContextData(containerNetwork);
|
||||
executionContext.Output("##[endgroup]");
|
||||
int networkPruneExitCode = await _dockerManager.DockerNetworkPrune(executionContext);
|
||||
if (networkPruneExitCode != 0)
|
||||
{
|
||||
executionContext.Warning($"Delete stale container networks failed, docker network prune fail with exit code {networkPruneExitCode}");
|
||||
}
|
||||
executionContext.Output("##[endgroup]");
|
||||
|
||||
foreach (var container in containers)
|
||||
{
|
||||
container.ContainerNetwork = containerNetwork;
|
||||
await StartContainerAsync(executionContext, container);
|
||||
}
|
||||
// Create local docker network for this job to avoid port conflict when multiple runners run on same machine.
|
||||
// All containers within a job join the same network
|
||||
executionContext.Output("##[group]Create local container network");
|
||||
var containerNetwork = $"github_network_{Guid.NewGuid().ToString("N")}";
|
||||
await CreateContainerNetworkAsync(executionContext, containerNetwork);
|
||||
executionContext.JobContext.Container["network"] = new StringContextData(containerNetwork);
|
||||
executionContext.Output("##[endgroup]");
|
||||
|
||||
executionContext.Output("##[group]Waiting for all services to be ready");
|
||||
foreach (var container in containers.Where(c => !c.IsJobContainer))
|
||||
{
|
||||
await ContainerHealthcheck(executionContext, container);
|
||||
foreach (var container in containers)
|
||||
{
|
||||
container.ContainerNetwork = containerNetwork;
|
||||
await StartContainerAsync(executionContext, container);
|
||||
}
|
||||
|
||||
executionContext.Output("##[group]Waiting for all services to be ready");
|
||||
foreach (var container in containers.Where(c => !c.IsJobContainer))
|
||||
{
|
||||
await ContainerHealthcheck(executionContext, container);
|
||||
}
|
||||
executionContext.Output("##[endgroup]");
|
||||
}
|
||||
executionContext.Output("##[endgroup]");
|
||||
}
|
||||
|
||||
public async Task StopContainersAsync(IExecutionContext executionContext, object data)
|
||||
@@ -162,12 +336,69 @@ namespace GitHub.Runner.Worker
|
||||
List<ContainerInfo> containers = data as List<ContainerInfo>;
|
||||
ArgUtil.NotNull(containers, nameof(containers));
|
||||
|
||||
foreach (var container in containers)
|
||||
var podManHandler = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Bin), "kubectlHandler", "index.js");
|
||||
if (File.Exists(podManHandler))
|
||||
{
|
||||
await StopContainerAsync(executionContext, container);
|
||||
var podmanInput = new ContainerEngineHandlerInput()
|
||||
{
|
||||
Command = "Remove",
|
||||
RemoveInput = new ContainersRemoveInput()
|
||||
{
|
||||
Network = executionContext.JobContext.Container["network"].ToString(),
|
||||
JobContainerId = executionContext.JobContext.Container["id"].ToString()
|
||||
}
|
||||
};
|
||||
|
||||
ContainerEngineHandlerOutput podmanOutput = null;
|
||||
using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
|
||||
{
|
||||
var redirectStandardIn = Channel.CreateUnbounded<string>(new UnboundedChannelOptions() { SingleReader = true, SingleWriter = true });
|
||||
redirectStandardIn.Writer.TryWrite(JsonUtility.ToString(podmanInput));
|
||||
|
||||
processInvoker.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs message)
|
||||
{
|
||||
executionContext.Output(message.Data);
|
||||
};
|
||||
|
||||
processInvoker.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs message)
|
||||
{
|
||||
executionContext.Output(message.Data);
|
||||
if (podmanOutput == null && message.Data.IndexOf("___CONTAINER_ENGINE_HANDLER_OUTPUT___") >= 0)
|
||||
{
|
||||
try
|
||||
{
|
||||
podmanOutput = JsonUtility.FromString<ContainerEngineHandlerOutput>(message.Data.Replace("___CONTAINER_ENGINE_HANDLER_OUTPUT___", ""));
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
executionContext.Error(ex);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
// Execute the process. Exit code 0 should always be returned.
|
||||
// A non-zero exit code indicates infrastructural failure.
|
||||
// Task failure should be communicated over STDOUT using ## commands.
|
||||
await processInvoker.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Work),
|
||||
fileName: Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Externals), "node12", "bin", $"node{IOUtil.ExeExtension}"),
|
||||
arguments: podManHandler,
|
||||
environment: null,
|
||||
requireExitCodeZero: false,
|
||||
outputEncoding: Encoding.UTF8,
|
||||
killProcessOnCancel: false,
|
||||
redirectStandardIn: redirectStandardIn,
|
||||
cancellationToken: executionContext.CancellationToken);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
foreach (var container in containers)
|
||||
{
|
||||
await StopContainerAsync(executionContext, container);
|
||||
}
|
||||
// Remove the container network
|
||||
await RemoveContainerNetworkAsync(executionContext, containers.First().ContainerNetwork);
|
||||
}
|
||||
// Remove the container network
|
||||
await RemoveContainerNetworkAsync(executionContext, containers.First().ContainerNetwork);
|
||||
}
|
||||
|
||||
private async Task StartContainerAsync(IExecutionContext executionContext, ContainerInfo container)
|
||||
|
||||
@@ -40,7 +40,6 @@ namespace GitHub.Runner.Worker
|
||||
string ScopeName { get; }
|
||||
string SiblingScopeName { get; }
|
||||
string ContextName { get; }
|
||||
ActionRunStage Stage { get; }
|
||||
Task ForceCompleted { get; }
|
||||
TaskResult? Result { get; set; }
|
||||
TaskResult? Outcome { get; set; }
|
||||
@@ -63,7 +62,7 @@ namespace GitHub.Runner.Worker
|
||||
|
||||
// Only job level ExecutionContext has PostJobSteps
|
||||
Stack<IStep> PostJobSteps { get; }
|
||||
Dictionary<Guid, string> EmbeddedStepsWithPostRegistered { get; }
|
||||
HashSet<Guid> EmbeddedStepsWithPostRegistered{ get; }
|
||||
|
||||
// Keep track of embedded steps states
|
||||
Dictionary<Guid, Dictionary<string, string>> EmbeddedIntraActionState { get; }
|
||||
@@ -77,8 +76,8 @@ namespace GitHub.Runner.Worker
|
||||
// Initialize
|
||||
void InitializeJob(Pipelines.AgentJobRequestMessage message, CancellationToken token);
|
||||
void CancelToken();
|
||||
IExecutionContext CreateChild(Guid recordId, string displayName, string refName, string scopeName, string contextName, ActionRunStage stage, Dictionary<string, string> intraActionState = null, int? recordOrder = null, IPagingLogger logger = null, bool isEmbedded = false, CancellationTokenSource cancellationTokenSource = null, Guid embeddedId = default(Guid), string siblingScopeName = null);
|
||||
IExecutionContext CreateEmbeddedChild(string scopeName, string contextName, Guid embeddedId, ActionRunStage stage, Dictionary<string, string> intraActionState = null, string siblingScopeName = null);
|
||||
IExecutionContext CreateChild(Guid recordId, string displayName, string refName, string scopeName, string contextName, Dictionary<string, string> intraActionState = null, int? recordOrder = null, IPagingLogger logger = null, bool isEmbedded = false, CancellationTokenSource cancellationTokenSource = null, Guid embeddedId = default(Guid), string siblingScopeName = null);
|
||||
IExecutionContext CreateEmbeddedChild(string scopeName, string contextName, Guid embeddedId, Dictionary<string, string> intraActionState = null, string siblingScopeName = null);
|
||||
|
||||
// logging
|
||||
long Write(string tag, string message);
|
||||
@@ -145,7 +144,6 @@ namespace GitHub.Runner.Worker
|
||||
public string ScopeName { get; private set; }
|
||||
public string SiblingScopeName { get; private set; }
|
||||
public string ContextName { get; private set; }
|
||||
public ActionRunStage Stage { get; private set; }
|
||||
public Task ForceCompleted => _forceCompleted.Task;
|
||||
public CancellationToken CancellationToken => _cancellationTokenSource.Token;
|
||||
public Dictionary<string, string> IntraActionState { get; private set; }
|
||||
@@ -170,7 +168,7 @@ namespace GitHub.Runner.Worker
|
||||
public HashSet<Guid> StepsWithPostRegistered { get; private set; }
|
||||
|
||||
// Only job level ExecutionContext has EmbeddedStepsWithPostRegistered
|
||||
public Dictionary<Guid, string> EmbeddedStepsWithPostRegistered { get; private set; }
|
||||
public HashSet<Guid> EmbeddedStepsWithPostRegistered { get; private set; }
|
||||
|
||||
public Dictionary<Guid, Dictionary<string, string>> EmbeddedIntraActionState { get; private set; }
|
||||
|
||||
@@ -267,19 +265,12 @@ namespace GitHub.Runner.Worker
|
||||
string siblingScopeName = null;
|
||||
if (this.IsEmbedded)
|
||||
{
|
||||
if (step is IActionRunner actionRunner)
|
||||
if (step is IActionRunner actionRunner && !Root.EmbeddedStepsWithPostRegistered.Add(actionRunner.Action.Id))
|
||||
{
|
||||
if (Root.EmbeddedStepsWithPostRegistered.ContainsKey(actionRunner.Action.Id))
|
||||
{
|
||||
Trace.Info($"'post' of '{actionRunner.DisplayName}' already push to child post step stack.");
|
||||
}
|
||||
else
|
||||
{
|
||||
Root.EmbeddedStepsWithPostRegistered[actionRunner.Action.Id] = actionRunner.Condition;
|
||||
}
|
||||
return;
|
||||
Trace.Info($"'post' of '{actionRunner.DisplayName}' already push to child post step stack.");
|
||||
}
|
||||
}
|
||||
return;
|
||||
}
|
||||
else if (step is IActionRunner actionRunner && !Root.StepsWithPostRegistered.Add(actionRunner.Action.Id))
|
||||
{
|
||||
Trace.Info($"'post' of '{actionRunner.DisplayName}' already push to post step stack.");
|
||||
@@ -294,7 +285,7 @@ namespace GitHub.Runner.Worker
|
||||
Root.PostJobSteps.Push(step);
|
||||
}
|
||||
|
||||
public IExecutionContext CreateChild(Guid recordId, string displayName, string refName, string scopeName, string contextName, ActionRunStage stage, Dictionary<string, string> intraActionState = null, int? recordOrder = null, IPagingLogger logger = null, bool isEmbedded = false, CancellationTokenSource cancellationTokenSource = null, Guid embeddedId = default(Guid), string siblingScopeName = null)
|
||||
public IExecutionContext CreateChild(Guid recordId, string displayName, string refName, string scopeName, string contextName, Dictionary<string, string> intraActionState = null, int? recordOrder = null, IPagingLogger logger = null, bool isEmbedded = false, CancellationTokenSource cancellationTokenSource = null, Guid embeddedId = default(Guid), string siblingScopeName = null)
|
||||
{
|
||||
Trace.Entering();
|
||||
|
||||
@@ -303,7 +294,6 @@ namespace GitHub.Runner.Worker
|
||||
child.Global = Global;
|
||||
child.ScopeName = scopeName;
|
||||
child.ContextName = contextName;
|
||||
child.Stage = stage;
|
||||
child.EmbeddedId = embeddedId;
|
||||
child.SiblingScopeName = siblingScopeName;
|
||||
child.JobTelemetry = JobTelemetry;
|
||||
@@ -354,9 +344,9 @@ namespace GitHub.Runner.Worker
|
||||
/// An embedded execution context shares the same record ID, record name, logger,
|
||||
/// and a linked cancellation token.
|
||||
/// </summary>
|
||||
public IExecutionContext CreateEmbeddedChild(string scopeName, string contextName, Guid embeddedId, ActionRunStage stage, Dictionary<string, string> intraActionState = null, string siblingScopeName = null)
|
||||
public IExecutionContext CreateEmbeddedChild(string scopeName, string contextName, Guid embeddedId, Dictionary<string, string> intraActionState = null, string siblingScopeName = null)
|
||||
{
|
||||
return Root.CreateChild(_record.Id, _record.Name, _record.Id.ToString("N"), scopeName, contextName, stage, logger: _logger, isEmbedded: true, cancellationTokenSource: CancellationTokenSource.CreateLinkedTokenSource(_cancellationTokenSource.Token), intraActionState: intraActionState, embeddedId: embeddedId, siblingScopeName: siblingScopeName);
|
||||
return Root.CreateChild(_record.Id, _record.Name, _record.Id.ToString("N"), scopeName, contextName, logger: _logger, isEmbedded: true, cancellationTokenSource: CancellationTokenSource.CreateLinkedTokenSource(_cancellationTokenSource.Token), intraActionState: intraActionState, embeddedId: embeddedId, siblingScopeName: siblingScopeName);
|
||||
}
|
||||
|
||||
public void Start(string currentOperation = null)
|
||||
@@ -553,8 +543,8 @@ namespace GitHub.Runner.Worker
|
||||
}
|
||||
|
||||
_record.WarningCount++;
|
||||
}
|
||||
else if (issue.Type == IssueType.Notice)
|
||||
}
|
||||
else if (issue.Type == IssueType.Notice)
|
||||
{
|
||||
|
||||
// tracking line number for each issue in log file
|
||||
@@ -726,10 +716,10 @@ namespace GitHub.Runner.Worker
|
||||
StepsWithPostRegistered = new HashSet<Guid>();
|
||||
|
||||
// EmbeddedStepsWithPostRegistered for job ExecutionContext
|
||||
EmbeddedStepsWithPostRegistered = new Dictionary<Guid, string>();
|
||||
EmbeddedStepsWithPostRegistered = new HashSet<Guid>();
|
||||
|
||||
// EmbeddedIntraActionState for job ExecutionContext
|
||||
EmbeddedIntraActionState = new Dictionary<Guid, Dictionary<string, string>>();
|
||||
EmbeddedIntraActionState = new Dictionary<Guid, Dictionary<string,string>>();
|
||||
|
||||
// Job timeline record.
|
||||
InitializeTimelineRecord(
|
||||
@@ -947,7 +937,7 @@ namespace GitHub.Runner.Worker
|
||||
}
|
||||
|
||||
var newGuid = Guid.NewGuid();
|
||||
return CreateChild(newGuid, displayName, newGuid.ToString("N"), null, null, ActionRunStage.Post, intraActionState, _childTimelineRecordOrder - Root.PostJobSteps.Count, siblingScopeName: siblingScopeName);
|
||||
return CreateChild(newGuid, displayName, newGuid.ToString("N"), null, null, intraActionState, _childTimelineRecordOrder - Root.PostJobSteps.Count, siblingScopeName: siblingScopeName);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -980,7 +970,7 @@ namespace GitHub.Runner.Worker
|
||||
// Do not add a format string overload. See comment on ExecutionContext.Write().
|
||||
public static void InfrastructureError(this IExecutionContext context, string message)
|
||||
{
|
||||
context.AddIssue(new Issue() { Type = IssueType.Error, Message = message, IsInfrastructureIssue = true });
|
||||
context.AddIssue(new Issue() { Type = IssueType.Error, Message = message, IsInfrastructureIssue = true});
|
||||
}
|
||||
|
||||
// Do not add a format string overload. See comment on ExecutionContext.Write().
|
||||
|
@@ -24,19 +24,8 @@ namespace GitHub.Runner.Worker.Expressions
ArgUtil.NotNull(templateContext, nameof(templateContext));
var executionContext = templateContext.State[nameof(IExecutionContext)] as IExecutionContext;
ArgUtil.NotNull(executionContext, nameof(executionContext));

// Decide based on 'action_status' for composite MAIN steps and 'job.status' for pre, post and job-level steps
var isCompositeMainStep = executionContext.IsEmbedded && executionContext.Stage == ActionRunStage.Main;
if (isCompositeMainStep)
{
ActionResult actionStatus = EnumUtil.TryParse<ActionResult>(executionContext.GetGitHubContext("action_status")) ?? ActionResult.Success;
return actionStatus == ActionResult.Failure;
}
else
{
ActionResult jobStatus = executionContext.JobContext.Status ?? ActionResult.Success;
return jobStatus == ActionResult.Failure;
}
ActionResult jobStatus = executionContext.JobContext.Status ?? ActionResult.Success;
return jobStatus == ActionResult.Failure;
}
}
}

@@ -24,19 +24,8 @@ namespace GitHub.Runner.Worker.Expressions
ArgUtil.NotNull(templateContext, nameof(templateContext));
var executionContext = templateContext.State[nameof(IExecutionContext)] as IExecutionContext;
ArgUtil.NotNull(executionContext, nameof(executionContext));

// Decide based on 'action_status' for composite MAIN steps and 'job.status' for pre, post and job-level steps
var isCompositeMainStep = executionContext.IsEmbedded && executionContext.Stage == ActionRunStage.Main;
if (isCompositeMainStep)
{
ActionResult actionStatus = EnumUtil.TryParse<ActionResult>(executionContext.GetGitHubContext("action_status")) ?? ActionResult.Success;
return actionStatus == ActionResult.Success;
}
else
{
ActionResult jobStatus = executionContext.JobContext.Status ?? ActionResult.Success;
return jobStatus == ActionResult.Success;
}
ActionResult jobStatus = executionContext.JobContext.Status ?? ActionResult.Success;
return jobStatus == ActionResult.Success;
}
}
}

@@ -50,9 +50,8 @@ namespace GitHub.Runner.Worker.Handlers
// Only register post steps for steps that actually ran
foreach (var step in Data.PostSteps.ToList())
{
if (ExecutionContext.Root.EmbeddedStepsWithPostRegistered.ContainsKey(step.Id))
if (ExecutionContext.Root.EmbeddedStepsWithPostRegistered.Contains(step.Id))
{
step.Condition = ExecutionContext.Root.EmbeddedStepsWithPostRegistered[step.Id];
steps.Add(step);
}
else
@@ -125,7 +124,7 @@ namespace GitHub.Runner.Worker.Handlers
{
ArgUtil.NotNull(step, step.DisplayName);
var stepId = $"__{Guid.NewGuid()}";
step.ExecutionContext = ExecutionContext.CreateEmbeddedChild(childScopeName, stepId, Guid.NewGuid(), stage);
step.ExecutionContext = ExecutionContext.CreateEmbeddedChild(childScopeName, stepId, Guid.NewGuid());
embeddedSteps.Add(step);
}
}
@@ -144,7 +143,7 @@ namespace GitHub.Runner.Worker.Handlers
step.Stage = stage;
step.Condition = stepData.Condition;
ExecutionContext.Root.EmbeddedIntraActionState.TryGetValue(step.Action.Id, out var intraActionState);
step.ExecutionContext = ExecutionContext.CreateEmbeddedChild(childScopeName, stepData.ContextName, step.Action.Id, stage, intraActionState: intraActionState, siblingScopeName: siblingScopeName);
step.ExecutionContext = ExecutionContext.CreateEmbeddedChild(childScopeName, stepData.ContextName, step.Action.Id, intraActionState: intraActionState, siblingScopeName: siblingScopeName);
step.ExecutionContext.ExpressionValues["inputs"] = inputsData;
if (!String.IsNullOrEmpty(ExecutionContext.SiblingScopeName))
{
@@ -242,10 +241,6 @@ namespace GitHub.Runner.Worker.Handlers
step.ExecutionContext.ExpressionFunctions.Add(new FunctionInfo<FailureFunction>(PipelineTemplateConstants.Failure, 0, 0));
step.ExecutionContext.ExpressionFunctions.Add(new FunctionInfo<SuccessFunction>(PipelineTemplateConstants.Success, 0, 0));

// Set action_status to the success of the current composite action
var actionResult = ExecutionContext.Result?.ToActionResult() ?? ActionResult.Success;
step.ExecutionContext.SetGitHubContext("action_status", actionResult.ToString());

// Initialize env context
Trace.Info("Initialize Env context for embedded step");
#if OS_WINDOWS
||||
@@ -300,100 +295,108 @@ namespace GitHub.Runner.Worker.Handlers
|
||||
CancellationTokenRegistration? jobCancelRegister = null;
|
||||
try
|
||||
{
|
||||
// Register job cancellation call back only if job cancellation token not been fire before each step run
|
||||
if (!ExecutionContext.Root.CancellationToken.IsCancellationRequested)
|
||||
{
|
||||
// Test the condition again. The job was canceled after the condition was originally evaluated.
|
||||
jobCancelRegister = ExecutionContext.Root.CancellationToken.Register(() =>
|
||||
{
|
||||
// Mark job as cancelled
|
||||
ExecutionContext.Root.Result = TaskResult.Canceled;
|
||||
ExecutionContext.Root.JobContext.Status = ExecutionContext.Root.Result?.ToActionResult();
|
||||
|
||||
step.ExecutionContext.Debug($"Re-evaluate condition on job cancellation for step: '{step.DisplayName}'.");
|
||||
var conditionReTestTraceWriter = new ConditionTraceWriter(Trace, null); // host tracing only
|
||||
var conditionReTestResult = false;
|
||||
if (HostContext.RunnerShutdownToken.IsCancellationRequested)
|
||||
{
|
||||
step.ExecutionContext.Debug($"Skip Re-evaluate condition on runner shutdown.");
|
||||
}
|
||||
else
|
||||
{
|
||||
try
|
||||
{
|
||||
var templateEvaluator = step.ExecutionContext.ToPipelineTemplateEvaluator(conditionReTestTraceWriter);
|
||||
var condition = new BasicExpressionToken(null, null, null, step.Condition);
|
||||
conditionReTestResult = templateEvaluator.EvaluateStepIf(condition, step.ExecutionContext.ExpressionValues, step.ExecutionContext.ExpressionFunctions, step.ExecutionContext.ToExpressionState());
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
// Cancel the step since we get exception while re-evaluate step condition
|
||||
Trace.Info("Caught exception from expression when re-test condition on job cancellation.");
|
||||
step.ExecutionContext.Error(ex);
|
||||
}
|
||||
}
|
||||
|
||||
if (!conditionReTestResult)
|
||||
{
|
||||
// Cancel the step
|
||||
Trace.Info("Cancel current running step.");
|
||||
step.ExecutionContext.CancelToken();
|
||||
}
|
||||
});
|
||||
}
|
||||
else
|
||||
{
|
||||
if (ExecutionContext.Root.Result != TaskResult.Canceled)
|
||||
{
|
||||
// Mark job as cancelled
|
||||
ExecutionContext.Root.Result = TaskResult.Canceled;
|
||||
ExecutionContext.Root.JobContext.Status = ExecutionContext.Root.Result?.ToActionResult();
|
||||
}
|
||||
}
|
||||
// Evaluate condition
|
||||
step.ExecutionContext.Debug($"Evaluating condition for step: '{step.DisplayName}'");
|
||||
var conditionTraceWriter = new ConditionTraceWriter(Trace, step.ExecutionContext);
|
||||
var conditionResult = false;
|
||||
var conditionEvaluateError = default(Exception);
|
||||
if (HostContext.RunnerShutdownToken.IsCancellationRequested)
|
||||
{
|
||||
step.ExecutionContext.Debug($"Skip evaluate condition on runner shutdown.");
|
||||
}
|
||||
else
|
||||
{
|
||||
try
|
||||
{
|
||||
var templateEvaluator = step.ExecutionContext.ToPipelineTemplateEvaluator(conditionTraceWriter);
|
||||
var condition = new BasicExpressionToken(null, null, null, step.Condition);
|
||||
conditionResult = templateEvaluator.EvaluateStepIf(condition, step.ExecutionContext.ExpressionValues, step.ExecutionContext.ExpressionFunctions, step.ExecutionContext.ToExpressionState());
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
Trace.Info("Caught exception from expression.");
|
||||
Trace.Error(ex);
|
||||
conditionEvaluateError = ex;
|
||||
}
|
||||
}
|
||||
if (!conditionResult && conditionEvaluateError == null)
|
||||
{
|
||||
// Condition is false
|
||||
Trace.Info("Skipping step due to condition evaluation.");
|
||||
step.ExecutionContext.Result = TaskResult.Skipped;
|
||||
continue;
|
||||
}
|
||||
else if (conditionEvaluateError != null)
|
||||
{
|
||||
// Condition error
|
||||
step.ExecutionContext.Error(conditionEvaluateError);
|
||||
step.ExecutionContext.Result = TaskResult.Failed;
|
||||
ExecutionContext.Result = TaskResult.Failed;
|
||||
break;
|
||||
}
|
||||
else
|
||||
// For main steps just run the action
|
||||
if (stage == ActionRunStage.Main)
|
||||
{
|
||||
await RunStepAsync(step);
|
||||
}
|
||||
|
||||
// We need to evaluate conditions for pre/post steps
|
||||
else
|
||||
{
|
||||
// Register job cancellation call back only if job cancellation token not been fire before each step run
|
||||
if (!ExecutionContext.Root.CancellationToken.IsCancellationRequested)
|
||||
{
|
||||
// Test the condition again. The job was canceled after the condition was originally evaluated.
|
||||
jobCancelRegister = ExecutionContext.Root.CancellationToken.Register(() =>
|
||||
{
|
||||
// Mark job as cancelled
|
||||
ExecutionContext.Root.Result = TaskResult.Canceled;
|
||||
ExecutionContext.Root.JobContext.Status = ExecutionContext.Root.Result?.ToActionResult();
|
||||
|
||||
step.ExecutionContext.Debug($"Re-evaluate condition on job cancellation for step: '{step.DisplayName}'.");
|
||||
var conditionReTestTraceWriter = new ConditionTraceWriter(Trace, null); // host tracing only
|
||||
var conditionReTestResult = false;
|
||||
if (HostContext.RunnerShutdownToken.IsCancellationRequested)
|
||||
{
|
||||
step.ExecutionContext.Debug($"Skip Re-evaluate condition on runner shutdown.");
|
||||
}
|
||||
else
|
||||
{
|
||||
try
|
||||
{
|
||||
var templateEvaluator = step.ExecutionContext.ToPipelineTemplateEvaluator(conditionReTestTraceWriter);
|
||||
var condition = new BasicExpressionToken(null, null, null, step.Condition);
|
||||
conditionReTestResult = templateEvaluator.EvaluateStepIf(condition, step.ExecutionContext.ExpressionValues, step.ExecutionContext.ExpressionFunctions, step.ExecutionContext.ToExpressionState());
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
// Cancel the step since we get exception while re-evaluate step condition
|
||||
Trace.Info("Caught exception from expression when re-test condition on job cancellation.");
|
||||
step.ExecutionContext.Error(ex);
|
||||
}
|
||||
}
|
||||
|
||||
if (!conditionReTestResult)
|
||||
{
|
||||
// Cancel the step
|
||||
Trace.Info("Cancel current running step.");
|
||||
step.ExecutionContext.CancelToken();
|
||||
}
|
||||
});
|
||||
}
|
||||
else
|
||||
{
|
||||
if (ExecutionContext.Root.Result != TaskResult.Canceled)
|
||||
{
|
||||
// Mark job as cancelled
|
||||
ExecutionContext.Root.Result = TaskResult.Canceled;
|
||||
ExecutionContext.Root.JobContext.Status = ExecutionContext.Root.Result?.ToActionResult();
|
||||
}
|
||||
}
|
||||
// Evaluate condition
|
||||
step.ExecutionContext.Debug($"Evaluating condition for step: '{step.DisplayName}'");
|
||||
var conditionTraceWriter = new ConditionTraceWriter(Trace, step.ExecutionContext);
|
||||
var conditionResult = false;
|
||||
var conditionEvaluateError = default(Exception);
|
||||
if (HostContext.RunnerShutdownToken.IsCancellationRequested)
|
||||
{
|
||||
step.ExecutionContext.Debug($"Skip evaluate condition on runner shutdown.");
|
||||
}
|
||||
else
|
||||
{
|
||||
try
|
||||
{
|
||||
var templateEvaluator = step.ExecutionContext.ToPipelineTemplateEvaluator(conditionTraceWriter);
|
||||
var condition = new BasicExpressionToken(null, null, null, step.Condition);
|
||||
conditionResult = templateEvaluator.EvaluateStepIf(condition, step.ExecutionContext.ExpressionValues, step.ExecutionContext.ExpressionFunctions, step.ExecutionContext.ToExpressionState());
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
Trace.Info("Caught exception from expression.");
|
||||
Trace.Error(ex);
|
||||
conditionEvaluateError = ex;
|
||||
}
|
||||
}
|
||||
if (!conditionResult && conditionEvaluateError == null)
|
||||
{
|
||||
// Condition is false
|
||||
Trace.Info("Skipping step due to condition evaluation.");
|
||||
step.ExecutionContext.Result = TaskResult.Skipped;
|
||||
continue;
|
||||
}
|
||||
else if (conditionEvaluateError != null)
|
||||
{
|
||||
// Condition error
|
||||
step.ExecutionContext.Error(conditionEvaluateError);
|
||||
step.ExecutionContext.Result = TaskResult.Failed;
|
||||
ExecutionContext.Result = TaskResult.Failed;
|
||||
break;
|
||||
}
|
||||
else
|
||||
{
|
||||
await RunStepAsync(step);
|
||||
}
|
||||
}
|
||||
}
|
||||
finally
|
||||
{
|
||||
@@ -409,6 +412,12 @@ namespace GitHub.Runner.Worker.Handlers
|
||||
{
|
||||
Trace.Info($"Update job result with current composite step result '{step.ExecutionContext.Result}'.");
|
||||
ExecutionContext.Result = TaskResultUtil.MergeTaskResults(ExecutionContext.Result, step.ExecutionContext.Result.Value);
|
||||
|
||||
// We should run cleanup even if one of the cleanup step fails
|
||||
if (stage != ActionRunStage.Post)
|
||||
{
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -83,7 +83,7 @@ namespace GitHub.Runner.Worker.Handlers
HasPreStep = Data.HasPre,
HasPostStep = Data.HasPost,
IsEmbedded = ExecutionContext.IsEmbedded,
Type = Data.NodeVersion
Type = "node12"
};
ExecutionContext.Root.ActionsStepsTelemetry.Add(telemetry);
}
@@ -99,7 +99,7 @@ namespace GitHub.Runner.Worker.Handlers
workingDirectory = HostContext.GetDirectory(WellKnownDirectory.Work);
}

var nodeRuntimeVersion = await StepHost.DetermineNodeRuntimeVersion(ExecutionContext, Data.NodeVersion);
var nodeRuntimeVersion = await StepHost.DetermineNodeRuntimeVersion(ExecutionContext);
string file = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Externals), nodeRuntimeVersion, "bin", $"node{IOUtil.ExeExtension}");

// Format the arguments passed to node.

@@ -21,9 +21,11 @@ namespace GitHub.Runner.Worker.Handlers
event EventHandler<ProcessDataReceivedEventArgs> OutputDataReceived;
event EventHandler<ProcessDataReceivedEventArgs> ErrorDataReceived;

IExecutionContext ExecutionContext { get; set; }

string ResolvePathForStepHost(string path);

Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext, string preferredVersion);
Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext);

Task<int> ExecuteAsync(string workingDirectory,
string fileName,
@@ -53,14 +55,16 @@ namespace GitHub.Runner.Worker.Handlers
public event EventHandler<ProcessDataReceivedEventArgs> OutputDataReceived;
public event EventHandler<ProcessDataReceivedEventArgs> ErrorDataReceived;

public IExecutionContext ExecutionContext { get; set; }

public string ResolvePathForStepHost(string path)
{
return path;
}

public Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext, string preferredVersion)
public Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext)
{
return Task.FromResult<string>(preferredVersion);
return Task.FromResult<string>("node12");
}

public async Task<int> ExecuteAsync(string workingDirectory,
@@ -99,6 +103,8 @@ namespace GitHub.Runner.Worker.Handlers
public event EventHandler<ProcessDataReceivedEventArgs> OutputDataReceived;
public event EventHandler<ProcessDataReceivedEventArgs> ErrorDataReceived;

public IExecutionContext ExecutionContext { get; set; }

public string ResolvePathForStepHost(string path)
{
// make sure container exist.
@@ -123,7 +129,7 @@ namespace GitHub.Runner.Worker.Handlers
}
}

public async Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext, string preferredVersion)
public async Task<string> DetermineNodeRuntimeVersion(IExecutionContext executionContext)
{
// Best effort to determine a compatible node runtime
// There may be more variation in which libraries are linked than just musl/glibc,
@@ -148,14 +154,14 @@ namespace GitHub.Runner.Worker.Handlers
var msg = $"JavaScript Actions in Alpine containers are only supported on x64 Linux runners. Detected {os} {arch}";
throw new NotSupportedException(msg);
}
nodeExternal = $"{preferredVersion}_alpine";
nodeExternal = "node12_alpine";
executionContext.Debug($"Container distribution is alpine. Running JavaScript Action with external tool: {nodeExternal}");
return nodeExternal;
}
}
}
// Optimistically use the default
nodeExternal = preferredVersion;
nodeExternal = "node12";
executionContext.Debug($"Running JavaScript Action with default external tool: {nodeExternal}");
return nodeExternal;
}
||||
@@ -174,69 +180,138 @@ namespace GitHub.Runner.Worker.Handlers
|
||||
ArgUtil.NotNull(Container, nameof(Container));
|
||||
ArgUtil.NotNullOrEmpty(Container.ContainerId, nameof(Container.ContainerId));
|
||||
|
||||
var dockerManager = HostContext.GetService<IDockerCommandManager>();
|
||||
string dockerClientPath = dockerManager.DockerPath;
|
||||
|
||||
// Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
|
||||
IList<string> dockerCommandArgs = new List<string>();
|
||||
dockerCommandArgs.Add($"exec");
|
||||
|
||||
// [OPTIONS]
|
||||
dockerCommandArgs.Add($"-i");
|
||||
dockerCommandArgs.Add($"--workdir {workingDirectory}");
|
||||
foreach (var env in environment)
|
||||
var podManHandler = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Bin), "kubectlHandler", "index.js");
|
||||
if (File.Exists(podManHandler))
|
||||
{
|
||||
// e.g. -e MY_SECRET maps the value into the exec'ed process without exposing
|
||||
// the value directly in the command
|
||||
dockerCommandArgs.Add($"-e {env.Key}");
|
||||
var podmanInput = new ContainerEngineHandlerInput()
|
||||
{
|
||||
Command = "Exec",
|
||||
ExecInput = new JobContainerExecInput()
|
||||
{
|
||||
JobContainer = this.Container,
|
||||
WorkingDirectory = workingDirectory,
|
||||
FileName = fileName,
|
||||
Arguments = arguments,
|
||||
EnvironmentKeys = environment.Keys.ToList(),
|
||||
EnvironmentVariables = environment.ToDictionary(x => x.Key, y => y.Value)
|
||||
}
|
||||
};
|
||||
|
||||
// make sure all env are using container path
|
||||
foreach (var envKey in environment.Keys.ToList())
|
||||
{
|
||||
environment[envKey] = this.Container.TranslateToContainerPath(environment[envKey]);
|
||||
}
|
||||
|
||||
// ContainerEngineHandlerOutput podmanOutput = null;
|
||||
using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
|
||||
{
|
||||
var redirectStandardIn = Channel.CreateUnbounded<string>(new UnboundedChannelOptions() { SingleReader = true, SingleWriter = true });
|
||||
redirectStandardIn.Writer.TryWrite(JsonUtility.ToString(podmanInput));
|
||||
|
||||
// processInvoker.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs message)
|
||||
// {
|
||||
// ExecutionContext.Output(message.Data);
|
||||
// };
|
||||
|
||||
// processInvoker.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs message)
|
||||
// {
|
||||
// executionContext.Output(message.Data);
|
||||
// if (podmanOutput == null && message.Data.IndexOf("___CONTAINER_ENGINE_HANDLER_OUTPUT___") >= 0)
|
||||
// {
|
||||
// try
|
||||
// {
|
||||
// podmanOutput = JsonUtility.FromString<ContainerEngineHandlerOutput>(message.Data.Replace("___CONTAINER_ENGINE_HANDLER_OUTPUT___", ""));
|
||||
// }
|
||||
// catch (Exception ex)
|
||||
// {
|
||||
// executionContext.Error(ex);
|
||||
// }
|
||||
// }
|
||||
// };
|
||||
processInvoker.OutputDataReceived += OutputDataReceived;
|
||||
processInvoker.ErrorDataReceived += ErrorDataReceived;
|
||||
|
||||
// Execute the process. Exit code 0 should always be returned.
|
||||
// A non-zero exit code indicates infrastructural failure.
|
||||
// Task failure should be communicated over STDOUT using ## commands.
|
||||
return await processInvoker.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Work),
|
||||
fileName: Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Externals), "node12", "bin", $"node{IOUtil.ExeExtension}"),
|
||||
arguments: podManHandler,
|
||||
environment: environment,
|
||||
requireExitCodeZero: requireExitCodeZero,
|
||||
outputEncoding: Encoding.UTF8,
|
||||
killProcessOnCancel: killProcessOnCancel,
|
||||
redirectStandardIn: redirectStandardIn,
|
||||
cancellationToken: cancellationToken);
|
||||
}
|
||||
}
|
||||
if (!string.IsNullOrEmpty(PrependPath))
|
||||
else
|
||||
{
|
||||
// Prepend tool paths to container's PATH
|
||||
var fullPath = !string.IsNullOrEmpty(Container.ContainerRuntimePath) ? $"{PrependPath}:{Container.ContainerRuntimePath}" : PrependPath;
|
||||
dockerCommandArgs.Add($"-e PATH=\"{fullPath}\"");
|
||||
}
|
||||
var dockerManager = HostContext.GetService<IDockerCommandManager>();
|
||||
string dockerClientPath = dockerManager.DockerPath;
|
||||
|
||||
// CONTAINER
|
||||
dockerCommandArgs.Add($"{Container.ContainerId}");
|
||||
// Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
|
||||
IList<string> dockerCommandArgs = new List<string>();
|
||||
dockerCommandArgs.Add($"exec");
|
||||
|
||||
// COMMAND
|
||||
dockerCommandArgs.Add(fileName);
|
||||
// [OPTIONS]
|
||||
dockerCommandArgs.Add($"-i");
|
||||
dockerCommandArgs.Add($"--workdir {workingDirectory}");
|
||||
foreach (var env in environment)
|
||||
{
|
||||
// e.g. -e MY_SECRET maps the value into the exec'ed process without exposing
|
||||
// the value directly in the command
|
||||
dockerCommandArgs.Add($"-e {env.Key}");
|
||||
}
|
||||
if (!string.IsNullOrEmpty(PrependPath))
|
||||
{
|
||||
// Prepend tool paths to container's PATH
|
||||
var fullPath = !string.IsNullOrEmpty(Container.ContainerRuntimePath) ? $"{PrependPath}:{Container.ContainerRuntimePath}" : PrependPath;
|
||||
dockerCommandArgs.Add($"-e PATH=\"{fullPath}\"");
|
||||
}
|
||||
|
||||
// [ARG...]
|
||||
dockerCommandArgs.Add(arguments);
|
||||
// CONTAINER
|
||||
dockerCommandArgs.Add($"{Container.ContainerId}");
|
||||
|
||||
string dockerCommandArgstring = string.Join(" ", dockerCommandArgs);
|
||||
// COMMAND
|
||||
dockerCommandArgs.Add(fileName);
|
||||
|
||||
// make sure all env are using container path
|
||||
foreach (var envKey in environment.Keys.ToList())
|
||||
{
|
||||
environment[envKey] = this.Container.TranslateToContainerPath(environment[envKey]);
|
||||
}
|
||||
// [ARG...]
|
||||
dockerCommandArgs.Add(arguments);
|
||||
|
||||
using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
|
||||
{
|
||||
processInvoker.OutputDataReceived += OutputDataReceived;
|
||||
processInvoker.ErrorDataReceived += ErrorDataReceived;
|
||||
string dockerCommandArgstring = string.Join(" ", dockerCommandArgs);
|
||||
|
||||
// make sure all env are using container path
|
||||
foreach (var envKey in environment.Keys.ToList())
|
||||
{
|
||||
environment[envKey] = this.Container.TranslateToContainerPath(environment[envKey]);
|
||||
}
|
||||
|
||||
using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
|
||||
{
|
||||
processInvoker.OutputDataReceived += OutputDataReceived;
|
||||
processInvoker.ErrorDataReceived += ErrorDataReceived;
|
||||
|
||||
#if OS_WINDOWS
|
||||
// It appears that node.exe outputs UTF8 when not in TTY mode.
|
||||
outputEncoding = Encoding.UTF8;
|
||||
#else
|
||||
// Let .NET choose the default.
|
||||
outputEncoding = null;
|
||||
// Let .NET choose the default.
|
||||
outputEncoding = null;
|
||||
#endif
|
||||
|
||||
return await processInvoker.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Work),
|
||||
fileName: dockerClientPath,
|
||||
arguments: dockerCommandArgstring,
|
||||
environment: environment,
|
||||
requireExitCodeZero: requireExitCodeZero,
|
||||
outputEncoding: outputEncoding,
|
||||
killProcessOnCancel: killProcessOnCancel,
|
||||
redirectStandardIn: null,
|
||||
inheritConsoleHandler: inheritConsoleHandler,
|
||||
cancellationToken: cancellationToken);
|
||||
return await processInvoker.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Work),
|
||||
fileName: dockerClientPath,
|
||||
arguments: dockerCommandArgstring,
|
||||
environment: environment,
|
||||
requireExitCodeZero: requireExitCodeZero,
|
||||
outputEncoding: outputEncoding,
|
||||
killProcessOnCancel: killProcessOnCancel,
|
||||
redirectStandardIn: null,
|
||||
inheritConsoleHandler: inheritConsoleHandler,
|
||||
cancellationToken: cancellationToken);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -55,7 +55,7 @@ namespace GitHub.Runner.Worker
ArgUtil.NotNull(message, nameof(message));

// Create a new timeline record for 'Set up job'
IExecutionContext context = jobContext.CreateChild(Guid.NewGuid(), "Set up job", $"{nameof(JobExtension)}_Init", null, null, ActionRunStage.Pre);
IExecutionContext context = jobContext.CreateChild(Guid.NewGuid(), "Set up job", $"{nameof(JobExtension)}_Init", null, null);

List<IStep> preJobSteps = new List<IStep>();
List<IStep> jobSteps = new List<IStep>();
@@ -306,13 +306,13 @@ namespace GitHub.Runner.Worker
JobExtensionRunner extensionStep = step as JobExtensionRunner;
ArgUtil.NotNull(extensionStep, extensionStep.DisplayName);
Guid stepId = Guid.NewGuid();
extensionStep.ExecutionContext = jobContext.CreateChild(stepId, extensionStep.DisplayName, null, null, stepId.ToString("N"), ActionRunStage.Pre);
extensionStep.ExecutionContext = jobContext.CreateChild(stepId, extensionStep.DisplayName, null, null, stepId.ToString("N"));
}
else if (step is IActionRunner actionStep)
{
ArgUtil.NotNull(actionStep, step.DisplayName);
Guid stepId = Guid.NewGuid();
actionStep.ExecutionContext = jobContext.CreateChild(stepId, actionStep.DisplayName, stepId.ToString("N"), null, null, ActionRunStage.Pre, intraActionStates[actionStep.Action.Id]);
actionStep.ExecutionContext = jobContext.CreateChild(stepId, actionStep.DisplayName, stepId.ToString("N"), null, null, intraActionStates[actionStep.Action.Id]);
}
}

@@ -323,7 +323,7 @@ namespace GitHub.Runner.Worker
{
ArgUtil.NotNull(actionStep, step.DisplayName);
intraActionStates.TryGetValue(actionStep.Action.Id, out var intraActionState);
actionStep.ExecutionContext = jobContext.CreateChild(actionStep.Action.Id, actionStep.DisplayName, actionStep.Action.Name, null, actionStep.Action.ContextName, ActionRunStage.Main, intraActionState);
actionStep.ExecutionContext = jobContext.CreateChild(actionStep.Action.Id, actionStep.DisplayName, actionStep.Action.Name, null, actionStep.Action.ContextName, intraActionState);
}
}

@@ -394,7 +394,7 @@ namespace GitHub.Runner.Worker
ArgUtil.NotNull(jobContext, nameof(jobContext));

// create a new timeline record node for 'Finalize job'
IExecutionContext context = jobContext.CreateChild(Guid.NewGuid(), "Complete job", $"{nameof(JobExtension)}_Final", null, null, ActionRunStage.Post);
IExecutionContext context = jobContext.CreateChild(Guid.NewGuid(), "Complete job", $"{nameof(JobExtension)}_Final", null, null);
using (var register = jobContext.CancellationToken.Register(() => { context.CancelToken(); }))
{
try

@@ -106,7 +106,6 @@ namespace GitHub.Runner.Worker
}

jobContext.SetRunnerContext("os", VarUtil.OS);
jobContext.SetRunnerContext("arch", VarUtil.OSArchitecture);

var runnerSettings = HostContext.GetService<IConfigurationStore>().GetSettings();
jobContext.SetRunnerContext("name", runnerSettings.AgentName);

@@ -3,12 +3,13 @@
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<OutputType>Exe</OutputType>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
<NoWarn>NU1701;NU1603</NoWarn>
<Version>$(Version)</Version>
<TieredCompilationQuickJit>true</TieredCompilationQuickJit>
<PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>

<ItemGroup>

@@ -46,7 +46,7 @@
"runs": {
"one-of": [
"container-runs",
"node-runs",
"node12-runs",
"plugin-runs",
"composite-runs"
]
@@ -80,7 +80,7 @@
"loose-value-type": "string"
}
},
"node-runs": {
"node12-runs": {
"mapping": {
"properties": {
"using": "non-empty-string",
@@ -112,10 +112,10 @@
"item-type": "composite-step"
}
},
"composite-step": {
"composite-step":{
"one-of": [
"run-step",
"uses-step"
"uses-step"
]
},
"run-step": {
@@ -123,7 +123,6 @@
"properties": {
"name": "string-steps-context",
"id": "non-empty-string",
"if": "step-if",
"run": {
"type": "string-steps-context",
"required": true
@@ -142,7 +141,6 @@
"properties": {
"name": "string-steps-context",
"id": "non-empty-string",
"if": "step-if",
"uses": {
"type": "non-empty-string",
"required": true
@@ -218,24 +216,6 @@
"loose-value-type": "string"
}
},
"step-if": {
"context": [
"github",
"inputs",
"strategy",
"matrix",
"steps",
"job",
"runner",
"env",
"always(0,0)",
"failure(0,0)",
"cancelled(0,0)",
"success(0,0)",
"hashFiles(1,255)"
],
"string": {}
},
"step-with": {
"context": [
"github",
@@ -254,4 +234,4 @@
}
}
}
}
}

@@ -638,7 +638,6 @@ namespace GitHub.DistributedTask.Pipelines.ObjectTemplating
new NamedValueInfo<NoOperationNamedValue>(PipelineTemplateConstants.Matrix),
new NamedValueInfo<NoOperationNamedValue>(PipelineTemplateConstants.Steps),
new NamedValueInfo<NoOperationNamedValue>(PipelineTemplateConstants.GitHub),
new NamedValueInfo<NoOperationNamedValue>(PipelineTemplateConstants.Inputs),
new NamedValueInfo<NoOperationNamedValue>(PipelineTemplateConstants.Job),
new NamedValueInfo<NoOperationNamedValue>(PipelineTemplateConstants.Runner),
new NamedValueInfo<NoOperationNamedValue>(PipelineTemplateConstants.Env),

@@ -3,7 +3,7 @@
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<OutputType>Library</OutputType>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
<NoWarn>NU1701;NU1603</NoWarn>

@@ -19,7 +19,6 @@ namespace GitHub.Runner.Common.Tests
"linux-x64",
"linux-arm",
"linux-arm64",
"linux-musl-x64",
"osx-x64"
};

||||
@@ -965,7 +965,7 @@ namespace GitHub.Runner.Common.Tests.Worker
|
||||
};
|
||||
|
||||
//Act
|
||||
var result = await _actionManager.PrepareActionsAsync(_ec.Object, actions);
|
||||
var result = await _actionManager.PrepareActionsAsync(_ec.Object, actions);
|
||||
|
||||
//Assert
|
||||
Assert.Equal(1, result.PreStepTracker.Count);
|
||||
@@ -1288,7 +1288,7 @@ runs:
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
public void LoadsNode12ActionDefinition()
|
||||
public void LoadsNodeActionDefinition()
|
||||
{
|
||||
try
|
||||
{
|
||||
@@ -1344,78 +1344,8 @@ runs:
|
||||
Assert.True(string.IsNullOrEmpty(inputDefaults["entryPoint"]));
|
||||
Assert.NotNull(definition.Data.Execution); // execution
|
||||
|
||||
Assert.NotNull(definition.Data.Execution as NodeJSActionExecutionData);
|
||||
Assert.NotNull((definition.Data.Execution as NodeJSActionExecutionData));
|
||||
Assert.Equal("task.js", (definition.Data.Execution as NodeJSActionExecutionData).Script);
|
||||
Assert.Equal("node12", (definition.Data.Execution as NodeJSActionExecutionData).NodeVersion);
|
||||
}
|
||||
finally
|
||||
{
|
||||
Teardown();
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
public void LoadsNode16ActionDefinition()
|
||||
{
|
||||
try
|
||||
{
|
||||
// Arrange.
|
||||
Setup();
|
||||
const string Content = @"
|
||||
# Container action
|
||||
name: 'Hello World'
|
||||
description: 'Greet the world and record the time'
|
||||
author: 'GitHub'
|
||||
inputs:
|
||||
greeting: # id of input
|
||||
description: 'The greeting we choose - will print ""{greeting}, World!"" on stdout'
|
||||
required: true
|
||||
default: 'Hello'
|
||||
entryPoint: # id of input
|
||||
description: 'optional docker entrypoint overwrite.'
|
||||
required: false
|
||||
outputs:
|
||||
time: # id of output
|
||||
description: 'The time we did the greeting'
|
||||
icon: 'hello.svg' # vector art to display in the GitHub Marketplace
|
||||
color: 'green' # optional, decorates the entry in the GitHub Marketplace
|
||||
runs:
|
||||
using: 'node16'
|
||||
main: 'task.js'
|
||||
";
|
||||
Pipelines.ActionStep instance;
|
||||
string directory;
|
||||
CreateAction(yamlContent: Content, instance: out instance, directory: out directory);
|
||||
|
||||
// Act.
|
||||
Definition definition = _actionManager.LoadAction(_ec.Object, instance);
|
||||
|
||||
// Assert.
|
||||
Assert.NotNull(definition);
|
||||
Assert.Equal(directory, definition.Directory);
|
||||
Assert.NotNull(definition.Data);
|
||||
Assert.NotNull(definition.Data.Inputs); // inputs
|
||||
Dictionary<string, string> inputDefaults = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
|
||||
foreach (var input in definition.Data.Inputs)
|
||||
{
|
||||
var name = input.Key.AssertString("key").Value;
|
||||
var value = input.Value.AssertScalar("value").ToString();
|
||||
|
||||
_hc.GetTrace().Info($"Default: {name} = {value}");
|
||||
inputDefaults[name] = value;
|
||||
}
|
||||
|
||||
Assert.Equal(2, inputDefaults.Count);
|
||||
Assert.True(inputDefaults.ContainsKey("greeting"));
|
||||
Assert.Equal("Hello", inputDefaults["greeting"]);
|
||||
Assert.True(string.IsNullOrEmpty(inputDefaults["entryPoint"]));
|
||||
Assert.NotNull(definition.Data.Execution); // execution
|
||||
|
||||
Assert.NotNull(definition.Data.Execution as NodeJSActionExecutionData);
|
||||
Assert.Equal("task.js", (definition.Data.Execution as NodeJSActionExecutionData).Script);
|
||||
Assert.Equal("node16", (definition.Data.Execution as NodeJSActionExecutionData).NodeVersion);
|
||||
}
|
||||
finally
|
||||
{
|
||||
|
||||
@@ -408,50 +408,6 @@ namespace GitHub.Runner.Common.Tests.Worker
|
||||
var nodeAction = result.Execution as NodeJSActionExecutionData;
|
||||
|
||||
Assert.Equal("main.js", nodeAction.Script);
|
||||
Assert.Equal("node12", nodeAction.NodeVersion);
|
||||
}
|
||||
finally
|
||||
{
|
||||
Teardown();
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
public void Load_Node16Action()
|
||||
{
|
||||
try
|
||||
{
|
||||
//Arrange
|
||||
Setup();
|
||||
|
||||
var actionManifest = new ActionManifestManager();
|
||||
actionManifest.Initialize(_hc);
|
||||
|
||||
//Act
|
||||
var result = actionManifest.Load(_ec.Object, Path.Combine(TestUtil.GetTestDataPath(), "node16action.yml"));
|
||||
|
||||
//Assert
|
||||
Assert.Equal("Hello World", result.Name);
|
||||
Assert.Equal("Greet the world and record the time", result.Description);
|
||||
Assert.Equal(2, result.Inputs.Count);
|
||||
Assert.Equal("greeting", result.Inputs[0].Key.AssertString("key").Value);
|
||||
Assert.Equal("Hello", result.Inputs[0].Value.AssertString("value").Value);
|
||||
Assert.Equal("entryPoint", result.Inputs[1].Key.AssertString("key").Value);
|
||||
Assert.Equal("", result.Inputs[1].Value.AssertString("value").Value);
|
||||
Assert.Equal(1, result.Deprecated.Count);
|
||||
|
||||
Assert.True(result.Deprecated.ContainsKey("greeting"));
|
||||
result.Deprecated.TryGetValue("greeting", out string value);
|
||||
Assert.Equal("This property has been deprecated", value);
|
||||
|
||||
Assert.Equal(ActionExecutionType.NodeJS, result.Execution.ExecutionType);
|
||||
|
||||
var nodeAction = result.Execution as NodeJSActionExecutionData;
|
||||
|
||||
Assert.Equal("main.js", nodeAction.Script);
|
||||
Assert.Equal("node16", nodeAction.NodeVersion);
|
||||
}
|
||||
finally
|
||||
{
|
||||
@@ -670,32 +626,6 @@ namespace GitHub.Runner.Common.Tests.Worker
|
||||
{
|
||||
Teardown();
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
public void Load_ConditionalCompositeAction()
|
||||
{
|
||||
try
|
||||
{
|
||||
//Arrange
|
||||
Setup();
|
||||
|
||||
var actionManifest = new ActionManifestManager();
|
||||
actionManifest.Initialize(_hc);
|
||||
|
||||
//Act
|
||||
var result = actionManifest.Load(_ec.Object, Path.Combine(TestUtil.GetTestDataPath(), "conditional_composite_action.yml"));
|
||||
|
||||
//Assert
|
||||
Assert.Equal("Conditional Composite", result.Name);
|
||||
Assert.Equal(ActionExecutionType.Composite, result.Execution.ExecutionType);
|
||||
}
|
||||
finally
|
||||
{
|
||||
Teardown();
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
|
||||
@@ -193,9 +193,9 @@ namespace GitHub.Runner.Common.Tests.Worker
|
||||
// Act.
|
||||
jobContext.InitializeJob(jobRequest, CancellationToken.None);
|
||||
|
||||
var action1 = jobContext.CreateChild(Guid.NewGuid(), "action_1", "action_1", null, null, 0);
|
||||
var action1 = jobContext.CreateChild(Guid.NewGuid(), "action_1", "action_1", null, null);
|
||||
action1.IntraActionState["state"] = "1";
|
||||
var action2 = jobContext.CreateChild(Guid.NewGuid(), "action_2", "action_2", null, null, 0);
|
||||
var action2 = jobContext.CreateChild(Guid.NewGuid(), "action_2", "action_2", null, null);
|
||||
action2.IntraActionState["state"] = "2";
|
||||
|
||||
|
||||
@@ -291,8 +291,8 @@ namespace GitHub.Runner.Common.Tests.Worker
|
||||
// Act.
|
||||
jobContext.InitializeJob(jobRequest, CancellationToken.None);
|
||||
|
||||
var action1 = jobContext.CreateChild(Guid.NewGuid(), "action_1_pre", "action_1_pre", null, null, 0);
|
||||
var action2 = jobContext.CreateChild(Guid.NewGuid(), "action_1_main", "action_1_main", null, null, 0);
|
||||
var action1 = jobContext.CreateChild(Guid.NewGuid(), "action_1_pre", "action_1_pre", null, null);
|
||||
var action2 = jobContext.CreateChild(Guid.NewGuid(), "action_1_main", "action_1_main", null, null);
|
||||
|
||||
var actionId = Guid.NewGuid();
|
||||
var postRunner1 = hc.CreateService<IActionRunner>();
|
||||
|
||||
@@ -105,36 +105,6 @@ namespace GitHub.Runner.Common.Tests.Worker.Expressions
|
||||
}
|
||||
}
|
||||
|
||||
[Theory]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
[InlineData(ActionResult.Failure, ActionResult.Failure, true)]
|
||||
[InlineData(ActionResult.Failure, ActionResult.Success, false)]
|
||||
[InlineData(ActionResult.Success, ActionResult.Failure, true)]
|
||||
[InlineData(ActionResult.Success, ActionResult.Success, false)]
|
||||
[InlineData(ActionResult.Success, null, false)]
|
||||
public void FailureFunctionComposite(ActionResult jobStatus, ActionResult? actionStatus, bool expected)
|
||||
{
|
||||
using (TestHostContext hc = CreateTestContext())
|
||||
{
|
||||
// Arrange
|
||||
|
||||
var executionContext = InitializeExecutionContext(hc);
|
||||
executionContext.Setup(x => x.GetGitHubContext("action_status")).Returns(actionStatus.ToString());
|
||||
executionContext.Setup( x=> x.IsEmbedded).Returns(true);
|
||||
executionContext.Setup( x=> x.Stage).Returns(ActionRunStage.Main);
|
||||
|
||||
_jobContext.Status = jobStatus;
|
||||
|
||||
// Act.
|
||||
bool actual = Evaluate("failure()");
|
||||
|
||||
// Assert.
|
||||
Assert.Equal(expected, actual);
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
@@ -164,43 +134,12 @@ namespace GitHub.Runner.Common.Tests.Worker.Expressions
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
[Theory]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
[InlineData(ActionResult.Failure, ActionResult.Failure, false)]
|
||||
[InlineData(ActionResult.Failure, ActionResult.Success, true)]
|
||||
[InlineData(ActionResult.Success, ActionResult.Failure, false)]
|
||||
[InlineData(ActionResult.Success, ActionResult.Success, true)]
|
||||
[InlineData(ActionResult.Success, null, true)]
|
||||
public void SuccessFunctionComposite(ActionResult jobStatus, ActionResult? actionStatus, bool expected)
|
||||
{
|
||||
using (TestHostContext hc = CreateTestContext())
|
||||
{
|
||||
// Arrange
|
||||
|
||||
var executionContext = InitializeExecutionContext(hc);
|
||||
executionContext.Setup(x => x.GetGitHubContext("action_status")).Returns(actionStatus.ToString());
|
||||
executionContext.Setup( x=> x.IsEmbedded).Returns(true);
|
||||
executionContext.Setup( x=> x.Stage).Returns(ActionRunStage.Main);
|
||||
|
||||
_jobContext.Status = jobStatus;
|
||||
|
||||
// Act.
|
||||
bool actual = Evaluate("success()");
|
||||
|
||||
// Assert.
|
||||
Assert.Equal(expected, actual);
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private TestHostContext CreateTestContext([CallerMemberName] String testName = "")
|
||||
{
|
||||
return new TestHostContext(this, testName);
|
||||
}
|
||||
|
||||
private Mock<IExecutionContext> InitializeExecutionContext(TestHostContext hc)
|
||||
private void InitializeExecutionContext(TestHostContext hc)
|
||||
{
|
||||
_jobContext = new JobContext();
|
||||
|
||||
@@ -210,8 +149,6 @@ namespace GitHub.Runner.Common.Tests.Worker.Expressions
|
||||
|
||||
_templateContext = new TemplateContext();
|
||||
_templateContext.State[nameof(IExecutionContext)] = executionContext.Object;
|
||||
|
||||
return executionContext;
|
||||
}
|
||||
|
||||
private bool Evaluate(string expression)
|
||||
|
||||
@@ -1,109 +0,0 @@
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using System.Runtime.CompilerServices;
|
||||
using System.Threading.Tasks;
|
||||
using Moq;
|
||||
using Xunit;
|
||||
using GitHub.Runner.Worker;
|
||||
using GitHub.Runner.Worker.Handlers;
|
||||
using GitHub.Runner.Worker.Container;
|
||||
|
||||
namespace GitHub.Runner.Common.Tests.Worker
|
||||
{
|
||||
public sealed class StepHostL0
|
||||
{
|
||||
private Mock<IExecutionContext> _ec;
|
||||
private Mock<IDockerCommandManager> _dc;
|
||||
private TestHostContext CreateTestContext([CallerMemberName] String testName = "")
|
||||
{
|
||||
var hc = new TestHostContext(this, testName);
|
||||
|
||||
_ec = new Mock<IExecutionContext>();
|
||||
_ec.SetupAllProperties();
|
||||
_ec.Setup(x => x.Global).Returns(new GlobalContext { WriteDebug = true });
|
||||
var trace = hc.GetTrace();
|
||||
_ec.Setup(x => x.Write(It.IsAny<string>(), It.IsAny<string>())).Callback((string tag, string message) => { trace.Info($"[{tag}]{message}"); });
|
||||
|
||||
_dc = new Mock<IDockerCommandManager>();
|
||||
hc.SetSingleton(_dc.Object);
|
||||
return hc;
|
||||
}
|
||||
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
public async Task DetermineNodeRuntimeVersionInContainerAsync()
|
||||
{
|
||||
using (TestHostContext hc = CreateTestContext())
|
||||
{
|
||||
// Arrange.
|
||||
var sh = new ContainerStepHost();
|
||||
sh.Initialize(hc);
|
||||
sh.Container = new ContainerInfo() { ContainerId = "1234abcd" };
|
||||
|
||||
_dc.Setup(d => d.DockerExec(_ec.Object, It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>(), It.IsAny<List<string>>()))
|
||||
.ReturnsAsync(0);
|
||||
|
||||
// Act.
|
||||
var nodeVersion = await sh.DetermineNodeRuntimeVersion(_ec.Object, "node12");
|
||||
|
||||
// Assert.
|
||||
Assert.Equal("node12", nodeVersion);
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
public async Task DetermineNodeRuntimeVersionInAlpineContainerAsync()
|
||||
{
|
||||
using (TestHostContext hc = CreateTestContext())
|
||||
{
|
||||
// Arrange.
|
||||
var sh = new ContainerStepHost();
|
||||
sh.Initialize(hc);
|
||||
sh.Container = new ContainerInfo() { ContainerId = "1234abcd" };
|
||||
|
||||
_dc.Setup(d => d.DockerExec(_ec.Object, It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>(), It.IsAny<List<string>>()))
|
||||
.Callback((IExecutionContext ec, string id, string options, string command, List<string> output) =>
|
||||
{
|
||||
output.Add("alpine");
|
||||
})
|
||||
.ReturnsAsync(0);
|
||||
|
||||
// Act.
|
||||
var nodeVersion = await sh.DetermineNodeRuntimeVersion(_ec.Object, "node16");
|
||||
|
||||
// Assert.
|
||||
Assert.Equal("node16_alpine", nodeVersion);
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
[Trait("Level", "L0")]
|
||||
[Trait("Category", "Worker")]
|
||||
public async Task DetermineNodeRuntimeVersionInUnknowContainerAsync()
|
||||
{
|
||||
using (TestHostContext hc = CreateTestContext())
|
||||
{
|
||||
// Arrange.
|
||||
var sh = new ContainerStepHost();
|
||||
sh.Initialize(hc);
|
||||
sh.Container = new ContainerInfo() { ContainerId = "1234abcd" };
|
||||
|
||||
_dc.Setup(d => d.DockerExec(_ec.Object, It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>(), It.IsAny<List<string>>()))
|
||||
.Callback((IExecutionContext ec, string id, string options, string command, List<string> output) =>
|
||||
{
|
||||
output.Add("github");
|
||||
})
|
||||
.ReturnsAsync(0);
|
||||
|
||||
// Act.
|
||||
var nodeVersion = await sh.DetermineNodeRuntimeVersion(_ec.Object, "node16");
|
||||
|
||||
// Assert.
|
||||
Assert.Equal("node16", nodeVersion);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@@ -1,7 +1,7 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;linux-musl-x64;osx-x64</RuntimeIdentifiers>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
<NoWarn>NU1701;NU1603;NU1603;xUnit2013;</NoWarn>

@@ -1,49 +0,0 @@

name: 'Conditional Composite'
description: 'Test composite run step conditionals'
inputs:
  exit-code:
    description: 'Action fails if set to non-zero'
    default: '0'
outputs:
  default:
    description: "Did step run with default?"
    value: ${{ steps.default-conditional.outputs.default }}
  success:
    description: "Did step run with success?"
    value: ${{ steps.success-conditional.outputs.success }}
  failure:
    description: "Did step run with failure?"
    value: ${{ steps.failure-conditional.outputs.failure }}
  always:
    description: "Did step run with always?"
    value: ${{ steps.always-conditional.outputs.always }}

runs:
  using: "composite"
  steps:
    - run: exit ${{ inputs.exit-code }}
      shell: bash

    - run: echo "::set-output name=default::true"
      id: default-conditional
      shell: bash

    - run: echo "::set-output name=success::true"
      id: success-conditional
      shell: bash
      if: success()

    - run: echo "::set-output name=failure::true"
      id: failure-conditional
      shell: bash
      if: failure()

    - run: echo "::set-output name=always::true"
      id: always-conditional
      shell: bash
      if: always()

    - run: echo "failed"
      shell: bash
      if: ${{ inputs.exit-code == 1 && failure() }}
@@ -1,20 +0,0 @@
name: 'Hello World'
description: 'Greet the world and record the time'
author: 'Test Corporation'
inputs:
  greeting: # id of input
    description: 'The greeting we choose - will print ""{greeting}, World!"" on stdout'
    required: true
    default: 'Hello'
    deprecationMessage: 'This property has been deprecated'
  entryPoint: # id of input
    description: 'optional docker entrypoint overwrite.'
    required: false
outputs:
  time: # id of output
    description: 'The time we did the greeting'
icon: 'hello.svg' # vector art to display in the GitHub Marketplace
color: 'green' # optional, decorates the entry in the GitHub Marketplace
runs:
  using: 'node16'
  main: 'main.js'
@@ -47,10 +47,6 @@ elif [[ "$CURRENT_PLATFORM" == 'linux' ]]; then
aarch64) RUNTIME_ID="linux-arm64";;
esac
fi

if [ -f "/lib/ld-musl-x86_64.so.1" ]; then
RUNTIME_ID="linux-musl-x64"
fi
elif [[ "$CURRENT_PLATFORM" == 'darwin' ]]; then
RUNTIME_ID='osx-x64'
fi
@@ -61,7 +57,7 @@ fi

# Make sure current platform support publish the dotnet runtime
# Windows can publish win-x86/x64
# Linux can publish linux-x64/arm/arm64/musl-x64
# Linux can publish linux-x64/arm/arm64
# OSX can publish osx-x64
if [[ "$CURRENT_PLATFORM" == 'windows' ]]; then
if [[ ("$RUNTIME_ID" != 'win-x86') && ("$RUNTIME_ID" != 'win-x64') ]]; then
@@ -69,7 +65,7 @@ if [[ "$CURRENT_PLATFORM" == 'windows' ]]; then
exit 1
fi
elif [[ "$CURRENT_PLATFORM" == 'linux' ]]; then
if [[ ("$RUNTIME_ID" != 'linux-x64') && ("$RUNTIME_ID" != 'linux-musl-x64') && ("$RUNTIME_ID" != 'linux-arm64') && ("$RUNTIME_ID" != 'linux-arm') ]]; then
if [[ ("$RUNTIME_ID" != 'linux-x64') && ("$RUNTIME_ID" != 'linux-x86') && ("$RUNTIME_ID" != 'linux-arm64') && ("$RUNTIME_ID" != 'linux-arm') ]]; then
echo "Failed: Can't build $RUNTIME_ID package $CURRENT_PLATFORM" >&2
exit 1
fi

@@ -2,10 +2,10 @@
<Project ToolsVersion="14.0" DefaultTargets="Build"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Target Name="GenerateConstant">
<Exec Command="git rev-parse HEAD" ConsoleToMSBuild="true">
<!-- <Exec Command="git rev-parse HEAD" ConsoleToMSBuild="true">
<Output TaskParameter="ConsoleOutput" PropertyName="GitInfoCommitHash" />
</Exec>
<Message Text="Building $(Product): $(GitInfoCommitHash) --- $(PackageRuntime)" Importance="high"/>
</Exec> -->
<Message Text="Building $(Product): --- $(PackageRuntime)" Importance="high"/>

<ItemGroup>
<BuildConstants Include="namespace GitHub.Runner.Sdk"/>
@@ -14,7 +14,7 @@
<BuildConstants Include="%20%20%20%20{"/>
<BuildConstants Include="%20%20%20%20%20%20%20%20public static class Source"/>
<BuildConstants Include="%20%20%20%20%20%20%20%20{"/>
<BuildConstants Include="%20%20%20%20%20%20%20%20%20%20%20%20public static readonly string CommitHash = %22$(GitInfoCommitHash)%22%3B"/>
<BuildConstants Include="%20%20%20%20%20%20%20%20%20%20%20%20public static readonly string CommitHash = %22dfcfae49e59b6dc3c2bb5295c649b33c4b49c964%22%3B"/>
<BuildConstants Include="%20%20%20%20%20%20%20%20}%0A"/>
<BuildConstants Include="%20%20%20%20%20%20%20%20public static class RunnerPackage"/>
<BuildConstants Include="%20%20%20%20%20%20%20%20{"/>
@@ -27,7 +27,6 @@

<WriteLinesToFile File="Runner.Sdk/BuildConstants.cs" Lines="@(BuildConstants)" Overwrite="true" />

<Exec Command="git update-index --assume-unchanged ./Runner.Sdk/BuildConstants.cs" ConsoleToMSBuild="true" />
</Target>

<ItemGroup>

@@ -1 +1 @@
2.284.0
2.283.3
