Compare commits


247 Commits

Author SHA1 Message Date
TingluoHuang
632fcb0d50 unset GITHUB_ACTION_REPOSITORY and GITHUB_ACTION_REF for non-repo-based actions. 2020-11-12 10:59:52 -05:00
Thomas Boop
35dda19491 Add deprecation date and release 2.274.1 version (#796) 2020-11-09 09:01:47 -05:00
Julio Barba
36bdf50bc6 Prepare the release of 2.274.0 runner 2020-11-05 10:25:24 -05:00
Chris Gavin
95e2158dc6 Add an environment variable to indicate which repository the currently running Action came from. (#585)
* add `workflow_dispatch`

* Add an environment variable to indicate which repository the currently running Action came from.

* Expose the Action ref as well.

* Move setting `github.action_repository` and `github.action_ref` to `ActionRunner.cs`.

* Don't set `action_repository` and `action_ref` for local Actions.

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2020-11-03 14:39:17 -05:00
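A minimal, hypothetical sketch of where the new variables surface (the action name and script are illustrative, not from this repository): inside a repository-hosted action's own steps the runner reports which repository and ref the action was resolved from, and the commit above unsets both for local, non-repo-based actions.

```yaml
# action.yml of a hypothetical repository-hosted composite action
name: print-own-source
runs:
  using: "composite"
  steps:
    - shell: bash
      run: |
        # Populated by the runner while this action executes; unset for non-repo-based actions
        echo "This action came from $GITHUB_ACTION_REPOSITORY at ref $GITHUB_ACTION_REF"
```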
Jason Laqua
3ebaeb9f19 Fixes #759 doesn't change proxy environment variables (#760)
* Fixes #759 doesn't change proxy environment variables

* Update RunnerWebProxy.cs

* Update RunnerWebProxyL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2020-11-03 10:47:30 -05:00
shinriyo
9d678cb270 DRY and add sudo (#687)
Remove the three redundant copies of the text and keep a single one (DRY).
Developers also routinely forget `sudo` and then hit the annoying `Need to run with sudo privilege` message, so add `sudo` to the command up front.
2020-11-02 21:38:35 -05:00
Tingluo Huang
27788491ea raise error for set-env, block set node_options. (#784)
* raise error for set-env, block set node_options.

* feedback.
2020-11-02 14:09:29 -05:00
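With `::set-env` now raising an error, the replacement is the environment-file mechanism added in #683/#684 further down this list; a minimal sketch:

```yaml
steps:
  - name: Export a variable and extend PATH via environment files
    run: |
      # instead of the blocked: echo "::set-env name=FOO::bar"
      echo "FOO=bar" >> "$GITHUB_ENV"
      # instead of the deprecated: echo "::add-path::$HOME/.local/bin"
      echo "$HOME/.local/bin" >> "$GITHUB_PATH"
  - name: Later steps see the variable
    run: echo "FOO is $FOO"
```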
Yashwanth Anantharaju
5ba7affea4 fix incorrect check (#778) 2020-10-30 14:34:00 -04:00
Robin Neatherway
ce92d7a6b5 Change ping .. > nul to sleep (#647)
* Change `ping .. > nul` to `sleep`

The filename `nul` is a Windows-ism that causes the update script to
create such a file in the current working directory. The `ping`
utility is also a dependency not installed by
`installdependencies.sh`, so it seemed easier to change it to the
standard `sleep` command.

* Update dotnet-install script as requested by test

* Update dotnet-install.ps1

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2020-10-29 10:15:30 -04:00
Temtaime
d23ca0ba7a Add .editorconfig (#768)
* Add .editorconfig

* Create .editorconfig
2020-10-27 10:50:50 -04:00
dependabot[bot]
9d1c81f018 Bump @actions/core in /src/Misc/expressionFunc/hashFiles (#729)
Bumps [@actions/core](https://github.com/actions/toolkit/tree/HEAD/packages/core) from 1.2.0 to 1.2.6.
- [Release notes](https://github.com/actions/toolkit/releases)
- [Changelog](https://github.com/actions/toolkit/blob/main/packages/core/RELEASES.md)
- [Commits](https://github.com/actions/toolkit/commits/HEAD/packages/core)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-26 23:29:08 -04:00
Josh Soref
7a8abe726a Improve apt handling (#708)
* Unify apt/apt-get logic

The previous logic was buggy in that it tried to use `apt` in the `apt-get` branch after deciding that `apt` was unavailable...

* Prefer apt-get over apt

apt does not have a stable CLI, and using it from scripts yields annoying messages

* Improve English for missing apt-get & apt case

* Fix apt-get/apt fallback behavior for $ patterns

If there's a `$` in the apt install pattern, it will not fail if it selects a thing and decides it isn't interested in installing it.

* Fix spelling of libssl
2020-10-26 23:27:09 -04:00
Łukasz Łaniewski-Wołłk
a9135e61a0 Correcting bug in check of libicu presence (#695) 2020-10-26 23:14:17 -04:00
Justin Weissig
feafd3e1d7 fixed grammar issues (#672)
Nothing major here, just minor wording.
2020-10-26 23:11:30 -04:00
Justin Weissig
dc3b2d3a36 fixed wording (#671)
Fixed a few minor grammar issues
2020-10-26 23:10:59 -04:00
Justin Weissig
a371309079 minor spelling & grammar tweaks (#670)
Fixed a few minor spelling & grammar issues.
2020-10-26 23:10:26 -04:00
Justin Weissig
5dd6bde4ca fixed minor spelling mistake (#669)
Changed enhancment to enhancement.
2020-10-26 23:09:38 -04:00
Tingluo Huang
c196103e58 update dotnet install script. 2020-10-26 23:07:57 -04:00
Fabian Mastenbroek
d55070da3e Update to .NET Core SDK 3.1.302 (#681)
This change updates the .NET Core SDK used by the Actions Runner to
version 3.1.302 to address the issues that are caused by the following issue:
    https://github.com/dotnet/runtime/issues/13475
See #574 for more information.

Fixes #574
2020-10-26 22:51:29 -04:00
Yashwanth Anantharaju
8279ae9a70 Support environment URL parsing (#762)
* environment URL parsing
2020-10-21 12:14:21 -04:00
Hayden Faulds
2e3b03623f log runner group name (#696)
* log runner group name

* linting
2020-10-16 14:56:06 +01:00
Thomas Boop
c18c8746db Release notes for 2.273.5 (#734) 2020-10-02 11:49:49 -04:00
Thomas Boop
6332a52d76 Notify on insecure commands (#731)
* notify on insecure commands
2020-10-02 11:34:37 -04:00
Yang Cao
8bb588bb69 Expose retention days in env for toolkit/artifacts package (#714) 2020-09-17 15:11:12 -04:00
David Kale
4510f69c73 Prepare 2.273.4 release 2020-09-17 18:19:42 +00:00
David Kale
c7b8552edf Prepare 2.273.3 release 2020-09-16 15:06:07 +00:00
Julio Barba
0face6e3af Preparing the release of 2.273.2 runner 2020-09-14 13:06:41 -04:00
eric sciple
306be41266 fix bug w checkout v1 updating GITHUB_WORKSPACE (#704) 2020-09-14 12:00:00 -04:00
David Kale
4e85b8f3b7 Allow registry credentials for job/service containers (#694)
* Log in with container credentials if given

* Stub in registry aware auth for later

* Fix hang if password is empty

* Remove default param to fix build

* PR Feedback. Add some tests and fix parse
2020-09-11 12:28:58 -04:00
Julio Barba
444332ca88 Prepare the release of 2.273.1 runner 2020-09-08 13:01:36 -04:00
Thomas Boop
e6eb9e381d Cleanup FileCommands (#693) 2020-09-04 15:35:36 -04:00
eric sciple
3a76a2e291 read env file (#683) 2020-08-29 23:18:35 -04:00
Thomas Boop
9976cb92a0 Add Runner File Commands (#684)
* Add File Runner Commands
2020-08-28 15:32:25 -04:00
Thomas Brumley
d900654c42 Add in Log line numbers for streaming logs (#663)
* Add in Log line

Co-authored-by: yaananth (Yash) <yaananth@github.com>
2020-08-25 12:02:29 -04:00
Julio Barba
65e3ec86b4 Set executable bit 2020-08-18 16:09:04 -04:00
Julio Barba
a7f205593a Update dotnet scripts 2020-08-18 16:03:06 -04:00
Julio Barba
55f60a4ffc Prepare the release of 2.273.0 runner 2020-08-17 15:41:15 -04:00
Ethan Chiu
ca13b25240 Fix Outputs Example (#658) 2020-08-14 11:55:27 -04:00
Timo Schilling
b0c2734380 fix endgroup maker (#640) 2020-08-14 11:53:30 -04:00
Ethan Chiu
9e7b56f698 Fix Null Ref Issues Composite Actions (#657) 2020-08-12 17:12:54 -04:00
Ethan Chiu
8c29e33e88 Fix DisplayName Changing in middle of composite action run (#645) 2020-08-10 16:26:23 -04:00
Ethan Chiu
976217d6ec Free up memory from step level outputs in composite action (#641) 2020-08-10 14:31:30 -04:00
Ethan Chiu
562eafab3a Adding Documentation to ADR for Support for Script Execution + Explicit Definition (#616) 2020-08-10 11:30:34 -04:00
Joe Bourne
9015b95a72 Updating virtual environment terminology (#651)
* Dropping pool terminology

* Update README.md
2020-08-07 15:52:56 -04:00
eric sciple
7d4bbf46de fix feature flag check; omit context for generated context names (#638) 2020-08-04 11:12:40 -04:00
Christopher Johnson
7b608e3e92 Adding help text for the new runnergroup feature (#626)
Co-authored-by: Christopher Johnson <thchrisjohnson@github.com>
2020-07-30 12:03:40 -04:00
Ethan Chiu
f028b4e2b0 Revert JobSteps to Queue Data Structure (#625)
* Revert JobSteps to Queue data structure

* Revert tests
2020-07-29 16:19:04 -04:00
TingluoHuang
38f816c2ae prepare release 2.272.0 runner. 2020-07-29 15:31:45 -04:00
efyx
bc1fe2cfe0 Fix poor performance of process spawned from svc daemon (#614) 2020-07-29 15:20:28 -04:00
Ethan Chiu
89a13db2c3 Remove TESTING_COMPOSITE_ACTIONS_ALPHA Env Variable (#624) 2020-07-29 15:12:15 -04:00
Ethan Chiu
d59092d973 GITHUB_ACTION_PATH + GITHUB_ACTION so that we can run scripts for Composite Run Steps (#615)
* Add environment variable for GITHUB_ACTION_PATH

* ah

* Remove debugging messages

* Set github action path at step level instead of global scope to avoid necessary removal

* Remove set context for github action

* Set github action path before and after composite action

* Copy GitHub Context, use this copied context for each composite step, and then set the action_path for each one (to avoid stamping over the parent GitHubContext pointer)
2020-07-29 14:28:14 -04:00
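A hedged sketch of why the path matters for composite run steps (the script path is illustrative): it lets an action invoke files shipped alongside its own `action.yml`.

```yaml
runs:
  using: "composite"
  steps:
    - shell: bash
      # GITHUB_ACTION_PATH / github.action_path point at the directory the action was resolved to
      run: "${{ github.action_path }}/scripts/setup.sh"
```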
Ethan Chiu
855b90c3d4 Explicitly define what is allowed for a composite action (#605)
* Explicitly define what is allowed for an action

* Add step-env

* Remove secrets + defaults

* new line

* Add safety check to prevent from checking defaults in ScriptHandler for composite action

* Revert "Add safety check to prevent from checking defaults in ScriptHandler for composite action"

This reverts commit aeae15de7b.

* Need to explicitly use ActionStep type since we need the .Inputs attribute which is only found in ActionStep, not IStep

* Fix ActionManifestManager

* Remove todos

* Revert "Revert "Add safety check to prevent from checking defaults in ScriptHandler for composite action""

This reverts commit a22fcbc036.

* revert

* Remove needs in env

* Make shell required + add inputs

* Remove passing context to all composite steps attribute
2020-07-28 10:15:46 -04:00
Christopher Johnson
48ac96307c Add ability to register a runner to the non-default self-hosted runner group (#613)
Co-authored-by: Christopher Johnson <thchrisjohnson@github.com>
2020-07-23 17:46:48 -04:00
Ethan Chiu
2e50dffb37 Clean Up Composite UI (#610)
* Remove redundant code (display name is already evaluated in ActionRunner beforehand for each step)

* remove

* Remove nesting information for composite steps.

* put messages in debug logs if composite. if not, put these messages as outputs

* Fix group issue

* Fix end group issue
2020-07-23 09:45:00 -04:00
Tingluo Huang
e7b0844772 Add manual trigger 2020-07-23 00:34:36 -04:00
Ethan Chiu
d5a5550649 Fix Timeout-minutes for Whole Composite Action Step (#599)
* Exploring child Linked Cancellation Tokens

* Preliminary Timeout-minutes fix

* Final Solution for resolving cancellation token's timeout vs. cancellation

* Clean up + Fix error handling

* Use linked tokens instead

* Clean up

* one liner

* Remove JobExecutionContext => Replace with public Root accessor

* Move CreateLinkedTokenSource in the CreateCompositeStep Function
2020-07-22 18:01:50 -04:00
Ethan Chiu
3d0147d322 Improve Debugging Messages for Empty Tokens (#609)
* Improve Debugging Messages for Empty Tokens

* fix tests
2020-07-22 17:34:40 -04:00
Ethan Chiu
bd1f245aac Clarify details for defaults, shell, and working-dir (#607) 2020-07-22 16:11:55 -04:00
Steven Maude
005f1c15b1 Fix "propogate" typo in ADR 0549 (#600) 2020-07-22 16:10:57 -04:00
David Kale
da3cb5506f Fold logs for intermediate docker commands (#608) 2020-07-22 14:55:49 -04:00
dependabot[bot]
32d439070b Bump lodash in /src/Misc/expressionFunc/hashFiles (#603)
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.15 to 4.17.19.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.15...4.17.19)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-07-20 10:21:01 -04:00
jeffrey
ec9f8f1682 dbl quotes around variable so CD works if path contains spaces (#602) 2020-07-20 10:19:37 -04:00
eric sciple
0921af735a move shared ExecutionContext properties under .Global (#594) 2020-07-19 19:05:47 -04:00
eric sciple
1cc3c08cf2 Prepare to switch GITHUB_ACTION to use ContextName instead of refname (#593)
This PR changes GITHUB_ACTION to use the step ContextName, instead of refname. The behavior is behind a feature flag. Refname is an otherwise deprecated property.

Primary motivation: For composite actions, we need a distinct GITHUB_ACTION for each nested step. This PR adds code to generate a default context name for nested steps.

For nested steps, GITHUB_ACTION will be set to "{ScopeName}.{ContextName}" to ensure no collisions.

A corresponding change will be made on the server so context name is never empty. Generated context names will start with "__".

A follow-up PR is required to avoid tracking "step" context values (outputs/conclusion/result) for generated context names. Waiting on telemetry from the server to confirm it's safe to assume a leading "__" is a generated context name.
2020-07-19 17:19:13 -04:00
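In workflow terms the context name is the step's `id`; a small sketch of the intended behavior (assuming the feature flag described above is enabled):

```yaml
steps:
  - id: build
    # Under the flagged behavior GITHUB_ACTION is the context name "build" rather than the
    # refname; id-less steps get generated context names starting with "__".
    run: echo "GITHUB_ACTION is $GITHUB_ACTION"
```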
Ethan Chiu
f9dca15c63 Composite Run Steps Refactoring (#591)
* Add basic framework for baby steps runner

* Basic logic for adding steps / invoking composite action steps

* Composite Steps Runner MVP

* Fix null object reference error

* Initialize composite

* Comment out code that is handled by stepsrunner

* Add composite clean up step

* Remove previous 'workarounds' from StepsRunner. Clean Up PR

* Remove todo

* Remove todo

* Fix using uninitialized object, yikes

* Remove time delay

* Format handler

* Move output handler into action handler

* Add try to evaluate display name

* Remove while loop yikes

* Abstract away the windows encoding check during step running

* Github context set to {ScopeName}.{ContextName} or {ContextName} if ScopeName is null

* Remove setting result to success since result defaults to success

* Fix windows error

* Fix windows

* revert:

* Windows fix

* Fix Windows Error in Abstraction

* Remove Composite Steps Runner => consolidate into Composite Steps Runner

* Remove unn. attribute in ExecutionContext

* Change protection levels, plus change function name to more clear meaning

* Remove location param

* location pt.2 fix

* Remove outputs step

* Remove temp directory

* new line

* Add ArgUtil not-null check

* better comment

* Change encoding name

* Check count > 0 for composite steps, import System.Threading

* Change function header encodingutil

* Add TODO

* Add await

* Handle Failed Step

* Move over SetAllCompositeOutputs to the handler

* Remove timeout-minutes setting in steps-level

* Use only ExecutionContext

* Move using to the top

* Remove redundant check

* Change function name

* Remove testing code

* Consolidate error code

* Consolidate code

* Change HandleOutput => ProcessCompositeActionOutputs

* Remove set the timeout comment

* Add Cancelling functionality + Remove unn. parameter
2020-07-17 16:31:48 -04:00
eric sciple
0877d9a533 Update StringUtil.cs 2020-07-16 10:30:42 -04:00
eric sciple
d5e40c6a60 Update 0549-composite-run-steps.md 2020-07-15 20:00:45 -04:00
eric sciple
391bc35bb9 Update 0549-composite-run-steps.md 2020-07-15 19:59:13 -04:00
eric sciple
e4267b8434 Update 0549-composite-run-steps.md 2020-07-15 19:57:22 -04:00
TingluoHuang
2709cbc0ea rename master to main. 2020-07-14 13:37:20 -04:00
Ethan Chiu
5e0cde8649 Composite Actions UI (#578)
* Composite Action Run Steps

* Env Flow => Able to get env variables and overwrite current env variables => but it doesn't 'stick'

* clean up

* Clean up trace messages + add Trace debug in ActionManager

* Add debugging message

* Optimize runtime of code

* Change String to string

* Add comma to Composite

* Change JobSteps to a List, Change Register Step function name

* Add TODO, remove unn. content

* Remove unnecessary code

* Fix unit tests

* Fix env format

* Remove comment

* Remove TODO message for context

* Add verbose trace logs which are only viewable by devs

* Initial Start for FileTable stuff

* Progress towards passing FileTable or FileID or FileName

* Sort usings in Composite Action Handler

* Change 0 to location

* Update context variables in composite action yaml

* Add helpful error message for null steps

* Pass fileID to all children token of root action token

* Change confusing term context => templateContext, Eliminate _fileTable and only use ExecutionContext.FileTable + update this table when need be

* Remove unnecessary FileID attribute from CompositeActionExecutionData

* Clean up file path for error message

* Remove todo

* Initial start/framework for output handling

* Outline different class vs Handler approach

* Remove InitializeScope

* Remove InitializeScope

* Fix Workflow Step Env overriding Parent Env

* First Approach for Attaching ID + Group ID to each Composite Action Step

* Add GroupID to the ActionDefinitionData

* starting foundation for handling clean up outputs step

* Pass outputs data to each composite action step to enable set-output functionality

* Create ScopeName for whole composite action.

This will enable us to add to the StepsContext[ScopeName] for the composite action which will allow us to use all these outputs in the cleanup step

* Hook up composite output step to handler => tomorrow implement composite output handler

* Add post composite action step to cleanup outputs => triggers composite output cleanup handler

* Fix Outputs Token handling start. Add individual step scope names.

* Set up Scope Name and Context Name naming system

* Figured out how to pass Parent Execution Context to clean up step

* Figured out how to pass Parent Execution Context and scope names to
clean up step

* Add GetOutput function for StepsContext

* Generate child scope name correctly if parent scope name is null

* Simplify InitializeScope()

* Outputs are set correctly and able to get all final outputs in handler

* Parse through Action Outputs

* Fix null ScopeName + ContextName in CompositeOutputHandler

* Shift over handling of Action Outputs to output handler

* First attempt to fix null retrievals for output variables

* Basic Support for Outputs Done.

* Clean up pt.1

* Refactor outputs to avoid using Action Reference

* Clean up code

* Clean up part 2

* Add clarifying comments for the output handler

* Remove TODO

* Remove env in composite action scope

* Clean up

* Revert back

* revert back

* add back envToken

* Remove unnecessary code

* Add file length check

* Clean up

* Fix logging issue

* Figure out how to handle set-env edge cases

* formatting

* fix unit tests

* Fix windows unit test syntax error

* Fix period

* Sanity check for fileTable add + remove unn. code

* revert back

* Add back line break

* Fix null errors

* Address situation if FileTable is null + add sanity check for adding file to fileTable

* add line

* Revert

* Fix unit tests to instantiate a FileTable

* Fix logic for trimming manifestfile path

* Add null check

* Revert

* Revert

* revert

* spacing

* Add fileTable to testing file, remove ? since we know fileTable should never be null

* Fix Throw logic

* Clarify template outputs token

* Add another type support for outputs to avoid container unit tests errors

* Add mapping for parity

* Build support for new outputs format

* Build support for new outputs format

* Refactor to avoid duplication of action yaml for workflow yaml

* revert

* revert

* revert

* spacing
2020-07-13 17:55:15 -04:00
Ethan Chiu
cb2b323781 Composite Run Steps Outputs (#568)
* Composite Action Run Steps

* Env Flow => Able to get env variables and overwrite current env variables => but it doesn't 'stick'

* clean up

* Clean up trace messages + add Trace debug in ActionManager

* Add debugging message

* Optimize runtime of code

* Change String to string

* Add comma to Composite

* Change JobSteps to a List, Change Register Step function name

* Add TODO, remove unn. content

* Remove unnecessary code

* Fix unit tests

* Fix env format

* Remove comment

* Remove TODO message for context

* Add verbose trace logs which are only viewable by devs

* Initial Start for FileTable stuff

* Progress towards passing FileTable or FileID or FileName

* Sort usings in Composite Action Handler

* Change 0 to location

* Update context variables in composite action yaml

* Add helpful error message for null steps

* Pass fileID to all children token of root action token

* Change confusing term context => templateContext, Eliminate _fileTable and only use ExecutionContext.FileTable + update this table when need be

* Remove unnecessary FileID attribute from CompositeActionExecutionData

* Clean up file path for error message

* Remove todo

* Initial start/framework for output handling

* Outline different class vs Handler approach

* Remove InitializeScope

* Remove InitializeScope

* Fix Workflow Step Env overriding Parent Env

* First Approach for Attaching ID + Group ID to each Composite Action Step

* Add GroupID to the ActionDefinitionData

* starting foundation for handling clean up outputs step

* Pass outputs data to each composite action step to enable set-output functionality

* Create ScopeName for whole composite action.

This will enable us to add to the StepsContext[ScopeName] for the composite action which will allow us to use all these outputs in the cleanup step

* Hook up composite output step to handler => tomorrow implement composite output handler

* Add post composite action step to cleanup outputs => triggers composite output cleanup handler

* Fix Outputs Token handling start. Add individual step scope names.

* Set up Scope Name and Context Name naming system

* Figured out how to pass Parent Execution Context to clean up step

* Figured out how to pass Parent Execution Context and scope names to
clean up step

* Add GetOutput function for StepsContext

* Generate child scope name correctly if parent scope name is null

* Simplify InitializeScope()

* Outputs are set correctly and able to get all final outputs in handler

* Parse through Action Outputs

* Fix null ScopeName + ContextName in CompositeOutputHandler

* Shift over handling of Action Outputs to output handler

* First attempt to fix null retrievals for output variables

* Basic Support for Outputs Done.

* Clean up pt.1

* Refactor outputs to avoid using Action Reference

* Clean up code

* Clean up part 2

* Add clarifying comments for the output handler

* Remove TODO

* Remove env in composite action scope

* Clean up

* Revert back

* revert back

* add back envToken

* Remove unnecessary code

* Add file length check

* Clean up

* Figure out how to handle set-env edge cases

* formatting

* fix unit tests

* Fix windows unit test syntax error

* Fix period

* Sanity check for fileTable add + remove unn. code

* revert back

* Add back line break

* Fix null errors

* Address situation if FileTable is null + add sanity check for adding file to fileTable

* add line

* Revert

* Fix unit tests to instantiate a FileTable

* Fix logic for trimming manifestfile path

* Add null check

* Revert

* Revert

* revert

* spacing

* Add fileTable to testing file, remove ? since we know fileTable should never be null

* Fix Throw logic

* Clarify template outputs token

* Add another type support for outputs to avoid container unit tests errors

* Add mapping for parity

* Build support for new outputs format

* Refactor to avoid duplication of action yaml for workflow yaml

* Move SDK work in ActionManifestManager, Condense Code

* Defer runs evaluation till after for loop to ensure order doesn't matter

* Fix logic error in setting scope and context names

* Add Regex + Add Child Context name null resolution

* move private function to bottom of class
2020-07-13 17:23:19 -04:00
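A hedged sketch of the outputs flow these commits enable (identifiers are illustrative): a composite step sets an output and the action's `outputs` block maps it up to the caller.

```yaml
# action.yml of a hypothetical composite action
outputs:
  random-id:
    value: ${{ steps.gen.outputs.random-id }}
runs:
  using: "composite"
  steps:
    - id: gen
      shell: bash
      # ::set-output was the step-output mechanism at the time of these commits
      run: echo "::set-output name=random-id::$RANDOM"
```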
Ethan Chiu
6c3958f365 Composite Run Steps ADR (#554)
* start

* Inputs + Outputs

* Clarify docs

* Finish Environment

* Add if condition

* Clarify language

* Update 0549-composite-run-steps.md

* timeout-minutes

* Finish

* add relevant example

* Fix syntax

* fix env example

* fix yaml syntax

* Update 0549-composite-run-steps.md

* Update file names, add more relevant example if condition

* Add note to continue-on-error

* Apply changes to If Condition

* bolding

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md

* Syntax support + spacing

* Add guiding principles.

* Update 0549-composite-run-steps.md

* Reverse order.

* Update 0549-composite-run-steps.md

* change from job to step

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md

* Add Secrets

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md

* Fix output example

* Fix output example

* Fix action examples to use using.

* fix output variable name

* update workingDir + env

* Defaults + continue-on-error

* Update Outputs Section

* Eliminate Env

* Secrets

* Update timeout-minutes

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md

* Fix example.

* Remove TODOs

* Update 0549-composite-run-steps.md

* Update 0549-composite-run-steps.md
2020-07-13 12:30:31 -04:00
Ethan Chiu
9d7bd4706b Improve Error Messaging for Actions by Using ExecutionContext's FileTable as Single Source of Truth and by Passing FileID to All Children Tokens. (#564)
* Composite Action Run Steps

* Env Flow => Able to get env variables and overwrite current env variables => but it doesn't 'stick'

* clean up

* Clean up trace messages + add Trace debug in ActionManager

* Add debugging message

* Optimize runtime of code

* Change String to string

* Add comma to Composite

* Change JobSteps to a List, Change Register Step function name

* Add TODO, remove unn. content

* Remove unnecessary code

* Fix unit tests

* Fix env format

* Remove comment

* Remove TODO message for context

* Add verbose trace logs which are only viewable by devs

* Initial Start for FileTable stuff

* Progress towards passing FileTable or FileID or FileName

* Sort usings in Composite Action Handler

* Change 0 to location

* Update context variables in composite action yaml

* Add helpful error message for null steps

* Pass fileID to all children token of root action token

* Change confusing term context => templateContext, Eliminate _fileTable and only use ExecutionContext.FileTable + update this table when need be

* Remove unnecessary FileID attribute from CompositeActionExecutionData

* Clean up file path for error message

* Remove todo

* Fix Workflow Step Env overriding Parent Env

* Remove env in composite action scope

* Clean up

* Revert back

* revert back

* add back envToken

* Remove unnecessary code

* Add file length check

* Clean up

* Figure out how to handle set-env edge cases

* formatting

* fix unit tests

* Fix windows unit test syntax error

* Fix period

* Sanity check for fileTable add + remove unn. code

* revert back

* Add back line break

* Fix null errors

* Address situation if FileTable is null + add sanity check for adding file to fileTable

* add line

* Revert

* Fix unit tests to instantiate a FileTable

* Fix logic for trimming manifestfile path

* Add null check

* Add fileTable to testing file, remove ? since we know fileTable should never be null
2020-07-08 17:15:16 -04:00
Ethan Chiu
5822a38c39 Add bash command for running custom runner (#569) 2020-07-08 11:20:38 -04:00
Ethan Chiu
d42c9da2d7 Composite Actions: Support Env Flow (#557)
* Composite Action Run Steps

* Env Flow => Able to get env variables and overwrite current env variables => but it doesn't 'stick'

* clean up

* Clean up trace messages + add Trace debug in ActionManager

* Add debugging message

* Optimize runtime of code

* Change String to string

* Add comma to Composite

* Change JobSteps to a List, Change Register Step function name

* Add TODO, remove unn. content

* Remove unnecessary code

* Fix unit tests

* Fix env format

* Remove comment

* Remove TODO message for context

* Add verbose trace logs which are only viewable by devs

* Sort usings in Composite Action Handler

* Change 0 to location

* Update context variables in composite action yaml

* Add helpful error message for null steps

* Fix Workflow Step Env overriding Parent Env

* Remove env in composite action scope

* Clean up

* Revert back

* revert back

* add back envToken

* Remove unnecessary code

* Figure out how to handle set-env edge cases

* formatting

* fix unit tests

* Fix windows unit test syntax error
2020-07-08 10:16:51 -04:00
eric sciple
121deedeb5 Fix trailing '.0' for Int64 values (#572) 2020-06-30 17:25:47 -04:00
Ethan Chiu
a0942ed345 Composite Actions Support for Multiple Run Steps (#549)
* Composite Action Run Steps

* Clean up trace messages + add Trace debug in ActionManager

* Change String to string

* Add comma to Composite

* Change JobSteps to a List, Change Register Step function name

* Add TODO, remove unn. content

* Remove unnecessary code

* Fix unit tests

* Add verbose trace logs which are only viewable by devs

* Sort usings in Composite Action Handler

* Change 0 to location

* Update context variables in composite action yaml

* Add helpful error message for null steps
2020-06-23 15:35:32 -04:00
TingluoHuang
7cef9a27ca release 2.267.0 runner. 2020-06-23 14:05:28 -04:00
Tingluo Huang
df7e16954e print runner and machine name to log. (#539) 2020-06-23 13:57:37 -04:00
eric sciple
4e7d27a53c remove temporary logic when resolving action download info (#550) 2020-06-15 13:13:47 -04:00
Lokesh Gopu
89d1418e48 Update exception message (#540) 2020-06-11 17:25:50 -04:00
Tingluo Huang
e728b8594d fix race condition. (#538) 2020-06-11 16:17:24 -04:00
Tingluo Huang
de4490d06d Restore SELinux context on service file when SELinux is enabled (#525) 2020-06-11 15:40:09 -04:00
Tingluo Huang
2e800f857e Skip search $PATH on command with fully qualified path (#526) 2020-06-11 13:52:42 -04:00
Tingluo Huang
312c7668a8 Fix DataContract with Token service (#532) 2020-06-11 12:11:35 -04:00
Tingluo Huang
eaf39bb058 add libicu66 for Ubuntu 20.04 (#535) 2020-06-11 12:11:13 -04:00
eric sciple
5815819f24 Resolve action download info (#515) 2020-06-09 08:53:28 -04:00
Ethan Chiu
1aea046932 Add substep for developer flow for clarity (#523) 2020-06-08 10:47:58 -04:00
Ethan Chiu
eda463601c Update Links and Language to Git + VSCode (#522) 2020-06-08 10:19:17 -04:00
Nick Fields
f994ae0542 Reduce input validation warnings (#506)
* Only raise a single warning for unexpected inputs

* Update invalid input test to raise single warning
2020-06-05 23:09:14 -04:00
Tingluo Huang
3c5aef791c Fix null ref exception in SecretMasker caused by hashfiles timeout. (#516) 2020-06-05 23:02:10 -04:00
Tingluo Huang
c4626d0c3a Remove SPS/Token migration code. Remove GHES url manipulate code. (#513)
* Remove SPS/Token migration code. Remove GHES url manipulate code.

* feedback.
2020-06-03 23:24:53 -04:00
eric sciple
416a7ac4b8 prepare to switch to service resolves archive download info (#508) 2020-06-02 17:21:50 -04:00
TingluoHuang
11435857e4 prepare 2.263.0 runner release. 2020-05-21 15:49:10 -04:00
Tingluo Huang
6f260012a3 Fix inputs validation warning, fix post step display name, fix worker crash due to error in step.env (#490) 2020-05-21 11:09:50 -04:00
eric sciple
4fc87ddfc6 fix problem matcher for GHES (#488) 2020-05-19 16:15:03 -04:00
eric sciple
b45c1b9440 switch GITHUB_URL to GITHUB_SERVER_URL (#482) 2020-05-18 13:02:30 -04:00
Quan TRAN
73307c0a30 jq returns "null" if the field does not exist (#478) 2020-05-14 11:09:20 -04:00
Tingluo Huang
cd8e4ddba1 Validate inputs only for repo action, no warning for small delay. (#476)
* validate inputs only for repo action, no warning for small delay.

* l0
2020-05-12 16:09:13 -04:00
Tingluo Huang
abf59bdcb6 Fix configure as service with runner name has space. (#474) 2020-05-12 16:08:50 -04:00
Brian Cristante
09cf59c1e0 Use an env var to point to an Actions Service dev instance (#468)
* Use an environment variable for the testing backdoor

* Make the env var a boolean

We'll still have to pass the URL on the command line

* Empty commit to rerun CI
2020-05-11 17:14:02 -04:00
TingluoHuang
7a65236022 prepare 2.262.0 runner release. 2020-05-11 15:05:59 -04:00
David Kale
462b5117c8 docker build using -f instead of implied default (#471)
* pass -f to docker build

* Wrong place

* build path

* Also pass docker context path

* Tidy up format

* PR Feedback
2020-05-11 13:57:31 -04:00
Tingluo Huang
6922f3cb86 sps/token migration tweak, ActionResult casing. (#462) 2020-05-11 12:36:35 -04:00
Tingluo Huang
911135e66c add help info for '--labels' (#472) 2020-05-11 12:36:16 -04:00
PJ Quirk
01c9a8a8af Remove community actions munging and add dotcom fallback for downloading actions (#469)
* Fallback to dotcom rather than munged community actions

* Encapsulate the link and the auth details

* Rename the method to be clearer

* Remove BOM
2020-05-11 12:23:02 -04:00
eric sciple
33d2d2c328 update checkout@v1 for GHES (#470) 2020-05-11 11:41:16 -04:00
Justin Hutchings
a246b3b29d Add CodeQL Analysis workflow (#459)
* Add CodeQL Analysis workflow

* Fix path

* Add manual build step

Import manual build step from build.yml
2020-05-07 11:17:34 -04:00
Brian Cristante
c7768d4a7b Adapt create-latest-svc.sh to work with GHES (#452)
* Add README

* Adapt create-latest-svc to work with GHES

* Fix inverted conditions
2020-04-27 23:53:53 -04:00
Tingluo Huang
70729fb3c4 Help trace worker crash in Kusto. (#450)
* Help trace worker crash in Kusto.

* more

* feedback.
2020-04-27 23:44:17 -04:00
Tingluo Huang
1470a3b6e2 better error when runner removed from service. (#441) 2020-04-27 23:02:00 -04:00
eric sciple
2fadf430e4 post-alpha fixes for github.url github.api_url and github.graphql_url (#451) 2020-04-24 13:38:59 -04:00
PJ Quirk
f798f5606b Use the API_URL and munge action URLs for GHES (#437)
* First pass at logic for GHES, not all correct

* Need to mock out file downloading

* Allowed for mocking of HTTP responses

* Added test for builtin GHES action download

* More tests

* Don't retry on action 404

* Remove commented out code

* Add a using statement back, because Windows

* Make windows happy again

* Another windows fix

* Always delete the cache since it isn't fully implemented

* Use RunnerService base class

* Add examples, update URL path

* Remove forceDotCom

* Fix a bug

* Remove a test that's no longer relevant

* PR feedback

* Add missing return

* More trace info

* Use the new agreed-upon format

* Use the auth token since we're hitting GHES directly

* Fixing tests on windows

* Fixed one more test
2020-04-23 17:02:13 -04:00
Tingluo Huang
3f7a01af93 add secret masker for trimming double quotes. (#440) 2020-04-21 22:07:55 -04:00
Tingluo Huang
d5c54f9819 Raise warning when action input does not match action.yml. (#429) 2020-04-21 19:42:53 -04:00
Mathias LANG
9f78ad3b34 Make release notes code blocks copy-paste-able (#430)
The release notes used C-style comments (`//`) instead of shell-style (`#`),
causing the code snippet to not be easily copy-paste-able.

Additionally, the Windows code fence gained an explicit `powershell` language tag,
as well as a comment directing users towards `powershell` over `cmd`.
2020-04-19 22:11:44 -04:00
Jan Pazdziora
97883c8cd5 Fix spelling of RHEL and CentOS. (#436) 2020-04-19 16:53:06 -04:00
Tingluo Huang
c5fa9fb062 Print node version in debug instead of output. (#433) 2020-04-17 11:53:51 -04:00
Bryan MacFarlane
b2dcdc21dc Sample scripts to automate scaleable runners (#427) 2020-04-17 11:08:45 -04:00
David Kale
c126b52fe5 ArgumentNullException: Value cannot be null, for anonymous volume mounts (#426)
* Dont check if path starts with null

* Check SourceVolumePath not MountVolume obj

* Prefer string.IsNullOrEmpty
2020-04-14 14:36:39 -04:00
Lokesh Gopu
117ec1fff9 Fix optional parameter for unattended (#425) 2020-04-14 12:14:26 -04:00
Tingluo Huang
d5c7097d2c support 'pre' execution for actions. (#389) 2020-04-13 21:46:30 -04:00
Lokesh Gopu
f9baec4b32 Added support for custom labels (#414)
* Added support for custom labels

* ignore case

* Added interactive config for labels

* Fixing L0s

* pr comments
2020-04-13 21:33:13 -04:00
chenrui
a20ad4e121 Update releaseVersion to v2.168.0 (#420)
v2.168.0 is the latest release version; update the meta file to reflect that.
2020-04-11 13:59:20 -04:00
Tingluo Huang
2bd0b1af0e launch middle man process on macOS to workaround SIP limit (#416) 2020-04-09 16:13:06 -04:00
Tingluo Huang
baa6ded3bc Better Kusto Tracing for self-hosted runner. (#405) 2020-04-09 14:33:16 -04:00
TingluoHuang
7817e1a976 Prepare 2.169.0 runner release for GHES Alpha. 2020-04-08 11:32:56 -04:00
Tingluo Huang
d90273a068 Raise warning when volume mount root. (#413) 2020-04-08 11:17:54 -04:00
TingluoHuang
2cdde6cb16 fix L0 test. 2020-04-07 14:04:37 -04:00
TingluoHuang
1f52dfa636 bump dev-dependency version. 2020-04-07 14:00:37 -04:00
dependabot[bot]
83b5742278 Bump acorn from 6.4.0 to 6.4.1 in /src/Misc/expressionFunc/hashFiles (#371)
Bumps [acorn](https://github.com/acornjs/acorn) from 6.4.0 to 6.4.1.
- [Release notes](https://github.com/acornjs/acorn/releases)
- [Commits](https://github.com/acornjs/acorn/compare/6.4.0...6.4.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-04-07 13:49:48 -04:00
eric sciple
ba69b5bc93 Fix runner config IsHostedServer detection for GHES alpha (#401) 2020-04-01 17:29:57 -04:00
Tingluo Huang
0e8777ebda ADR for wrapper action (#361)
* wrapper action adr.

* rename

* updates.

* Update 0361-wrapper-action.md
2020-03-31 10:11:02 -04:00
Bryan MacFarlane
a5f06b3ec2 ADR 397: Configuration time custom labels (#397)
Since configuring self-hosted runners is commonly automated via scripts, labels need to be creatable during configuration. The runner currently registers the built-in labels (os, arch) during registration but does not accept labels via command-line args to extend the registered set.
2020-03-30 17:46:32 -04:00
Tingluo Huang
be325f26a6 support config with GHES url. (#393) 2020-03-30 14:48:02 -04:00
Josh Soref
dec260920f spelling: deprecate (#394) 2020-03-30 07:45:06 -04:00
eric sciple
b0a1294ef5 Fix API URL for GHES (#390) 2020-03-27 00:16:02 -04:00
Tingluo Huang
3d70ef2da1 update workflow schema file. (#388) 2020-03-26 23:01:17 -04:00
eric sciple
e23d68f6e2 add github.url and github.api_url for ghes alpha (#386) 2020-03-25 15:11:52 -04:00
eric sciple
dff1024cd3 cache actions on premises (#381) 2020-03-24 21:51:37 -04:00
TingluoHuang
9fc0686dc2 prepare 2.168.0 runner release. 2020-03-24 16:25:11 -04:00
David Kale
ab001a7004 Add expanded volumes strings to container mounts (#384) 2020-03-23 18:53:01 -04:00
Tingluo Huang
178a618e01 expose GITHUB_REPOSITORY_OWNER. (#378) 2020-03-20 13:02:07 -04:00
eric sciple
dfaf6e06ee switch hashFiles to extension function (#362) 2020-03-18 12:08:51 -04:00
Tingluo Huang
b0a71481f0 support defaults. (#369) 2020-03-17 23:40:37 -04:00
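`defaults` lets a workflow or job declare `shell` and `working-directory` once for all of its run steps; a minimal sketch (values are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: src
    steps:
      - uses: actions/checkout@v2
      - run: ./dev.sh layout   # runs in ./src under bash because of the defaults above
```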
Tingluo Huang
88875ca1b0 set steps.<id>.outcome and steps.<id>.conclusion. (#372) 2020-03-17 21:18:42 -04:00
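`steps.<id>.outcome` reflects a step's result before `continue-on-error` is applied and `steps.<id>.conclusion` the result after it; a sketch (the script name is hypothetical):

```yaml
steps:
  - id: flaky
    run: ./run-flaky-test.sh   # hypothetical script
    continue-on-error: true
  - if: steps.flaky.outcome == 'failure'
    run: echo "the step failed, but its conclusion is '${{ steps.flaky.conclusion }}' so the job continues"
```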
Tingluo Huang
a5eb8cb5c4 set CI=true when launch process in actions runner. (#374) 2020-03-17 19:58:12 -04:00
Josh Soref
41f4ca3414 grammar (#373) 2020-03-16 22:19:57 -04:00
eric sciple
aa9f5bf070 adr step output and conclusion (#274) 2020-03-16 14:56:07 -04:00
Tingluo Huang
2d6042421f add support for job outputs. (#365)
* add support for job outputs.
2020-03-14 17:54:58 -04:00
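Job outputs let a downstream job consume values produced by an upstream job; a minimal sketch (names and values are illustrative):

```yaml
jobs:
  producer:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.get.outputs.version }}
    steps:
      - id: get
        run: echo "::set-output name=version::2.168.0"
  consumer:
    needs: producer
    runs-on: ubuntu-latest
    steps:
      - run: echo "packaging ${{ needs.producer.outputs.version }}"
```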
Tingluo Huang
c8890d0f3f Expose job name as $GITHUB_JOB (#366) 2020-03-12 20:47:25 -04:00
Konrad Pabjan
53fb6297cb Change problem matchers output to debug (#363) 2020-03-11 21:52:46 -04:00
Tingluo Huang
f9b5d626c5 load and print machine setup info from .setup_info (#364) 2020-03-11 10:36:56 -04:00
Tingluo Huang
d34afb54b1 ADR for expose runner's machine info in log. (#354)
* ADR for expose runner's machine info in log.

* rename

* Update docs/adrs/0354-runner-machine-info.md

Co-Authored-By: Hugo van Kemenade <hugovk@users.noreply.github.com>

* Update docs/adrs/0354-runner-machine-info.md

Co-Authored-By: Hugo van Kemenade <hugovk@users.noreply.github.com>

* Update docs/adrs/0354-runner-machine-info.md

Co-Authored-By: Hugo van Kemenade <hugovk@users.noreply.github.com>

* Update 0354-runner-machine-info.md

Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
2020-03-10 15:45:35 -04:00
Julio Barba
e291ebc58a Add runner auth documentation (#357)
Add runner authentication/authorization documentation.

This doc explains how auth is used at all phases of the runner lifetime (i.e. configuration, listener start, and workflow run), for both self-hosted and hosted runners.
2020-03-09 13:01:41 -04:00
Tingluo Huang
6bec1e3bb8 Switch to use token service instead of SPS for exchanging oauth token. (#325)
* Gracefully switch the runner to use Token Service instead of SPS.

* PR feedback.

* feedback2

* report error.
2020-03-04 21:40:58 -05:00
eric sciple
0cba42590f preserve workflow file/line/column for better error messages (#356) 2020-03-03 22:38:19 -05:00
Josh Gross
94e7560ccd Add event type to credential call (#352)
* Add event type to credential call

* Move events to constants
2020-03-02 11:22:45 -05:00
Lokesh Gopu
d80ab095a5 Update runnerversion (#348)
* Update runnerversion

* Update runnerversion
2020-02-28 13:28:33 -05:00
Josh Gross
2efd6f70e2 Use the Uri Scheme in the register runner url (#345) 2020-02-25 18:30:33 -05:00
Lokesh Gopu
a6f144b014 Update Runner Register GitHub API URL to Support Org-level Runner (#339)
* Update GitHub API URL

* pr comments

* Updated GitHub API URL
2020-02-24 09:15:15 -05:00
eric sciple
5294a3ee06 commands translate file path from container action (#331) 2020-02-12 21:07:43 -05:00
Tingluo Huang
745b90a8b2 Revert "Update Base64 Encoders to deal with suffixes (#284)" (#330)
This reverts commit c45aebc9ab.
2020-02-12 14:26:30 -05:00
Tingluo Huang
0db908da8d Use authenticate endpoint for testing runner connection. (#311)
* use authenticate endpoint for testing runner connection.

* PR feedback.
2020-02-05 16:56:38 -05:00
Thomas Boop
68de3a94be Remove Temporary Build Step (#316)
* Remove Temporary Build Step

* Updated dev.sh to set path for find
2020-02-04 12:59:49 -05:00
Tingluo Huang
a0a590fb48 setup/evaluate env context after setup steps context. (#309) 2020-01-30 22:14:14 -05:00
Christopher Johnson
87a232c477 Fix windows directions in release notes (#307) 2020-01-29 12:58:09 -05:00
TingluoHuang
a3c2479a29 prepare 2.165.0 runner release. 2020-01-27 22:14:06 -05:00
Thomas Boop
c45aebc9ab Update Base64 Encoders to deal with suffixes (#284)
* Update Base64 Encoders to deal with suffixes

* Set UriDataEscape to public for unit tests
2020-01-27 21:38:31 -05:00
Thomas Boop
b676ab3d33 Add ADR For Base64 Masking Improvements (#297)
* Base64 Secret Masking ADR

* slight addendums

* Update and rename 0000-base64-masking-trailing-characters.md to 0297-base64-masking-trailing-characters.md
2020-01-27 21:38:01 -05:00
Tingluo Huang
0a6bac355d include step.env as part of env context. (#300) 2020-01-27 15:54:28 -05:00
Tingluo Huang
eb78d19b17 Set both http_proxy and HTTP_PROXY env for runner/worker processes. (#298) 2020-01-27 11:04:35 -05:00
David Kale
17970ad1f9 Set http_proxy and related env vars for containers (#304)
* WIP add in basic passthrough for proxy env vars

* Add http_proxy vars after container env is created
2020-01-27 10:56:18 -05:00
Alberto Gimeno
2e0e8eb822 Change prompt message when removing a runner (#303)
* Change prompt message to be consistent with the UI/API

* Update test
2020-01-26 18:40:06 -05:00
Tingluo Huang
2a506cc556 Update 0278-env-context.md (#299) 2020-01-22 20:09:09 -05:00
Tingluo Huang
43dd34820b Follow redirect link for download runner package
GitHub release assets download link returns 302 to storage.
2020-01-21 12:23:30 -05:00
David Kale
746c9d9ec0 Trace javascript action exit code instead of user logs (#290)
* Trace javascript action exit code instead of user logs

* Debug instead of trace
2020-01-21 11:23:30 -05:00
Tingluo Huang
fa2ecfcc4c Fix page log name isn't unique. (#295) 2020-01-21 11:08:37 -05:00
Alberto Gimeno
c59c0e2ded Support action.yaml file (#288)
* Support action.yaml file

* L0 tests.

* l0

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2020-01-20 12:22:59 -05:00
Tingluo Huang
7a382facb3 default post-job action's condition to always(). (#293) 2020-01-20 09:28:29 -05:00
Tingluo Huang
e9ae42693f change hashFiles() expression to use @actions/glob. (#268) 2020-01-19 10:32:13 -05:00
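`hashFiles()` is most commonly used to key caches on lockfile contents; a minimal sketch:

```yaml
steps:
  - uses: actions/cache@v1
    with:
      path: ~/.npm
      key: npm-${{ hashFiles('**/package-lock.json') }}
```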
David Kale
9cafe8c028 Update --help (#282)
* Update --help

* Tiny indent

* Remove once option
2020-01-17 13:55:11 -05:00
Thomas Boop
1484c3fb03 Add Temporary Step to fix build on windows (#285)
* Add Temporary Step to fix build on windows

* Update workflow
2020-01-16 14:19:54 -05:00
eric sciple
53d632706d adr: command input echoing (#280) 2020-01-14 14:55:17 -05:00
eric sciple
d6179242ca adr: hashFiles expression function (#279) 2020-01-14 14:54:37 -05:00
eric sciple
0da38a6924 adr: add env context (#278) 2020-01-14 14:54:29 -05:00
eric sciple
b19e5d7924 ADR: Run action shell options (#277) 2020-01-14 14:54:20 -05:00
eric sciple
80ac4a8964 ADR: problem matchers (#276) 2020-01-14 14:54:04 -05:00
eric sciple
02639a2092 translate problem matcher file to host path (#272) 2020-01-13 15:24:57 -05:00
eric sciple
a727194742 allow container to be null/empty (#266) 2020-01-12 23:02:36 -05:00
eric sciple
a9c58d7398 Handle escaped '%' in commands data section (#200) 2020-01-12 03:30:26 -05:00
Bryan MacFarlane
e15414eb5e proxy support ADR (#263) 2020-01-09 11:52:04 -05:00
Tingluo Huang
4ab1e645c3 Fix typo in error message. (#260) 2020-01-07 15:13:05 -05:00
Tingluo Huang
584f6b6ca3 upload log on runner force kill worker. (#255) 2020-01-06 13:04:23 -05:00
Tingluo Huang
abc65839f3 detect source file path without using env. (#257) 2020-01-06 12:56:15 -05:00
Tingluo Huang
06292aa118 Expose whether debug is on/off via RUNNER_DEBUG. (#253) 2020-01-03 21:23:46 -05:00
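A small sketch of how a step might react to the new variable (the extra diagnostics are illustrative): `RUNNER_DEBUG` is set to `1` when a run is started with debug logging enabled.

```yaml
steps:
  - shell: bash
    run: |
      # RUNNER_DEBUG is 1 when debug logging is enabled for the run
      if [ "${RUNNER_DEBUG:-0}" = "1" ]; then
        echo "debug logging is on; printing extra diagnostics"
        env | sort
      fi
```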
Joseph Petersen
ac1a076a3b Treat warnings as errors (#249)
* Treat warnings as errors

* fix warnings
2019-12-21 09:51:41 -05:00
Joseph Petersen
300bc67950 ignore .idea folder (#248)
created by Rider
2019-12-21 09:50:07 -05:00
Tim Heuer
289c7f36a2 Minor typo (#247)
capitalize .NET
2019-12-20 20:20:44 -05:00
Julio Barba
b89d7fb8ef Remove old "v1" artifact download/publish code (#212)
* Remove old v1 artifact download/publish code
* Remove the Build2 REST API SDK
2019-12-19 16:02:00 -05:00
Josh Gross
5fd705bb84 Update LICENSE (#242) 2019-12-19 15:22:49 -05:00
Julio Barba
9e37732401 Verify that the Windows service has started successfully (#236) 2019-12-19 14:34:26 -05:00
Bryan MacFarlane
6c70d53eea clean up some unneeded dockerfiles 2019-12-19 09:16:43 -05:00
Bryan MacFarlane
f791e2d512 update contributions md 2019-12-19 09:03:14 -05:00
Bryan MacFarlane
f1e36651ad build workflow ignores md changes 2019-12-19 08:54:43 -05:00
Bryan MacFarlane
be24fea81b update issue templates 2019-12-19 08:34:17 -05:00
Bryan MacFarlane
84ca2c05ce update readme 2019-12-19 08:20:39 -05:00
Bryan MacFarlane
2249560cec update readme 2019-12-18 23:15:30 -05:00
Bryan MacFarlane
2d4b821abe update readme 2019-12-18 23:13:22 -05:00
Bryan MacFarlane
371bf8e607 update readme and contributions 2019-12-18 22:49:31 -05:00
Tingluo Huang
9ba11da490 move .sln file. (#238) 2019-12-18 20:13:57 -05:00
Tingluo Huang
40302373ba Create releaseVersion (#237) 2019-12-18 15:39:26 -05:00
Tingluo Huang
9a08f7418f delete unused files. (#235) 2019-12-18 15:28:36 -05:00
Tingluo Huang
80b6038cdc consume dotnet core 3.1 in runner. (#213) 2019-12-18 15:09:03 -05:00
David Kale
70a09bc5ac shell from prependpath (#231)
* Prepend path before locating shell tool

* Join optional prepended path to path before searching it

* Use prepended path when locating the shell tool

* Additional prependPath location

* Also use prepended paths when writing out run details

* Small tweak to undo unnecessary change
2019-12-18 15:00:12 -05:00
Tingluo Huang
c6cf1eb3f1 Release runner using actions (#223)
* update runner release workflow

* trim script.

* feedback.
2019-12-18 14:56:37 -05:00
Julio Barba
50d979f1b2 Bring back tools folder fallback code (#232) 2019-12-17 18:21:13 -05:00
Tingluo Huang
91b7e7a07a delete more unused code. (#230)
* delete more unused code.

* pr feedback.
2019-12-17 16:47:14 -05:00
Tingluo Huang
d0a4a41a63 delete un-used code. (#218) 2019-12-16 17:05:26 -05:00
Julio Barba
c3c66bb14a Replace remaining Agent -> Runner references (#229) 2019-12-16 15:45:00 -05:00
Tingluo Huang
86df779fe9 expose github.run_id and github.run_number to action runtime env. (#224) 2019-12-16 15:23:55 -05:00
David Kale
1918906505 First pass (#221) 2019-12-16 14:53:19 -05:00
Julio Barba
9448135fcd Replace a few more instances Agent -> Runner (#228) 2019-12-16 11:51:08 -05:00
Tingluo Huang
f3aedd86fd Update AGENT_ALLOW_RUNASROOT to RUNNER_ALLOW_RUNASROOT (#227)
* Update config.sh

* Update run.sh
2019-12-16 11:27:46 -05:00
Mike Coutermarsh
d778f13dee Remove runner flow: Change from PAT to "deletion token" in prompt (#225)
* Updating prompt deletion token

Currently, if you leave the token off the command, we show "Enter your personal access token:", which won't work. This updates the prompt to ask for a "deletion token" instead.

* Call correct function in test

* Fix command text in test
2019-12-15 21:38:42 -05:00
Tingluo Huang
9bbbca9e5d Prepare 2.163.0 runner release. 2019-12-12 15:20:39 -05:00
Tingluo Huang
2cac011558 Load and set env from .env file before creating HostContext. (#220) 2019-12-12 14:56:45 -05:00
Julio Barba
f78d35dc4e Trim Build2 SDK (#219)
* Trim Build2 SDK REST API methods
* Remove unused files
2019-12-12 13:53:12 -05:00
David Kale
181dac1c07 Update node external to 12.13.1 (#215)
* Update node external to 12.13.1

* Typo

* Typo2
2019-12-12 09:38:48 -05:00
eric sciple
ab87b39f53 better repo matching for issue file path (#208) 2019-12-11 14:21:26 -05:00
Julio Barba
a3c6a8c201 Introduce name config argument (#217) 2019-12-11 13:26:06 -05:00
Julio Barba
275ab753a1 Runner cleanup - continuation (#209)
* Agent/AgentCredential -> Runner/RunnerCredential
* Test trait rename: Agent -> Runner
* Enable remaining RunnerL0 tests
* Remove job message PII variable masking code
* Remove unused Agent.ToolsDirectory variable
* Misc test Agent -> Runner renaming
* Some more misc cleaning
2019-12-09 17:54:41 -05:00
Tingluo Huang
3ed80b7c10 Fix L0 tests, add/update runner release yaml. (#214) 2019-12-09 16:11:00 -05:00
Tingluo Huang
d81a7656a4 Add Proxy Support for self-hosted runner. (#206) 2019-12-09 15:15:54 -05:00
Julio Barba
56e18f3606 Update .NET install SH script (again) (#210) 2019-12-04 17:57:16 -05:00
Julio Barba
cd2cec8282 Another runner code cleanup round (#197)
* Remove remaining non-SDK references of capabilities/demands
* Remove unused Runner.Common constants
* Remove more variables
* Clean up RU link, and named-pipe support
* Remove NotificationSocketAddress
* Re-add legacy OnPremises JobDispatcher code (commented out)
* More misc cleanup
2019-12-04 10:18:37 -05:00
Julio Barba
f8829feb63 Update dotnet-install scripts (#207) 2019-12-03 17:13:03 -05:00
Julio Barba
b061ec410f Revert: Switch publish agent package task to direct to new pool 2019-12-02 16:54:19 -05:00
Julio Barba
2b63b9c379 Prepare 2.162.0 runner release (#204) 2019-12-02 16:16:35 -05:00
Julio Barba
d93fb70a3e Fix PrepareActions_DownloadActionFromGraph test (#205) 2019-12-02 16:03:42 -05:00
Julio Barba
4ce1bfb58a Switch publish agent package task to direct to new pool 2019-12-02 14:51:46 -05:00
Julio Barba
7a6d9dc5c8 Implement new runner service name convention (#193)
* Limit service name to 80 characters
* Add L0 tests
* New service name convention
* Make RepoOrOrgName a computed property
* Add service name sanitizing logic with L0 test
2019-11-27 14:44:29 -05:00
Julio Barba
de29a39d14 Support downloading/publishing artifacts from Pipelines endpoint (#188)
* Support downloading/publishing artifacts from Pipelines endpoint
* Remove `Path` from everywhere
* Remove unused JobId argument
* PR feedback
* More PR feedback
2019-11-25 13:30:44 -05:00
eric sciple
7d505f7f77 problem matcher default severity (#203) 2019-11-22 13:21:46 -05:00
Thomas Boop
159e4c506a Updated Release Notes to use CLI Downloads (#196)
* Updated Release Notes to use CLI Downloads

* Add Config and Run steps

* Specify Root Drive for Windows

* Remove unactionable steps from readme config
2019-11-19 16:27:36 -05:00
David Kale
45c19eb7cb 150: Support more cpu architectures (#184)
* Cross compile for win-x86, linux-arm, linux-arm64

* Build with actions instead

* Remove win-x86

* Preserve CURRENT_PLATFORM in dev.sh

* build.yaml

* Fix formatting. Remove piplines

* Use 4 space indent consistently

* x32 -> x86

* TEMP: Only test when platform === target runtime

Fix arm64 node externals url

* win-x86 externals

* Temporarily bench rhel

* Add RHEL6, skip L0 on arm for now

* Add stub for downloading new node externals when they are ready

* Remove RHEL6

* Package based on new runtime names

* Remove unused rhel from matrix includes

* Update release, add packages

* RID typo

* Can't cross-test arm on x64 hosts

* New arch is a feature

Don't release x86 until we have an e2e test machine

* Fix version

* Get version from file to avoid exec error during package on x64 host for arm package

* Update Release Notes for 2.161.0 (#195)

* More cleanup

* Update release notes
2019-11-13 11:26:06 -05:00
970 changed files with 21695 additions and 97674 deletions

@@ -1,10 +0,0 @@
## Runner Version and Platform
Version of your runner?
OS of the machine running the runner? OSX/Windows/Linux/...
## What's not working?
Please include error messages and screenshots.
## Runner and Worker's Diagnostic Logs
Logs are located in the runner's `_diag` folder. The runner logs are prefixed with `Runner_` and the worker logs are prefixed with `Worker_`. All sensitive information should already be masked out, but please double-check before pasting here.

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,34 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Run '....'
3. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
## Runner Version and Platform
Version of your runner?
OS of the machine running the runner? OSX/Windows/Linux/...
## What's not working?
Please include error messages and screenshots.
## Job Log Output
If applicable, include the relevant part of the job / step log output here. All sensitive information should already be masked out, but please double-check before pasting here.
## Runner and Worker's Diagnostic Logs
If applicable, add relevant diagnostic log information. Logs are located in the runner's `_diag` folder. The runner logs are prefixed with `Runner_` and the worker logs are prefixed with `Worker_`. Each job run correlates to a worker log. All sensitive information should already be masked out, but please double-check before pasting here.


@@ -0,0 +1,27 @@
---
name: Feature Request
about: Create a request to help us improve
title: ''
labels: enhancement
assignees: ''
---
Thank you 🙇‍♀ for wanting to create a feature in this repository. Before you do, please ensure you are filing the issue in the right place. Issues should only be opened if the issue **relates to code in this repository**.
* If you have found a security issue [please submit it here](https://hackerone.com/github)
* If you have questions or issues with the service, writing workflows or actions, then please [visit the GitHub Community Forum's Actions Board](https://github.community/t5/GitHub-Actions/bd-p/actions)
* If you are having an issue or question about GitHub Actions then please [contact customer support](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-github-actions#contacting-support)
If you have a feature request that is relevant to this repository, the runner, then please include the information below:
**Describe the enhancement**
A clear and concise description of the feature or enhancement you need.
**Code Snippet**
If applicable, add a code snippet.
**Additional information**
Add any other context about the feature here.
NOTE: if the feature request has been agreed upon then the assignee will create an ADR. See docs/adrs/README.md


@@ -1,20 +1,24 @@
name: Runner CI
on:
  workflow_dispatch:
  push:
    branches:
    - master
    - main
    - releases/*
    paths-ignore:
    - '**.md'
  pull_request:
    branches:
    - '*'
    paths-ignore:
    - '**.md'
jobs:
  build:
    strategy:
      matrix:
        runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, osx-x64 ]
        # runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, win-x86, osx-x64 ]
        include:
        - runtime: linux-x64
          os: ubuntu-latest
@@ -36,10 +40,6 @@ jobs:
          os: windows-latest
          devScript: ./dev
        # - runtime: win-x86
        #   os: windows-latest
        #   devScript: ./dev
    runs-on: ${{ matrix.os }}
    steps:
    - uses: actions/checkout@v1

35
.github/workflows/codeql.yml vendored Normal file
View File

@@ -0,0 +1,35 @@
name: "Code Scanning - Action"
on:
push:
schedule:
- cron: '0 0 * * 0'
jobs:
CodeQL-Build:
strategy:
fail-fast: false
# CodeQL runs on ubuntu-latest, windows-latest, and macos-latest
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
# Override language selection by uncommenting this and choosing your languages
# with:
# languages: go, javascript, csharp, python, cpp, java
- name: Manual build
run : |
./dev.sh layout Release linux-x64
working-directory: src
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

196
.github/workflows/release.yml vendored Normal file
View File

@@ -0,0 +1,196 @@
name: Runner CD
on:
workflow_dispatch:
push:
paths:
- releaseVersion
jobs:
check:
if: startsWith(github.ref, 'refs/heads/releases/') || github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
# Make sure ./releaseVersion matches ./src/runnerversion
# Query GitHub releases to ensure the version is not already used
- name: Check version
uses: actions/github-script@0.3.0
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
const core = require('@actions/core')
const fs = require('fs');
const runnerVersion = fs.readFileSync('${{ github.workspace }}/src/runnerversion', 'utf8').replace(/\n$/g, '')
const releaseVersion = fs.readFileSync('${{ github.workspace }}/releaseVersion', 'utf8').replace(/\n$/g, '')
if (runnerVersion != releaseVersion) {
console.log('Request Release Version: ' + releaseVersion + '\nCurrent Runner Version: ' + runnerVersion)
core.setFailed('Version mismatch! Make sure ./releaseVersion match ./src/runnerVersion')
return
}
try {
const release = await github.repos.getReleaseByTag({
owner: '${{ github.event.repository.owner.name }}',
repo: '${{ github.event.repository.name }}',
tag: 'v' + runnerVersion
})
core.setFailed('Release with same tag already created: ' + release.data.html_url)
} catch (e) {
// We are good to create the release if a release with the same tag doesn't exist
if (e.status != 404) {
throw e
}
}
build:
needs: check
strategy:
matrix:
runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, osx-x64 ]
include:
- runtime: linux-x64
os: ubuntu-latest
devScript: ./dev.sh
- runtime: linux-arm64
os: ubuntu-latest
devScript: ./dev.sh
- runtime: linux-arm
os: ubuntu-latest
devScript: ./dev.sh
- runtime: osx-x64
os: macOS-latest
devScript: ./dev.sh
- runtime: win-x64
os: windows-latest
devScript: ./dev
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v1
# Build runner layout
- name: Build & Layout Release
run: |
${{ matrix.devScript }} layout Release ${{ matrix.runtime }}
working-directory: src
# Run tests
- name: L0
run: |
${{ matrix.devScript }} test
working-directory: src
if: matrix.runtime != 'linux-arm64' && matrix.runtime != 'linux-arm'
# Create runner package tar.gz/zip
- name: Package Release
if: github.event_name != 'pull_request'
run: |
${{ matrix.devScript }} package Release ${{ matrix.runtime }}
working-directory: src
# Upload runner package tar.gz/zip as artifact.
# Since each package name is unique, we don't need to put ${{matrix}} info into the artifact name
- name: Publish Artifact
if: github.event_name != 'pull_request'
uses: actions/upload-artifact@v1
with:
name: runner-packages
path: _package
release:
needs: build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
# Download runner package tar.gz/zip produced by 'build' job
- name: Download Artifact
uses: actions/download-artifact@v1
with:
name: runner-packages
path: ./
# Create ReleaseNote file
- name: Create ReleaseNote
id: releaseNote
uses: actions/github-script@0.3.0
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
const core = require('@actions/core')
const fs = require('fs');
const runnerVersion = fs.readFileSync('${{ github.workspace }}/src/runnerversion', 'utf8').replace(/\n$/g, '')
const releaseNote = fs.readFileSync('${{ github.workspace }}/releaseNote.md', 'utf8').replace(/<RUNNER_VERSION>/g, runnerVersion)
console.log(releaseNote)
core.setOutput('version', runnerVersion);
core.setOutput('note', releaseNote);
# Create GitHub release
- uses: actions/create-release@master
id: createRelease
name: Create ${{ steps.releaseNote.outputs.version }} Runner Release
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: "v${{ steps.releaseNote.outputs.version }}"
release_name: "v${{ steps.releaseNote.outputs.version }}"
body: |
${{ steps.releaseNote.outputs.note }}
prerelease: true
# Upload release assets
- name: Upload Release Asset (win-x64)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}.zip
asset_name: actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}.zip
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-x64)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/actions-runner-linux-x64-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_name: actions-runner-linux-x64-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_content_type: application/octet-stream
- name: Upload Release Asset (osx-x64)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/actions-runner-osx-x64-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_name: actions-runner-osx-x64-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-arm)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/actions-runner-linux-arm-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_name: actions-runner-linux-arm-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-arm64)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/actions-runner-linux-arm64-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_name: actions-runner-linux-arm64-${{ steps.releaseNote.outputs.version }}.tar.gz
asset_content_type: application/octet-stream

8
.gitignore vendored
View File

@@ -1,12 +1,19 @@
# build output
**/bin
**/obj
**/libs
**/lib
# editors
**/*.xproj
**/*.xproj.user
**/.vs
**/.vscode
**/*.error
**/*.json.pretty
.idea/
# output
node_modules
_downloads
_layout
@@ -19,4 +26,3 @@ TestLogs
#generated
src/Runner.Sdk/BuildConstants.cs

View File

@@ -1,5 +1,5 @@
The MIT License (MIT)
Copyright (c) Microsoft Corporation
Copyright (c) 2019 GitHub
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -17,4 +17,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
SOFTWARE.

View File

@@ -1,31 +1,25 @@
# GitHub Actions Runner
<p align="center">
<img src="docs/res/github-graph.png">
</p>
# GitHub Actions Runner
[![Actions Status](https://github.com/actions/runner/workflows/Runner%20CI/badge.svg)](https://github.com/actions/runner/actions)
The runner is the application that runs a job from a GitHub Actions workflow. It is used by GitHub Actions in the [hosted virtual environments](https://github.com/actions/virtual-environments), or you can [self-host the runner](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-self-hosted-runners) in your own environment.
## Get Started
![win](docs/res/win_sm.png) [Pre-reqs](docs/start/envwin.md) | [Download](https://github.com/actions/runner/releases/latest)
For more information about installing and using self-hosted runners, see [Adding self-hosted runners](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/adding-self-hosted-runners) and [Using self-hosted runners in a workflow](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-self-hosted-runners-in-a-workflow)
![macOS](docs/res/apple_sm.png) [Pre-reqs](docs/start/envosx.md) | [Download](https://github.com/actions/runner/releases/latest)
Runner releases:
![linux](docs/res/linux_sm.png) [Pre-reqs](docs/start/envlinux.md) | [Download](https://github.com/actions/runner/releases/latest)
![win](docs/res/win_sm.png) [Pre-reqs](docs/start/envwin.md) | [Download](https://github.com/actions/runner/releases)
**Configure:**
![macOS](docs/res/apple_sm.png) [Pre-reqs](docs/start/envosx.md) | [Download](https://github.com/actions/runner/releases)
*MacOS and Linux*
```bash
./config.sh
```
*Windows*
```bash
config.cmd
```
![linux](docs/res/linux_sm.png) [Pre-reqs](docs/start/envlinux.md) | [Download](https://github.com/actions/runner/releases)
## Contribute
For developers that want to contribute, [read here](docs/contribute.md) on how to build and test.
We accept contributions in the form of issues and pull requests. [Read more here](docs/contribute.md) before contributing.

View File

@@ -1,32 +0,0 @@
[
{
"name": "actions-runner-win-x64-<RUNNER_VERSION>.zip",
"platform": "win-x64",
"version": "<RUNNER_VERSION>",
"downloadUrl": "https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-win-x64-<RUNNER_VERSION>.zip"
},
{
"name": "actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz",
"platform": "osx-x64",
"version": "<RUNNER_VERSION>",
"downloadUrl": "https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz"
},
{
"name": "actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz",
"platform": "linux-x64",
"version": "<RUNNER_VERSION>",
"downloadUrl": "https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz"
},
{
"name": "actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz",
"platform": "linux-arm64",
"version": "<RUNNER_VERSION>",
"downloadUrl": "https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz"
},
{
"name": "actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz",
"platform": "linux-arm",
"version": "<RUNNER_VERSION>",
"downloadUrl": "https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz"
}
]

View File

@@ -1,276 +0,0 @@
stages:
- stage: Build
jobs:
################################################################################
- job: build_windows_agent_x64
################################################################################
displayName: Windows Agent (x64)
pool:
vmImage: vs2017-win2016
steps:
# Steps template for windows platform
- template: windows.template.yml
parameters:
targetRuntime: win-x64
# Package dotnet core windows dependency (VC++ Redistributable)
- powershell: |
Write-Host "Downloading 'VC++ Redistributable' package."
$outDir = Join-Path -Path $env:TMP -ChildPath ([Guid]::NewGuid())
New-Item -Path $outDir -ItemType directory
$outFile = Join-Path -Path $outDir -ChildPath "ucrt.zip"
Invoke-WebRequest -Uri https://vstsagenttools.blob.core.windows.net/tools/ucrt/ucrt_x64.zip -OutFile $outFile
Write-Host "Unzipping 'VC++ Redistributable' package to agent layout."
$unzipDir = Join-Path -Path $outDir -ChildPath "unzip"
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($outFile, $unzipDir)
$agentLayoutBin = Join-Path -Path $(Build.SourcesDirectory) -ChildPath "_layout\bin"
Copy-Item -Path $unzipDir -Destination $agentLayoutBin -Force
displayName: Package UCRT
# Create agent package zip
- script: dev.cmd package Release win-x64
workingDirectory: src
displayName: Package Release
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (Windows)
inputs:
pathToPublish: _package
artifactName: runners
artifactType: container
# ################################################################################
# - job: build_windows_agent_x86
# ################################################################################
# displayName: Windows Agent (x86)
# pool:
# vmImage: vs2017-win2016
# steps:
# # Steps template for windows platform
# - template: windows.template.yml
# parameters:
# targetRuntime: win-x86
# # Package dotnet core windows dependency (VC++ Redistributable)
# - powershell: |
# Write-Host "Downloading 'VC++ Redistributable' package."
# $outDir = Join-Path -Path $env:TMP -ChildPath ([Guid]::NewGuid())
# New-Item -Path $outDir -ItemType directory
# $outFile = Join-Path -Path $outDir -ChildPath "ucrt.zip"
# Invoke-WebRequest -Uri https://vstsagenttools.blob.core.windows.net/tools/ucrt/ucrt_x86.zip -OutFile $outFile
# Write-Host "Unzipping 'VC++ Redistributable' package to agent layout."
# $unzipDir = Join-Path -Path $outDir -ChildPath "unzip"
# Add-Type -AssemblyName System.IO.Compression.FileSystem
# [System.IO.Compression.ZipFile]::ExtractToDirectory($outFile, $unzipDir)
# $agentLayoutBin = Join-Path -Path $(Build.SourcesDirectory) -ChildPath "_layout\bin"
# Copy-Item -Path $unzipDir -Destination $agentLayoutBin -Force
# displayName: Package UCRT
# # Create agent package zip
# - script: dev.cmd package Release win-x86
# workingDirectory: src
# displayName: Package Release
# # Upload agent package zip as build artifact
# - task: PublishBuildArtifacts@1
# displayName: Publish Artifact (Windows)
# inputs:
# pathToPublish: _package
# artifactName: runners
# artifactType: container
################################################################################
- job: build_linux_agent_x64
################################################################################
displayName: Linux Agent (x64)
pool:
vmImage: ubuntu-16.04
steps:
# Steps template for non-windows platform
- template: nonwindows.template.yml
parameters:
targetRuntime: linux-x64
# Create agent package zip
- script: ./dev.sh package Release linux-x64
workingDirectory: src
displayName: Package Release
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (Linux)
inputs:
pathToPublish: _package
artifactName: runners
artifactType: container
################################################################################
- job: build_linux_agent_arm64
################################################################################
displayName: Linux Agent (arm64)
pool:
vmImage: ubuntu-16.04
steps:
# Steps template for non-windows platform
- template: nonwindows.template.yml
parameters:
targetRuntime: linux-arm64
# Create agent package zip
- script: ./dev.sh package Release linux-arm64
workingDirectory: src
displayName: Package Release
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (Linux)
inputs:
pathToPublish: _package
artifactName: runners
artifactType: container
################################################################################
- job: build_linux_agent_arm
################################################################################
displayName: Linux Agent (arm)
pool:
vmImage: ubuntu-16.04
steps:
# Steps template for non-windows platform
- template: nonwindows.template.yml
parameters:
targetRuntime: linux-arm
# Create agent package zip
- script: ./dev.sh package Release linux-arm
workingDirectory: src
displayName: Package Release
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (Linux)
inputs:
pathToPublish: _package
artifactName: runners
artifactType: container
################################################################################
- job: build_osx_agent_x64
################################################################################
displayName: macOS Agent (x64)
pool:
vmImage: macOS-10.13
steps:
# Steps template for non-windows platform
- template: nonwindows.template.yml
parameters:
targetRuntime: osx-x64
# Create agent package zip
- script: ./dev.sh package Release osx-x64
workingDirectory: src
displayName: Package Release
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (OSX)
inputs:
pathToPublish: _package
artifactName: runners
artifactType: container
- stage: Release
dependsOn: Build
jobs:
################################################################################
- job: publish_agent_packages
################################################################################
displayName: Publish Agents (Windows/Linux/OSX)
pool:
name: ProductionRMAgents
steps:
# Download all agent packages from all previous phases
- task: DownloadBuildArtifacts@0
displayName: Download Agent Packages
inputs:
artifactName: runners
# Upload agent packages to Azure blob storage and refresh Azure CDN
- powershell: |
Write-Host "Preloading Azure modules." # This is for better performance, to avoid module-autoloading.
Import-Module AzureRM, AzureRM.profile, AzureRM.Storage, Azure.Storage, AzureRM.Cdn -ErrorAction Ignore -PassThru
Enable-AzureRmAlias -Scope CurrentUser
$uploadFiles = New-Object System.Collections.ArrayList
$certificateThumbprint = (Get-ItemProperty -Path "$(ServicePrincipalReg)").ServicePrincipalCertThumbprint
$clientId = (Get-ItemProperty -Path "$(ServicePrincipalReg)").ServicePrincipalClientId
Write-Host "##vso[task.setsecret]$certificateThumbprint"
Write-Host "##vso[task.setsecret]$clientId"
Login-AzureRmAccount -ServicePrincipal -CertificateThumbprint $certificateThumbprint -ApplicationId $clientId -TenantId $(GitHubTenantId)
Select-AzureRmSubscription -SubscriptionId $(GitHubSubscriptionId)
$storage = Get-AzureRmStorageAccount -ResourceGroupName githubassets -AccountName githubassets
Get-ChildItem -LiteralPath "$(System.ArtifactsDirectory)/runners" | ForEach-Object {
$versionDir = $_.Name.Trim('.zip').Trim('.tar.gz')
$versionDir = $versionDir.SubString($versionDir.LastIndexOf('-') + 1)
Write-Host "##vso[task.setvariable variable=ReleaseAgentVersion;]$versionDir"
Write-Host "Uploading $_ to BlobStorage githubassets/runners/$versionDir"
Set-AzureStorageBlobContent -Context $storage.Context -Container runners -File "$(System.ArtifactsDirectory)/runners/$_" -Blob "$versionDir/$_" -Force
$uploadFiles.Add("/runners/$versionDir/$_")
}
Write-Host "Get CDN info"
Get-AzureRmCdnEndpoint -ProfileName githubassets -ResourceGroupName githubassets
Write-Host "Purge Azure CDN Cache"
Unpublish-AzureRmCdnEndpointContent -EndpointName githubassets -ProfileName githubassets -ResourceGroupName githubassets -PurgeContent $uploadFiles
Write-Host "Pull assets through Azure CDN"
$uploadFiles | ForEach-Object {
$downloadUrl = "https://githubassets.azureedge.net" + $_
Write-Host $downloadUrl
Invoke-WebRequest -Uri $downloadUrl -OutFile $_.SubString($_.LastIndexOf('/') + 1)
}
displayName: Upload to Azure Blob
# Create agent release on Github
- powershell: |
Write-Host "Creating github release."
$releaseNotes = [System.IO.File]::ReadAllText("$(Build.SourcesDirectory)\releaseNote.md").Replace("<RUNNER_VERSION>","$(ReleaseAgentVersion)")
$releaseData = @{
tag_name = "v$(ReleaseAgentVersion)";
target_commitish = "$(Build.SourceVersion)";
name = "v$(ReleaseAgentVersion)";
body = $releaseNotes;
draft = $false;
prerelease = $true;
}
$releaseParams = @{
Uri = "https://api.github.com/repos/actions/runner/releases";
Method = 'POST';
Headers = @{
Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("github:$(GithubToken)"));
}
ContentType = 'application/json';
Body = (ConvertTo-Json $releaseData -Compress)
}
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$releaseCreated = Invoke-RestMethod @releaseParams
Write-Host $releaseCreated
$releaseId = $releaseCreated.id
$assets = [System.IO.File]::ReadAllText("$(Build.SourcesDirectory)\assets.json").Replace("<RUNNER_VERSION>","$(ReleaseAgentVersion)")
$assetsParams = @{
Uri = "https://uploads.github.com/repos/actions/runner/releases/$releaseId/assets?name=assets.json"
Method = 'POST';
Headers = @{
Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("github:$(GithubToken)"));
}
ContentType = 'application/octet-stream';
Body = [system.Text.Encoding]::UTF8.GetBytes($assets)
}
Invoke-RestMethod @assetsParams
displayName: Create agent release on Github

View File

@@ -1,95 +0,0 @@
jobs:
################################################################################
- job: build_windows_x64_agent
################################################################################
displayName: Windows Agent (x64)
pool:
vmImage: vs2017-win2016
steps:
# Steps template for windows platform
- template: windows.template.yml
# Package dotnet core windows dependency (VC++ Redistributable)
- powershell: |
Write-Host "Downloading 'VC++ Redistributable' package."
$outDir = Join-Path -Path $env:TMP -ChildPath ([Guid]::NewGuid())
New-Item -Path $outDir -ItemType directory
$outFile = Join-Path -Path $outDir -ChildPath "ucrt.zip"
Invoke-WebRequest -Uri https://vstsagenttools.blob.core.windows.net/tools/ucrt/ucrt_x64.zip -OutFile $outFile
Write-Host "Unzipping 'VC++ Redistributable' package to agent layout."
$unzipDir = Join-Path -Path $outDir -ChildPath "unzip"
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory($outFile, $unzipDir)
$agentLayoutBin = Join-Path -Path $(Build.SourcesDirectory) -ChildPath "_layout\bin"
Copy-Item -Path $unzipDir -Destination $agentLayoutBin -Force
displayName: Package UCRT
condition: and(succeeded(), ne(variables['build.reason'], 'PullRequest'))
# Create agent package zip
- script: dev.cmd package Release
workingDirectory: src
displayName: Package Release
condition: and(succeeded(), ne(variables['build.reason'], 'PullRequest'))
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (Windows x64)
condition: and(succeeded(), ne(variables['build.reason'], 'PullRequest'))
inputs:
pathToPublish: _package
artifactName: agent
artifactType: container
################################################################################
- job: build_linux_x64_agent
################################################################################
displayName: Linux Agent (x64)
pool:
vmImage: ubuntu-16.04
steps:
# Steps template for non-windows platform
- template: nonwindows.template.yml
# Create agent package zip
- script: ./dev.sh package Release
workingDirectory: src
displayName: Package Release
condition: and(succeeded(), ne(variables['build.reason'], 'PullRequest'))
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (Linux x64)
condition: and(succeeded(), ne(variables['build.reason'], 'PullRequest'))
inputs:
pathToPublish: _package
artifactName: agent
artifactType: container
################################################################################
- job: build_osx_agent
################################################################################
displayName: macOS Agent (x64)
pool:
vmImage: macOS-10.14
steps:
# Steps template for non-windows platform
- template: nonwindows.template.yml
# Create agent package zip
- script: ./dev.sh package Release
workingDirectory: src
displayName: Package Release
condition: and(succeeded(), ne(variables['build.reason'], 'PullRequest'))
# Upload agent package zip as build artifact
- task: PublishBuildArtifacts@1
displayName: Publish Artifact (OSX)
condition: and(succeeded(), ne(variables['build.reason'], 'PullRequest'))
inputs:
pathToPublish: _package
artifactName: agent
artifactType: container

View File

@@ -0,0 +1,61 @@
# ADR 263: Self Hosted Runner Proxies
**Date**: 2019-11-13
**Status**: Accepted
## Context
- Proxy support is required for some enterprises and organizations to start using their own self hosted runners
- While there is not a standard convention, many applications support setting proxies via the environment variables `http_proxy`, `https_proxy`, and `no_proxy`, such as curl, wget, perl, python, docker, git, R, etc.
- Some of these applications use `HTTPS_PROXY` versus `https_proxy`, but most understand or primarily support the lowercase variant
## Decision
We will update the Runner to use the conventional environment variables for proxies: `http_proxy`, `https_proxy` and `no_proxy` if they are set.
These are described in detail below:
- `https_proxy` a proxy URL for all https traffic. It may contain basic authentication credentials. For example:
- http://proxy.com
- http://127.0.0.1:8080
- http://user:password@proxy.com
- `http_proxy` a proxy URL for all http traffic. It may contain basic authentication credentials. For example:
- http://proxy.com
- http://127.0.0.1:8080
- http://user:password@proxy.com
- `no_proxy` a comma separated list of hosts that should not use the proxy. An optional port may be specified
- `google.com`
- `yahoo.com:443`
- `google.com,bing.com`
We won't use `http_proxy` for https traffic when `https_proxy` is not set; this behavior lines up with libcurl-based tools (curl, git) and wget.
Otherwise, action authors and workflow users would need to adjust to differences between the runner's proxy convention and the tools used by their actions and scripts.
Example:
A customer sets `http_proxy=http://127.0.0.1:8888` and configures the runner against `https://github.com/owner/repo`. With an `https_proxy` -> `http_proxy` fallback, the runner would connect to the server without any problem. However, if a user runs `git push` to `https://github.com/owner/repo`, `git` won't use the proxy, since it requires `https_proxy` to be set for any https traffic.
> `golang`, `node.js` and other dev tools from the linux community use `http_proxy` for both http and https traffic based on my research.
A majority of our users are on Linux, where these variables are commonly required by various programs. By reading these values, we simplify proxy setup for self-hosted runners and expose it in a way users are already familiar with.
A password provided for a proxy will be masked in the logs.
We will support the lowercase and uppercase variants, with lowercase taking priority if both are set.
### No Proxy Format
While exact implementations of `no_proxy` handling differ per application, most applications accept a comma-separated list of hosts. Some accept wildcard characters (*). We are going to do exact case-insensitive matches, and not support wildcards at this time.
For example:
- example.com will match example.com, foo.example.com, foo.bar.example.com
- foo.example.com will match bar.foo.example.com and foo.example.com
We will not support IP addresses for `no_proxy`, only hostnames.
## Consequences
1. Enterprises and organizations needing proxy support will be able to embrace self hosted runners
2. Users will need to set these environment variables before configuring the runner in order to use a proxy during configuration
3. The runner will read the environment variables during config and at runtime and use the provided proxy if it exists
4. Users may need to pass these environment variables into other applications if those applications do not natively read them (see the sketch after this list)
5. Action authors may need to update their workflows to react to these environment variables
6. We will document how to set these environment variables for runners and how the runner uses them
7. Like all other secrets, users will be able to figure out the proxy password relatively easily if they can modify a workflow file running on a self-hosted machine
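For consequence 4, a minimal sketch of what forwarding the variables inside a step might look like; the script name is hypothetical, and the uppercase variants are only needed by tools that ignore the lowercase convention:
```yaml
steps:
  - name: Build behind a proxy
    run: |
      # The step process inherits http_proxy/https_proxy/no_proxy from the runner's environment,
      # so they can be forwarded to a tool that only honors the uppercase variants.
      export HTTPS_PROXY="$https_proxy"
      export NO_PROXY="$no_proxy"
      ./fetch-dependencies.sh   # hypothetical script
```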

View File

@@ -0,0 +1,62 @@
# ADR 0274: Step outcome and conclusion
**Date**: 2020-01-13
**Status**: Accepted
## Context
This ADR proposes adding `steps.<id>.outcome` and `steps.<id>.conclusion` to the steps context.
This allows a downstream step to run based on whether a previous step succeeded or failed.
Reminder: the steps context currently contains `steps.<id>.outputs`.
## Decision
For steps that have completed, populate `steps.<id>.outcome` and `steps.<id>.conclusion` with one of the following values:
- `success`
- `failure`
- `cancelled`
- `skipped`
When a continue-on-error step fails, the outcome will be `failure` even though the final conclusion is `success`.
### Example
```yaml
steps:
- id: experimental
continue-on-error: true
run: ./build.sh experimental
- if: ${{ steps.experimental.outcome == 'success' }}
run: ./publish.sh experimental
```
### Terminology
The runs API uses the term `conclusion`.
Therefore we use a different term `outcome` for the value prior to continue-on-error.
The following is a snippet from the runs API response payload:
```json
"steps": [
{
"name": "Set up job",
"status": "completed",
"conclusion": "success",
"number": 1,
"started_at": "2020-01-09T11:06:16.000-05:00",
"completed_at": "2020-01-09T11:06:18.000-05:00"
},
```
## Consequences
- Update runner
- Update [docs](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/contexts-and-expression-syntax-for-github-actions#steps-context)

View File

@@ -0,0 +1,263 @@
# ADR 0276: Problem Matchers
**Date** 2019-06-05
**Status** Accepted
## Context
Compilation failures during a CI build should surface good error messages.
For example, the actual compile errors from the typescript compiler should bubble up as issues in the UI, and not simply "tsc exited with exit code 1".
VSCode has an extensible model for solving this type of problem. VSCode allows users to configure which problem matchers to use when scanning output. For example, a user can apply the `tsc` problem matcher to receive a rich error output experience in VSCode when compiling their typescript project.
The problem-matcher concept fits well with "setup" actions. For example, the `setup-nodejs` action will download node.js, add it to the PATH, and register the `tsc` problem matcher. For the duration of the job, the `tsc` problem matcher will be applied against the output.
## Decision
### Registration
#### Using `##` command
`##[add-matcher]path-to-problem-matcher-config.json`
Using a `##` command allows for flexibility:
- Ad hoc scripts can register problem matchers
- Allows problem matchers to be conditionally registered
Note, if a matcher with the same name is registered a second time, it will clobber the first instance.
#### Unregister using `##` command
A way out for rare cases where scoping is a problem.
`##[remove-matcher]owner`
For this to be usable, the `owner` needs to be discoverable. Therefore, debug print the owner on registration.
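As a minimal sketch (the matcher config path, the build command, and the `tsc` owner are hypothetical), an ad hoc script could register a matcher for the remainder of the job and eject it afterwards using the commands described above:
```yaml
steps:
  # Register the matcher; output scanning applies for the rest of the job
  - run: echo "##[add-matcher].github/tsc-matcher.json"
  - run: npx tsc -p .
  # Remove the matcher using the owner that was debug-printed at registration
  - run: echo "##[remove-matcher]tsc"
```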
### Single line matcher
Consider the output:
```
[...]
Build FAILED.
"C:\temp\problemmatcher\myproject\ConsoleApp1\ConsoleApp1.sln" (default target) (1) ->
"C:\temp\problemmatcher\myproject\ConsoleApp1\ConsoleApp1\ConsoleApp1.csproj" (default target) (2) ->
"C:\temp\problemmatcher\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj" (default target) (3) ->
(CoreCompile target) ->
Class1.cs(16,24): warning CS0612: 'ClassLibrary1.Helpers.MyHelper.Name' is obsolete [C:\temp\problemmatcher\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj]
"C:\temp\problemmatcher\myproject\ConsoleApp1\ConsoleApp1.sln" (default target) (1) ->
"C:\temp\problemmatcher\myproject\ConsoleApp1\ConsoleApp1\ConsoleApp1.csproj" (default target) (2) ->
"C:\temp\problemmatcher\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj" (default target) (3) ->
(CoreCompile target) ->
Helpers\MyHelper.cs(16,30): error CS1002: ; expected [C:\temp\problemmatcher\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj]
1 Warning(s)
1 Error(s)
```
The below match configuration uses a regular expression to discover problem lines, and the match groups are mapped into issue properties.
```json
"owner": "msbuild",
"pattern": [
{
"regexp": "^\\s*([^:]+)\\((\\d+),(\\d+)\\): (error|warning) ([^:]+): (.*) \\[(.+)\\]$",
"file": 1,
"line": 2,
"column": 3,
"severity": 4,
"code": 5,
"message": 6,
"fromPath": 7
}
]
```
The above output and match configuration produces the following matches:
```
line: Class1.cs(16,24): warning CS0612: 'ClassLibrary1.Helpers.MyHelper.Name' is obsolete [C:\myrepo\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj]
file: Class1.cs
line: 16
column: 24
severity: warning
code: CS0612
message: 'ClassLibrary1.Helpers.MyHelper.Name' is obsolete
fromPath: C:\myrepo\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj
```
```
line: Helpers\MyHelper.cs(16,30): error CS1002: ; expected [C:\myrepo\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj]
file: Helpers\MyHelper.cs
line: 16
column: 30
severity: error
code: CS1002
message: ; expected
fromPath: C:\myrepo\myproject\ConsoleApp1\ClassLibrary1\ClassLibrary1.csproj
```
Additionally, the line will appear red in the web UI (prefixed with `##[error]`).
Note, an error does not imply task failure. Exit codes communicate failure.
Note, strip color codes when evaluating regular expressions.
### Multi-line matcher
Consider the below output from ESLint in stylish mode. The file name is printed once, yet multiple error lines are printed.
```
test.js
1:0 error Missing "use strict" statement strict
5:10 error 'addOne' is defined but never used no-unused-vars
✖ 2 problems (2 errors, 0 warnings)
```
The below match configuration uses multiple regular expressions, one per line.
The last pattern of a multi-line matcher can specify the `loop` property, which allows multiple errors to be discovered.
```json
"owner": "eslint-stylish",
"pattern": [
{
"regexp": "^([^\\s].*)$",
"file": 1
},
{
"regexp": "^\\s+(\\d+):(\\d+)\\s+(error|warning|info)\\s+(.*)\\s\\s+(.*)$",
"line": 1,
"column": 2,
"severity": 3,
"message": 4,
"code": 5,
"loop": true
}
]
```
The above output and match configuration produces two matches:
```
line: 1:0 error Missing "use strict" statement strict
file: test.js
line: 1
column: 0
severity: error
message: Missing "use strict" statement
code: strict
```
```
line: 5:10 error 'addOne' is defined but never used no-unused-vars
file: test.js
line: 5
column: 10
severity: error
message: 'addOne' is defined but never used
code: no-unused-vars
```
Note, in the above example only the error line will appear red in the web UI. The "file" line will not appear red.
### Other details
#### Configuration `owner`
Can be used to overwrite or remove a previously registered matcher.
#### Rooting the file
The goal of the file information is to provide a hyperlink in the UI.
Solving this problem means:
- Rooting the file when unrooted:
- Use the `fromPath` if specified (assume file path)
- Use the `github.workspace` (where the repo is cloned on disk)
- Match against a repository to determine the relative path within the repo
This is a place where we diverge from VSCode. VSCode task configurations are specific to the local workspace (workspace root is known or can be specified). We're solving a more generic problem, so we need more information - specifically the `fromPath` property - in order to accurately root the path.
In order to avoid creating inaccurate hyperlinks on the error issues, the agent will verify the file exists and is in the main repository. Otherwise omit the file property from the error issue and debug trace what happened.
#### Supported severity levels
Ordinal ignore case:
- `warning`
- `error`
Coalesce empty with "error". For any other values, omit logging an issue and debug trace what happened.
#### Default severity level
Problem matchers are unable to interpret severity strings other than `warning` and `error`. The `severity` match group expects `warning` or `error` (case insensitive).
However some tools indicate error/warning in different ways. For example `flake8` uses codes like `E100`, `W200`, and `F300` (error, warning, fatal, respectively).
Therefore, allow a property `severity`, sibling to `owner`, which identifies the default severity for the problem matcher. This allows two problem matchers to be registered - one for warnings and one for errors.
For example, given the following `flake8` output:
```
./bootcamp/settings.py:156:80: E501 line too long (94 > 79 characters)
./bootcamp/settings.py:165:5: F403 'from local_settings import *' used; unable to detect undefined names
```
Two problem matchers can be used:
```json
{
"problemMatcher": [
{
"owner": "flake8",
"pattern": [
{
"regexp": "^(.+):(\\d+):(\\d+): ([EF]\\d+) (.+)$",
"file": 1,
"line": 2,
"column": 3,
"code": 4,
"message": 5
}
]
},
{
"owner": "flake8-warnings",
"severity": "warning",
"pattern": [
{
"regexp": "^(.+):(\\d+):(\\d+): (W\\d+) (.+)$",
"file": 1,
"line": 2,
"column": 3,
"code": 4,
"message": 5
}
]
}
]
}
```
#### Mitigate regular expression denial of service (ReDos)
If a matcher exceeds a 1 second timeout when processing a line, retry up to three times total.
After three unsuccessful attempts, warn and eject the matcher. The matcher will not run again for the duration of the job.
### Where we diverge from VSCode
- We added the `fromPath` concept for rooting paths. This is done differently in VSCode, since a task is the scope (root path well known). For us, the job is the scope.
- VSCode allows additional activation info for background tasks that are always running (recompile on files changed). They allow regular expressions to define when the matcher scope begins and ends. This is an interesting concept that we could leverage to help solve our scoping problem.
## Consequences
- Setup actions should register problem matchers

View File

@@ -0,0 +1,93 @@
# ADR 0277: Run action shell option
**Date** 2019-07-09
**Status** Accepted
## Context
Run actions run scripts using a platform-specific shell:
`bash -eo pipefail` on non-Windows, and `cmd.exe /c /d /s` on Windows
The `shell` option overrides this, allowing different flags or completely different shells/interpreters
A small example is:
```yml
jobs:
bash-job:
actions:
- run: echo "Hello"
shell: bash
python-job:
actions:
- run: print("Hello")
shell: python {0}
```
## Decision
___
### Shell option
The keyword being used is `shell`
`shell` can be either:
1. Builtins / Explicitly supported keywords. It is useful to support at least `cmd`, and `powershell` on Windows. Because `cmd my_cmd_script` and `powershell my_ps1_script` are not valid the same way many Linux/cross-platform interpreters are, e.g. `bash myscript` or `python myscript`. Those tools (and potentially others) also require the correct file extension to run, or must be run in a particular way to get the exit codes consistently, so we must have first class knowledge about them. We provide default templates for these keywords as follows:
- `cmd`: Default is: `%ComSpec% /D /E:ON /V:OFF /S /C "CALL "{0}""` where the script name is automatically appended with `.cmd` and substituted for `{0}`
- Note this is equivalent to the default Windows behavior if no shell option is given
- `pwsh`: Default is: `pwsh -command "& '{0}'"` where the script is automatically appended with `.ps1`
- `powershell`: Default is: `powershell -command "& '{0}'"` where the script is automatically appended with `.ps1`
- `bash`: Uses `bash --noprofile --norc -eo pipefail {0}`
- The default behavior on non-Windows if no shell is given is to attempt this first
- `sh`: Uses `sh -e {0}`
- This is the default behavior on non-Windows if no shell is given, AND `bash` (see above) was not located on the PATH
- `python`: `python {0}`
- **NOTE**: The exact command run may vary by machine. We only provide default arguments and a command format for the listed shells. While the above behavior is expected on hosted machines, private runners may vary. For example, `sh` (or other commands) may actually be a link to `/bin/dash`, `/bin/bash`, or another shell
1. A template string: `command [...options] {0} [...more_options]`
- As above, the file name of the temporary script will be templated in. This gives users more control to have options at any location relative to the script path
- The first whitespace-delimited word of the string will be interpreted as the command
- e.g. `python {0} arg1 arg2` or similar can be used if passing args is needed. Some shells will require other options after the filename for various reasons
Note that (1) simply provides defaults that are executed with the same mechanism as (2). That is:
- A temporary script file is generated, and the path to that file is templated into the string at `{0}`
- The first word of the formatted string is assumed to be a command, and we attempt to locate its full path
- The fully qualified path to the command, plus the remaining arguments, is executed
- e.g. `shell: bash` expands to `/bin/bash --noprofile --norc -eo pipefail /runner/_layout/_work/_temp/f8d4fb2b-19d9-47e6-a786-4cc538d52761.sh` on my private runner
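For illustration, a minimal sketch combining a builtin keyword with a template string; the extra argument is hypothetical and is passed to the script after its path is substituted for `{0}`:
```yaml
steps:
  # Builtin keyword: the runner supplies the default arguments described above
  - run: Get-ChildItem
    shell: pwsh
  # Template string: the temporary script path replaces {0}; anything after it is passed to the script
  - run: |
      import sys
      print(sys.argv[1:])
    shell: python {0} extra-arg
```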
At this time, **THE LIST OF WELL-KNOWN SHELL OPTIONS IS**:
- cmd - Windows (hosted vs2017, vs2019) only
- powershell - Windows (hosted vs2017, vs2019) only
- sh - All hosted platforms
- pwsh - All hosted platforms
- bash - All hosted platforms
- python - All hosted platforms. Can use setup-python to configure which python will be used
___
### Containers
For container jobs, `shell` should just work the same as above, transparently. We will simply `exec` the command in the job container, passing the same arguments in
___
### Exit codes / Error action preference
For builtin shells, we provide defaults that make the most sense for CI, running within Actions, and being executed by our runner
bash/sh:
- Fail-fast behavior using `set -eo pipefail` is the default for the `bash` and `sh` builtins, and the default when no option is given on non-Windows platforms
- Users can opt out of fail-fast and take full control easily by providing a template string for the shell option, e.g. `bash {0}` (see the sketch after this section)
- sh-like shells exit with the exit code of the last command executed in a script, and this is our default behavior. Thus the runner reports the status of the step as fail/succeed based on this exit code
powershell/pwsh
- Fail-fast behavior when possible. For `pwsh` and `powershell` builtins, we will prepend `$ErrorActionPreference = 'stop'` to script contents
- We append `if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE }` to powershell scripts to get Action statuses to reflect the script's last exit code
- Users can always opt out by not using the builtins, and providing a shell option like: `pwsh -File {0}`, or `powershell -Command "& '{0}'"`, depending on need
cmd
- There doesn't seem to be a way to fully opt in to fail-fast behavior other than writing your script to check each error code and respond accordingly, so we can't provide that behavior by default; it is completely up to the user to write this behavior into their script
- cmd.exe will exit (return the error code to the runner) with the errorlevel of the last program it executed. This is internally consistent with the previous default behavior (sh, pwsh) and is the cmd.exe default, so we keep that behavior
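A minimal sketch of the difference between the builtin default and a template-string opt-out (the step contents are illustrative):
```yaml
steps:
  # Builtin: expands to bash --noprofile --norc -eo pipefail, so the first failing command fails the step
  - run: |
      false
      echo "never reached"
    shell: bash
  # Template string: no -e flag, so the step's result is the exit code of the last command (0 here)
  - run: |
      false
      echo "still runs, and the step succeeds"
    shell: bash {0}
```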
## Consequences
Valid `shell` options will depend on the hosted images. We will need to maintain tight image compatibility
First class support for a shell will require a major version schema change to modify. We cannot remove or modify the behavior of a well-known supported option; however, adding first class support for new shells is backwards compatible. For instance, we can add a well-known `python` option, because non-well-known options would have always needed to include `{0}`, e.g. `python {0}`

View File

@@ -0,0 +1,60 @@
# ADR 0278: Env Context
**Date**: 2019-09-30
**Status**: Accepted
## Context
Users want to reference workflow variables defined in the workflow YAML file in an action's input, displayName, and condition.
## Decision
### Add `env` context in the runner
The runner will create and populate the `env` context for every job execution using the following logic:
1. On job start, create the `env` context with any environment variables in the job message; these are the variables defined in the job/workflow level `env` sections of the customer's YAML file.
2. Update the `env` context when the customer uses `::set-env::` to set an environment variable at the runner level.
3. Update the `env` context with the step's `env` block before each step runs.
The `env` context is only available in the runner; customers can't use the `env` context in any server-side evaluation, just like the `runner` context
Example yaml:
```yaml
env:
env1: 10
env2: 20
env3: 30
jobs:
build:
env:
env1: 100
env2: 200
runs-on: ubuntu-latest
steps:
- run: |
echo ${{ env.env1 }} // 1000
echo $env1 // 1000
echo $env2 // 200
echo $env3 // 30
if: env.env2 == 200 // true
name: ${{ env.env1 }}_${{ env.env2 }} //1000_200
env:
env1: 1000
```
### Don't populate the `env` context with environment variables from the runner machine.
With job containers and container actions, the `env` context may not have the value the customer expects, which will cause confusion.
Ex:
```yaml
build:
runs-on: ubuntu-latest <- $USER=runner in hosted machine
container: ubuntu:16.04 <- $USER=root in container
steps:
- run: echo ${{env.USER}} <- what should customer expect this output? runner/root
- uses: docker://ubuntu:18.04
with:
args: echo ${{env.USER}} <- what should customer expect this output? runner/root
```

View File

@@ -0,0 +1,71 @@
# ADR 0279: HashFiles Expression Function
**Date**: 2019-09-30
**Status**: Accepted
## Context
The first party action `actions/cache` needs an input which is an explicit `key` used for restoring and saving the cache. For package caching, the most common `key` might be the hash of the contents of all `package-lock.json` files under the `node_modules` folder.
There are several different ways to get the hash `key` input for the `actions/cache` action.
1. Customers calculate the `key` themselves in a different action; customers won't like this since it requires an extra step to use the cache feature
```yaml
steps:
- run: |
hash=some_linux_hash_method(file1, file2, file3)
echo ::set-output name=hash::$hash
id: createHash
- uses: actions/cache@v1
with:
key: ${{ steps.createHash.outputs.hash }}
```
2. Make the `key` input of `actions/cache` follow a certain convention to calculate the hash; this limits the `key` input to a certain format that customers may not want.
```yaml
steps:
- uses: actions/cache@v1
with:
key: ${{ runner.os }}|${{ github.workspace }}|**/package-lock.json
```
## Decision
### Add a hashFiles() function to the expression engine for calculating files' hashes
`hashFiles()` will only be allowed on the runner side since it needs to read files on disk; using `hashFiles()` in any server-side evaluated expression will cause runtime errors.
`hashFiles()` will only support hashing files under `$GITHUB_WORKSPACE`, since the expression is evaluated on the runner; if the customer uses a job container or container action, the runner won't have access to the file system inside the container.
`hashFiles()` will only take one parameter:
- `hashFiles('**/package-lock.json')` // Search files under $GITHUB_WORKSPACE and calculate a hash for them
**Question: Do we need to support more than one match pattern?**
Ex: `hashFiles('**/package-lock.json', '!toolkit/core/package-lock.json', '!toolkit/io/package-lock.json')`
Answer: Only support a single match pattern for GA; we can always add more later.
This will give customers a better experience with the `actions/cache` action's input.
```yaml
steps:
- uses: actions/cache@v1
with:
key: ${{hashFiles('**/package-lock.json')}}-${{github.ref}}-${{runner.os}}
```
For search pattern, we will use basic globbing (`*` `?` and `[]`) and globstar (`**`).
Additional pattern details:
- Root relative paths with `github.workspace` (the main repo)
- Make `*` match files that start with `.`
- Case insensitive on Windows
- Accept `\` or `/` path separators on Windows
Hashing logic:
1. Get all files under `$GITHUB_WORKSPACE`.
2. Use the search pattern to filter the files, keeping those that match. (The search pattern only applies to file paths, not folder paths.)
3. Sort all matched files by full file path in alphabetical order.
4. Use the SHA256 algorithm to hash each matched file and store the hash result.
5. Use SHA256 to hash all of the stored per-file hashes to get the final 64-character hash result.
**Question: Should we include the folder structure info into the hash?**
Answer: No

View File

@@ -0,0 +1,30 @@
# ADR 0280: Echoing of Command Input
**Date**: 2019-11-04
**Status**: Accepted
## Context
Command echoing as a default behavior tends to clutter the user logs, so we want to swap to a system where users have to opt in to see this information.
Command input will still be echoed when there are errors processing such commands. This is so the end user can have more context on why the command failed and to help with troubleshooting.
Echo output in the user logs can be explicitly controlled by the new commands `::echo::on` and `::echo::off`. By default, echoing is enabled if the `ACTIONS_STEP_DEBUG` secret is enabled; otherwise echoing is disabled.
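A minimal sketch of the opt-in behavior described above (the path value is illustrative):
```yaml
steps:
  - run: |
      echo "::echo::on"
      echo "::add-path::$HOME/.local/bin"   # this command's input is now echoed to the log
      echo "::echo::off"
```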
## Decision
- The only commands that currently echo output are
- `remove-matcher`
- `add-matcher`
- `add-path`
- These will no longer echo the command, if processed successfully
- All commands echo the input when any of these conditions is fulfilled:
1. When such commands fail with an error
2. When `::echo::on` is set
3. When the `ACTIONS_STEP_DEBUG` is set, and echoing hasn't been explicitly disabled with `::echo::off`
- There are a few commands that won't be echoed, even when echo is enabled. These are (as of 2019/11/04):
- `add-mask`
- `debug`
- `warning`
- `error`
- The commands above will not echo, either because echoing the command would leak secrets (e.g. `add-mask`), or it would not add any additional troubleshooting information to the logs (e.g. `debug`). It's expected that future commands will follow these "echo-suppressing" guidelines as well. Echo-suppressed commands are still free to output other information to the logs, as deemed fit.

View File

@@ -0,0 +1,48 @@
# ADR 0297: Base64 Masking Trailing Characters
**Date** 2020-01-21
**Status** Proposed
## Context
The Runner registers a number of Value Encoders, which mask various encodings of a provided secret. Currently, we register 3 base64 encoders:
- The base64 encoded secret
- The secret with the first character removed then base64 encoded
- The secret with the first two characters removed then base64 encoded
This gives us good coverage across the board for secrets and secrets with a prefix (i.e. `base64($user:$pass)`).
However, we don't have great coverage for cases where the secret has a string appended to it before it is base64 encoded (i.e. `base64($pass\n)`).
Most notably we've seen this as a result of user error, where a user accidentally appends a newline or space character before encoding their secret in base64.
## Decision
### Trim end characters
We are going to modify all existing base64 encoders to trim information before registering as a secret.
We will trim:
- `=` from the end of all base64 strings. This is a padding character that contains no information.
- Based on the number of `=`'s at the end of a base64 string, a malicious user could predict the length of the original secret modulo 3.
- If a user saw `***==`, they would know the secret could be 1,4,7,10... characters.
- If a string contains `=` we will also trim the last non-padding character from the base64 secret.
- This character can change if a string is appended to the secret before the encoding.
### Register a fourth encoder
We will also add back in the original base64 encoded secret encoder for four total encoders:
- The base64 encoded secret
- The base64 encoded secret trimmed
- The secret with the first character removed then base64 encoded and trimmed
- The secret with the first two characters removed then base64 encoded and trimmed
This allows us to fully cover the most common scenario where a user base64 encodes their secret and expects the entire thing to be masked.
This will result in us only revealing length or bit information when a prefix or suffix is added to a secret before encoding.
## Consequences
- In the case where a secret has a prefix or suffix added before base64 encoding, we may now reveal up to 20 bits of information and the length of the original string modulo 3, rather than the original 16 bits and no length information
- Secrets with a suffix appended before encoding will now be masked across the board. Previously they were only masked if the secret was a multiple of 3 characters long
- Performance will suffer in a negligible way

View File

@@ -0,0 +1,35 @@
# ADR 354: Expose runner machine info
**Date**: 2020-03-02
**Status**: Pending
## Context
- Provide a mechanism in the runner to include extra information in `Set up job` step's log.
Ex: Include OS/Software info from Hosted image.
## Decision
The runner will look for a file `.setup_info` under the runner's root directory. The file can be JSON with a simple schema.
```json
[
{
"group": "OS Detail",
"detail": "........"
},
{
"group": "Software Detail",
"detail": "........"
}
]
```
The runner will use `##[group]` and `##[endgroup]` to fold all detail info into an expandable group.
Both [virtual-environments](https://github.com/actions/virtual-environments) and self-hosted runners can use this mechanism to add extra logging info to the `Set up job` step's log.
## Consequences
1. Change the runner to read/parse the `.setup_info` file under the runner root directory on a best-effort basis.
2. [virtual-environments](https://github.com/actions/virtual-environments) generate the file during image generation.
3. Change MMS provisioner to properly copy the file to runner root directory at runtime.

View File

@@ -0,0 +1,75 @@
# ADR 361: Wrapper Action
**Date**: 2020-03-06
**Status**: Pending
## Context
In addition to an action's regular execution, action authors may want their action to have a chance to participate in:
- Job initialization
My action will collect machine resource usage (CPU/RAM/Disk) during a workflow job execution; we need to start the perf recorder at the beginning of the job.
- Job cleanup
My action will dirty the local workspace or machine environment during execution; we need to clean up these changes at the end of the job.
Ex: `actions/checkout@v2` writes `github.token` into the local `.git/config` during execution; it has post-job cleanup defined to undo the change.
## Decision
### Add `pre` and `post` execution to action
Node Action Example:
```yaml
name: 'My action with pre'
description: 'My action with pre'
runs:
using: 'node12'
pre: 'setup.js'
pre-if: 'success()' // Optional
main: 'index.js'
post: 'cleanup.js'
post-if: 'success()' // Optional
```
Container Action Example:
```yaml
name: 'My action with pre'
description: 'My action with pre'
runs:
using: 'docker'
image: 'mycontainer:latest'
pre-entrypoint: 'setup.sh'
pre-if: 'success()' // Optional
entrypoint: 'entrypoint.sh'
post-entrypoint: 'cleanup.sh'
post-if: 'success()' // Optional
```
Both `pre` and `post` have their default `pre-if`/`post-if` set to `always()`.
Setting the default to `always()` makes sure that, no matter what condition result `main` gets at runtime, the `pre` has always run already.
`pre` executes in the order in which the steps are defined.
`pre` will always be added to the job steps list during job setup.
> An action referenced from a local repository (`./my-action`) won't get its `pre` set up correctly, since the repository hasn't been checked out during job initialization.
> We can't use the GitHub API to download the repository since there is about a 3 minute delay between `git push` and the new commit becoming available to download via the GitHub API.
`post` will be pushed onto a `poststeps` stack lazily when the action's `pre` or `main` execution passes its `if` condition check and is about to run. You can't have an action that only contains a `post`. We will pop and run each `post` after all `pre` and `main` steps have finished.
> Currently `post` works for both repository action (`org/repo@v1`) and local action (`./my-action`)
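A minimal sketch of the resulting order for a job that uses a hypothetical wrapper action (`org/perf-recorder@v1`) alongside `actions/checkout@v2`:
```yaml
# pre steps run first, in the order the steps are defined;
# post steps are popped off the stack afterwards, in reverse order
steps:
  - uses: org/perf-recorder@v1   # hypothetical action with a pre (start recorder) and a post (stop recorder)
  - uses: actions/checkout@v2    # defines a post step that cleans up .git/config
  - run: ./build.sh
# Execution: perf-recorder pre -> main steps in order -> checkout post -> perf-recorder post
```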
Valid action:
- only has `main`
- has `pre` and `main`
- has `main` and `post`
- has `pre`, `main` and `post`
Invalid action:
- only has `pre`
- only has `post`
- has `pre` and `post`
Potential downsides of introducing `pre`:
- Extra magic with respect to step order. Users should control the step order, especially when we introduce templates.
- Eliminates the possibility of lazily downloading the action tarball; since `pre` always runs by default, we have to download the tarball to check whether the action defines a `pre`
- `pre` doesn't work with local actions; we suggested customers use local actions for testing their action changes (e.g. CI for their action) to avoid the delay between `git push` and the GitHub repo tarball download API.
- Conditions on `pre` can't be controlled using dynamic step outputs; `pre` executes too early.

View File

@@ -0,0 +1,56 @@
# ADR 0397: Support adding custom labels during runner config
**Date**: 2020-03-30
**Status**: Approved
## Context
Since configuring self-hosted runners is commonly automated via scripts, labels need to be assignable during configuration. The runner currently registers the built-in labels (os, arch) during registration but does not accept labels via command-line args to extend the registered set.
See Issue: https://github.com/actions/runner/issues/262
This is another version of [ADR275](https://github.com/actions/runner/pull/275)
## Decision
This ADR proposes that we add a `--labels` option to `config`, which could be used to add custom additional labels to the configured runner.
For example, to add a single extra label the operator could run:
```bash
./config.sh --labels mylabel
```
> Note: the current runner command line parsing and envvar override algorithm only supports a single argument (key).
This would add the label `mylabel` to the runner, and enable users to select the runner in their workflow using this label:
```yaml
runs-on: [self-hosted, mylabel]
```
To add multiple labels the operator could run:
```bash
./config.sh --labels mylabel,anotherlabel
```
> Note: the current runner command line parsing and envvar override algorithm only supports a single argument (key).
This would add the labels `mylabel` and `anotherlabel` to the runner, and enable users to select the runner in their workflow using these labels:
```yaml
runs-on: [self-hosted, mylabel, anotherlabel]
```
It would not be possible to remove labels from an existing runner using `config.sh`; instead, labels would have to be removed using the GitHub UI.
The labels argument will be split on commas, with each entry trimmed and empty strings discarded. That effectively means commas cannot be used in label names during unattended config. Alternatively we could support escaping commas, but that is a nice-to-have.
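The following bash snippet is only an illustration of the intended splitting rule (the runner itself implements this in C#; the values here are hypothetical):
```bash
#!/bin/bash
# Illustrative only: split a --labels argument on commas, trim whitespace, drop empty entries
raw="mylabel, anotherlabel,,extra "
IFS=',' read -ra parts <<< "$raw"
for p in "${parts[@]}"; do
  label="$(echo "$p" | xargs)"   # trim leading/trailing whitespace
  [ -n "$label" ] && echo "label: $label"
done
# prints: mylabel, anotherlabel, extra
```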
## Replace
If an existing runner exists and the option to replace is chosen (interactively or via unattended config, as in this scenario), then the labels will be replaced / overwritten (not merged).
## Overriding built-in labels
Note that it is possible to register "built-in" hosted labels like `ubuntu-latest`, and this is not considered an error. This is an effective way for the org / runner admin to dictate by policy, through registration, that this set of runners will be used without having to edit all the workflow files now and in the future.
We will also not add other restrictions, such as limiting or validating explicitly added os / arch labels. We will assume that explicit labels were added for a reason; not restricting them offers the most flexibility and future proofing / compatibility.
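For example, an org admin could register a pool of self-hosted runners that also picks up jobs targeting a hosted label (hypothetical values; `--labels` is the option proposed by this ADR, the other flags already exist in `config.sh`):
```bash
./config.sh --unattended \
  --url https://github.com/yourorg/yourrepo \
  --token <REG_TOKEN> \
  --labels ubuntu-latest
```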
## Consequences
The ability to add custom labels to a self-hosted runner would enable most scenarios where job runner selection based on runner capabilities or characteristics is required.

View File

@@ -0,0 +1,378 @@
# ADR 0549: Composite Run Steps
**Date**: 2020-06-17
**Status**: Accepted
## Context
Customers want to be able to compose actions from actions (ex: https://github.com/actions/runner/issues/438)
An important step towards meeting this goal is to build in functionality for actions where users can simply execute any number of steps.
### Guiding Principles
We don't want the workflow author to need to know the internal workings of the action. Users shouldn't need to know the internal workings of the composite action (for example, `default.shell` and `default.workingDir` should not be inherited from the workflow file into the action file). When deciding how to design certain parts of composite run steps, we want to think one logical step from the consumer's perspective.
A composite action is treated as **one** individual job step (this is known as encapsulation).
## Decision
**In this ADR, we only support running multiple run steps in an Action.** In doing so, we build in support for mapping and flowing the inputs, outputs, and env variables (ex: all nested steps should have access to their parent's input variables, and nested steps can overwrite the input variables).
### Composite Run Steps Features
This feature supports at the top action level:
- name
- description
- inputs
- runs
- outputs
This feature supports at the run step level:
- name
- id
- run
- env
- shell
- working-directory
This feature **does not support** at the run step level:
- timeout-minutes
- secrets
- conditionals (needs, if, etc.)
- continue-on-error
### Steps
Example `workflow.yml`
```yaml
jobs:
  build:
    runs-on: self-hosted
    steps:
      - id: step1
        uses: actions/setup-python@v1
      - id: step2
        uses: actions/setup-node@v2
      - uses: actions/checkout@v2
      - uses: user/composite@v1
      - name: workflow step 1
        run: echo hello world 3
      - name: workflow step 2
        run: echo hello world 4
```
Example `user/composite/action.yml`
```yaml
runs:
  using: "composite"
  steps:
    - run: pip install -r requirements.txt
      shell: bash
    - run: npm install
      shell: bash
```
Example Output
```
[npm installation output]
[pip requirements output]
echo hello world 3
echo hello world 4
```
We add a token called "composite" which allows our Runner code to process composite actions. By invoking "using: composite", our Runner code then processes the "steps" attribute, converts this template code to a list of steps, and finally runs each run step sequentially. If any step fails and there are no `if` conditions defined, the whole composite action job fails.
### Defaults
We will not support "defaults" in a composite action.
### Shell and Working-directory
For each run step in a composite action, the action author can set the `shell` and `working-directory` attributes for that step. The `shell` attribute is **required** for each run step because the action author does not know what operating system the workflow author is using, so we need to explicitly prevent unknown behavior by making sure each run step has an explicit shell **set by the action author.** On the other hand, `working-directory` is optional. Moreover, the composite action author can map values from `inputs` into its `shell` and `working-directory` attributes at the step level.
For example,
`action.yml`
```yaml
inputs:
  shell_1:
    description: 'Your name'
    default: 'pwsh'
steps:
  - run: echo 1
    shell: ${{ inputs.shell_1 }}
```
Note, the workflow file and action file are treated as separate entities. **So, the workflow `defaults` will never change the `shell` and `working-directory` value in the run steps in a composite action.** Note, `defaults` in a workflow only apply to run steps not "uses" steps (steps that use an action).
### Running Local Scripts
Example 'workflow.yml':
```yaml
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: user/composite@v1
```
Example `user/composite/action.yml`:
```yaml
runs:
  using: "composite"
  steps:
    - run: chmod +x ${{ github.action_path }}/test/script2.sh
      shell: bash
    - run: chmod +x $GITHUB_ACTION_PATH/script.sh
      shell: bash
    - run: ${{ github.action_path }}/test/script2.sh
      shell: bash
    - run: $GITHUB_ACTION_PATH/script.sh
      shell: bash
```
Where `user/composite` has the file structure:
```
.
+-- action.yml
+-- script.sh
+-- test
| +-- script2.sh
```
Users will be able to run scripts located in their action folder by prefixing the relative path and script name with `$GITHUB_ACTION_PATH` or `github.action_path`, which contains the path the composite action is downloaded to and where those files live. Note, you'll have to `chmod` each script before running it if you don't check your script files into your GitHub repo with the executable bit set (see the sketch below).
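A minimal sketch of both options, assuming the file layout above (paths are illustrative):
```bash
# Option 1: commit the scripts with the executable bit set so no chmod is needed at runtime
git update-index --chmod=+x script.sh test/script2.sh
git commit -m "Mark composite action scripts executable"

# Option 2: chmod at runtime from a composite run step before invoking the script
chmod +x "${GITHUB_ACTION_PATH}/script.sh"
"${GITHUB_ACTION_PATH}/script.sh"
```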
### Inputs
Example `workflow.yml`:
```yaml
steps:
  - id: foo
    uses: user/composite@v1
    with:
      your_name: "Octocat"
```
Example `user/composite/action.yml`:
```yaml
inputs:
  your_name:
    description: 'Your name'
    default: 'Ethan'
runs:
  using: "composite"
  steps:
    - run: echo hello ${{ inputs.your_name }}
      shell: bash
```
Example Output:
```
hello Octocat
```
Each input variable in the composite action is only viewable in its own scope.
### Outputs
Example `workflow.yml`:
```yaml
...
steps:
  - id: foo
    uses: user/composite@v1
  - run: echo random-number ${{ steps.foo.outputs.random-number }}
    shell: bash
```
Example `user/composite/action.yml`:
```yaml
outputs:
  random-number:
    description: "Random number"
    value: ${{ steps.random-number-generator.outputs.random-id }}
runs:
  using: "composite"
  steps:
    - id: random-number-generator
      run: echo "::set-output name=random-id::$(echo $RANDOM)"
      shell: bash
```
Example Output:
```
::set-output name=my-output::43243
random-number 43243
```
Each of the output variables from the composite action is viewable from the workflow file that uses the composite action. In other words, every child action's outputs are viewable only by its parent, using dot notation (ex: `steps.foo.outputs.random-number`).
Moreover, output ids are only accessible within the scope where they were defined. Note that in the example above, the `workflow.yml` file does not have access to the inner output id (i.e. `random-id`). The reason we are doing this is that we don't want to require the workflow author to know the internal workings of the composite action.
### Context
Similar to the workflow file, the composite action has access to the [same context objects](https://help.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#contexts) (ex: `github`, `env`, `strategy`).
### Environment
In a composite action, you'll only be able to use `::set-env::` to set environment variables, just like you could with other actions.
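For illustration, a composite run step could set an environment variable with the `set-env` workflow-command syntax in use at the time of this ADR (a sketch; the variable name is hypothetical):
```bash
# Makes BUILD_MODE available to subsequent steps in the job
echo "::set-env name=BUILD_MODE::release"
```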
### Secrets
**We will not support "Secrets" in a composite action for now. This functionality will be focused on in a future ADR.**
We'll pass the secrets from the composite action's parents (ex: the workflow file) to the composite action. Secrets can be created in the composite action with the secrets context. In the actions yaml, we'll automatically mask the secret.
### If Condition
**If and needs conditions will not be supported in the composite run steps feature. They will be supported later on in a new feature.**
Old reasoning:
Example `workflow.yml`:
```yaml
steps:
  - run: exit 1
  - uses: user/composite@v1 # <--- this will run, as it's marked as always running
    if: always()
```
Example `user/composite/action.yml`:
```yaml
runs:
  using: "composite"
  steps:
    - run: echo "just succeeding"
      shell: bash
    - run: echo "I will run, as my current scope is succeeding"
      shell: bash
      if: success()
    - run: exit 1
      shell: bash
    - run: echo "I will not run, as my current scope is now failing"
      shell: bash
```
**We will not support "if Condition" in a composite action for now. This functionality will be focused on in a future ADR.**
See the paragraph below for a rudimentary approach (thank you to @cybojenix for the idea, example, and explanation for this approach):
The `if` statement in the parent (in the example above, this is the `workflow.yml`) shows whether or not we should run the composite action. So, our composite action will run since the `if` condition for running the composite action is `always()`.
**Note that the if condition on the parent does not propagate to the rest of its children though.**
In the child action (in this example, the `action.yml`), execution starts with a clean slate (in other words, no imposed `if` conditions). Following the logic in the paragraph above, `echo "I will run, as my current scope is succeeding"` will run since its `if` condition checks that the previous steps **within this composite action** have not failed. `run: echo "I will not run, as my current scope is now failing"` will not run since the previous step resulted in an error, and by default the `if` expression is `success()` when no `if` condition is set for a step.
What if a step has `cancelled()`? We do the opposite of our approach above if `cancelled()` is used for any of our composite run steps. We will cancel any step that has this condition if the workflow is cancelled at all.
### Timeout-minutes
Example `workflow.yml`:
```yaml
steps:
  - id: bar
    uses: user/test@v1
    timeout-minutes: 50
```
Example `user/composite/action.yml`:
```yaml
runs:
  using: "composite"
  steps:
    - id: foo1
      run: echo test 1
      timeout-minutes: 10
      shell: bash
    - id: foo2
      run: echo test 2
      shell: bash
    - id: foo3
      run: echo test 3
      timeout-minutes: 10
      shell: bash
```
**We will not support "timeout-minutes" in a composite action for now. This functionality will be focused on in a future ADR.**
A composite action in its entirety is a job. You can set `timeout-minutes` for the whole composite action or for its steps, as long as the sum of `timeout-minutes` across the composite action steps that have the attribute is less than or equal to the `timeout-minutes` of the whole composite action. There is no default `timeout-minutes` for each composite action step.
If the time taken by the steps, in combination or individually, exceeds the whole composite action's `timeout-minutes` attribute, the whole job will fail (1). If an individual step exceeds its own `timeout-minutes` attribute but the total time used including this step is below the overall composite action `timeout-minutes`, the individual step will fail but the rest of the steps will run based on their own `timeout-minutes` attributes (they still abide by condition (1), though).
For reference, in the example above, if the composite step `foo1` takes 11 minutes to run, that step will fail but the rest of the steps, `foo2` and `foo3`, will proceed as long as their total runtime together with the failed `foo1` step is less than the composite action's `timeout-minutes` (50 minutes). If the composite step `foo2` takes 51 minutes to run, it will cause the whole composite action job to fail.
The rationale behind this is that users can configure their steps with the `if` condition to control how steps rely on each other. Given the additional capabilities offered by combining `timeout-minutes` and/or `if`, we wanted the `timeout-minutes` condition to be as simple as possible and not affect other steps.
[Usage limits still apply](https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions?query=if%28%29#usage-limits)
### Continue-on-error
Example `workflow.yml`:
```yaml
steps:
  - run: exit 1
  - id: bar
    uses: user/test@v1
    continue-on-error: false
  - id: foo
    run: echo "Hello World" # <------- This step will not run
```
Example `user/composite/action.yml`:
```yaml
runs:
  using: "composite"
  steps:
    - run: exit 1
      continue-on-error: true
      shell: bash
    - run: echo "Hello World 2" # <----- This step will run
      shell: bash
```
**We will not support "continue-on-error" in a composite action for now. This functionality will be focused on in a future ADR.**
If any of the steps fail in the composite action and `continue-on-error` is set to `false` for the whole composite action step in the workflow file, then the steps below it will not run. On the flip side, if `continue-on-error` is set to `true` for the whole composite action step in the workflow file, the next job step will still run.
For the composite action's own steps, the same logic applies. In this example, `"Hello World 2"` will be output because the previous step has `continue-on-error` set to `true`, even though that previous step errored.
### Visualizing Composite Action in the GitHub Actions UI
We want all the composite action's steps to be condensed into the original composite action node.
Here is a visual representation of the [first example](#Steps)
```
| composite_action_node |
| echo hello world 1 |
| echo hello world 2 |
| echo hello world 3 |
| echo hello world 4 |
```
## Consequences
This ADR lays the framework for eventually supporting nested Composite Actions within Composite Actions. This ADR allows users to run multiple run steps within a GitHub Composite Action with support for inputs, outputs, environment, and context for use in any step, as well as the if, timeout-minutes, and continue-on-error attributes for each Composite Action step.

19
docs/adrs/README.md Normal file
View File

@@ -0,0 +1,19 @@
# ADRs
ADR, short for "Architecture Decision Record", is a way of capturing important architectural decisions, along with their context and consequences.
This folder includes ADRs for the actions runner. ADRs are proposed in the form of a pull request, and they commonly follow this format:
* **Title**: short present tense imperative phrase, less than 50 characters, like a git commit message.
* **Status**: proposed, accepted, rejected, deprecated, superseded, etc.
* **Context**: what is the issue that we're seeing that is motivating this decision or change.
* **Decision**: what is the change that we're actually proposing or doing.
* **Consequences**: what becomes easier or more difficult to do because of this change.
---
- More information about ADRs can be found [here](https://github.com/joelparkerhenderson/architecture_decision_record).

57
docs/automate.md Normal file
View File

@@ -0,0 +1,57 @@
# Automate Configuring Self-Hosted Runners
## Export PAT
Before running any of these sample scripts, create a GitHub PAT and export it:
```bash
export RUNNER_CFG_PAT=yourPAT
```
## Create running as a service
**Scenario**: Run on a machine or VM (not container) which automates:
- Resolving latest released runner
- Download and extract latest
- Acquire a registration token
- Configure the runner
- Run as a systemd (linux) or Launchd (osx) service
:point_right: [Sample script here](../scripts/create-latest-svc.sh) :point_left:
Run as a one-liner. NOTE: replace with yourorg/yourrepo (repo level) or just yourorg (org level)
```bash
curl -s https://raw.githubusercontent.com/actions/runner/automate/scripts/create-latest-svc.sh | bash -s yourorg/yourrepo
```
## Uninstall running as service
**Scenario**: Run on a machine or VM (not container) which automates:
- Stops and uninstalls the systemd (linux) or Launchd (osx) service
- Acquires a removal token
- Removes the runner
:point_right: [Sample script here](../scripts/remove-svc.sh) :point_left:
Repo level one liner. NOTE: replace with yourorg/yourrepo (repo level) or just yourorg (org level)
```bash
curl -s https://raw.githubusercontent.com/actions/runner/automate/scripts/remove-svc.sh | bash -s yourorg/yourrepo
```
### Delete an offline runner
**Scenario**: Deletes a registered runner that is offline:
- Ensures the runner is offline
- Resolves id from name
- Deletes the runner
:point_right: [Sample script here](../scripts/delete.sh) :point_left:
Repo level one-liner. NOTE: replace with yourorg/yourrepo (repo level) or just yourorg (org level) and replace runnername
```bash
curl -s https://raw.githubusercontent.com/actions/runner/automate/scripts/delete.sh | bash -s yourorg/yourrepo runnername
```

View File

@@ -1,10 +1,31 @@
# Contribution guide for developers
# Contributions
## Required Dev Dependencies
We welcome contributions in the form of issues and pull requests. We view the contributions and the process as the same for GitHub and external contributors.
![Win](res/win_sm.png) Git for Windows [Install Here](https://git-scm.com/downloads) (needed for dev sh script)
> IMPORTANT: Building your own runner is critical for the dev inner loop process when contributing changes. However, only runners built and distributed by GitHub (releases) are supported in production. Be aware that workflows and orchestrations run service side with the runner being a remote process to run steps. For that reason, the service can pull the runner forward so customizations can be lost.
## To Build, Test, Layout
## Issues
Log issues for both bugs and enhancement requests. Logging issues is important for the open community.
Issues in this repository should be for the runner application. Note that the virtual machine images (including the developer toolsets) used in the Actions hosted machine pools are maintained [in this repository](https://github.com/actions/virtual-environments).
## Enhancements and Feature Requests
We ask that, before significant effort is put into code changes, we reach agreement on taking the change.
1. Create a feature request. Once agreed, we will take the enhancement.
2. Create an ADR to agree on the details of the change.
An ADR is an Architectural Decision Record. This allows consensus on the direction forward and also serves as a record of the change and motivation. [Read more here](adrs/README.md)
## Development Life Cycle
### Required Dev Dependencies
![Win](res/win_sm.png) ![*nix](res/linux_sm.png) Git for Windows and Linux [Install Here](https://git-scm.com/downloads) (needed for dev sh script)
### To Build, Test, Layout
Navigate to the `src` directory and run the following command:
@@ -14,27 +35,41 @@ Navigate to the `src` directory and run the following command:
**Commands:**
* `layout` (`l`): Run first time to create a full agent layout in `{root}/_layout`
* `build` (`b`): Build everything and update agent layout folder
* `test` (`t`): Build agent binaries and run unit tests
* `layout` (`l`): Run first time to create a full runner layout in `{root}/_layout`
* `build` (`b`): Build everything and update runner layout folder
* `test` (`t`): Build runner binaries and run unit tests
Sample developer flow:
```bash
git clone https://github.com/actions/runner
cd runner
cd ./src
./dev.(sh/cmd) layout # the agent that build from source is in {root}/_layout
./dev.(sh/cmd) layout # the runner that built from source is in {root}/_layout
<make code changes>
./dev.(sh/cmd) build # {root}/_layout will get updated
./dev.(sh/cmd) test # run all unit tests before git commit/push
```
## Editors
View logs:
```bash
cd runner/_layout/_diag
ls
cat (Runner/Worker)_TIMESTAMP.log # view your log file
```
Run Runner:
```bash
cd runner/_layout
./run.sh # run your custom runner
```
### Editors
[Using Visual Studio 2019](https://www.visualstudio.com/vs/)
[Using Visual Studio Code](https://code.visualstudio.com/)
[Using Visual Studio](https://code.visualstudio.com/docs)
## Styling
### Styling
We use the .NET Foundation and CoreCLR style guidelines [located here](
https://github.com/dotnet/corefx/blob/master/Documentation/coding-guidelines/coding-style.md)

61
docs/design/auth.md Normal file
View File

@@ -0,0 +1,61 @@
# Runner Authentication and Authorization
## Goals
- Support runner installs in untrusted domains.
- The account that configures or runs the runner process is not relevant for accessing GitHub resources.
- Accessing GitHub resources is done with a per-job token which expires when job completes.
- The token is granted to trusted parts of the system including the runner, actions and script steps specified by the workflow author as trusted.
- All OAuth tokens that come from the Token Service that the runner uses to access Actions Service resources are the same. It's just the scope and expiration of the token that may vary.
## Configuration
Configuring a self-hosted runner is [covered here in the documentation](https://help.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners).
Configuration is done with the user being authenticated via a time-limited, GitHub runner registration token.
*Your credentials are never used for registering the runner with the service.*
![Self-hosted runner config](../res/self-hosted-config.png)
During configuration, an RSA public/private key pair is created and the private key is stored in a file on disk. On Windows, the content is protected with DPAPI (machine-level encryption, so the runner is only valid on that machine) and on Linux/OSX with `chmod` permissions.
Using your credentials, the runner is registered with the service by sending the public key to the service, which adds that runner to the pool and stores the public key. The Token Service will generate a `clientId` associated with the public key.
## Start and Listen
After configuring the runner, the runner can be started interactively (`./run.cmd` or `./run.sh`) or as a service.
![Self-hosted runner start](../res/self-hosted-start.png)
On start, the runner listener process loads the RSA private key (on Windows decrypting with machine key DPAPI), and asks the Token Service for an OAuth token which is signed with the RSA private key.
The server then responds with an OAuth token that grants permission to access the message queue (HTTP long poll), allowing the runner to acquire the messages it will eventually run.
## Run a workflow
When a workflow is run, its labels are evaluated, it is matched to a runner, and a message is placed in the queue of messages for that runner.
The runner then starts listening for jobs via the message queue HTTP long poll.
The message is encrypted with the runner's public key, stored during runner configuration.
![Runner workflow run](../res/workflow-run.png)
A workflow is queued as a result of a triggered [event](https://help.github.com/en/actions/reference/events-that-trigger-workflows). Workflows can be scheduled to [run at specific UTC times](https://help.github.com/en/actions/reference/events-that-trigger-workflows#scheduled-events-schedule) using POSIX `cron` syntax.
An [OAuth token](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) is generated, granting limited access to the host in Actions Service associated with the github.com repository/organization.
The lifetime of the OAuth token is the lifetime of the run or at most the [job timeout (default: 6 hours)](https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idtimeout-minutes), plus 10 additional minutes.
## Accessing GitHub resources
The job message sent to the runner contains the OAuth token to talk back to the Actions Service.
The runner listener parent process will spawn a runner worker process for that job and send it the job message over IPC.
The token is never persisted.
Each action is run as a unique subprocess.
The encrypted access token will be provided as an environment variable in each action subprocess.
The token is registered with the runner as a secret and scrubbed from the logs as they are written.
Authentication to github.com in a workflow run can be accomplished by using the [`GITHUB_TOKEN`](https://help.github.com/en/actions/configuring-and-managing-workflows/authenticating-with-the-github_token#about-the-github_token-secret) secret. This token expires after 60 minutes. Please note that this token is different from the OAuth token that the runner uses to talk to the Actions Service.
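As a sketch, a run step could use that token to call the GitHub API, assuming the workflow maps `secrets.GITHUB_TOKEN` into a `GITHUB_TOKEN` environment variable:
```bash
# Query the current repository with the job-scoped GITHUB_TOKEN
curl -s \
  -H "Authorization: token ${GITHUB_TOKEN}" \
  -H "Accept: application/vnd.github.v3+json" \
  "https://api.github.com/repos/${GITHUB_REPOSITORY}"
```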
## Hosted runner authentication
Hosted runner authentication differs from self-hosted authentication in that hosted runners do not undergo a registration process; instead, they get the OAuth token directly by reading the `.credentials` file. The scope of this particular token is limited to a given workflow job execution, and the token is revoked as soon as the job finishes.
![Hosted runner config and start](../res/hosted-config-start.png)

Binary file not shown.

After

Width:  |  Height:  |  Size: 31 KiB

View File

@@ -0,0 +1,52 @@
# Markup used to generate the runner auth diagrams: https://websequencediagrams.com
title Runner Configuration (self-hosted only)
note left of Runner: GitHub repo URL as input
Runner->github.com: Retrieve Actions Service access using runner registration token
github.com->Runner: Access token for Actions Service
note left of Runner: Generate RSA key pair
note left of Runner: Store encrypted RSA private key on disk
Runner->Actions Service: Register runner using Actions Service access token
note right of Runner: Runner name, RSA public key sent
note right of Actions Service: Public key stored
Actions Service->Token Service: Register runner as an app along with the RSA public key
note right of Token Service: Public key stored
Token Service->Actions Service: Client Id for the runner application
Actions Service->Runner: Client Id and Token Endpoint URL
note left of Runner: Store runner configuration info into .runner file
note left of Runner: Store Token registration info into .credentials file
title Runner Start and Running (self-hosted only)
Runner.Listener->Runner.Listener: Start
note left of Runner.Listener: Load config info from .runner
note left of Runner.Listener: Load token registration from .credentials
Runner.Listener->Token Service: Exchange OAuth token (happens every 50 mins)
note right of Runner.Listener: Construct JWT token, use Client Id signed by RSA private key
note left of Actions Service: Find corresponding RSA public key, use Client Id\nVerify JWT token's signature
Token Service->Runner.Listener: OAuth token with limited permission and valid for 50 mins
Runner.Listener->Actions Service: Connect to Actions Service with OAuth token
Actions Service->Runner.Listener: Workflow job
title Running workflow
Runner.Listener->Service (Message Queue): Get message
note right of Runner.Listener: Authenticate with exchanged OAuth token
Event->Actions Service: Queue workflow
Actions Service->Actions Service: Generate OAuth token per job
Actions Service->Actions Service: Build job message with the OAuth token
Actions Service->Actions Service: Encrypt job message with the target runner's public key
Actions Service->Service (Message Queue): Send encrypted job message to runner
Service (Message Queue)->Runner.Listener: Send job
note right of Runner.Listener: Decrypt message with runner's private key
Runner.Listener->Runner.Worker: Create worker process per job and run the job
title Runner Configuration, Start and Running (hosted only)
Machine Management Service->Runner.Listener: Construct .runner configuration file, store token in .credentials
Runner.Listener->Runner.Listener: Start
note left of Runner.Listener: Load config info from .runner
note left of Runner.Listener: Load OAuth token from .credentials
Runner.Listener->Actions Service: Connect to Actions Service with OAuth token in .credentials
Actions Service->Runner.Listener: Workflow job

Binary file not shown.

After

Width:  |  Height:  |  Size: 98 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

BIN
docs/res/workflow-run.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 46 KiB

View File

@@ -28,7 +28,7 @@ Execute ./bin/installdependencies.sh to install any missing Dotnet Core 3.0 depe
```
You can easily correct the problem by executing `./bin/installdependencies.sh`.
The `installdependencies.sh` script should install all required dependencies on all supported Linux versions
> Note: The `installdependencies.sh` script will try to use the default package management mechanism on your Linux flavor (ex. `yum`/`apt-get`/`apt`). You might need to deal with error coming from the package management mechanism related to your setup, like [#1353](https://github.com/Microsoft/vsts-agent/issues/1353)
> Note: The `installdependencies.sh` script will try to use the default package management mechanism on your Linux flavor (ex. `yum`/`apt-get`/`apt`).
### Full dependencies list
@@ -40,7 +40,7 @@ Debian based OS (Debian, Ubuntu, Linux Mint)
- libssl1.1, libssl1.0.2 or libssl1.0.0
- libicu63, libicu60, libicu57 or libicu55
Fedora based OS (Fedora, Redhat, Centos, Oracle Linux 7)
Fedora based OS (Fedora, Red Hat Enterprise Linux, CentOS, Oracle Linux 7)
- lttng-ust
- openssl-libs

View File

@@ -9,4 +9,4 @@
- Windows Server 2016 64-bit
- Windows Server 2019 64-bit
## [More .Net Core Prerequisites Information](https://docs.microsoft.com/en-us/dotnet/core/windows-prerequisites?tabs=netcore30)
## [More .NET Core Prerequisites Information](https://docs.microsoft.com/en-us/dotnet/core/windows-prerequisites?tabs=netcore30)

View File

@@ -1,7 +0,0 @@
FROM mcr.microsoft.com/dotnet/core/runtime-deps:2.1
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
curl \
git \
&& rm -rf /var/lib/apt/lists/*

View File

@@ -1,150 +0,0 @@
FROM centos:6
# Install dependencies
RUN yum install -y \
centos-release-SCL \
epel-release \
wget \
unzip \
&& \
rpm --import http://linuxsoft.cern.ch/cern/slc6X/x86_64/RPM-GPG-KEY-cern && \
wget -O /etc/yum.repos.d/slc6-devtoolset.repo http://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo && \
yum install -y \
"perl(Time::HiRes)" \
autoconf \
cmake \
cmake3 \
devtoolset-2-toolchain \
doxygen \
expat-devel \
gcc \
gcc-c++ \
gdb \
gettext-devel \
krb5-devel \
libedit-devel \
libidn-devel \
libmetalink-devel \
libnghttp2-devel \
libssh2-devel \
libunwind-devel \
libuuid-devel \
lttng-ust-devel \
lzma \
ncurses-devel \
openssl-devel \
perl-devel \
python-argparse \
python27 \
readline-devel \
swig \
xz \
zlib-devel \
&& \
yum clean all
# Build and install clang and lldb 3.9.1
RUN wget ftp://sourceware.org/pub/binutils/snapshots/binutils-2.29.1.tar.xz && \
wget http://releases.llvm.org/3.9.1/cfe-3.9.1.src.tar.xz && \
wget http://releases.llvm.org/3.9.1/llvm-3.9.1.src.tar.xz && \
wget http://releases.llvm.org/3.9.1/lldb-3.9.1.src.tar.xz && \
wget http://releases.llvm.org/3.9.1/compiler-rt-3.9.1.src.tar.xz && \
\
tar -xf binutils-2.29.1.tar.xz && \
tar -xf llvm-3.9.1.src.tar.xz && \
mkdir llvm-3.9.1.src/tools/clang && \
mkdir llvm-3.9.1.src/tools/lldb && \
mkdir llvm-3.9.1.src/projects/compiler-rt && \
tar -xf cfe-3.9.1.src.tar.xz --strip 1 -C llvm-3.9.1.src/tools/clang && \
tar -xf lldb-3.9.1.src.tar.xz --strip 1 -C llvm-3.9.1.src/tools/lldb && \
tar -xf compiler-rt-3.9.1.src.tar.xz --strip 1 -C llvm-3.9.1.src/projects/compiler-rt && \
rm binutils-2.29.1.tar.xz && \
rm cfe-3.9.1.src.tar.xz && \
rm lldb-3.9.1.src.tar.xz && \
rm llvm-3.9.1.src.tar.xz && \
rm compiler-rt-3.9.1.src.tar.xz && \
\
mkdir llvmbuild && \
cd llvmbuild && \
scl enable python27 devtoolset-2 \
' \
cmake3 \
-DCMAKE_CXX_COMPILER=/opt/rh/devtoolset-2/root/usr/bin/g++ \
-DCMAKE_C_COMPILER=/opt/rh/devtoolset-2/root/usr/bin/gcc \
-DCMAKE_LINKER=/opt/rh/devtoolset-2/root/usr/bin/ld \
-DCMAKE_BUILD_TYPE=Release \
-DLLVM_LIBDIR_SUFFIX=64 \
-DLLVM_ENABLE_EH=1 \
-DLLVM_ENABLE_RTTI=1 \
-DLLVM_BINUTILS_INCDIR=../binutils-2.29.1/include \
../llvm-3.9.1.src \
&& \
make -j $(($(getconf _NPROCESSORS_ONLN)+1)) && \
make install \
' && \
cd .. && \
rm -r llvmbuild && \
rm -r llvm-3.9.1.src && \
rm -r binutils-2.29.1
# Build and install curl 7.45.0
RUN wget https://curl.haxx.se/download/curl-7.45.0.tar.lzma && \
tar -xf curl-7.45.0.tar.lzma && \
rm curl-7.45.0.tar.lzma && \
cd curl-7.45.0 && \
scl enable python27 devtoolset-2 \
' \
./configure \
--disable-dict \
--disable-ftp \
--disable-gopher \
--disable-imap \
--disable-ldap \
--disable-ldaps \
--disable-libcurl-option \
--disable-manual \
--disable-pop3 \
--disable-rtsp \
--disable-smb \
--disable-smtp \
--disable-telnet \
--disable-tftp \
--enable-ipv6 \
--enable-optimize \
--enable-symbol-hiding \
--with-ca-bundle=/etc/pki/tls/certs/ca-bundle.crt \
--with-nghttp2 \
--with-gssapi \
--with-ssl \
--without-librtmp \
&& \
make install \
' && \
cd .. && \
rm -r curl-7.45.0
# Install ICU 57.1
RUN wget http://download.icu-project.org/files/icu4c/57.1/icu4c-57_1-RHEL6-x64.tgz && \
tar -xf icu4c-57_1-RHEL6-x64.tgz -C / && \
rm icu4c-57_1-RHEL6-x64.tgz
# Compile and install a version of the git that supports the features that cli repo build needs
# NOTE: The git needs to be built after the curl so that it can use the libcurl to add https
# protocol support.
RUN \
wget https://www.kernel.org/pub/software/scm/git/git-2.9.5.tar.gz && \
tar -xf git-2.9.5.tar.gz && \
rm git-2.9.5.tar.gz && \
cd git-2.9.5 && \
make configure && \
./configure --prefix=/usr/local --without-tcltk && \
make -j $(nproc --all) all && \
make install && \
cd .. && \
rm -r git-2.9.5
ENV LD_LIBRARY_PATH=/usr/local/lib

View File

@@ -1,33 +0,0 @@
parameters:
targetRuntime: ''
steps:
# Build agent layout
- script: ./dev.sh layout Release ${{ parameters.targetRuntime }}
workingDirectory: src
displayName: Build & Layout Release ${{ parameters.targetRuntime }}
# Run test
- script: ./dev.sh test
workingDirectory: src
displayName: Test
condition: and(ne('${{ parameters.targetRuntime }}', 'linux-arm64'), ne('${{ parameters.targetRuntime }}', 'linux-arm'))
# # Publish test results
# - task: PublishTestResults@2
# displayName: Publish Test Results **/*.trx
# condition: always()
# inputs:
# testRunner: VSTest
# testResultsFiles: '**/*.trx'
# testRunTitle: 'Agent Tests'
# # Upload test log
# - task: PublishBuildArtifacts@1
# displayName: Publish Test logs
# condition: always()
# inputs:
# pathToPublish: src/Test/TestLogs
# artifactName: $(System.JobId)
# artifactType: container

View File

@@ -1,55 +1,69 @@
## Features
- Add packages for Windows x86 (win-x86), Linux ARM32 (linux-arm), Linux ARM64 (linux-arm64)
- N/A
## Bugs
- N/A
## Misc
- N/A
## Agent Downloads
| | Package |
| ------- | ----------------------------------------------------------------------------------------------------------- |
| Windows x64 | [actions-runner-win-x64-<RUNNER_VERSION>.zip](https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-win-x64-<RUNNER_VERSION>.zip) |
| macOS | [actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz](https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz) |
| Linux x64 | [actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz](https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz) |
| Linux arm64 | [actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz](https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz) |
| Linux arm | [actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz](https://githubassets.azureedge.net/runners/<RUNNER_VERSION>/actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz) |
After Download:
- Add deprecation date to add-path and set-env runner commands (#796)
## Windows x64
We recommend configuring the runner in a root folder of the Windows drive (e.g. "C:\actions-runner"). This will help avoid issues related to service identity folder permissions and long file path restrictions on Windows.
``` bash
C:\> mkdir myagent && cd myagent
C:\myagent> Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\actions-runner-win-x64-<RUNNER_VERSION>.zip", "$PWD")
The following snippet needs to be run in `powershell`:
``` powershell
# Create a folder under the drive root
mkdir \actions-runner ; cd \actions-runner
# Download the latest runner package
Invoke-WebRequest -Uri https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>/actions-runner-win-x64-<RUNNER_VERSION>.zip -OutFile actions-runner-win-x64-<RUNNER_VERSION>.zip
# Extract the installer
Add-Type -AssemblyName System.IO.Compression.FileSystem ;
[System.IO.Compression.ZipFile]::ExtractToDirectory("$PWD\actions-runner-win-x64-<RUNNER_VERSION>.zip", "$PWD")
```
## OSX
``` bash
~/$ mkdir myagent && cd myagent
~/myagent$ tar xzf ~/Downloads/actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz
# Create a folder
mkdir actions-runner && cd actions-runner
# Download the latest runner package
curl -O -L https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>/actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz
# Extract the installer
tar xzf ./actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz
```
## Linux x64
``` bash
~/$ mkdir myagent && cd myagent
~/myagent$ tar xzf ~/Downloads/actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz
# Create a folder
mkdir actions-runner && cd actions-runner
# Download the latest runner package
curl -O -L https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>/actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz
# Extract the installer
tar xzf ./actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz
```
## Linux arm64
## Linux arm64 (Pre-release)
``` bash
~/$ mkdir myagent && cd myagent
~/myagent$ tar xzf ~/Downloads/actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz
# Create a folder
mkdir actions-runner && cd actions-runner
# Download the latest runner package
curl -O -L https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>/actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz
# Extract the installer
tar xzf ./actions-runner-linux-arm64-<RUNNER_VERSION>.tar.gz
```
## Linux arm
## Linux arm (Pre-release)
``` bash
~/$ mkdir myagent && cd myagent
~/myagent$ tar xzf ~/Downloads/actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz
```
# Create a folder
mkdir actions-runner && cd actions-runner
# Download the latest runner package
curl -O -L https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>/actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz
# Extract the installer
tar xzf ./actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz
```
## Using your self hosted runner
For additional details about configuring, running, or shutting down the runner please check out our [product docs.](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/adding-self-hosted-runners)

1
releaseVersion Normal file
View File

@@ -0,0 +1 @@
<Update to ./src/runnerversion when creating release>

4
scripts/README.md Normal file
View File

@@ -0,0 +1,4 @@
# Sample scripts for self-hosted runners
Here are some examples to work from if you'd like to automate your use of self-hosted runners.
See the docs [here](../docs/automate.md).

147
scripts/create-latest-svc.sh Executable file
View File

@@ -0,0 +1,147 @@
#!/bin/bash
set -e
#
# Downloads latest releases (not pre-release) runner
# Configures as a service
#
# Examples:
# RUNNER_CFG_PAT=<yourPAT> ./create-latest-svc.sh myuser/myrepo my.ghe.deployment.net
# RUNNER_CFG_PAT=<yourPAT> ./create-latest-svc.sh myorg my.ghe.deployment.net
#
# Usage:
# export RUNNER_CFG_PAT=<yourPAT>
# ./create-latest-svc scope [ghe_domain] [name] [user]
#
# scope required repo (:owner/:repo) or org (:organization)
# ghe_domain optional the fully qualified domain name of your GitHub Enterprise Server deployment
# name optional defaults to hostname
# user optional user svc will run as. defaults to current
#
# Notes:
# PATS over envvars are more secure
# Should be used on VMs and not containers
# Works on OSX and Linux
# Assumes x64 arch
#
runner_scope=${1}
ghe_hostname=${2}
runner_name=${3:-$(hostname)}
svc_user=${4:-$USER}
echo "Configuring runner @ ${runner_scope}"
sudo echo
#---------------------------------------
# Validate Environment
#---------------------------------------
runner_plat=linux
[ ! -z "$(which sw_vers)" ] && runner_plat=osx;
function fatal()
{
echo "error: $1" >&2
exit 1
}
if [ -z "${runner_scope}" ]; then fatal "supply scope as argument 1"; fi
if [ -z "${RUNNER_CFG_PAT}" ]; then fatal "RUNNER_CFG_PAT must be set before calling"; fi
which curl || fatal "curl required. Please install in PATH with apt-get, brew, etc"
which jq || fatal "jq required. Please install in PATH with apt-get, brew, etc"
# bail early if there's already a runner there. also sudo early
if [ -d ./runner ]; then
fatal "Runner already exists. Use a different directory or delete ./runner"
fi
sudo -u ${svc_user} mkdir runner
# TODO: validate not in a container
# TODO: validate systemd or osx svc installer
#--------------------------------------
# Get a config token
#--------------------------------------
echo
echo "Generating a registration token..."
base_api_url="https://api.github.com"
if [ -n "${ghe_hostname}" ]; then
base_api_url="https://${ghe_hostname}/api/v3"
fi
# if the scope has a slash, it's a repo runner
orgs_or_repos="orgs"
if [[ "$runner_scope" == *\/* ]]; then
orgs_or_repos="repos"
fi
export RUNNER_TOKEN=$(curl -s -X POST ${base_api_url}/${orgs_or_repos}/${runner_scope}/actions/runners/registration-token -H "accept: application/vnd.github.everest-preview+json" -H "authorization: token ${RUNNER_CFG_PAT}" | jq -r '.token')
if [ "null" == "$RUNNER_TOKEN" -o -z "$RUNNER_TOKEN" ]; then fatal "Failed to get a token"; fi
#---------------------------------------
# Download latest released and extract
#---------------------------------------
echo
echo "Downloading latest runner ..."
# For the GHES Alpha, download the runner from github.com
latest_version_label=$(curl -s -X GET 'https://api.github.com/repos/actions/runner/releases/latest' | jq -r '.tag_name')
latest_version=$(echo ${latest_version_label:1})
runner_file="actions-runner-${runner_plat}-x64-${latest_version}.tar.gz"
if [ -f "${runner_file}" ]; then
echo "${runner_file} exists. skipping download."
else
runner_url="https://github.com/actions/runner/releases/download/${latest_version_label}/${runner_file}"
echo "Downloading ${latest_version_label} for ${runner_plat} ..."
echo $runner_url
curl -O -L ${runner_url}
fi
ls -la *.tar.gz
#---------------------------------------------------
# extract to runner directory in this directory
#---------------------------------------------------
echo
echo "Extracting ${runner_file} to ./runner"
tar xzf "./${runner_file}" -C runner
# export of pass
sudo chown -R $svc_user ./runner
pushd ./runner
#---------------------------------------
# Unattend config
#---------------------------------------
runner_url="https://github.com/${runner_scope}"
if [ -n "${ghe_hostname}" ]; then
runner_url="https://${ghe_hostname}/${runner_scope}"
fi
echo
echo "Configuring ${runner_name} @ $runner_url"
echo "./config.sh --unattended --url $runner_url --token *** --name $runner_name"
sudo -E -u ${svc_user} ./config.sh --unattended --url $runner_url --token $RUNNER_TOKEN --name $runner_name
#---------------------------------------
# Configuring as a service
#---------------------------------------
echo
echo "Configuring as a service ..."
prefix=""
if [ "${runner_plat}" == "linux" ]; then
prefix="sudo "
fi
${prefix}./svc.sh install ${svc_user}
${prefix}./svc.sh start

83
scripts/delete.sh Executable file
View File

@@ -0,0 +1,83 @@
#!/bin/bash
set -e
#
# Force deletes a runner from the service
# The caller should have already ensured the runner is gone and/or stopped
#
# Examples:
# RUNNER_CFG_PAT=<yourPAT> ./delete.sh myuser/myrepo myname
# RUNNER_CFG_PAT=<yourPAT> ./delete.sh myorg
#
# Usage:
# export RUNNER_CFG_PAT=<yourPAT>
# ./delete.sh scope name
#
# scope required repo (:owner/:repo) or org (:organization)
# name optional defaults to hostname. name to delete
#
# Notes:
# PATS over envvars are more secure
# Works on OSX and Linux
# Assumes x64 arch
#
runner_scope=${1}
runner_name=${2}
echo "Deleting runner ${runner_name} @ ${runner_scope}"
function fatal()
{
echo "error: $1" >&2
exit 1
}
if [ -z "${runner_scope}" ]; then fatal "supply scope as argument 1"; fi
if [ -z "${runner_name}" ]; then fatal "supply name as argument 2"; fi
if [ -z "${RUNNER_CFG_PAT}" ]; then fatal "RUNNER_CFG_PAT must be set before calling"; fi
which curl || fatal "curl required. Please install in PATH with apt-get, brew, etc"
which jq || fatal "jq required. Please install in PATH with apt-get, brew, etc"
base_api_url="https://api.github.com/orgs"
if [[ "$runner_scope" == *\/* ]]; then
base_api_url="https://api.github.com/repos"
fi
#--------------------------------------
# Ensure offline
#--------------------------------------
runner_status=$(curl -s -X GET ${base_api_url}/${runner_scope}/actions/runners?per_page=100 -H "accept: application/vnd.github.everest-preview+json" -H "authorization: token ${RUNNER_CFG_PAT}" \
| jq -M -j ".runners | .[] | [select(.name == \"${runner_name}\")] | .[0].status")
if [ -z "${runner_status}" ]; then
fatal "Could not find runner with name ${runner_name}"
fi
echo "Status: ${runner_status}"
if [ "${runner_status}" != "offline" ]; then
fatal "Runner should be offline before removing"
fi
#--------------------------------------
# Get id of runner to remove
#--------------------------------------
runner_id=$(curl -s -X GET ${base_api_url}/${runner_scope}/actions/runners?per_page=100 -H "accept: application/vnd.github.everest-preview+json" -H "authorization: token ${RUNNER_CFG_PAT}" \
| jq -M -j ".runners | .[] | [select(.name == \"${runner_name}\")] | .[0].id")
if [ -z "${runner_id}" ]; then
fatal "Could not find runner with name ${runner_name}"
fi
echo "Removing id ${runner_id}"
#--------------------------------------
# Remove the runner
#--------------------------------------
curl -s -X DELETE ${base_api_url}/${runner_scope}/actions/runners/${runner_id} -H "authorization: token ${RUNNER_CFG_PAT}"
echo "Done."

76
scripts/remove-svc.sh Executable file
View File

@@ -0,0 +1,76 @@
#!/bin/bash
set -e
#
# Removes a runner running as a service
# Must be run on the machine where the service is run
#
# Examples:
# RUNNER_CFG_PAT=<yourPAT> ./remove-svc.sh myuser/myrepo
# RUNNER_CFG_PAT=<yourPAT> ./remove-svc.sh myorg
#
# Usage:
# export RUNNER_CFG_PAT=<yourPAT>
# ./remove-svc scope name
#
# scope required repo (:owner/:repo) or org (:organization)
# name optional defaults to hostname. name to uninstall and remove
#
# Notes:
# PATS over envvars are more secure
# Should be used on VMs and not containers
# Works on OSX and Linux
# Assumes x64 arch
#
runner_scope=${1}
runner_name=${2:-$(hostname)}
echo "Uninstalling runner ${runner_name} @ ${runner_scope}"
sudo echo
function fatal()
{
echo "error: $1" >&2
exit 1
}
if [ -z "${runner_scope}" ]; then fatal "supply scope as argument 1"; fi
if [ -z "${RUNNER_CFG_PAT}" ]; then fatal "RUNNER_CFG_PAT must be set before calling"; fi
which curl || fatal "curl required. Please install in PATH with apt-get, brew, etc"
which jq || fatal "jq required. Please install in PATH with apt-get, brew, etc"
runner_plat=linux
[ ! -z "$(which sw_vers)" ] && runner_plat=osx;
#--------------------------------------
# Get a remove token
#--------------------------------------
echo
echo "Generating a removal token..."
# if the scope has a slash, it's a repo runner
base_api_url="https://api.github.com/orgs"
if [[ "$runner_scope" == *\/* ]]; then
base_api_url="https://api.github.com/repos"
fi
export REMOVE_TOKEN=$(curl -s -X POST ${base_api_url}/${runner_scope}/actions/runners/remove-token -H "accept: application/vnd.github.everest-preview+json" -H "authorization: token ${RUNNER_CFG_PAT}" | jq -r '.token')
if [ -z "$REMOVE_TOKEN" ]; then fatal "Failed to get a token"; fi
#---------------------------------------
# Stop and uninstall the service
#---------------------------------------
echo
echo "Uninstall the service ..."
pushd ./runner
prefix=""
if [ "${runner_plat}" == "linux" ]; then
prefix="sudo "
fi
${prefix}./svc.sh stop
${prefix}./svc.sh uninstall
${prefix}./config.sh remove --token $REMOVE_TOKEN

10
src/.editorconfig Normal file
View File

@@ -0,0 +1,10 @@
[*.cs]
charset = utf-8
insert_final_newline = true
csharp_new_line_before_else = true
csharp_new_line_before_catch = true
csharp_new_line_before_finally = true
csharp_new_line_before_open_brace = all
csharp_space_after_keywords_in_control_flow_statements = true

View File

@@ -1,25 +1,30 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.29411.138
MinimumVisualStudioVersion = 10.0.40219.1
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Common", "src\Runner.Common\Runner.Common.csproj", "{084289A3-CD7A-42E0-9219-4348B4B7E19B}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Common", "Runner.Common\Runner.Common.csproj", "{084289A3-CD7A-42E0-9219-4348B4B7E19B}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Listener", "src\Runner.Listener\Runner.Listener.csproj", "{7D461AEE-BF2A-4855-BD96-56921160B36A}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Listener", "Runner.Listener\Runner.Listener.csproj", "{7D461AEE-BF2A-4855-BD96-56921160B36A}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.PluginHost", "src\Runner.PluginHost\Runner.PluginHost.csproj", "{D0320EB1-CB6D-4179-BFDC-2F2B664A370C}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.PluginHost", "Runner.PluginHost\Runner.PluginHost.csproj", "{D0320EB1-CB6D-4179-BFDC-2F2B664A370C}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Plugins", "src\Runner.Plugins\Runner.Plugins.csproj", "{C23AFD6F-4DCD-4243-BC61-865BE31B9168}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Plugins", "Runner.Plugins\Runner.Plugins.csproj", "{C23AFD6F-4DCD-4243-BC61-865BE31B9168}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Sdk", "src\Runner.Sdk\Runner.Sdk.csproj", "{D0484633-DA97-4C34-8E47-1DADE212A57A}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Sdk", "Runner.Sdk\Runner.Sdk.csproj", "{D0484633-DA97-4C34-8E47-1DADE212A57A}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RunnerService", "src\Runner.Service\Windows\RunnerService.csproj", "{D12EBD71-0464-46D0-8394-40BCFBA0A6F2}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RunnerService", "Runner.Service\Windows\RunnerService.csproj", "{D12EBD71-0464-46D0-8394-40BCFBA0A6F2}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Worker", "src\Runner.Worker\Runner.Worker.csproj", "{C2F5B9FA-2621-411F-8EB2-273ED276F503}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Runner.Worker", "Runner.Worker\Runner.Worker.csproj", "{C2F5B9FA-2621-411F-8EB2-273ED276F503}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Sdk", "src\Sdk\Sdk.csproj", "{D2EE812B-E4DF-49BB-AE87-12BC49949B5F}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Sdk", "Sdk\Sdk.csproj", "{D2EE812B-E4DF-49BB-AE87-12BC49949B5F}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Test", "src\Test\Test.csproj", "{C932061F-F6A1-4F1E-B854-A6C6B30DC3EF}"
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Test", "Test\Test.csproj", "{C932061F-F6A1-4F1E-B854-A6C6B30DC3EF}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Solution Items", "Solution Items", "{EFB254FC-7927-445E-BA64-6676ADB309E9}"
ProjectSection(SolutionItems) = preProject
.editorconfig = .editorconfig
EndProjectSection
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution

View File

@@ -16,9 +16,6 @@
<PropertyGroup Condition="'$(BUILD_OS)' == 'Linux'">
<DefineConstants>$(DefineConstants);OS_LINUX</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition="'$(BUILD_OS)' == 'Linux' AND '$(PackageRuntime)' == 'rhel.6-x64'">
<DefineConstants>$(DefineConstants);OS_RHEL6</DefineConstants>
</PropertyGroup>
<!-- Set Platform/bitness vars -->
<PropertyGroup Condition="'$(BUILD_OS)' == 'Windows' AND ('$(PackageRuntime)' == 'win-x64' OR '$(PackageRuntime)' == '')">
@@ -49,4 +46,9 @@
<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
<DefineConstants>$(DefineConstants);DEBUG</DefineConstants>
</PropertyGroup>
<!-- Treat warnings as errors -->
<PropertyGroup>
<TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
</Project>

View File

@@ -69,6 +69,8 @@
.PARAMETER ProxyUseDefaultCredentials
Default: false
Use default credentials, when using proxy address.
.PARAMETER ProxyBypassList
If set with ProxyAddress, will provide the list of comma separated urls that will bypass the proxy
.PARAMETER SkipNonVersionedFiles
Default: false
Skips installing non-versioned files if they already exist, such as dotnet.exe.
@@ -96,6 +98,7 @@ param(
[string]$FeedCredential,
[string]$ProxyAddress,
[switch]$ProxyUseDefaultCredentials,
[string[]]$ProxyBypassList=@(),
[switch]$SkipNonVersionedFiles,
[switch]$NoCdn
)
@@ -119,11 +122,27 @@ $VersionRegEx="/\d+\.\d+[^/]+/"
$OverrideNonVersionedFiles = !$SkipNonVersionedFiles
function Say($str) {
Write-Host "dotnet-install: $str"
try
{
Write-Host "dotnet-install: $str"
}
catch
{
# Some platforms cannot utilize Write-Host (Azure Functions, for instance). Fall back to Write-Output
Write-Output "dotnet-install: $str"
}
}
function Say-Verbose($str) {
Write-Verbose "dotnet-install: $str"
try
{
Write-Verbose "dotnet-install: $str"
}
catch
{
# Some platforms cannot utilize Write-Verbose (Azure Functions, for instance). Fall back to Write-Output
Write-Output "dotnet-install: $str"
}
}
function Say-Invocation($Invocation) {
@@ -154,7 +173,16 @@ function Invoke-With-Retry([ScriptBlock]$ScriptBlock, [int]$MaxAttempts = 3, [in
function Get-Machine-Architecture() {
Say-Invocation $MyInvocation
# possible values: amd64, x64, x86, arm64, arm
# On PS x86, PROCESSOR_ARCHITECTURE reports x86 even on x64 systems.
# To get the correct architecture, we need to use PROCESSOR_ARCHITEW6432.
# PS x64 doesn't define this, so we fall back to PROCESSOR_ARCHITECTURE.
# Possible values: amd64, x64, x86, arm64, arm
if( $ENV:PROCESSOR_ARCHITEW6432 -ne $null )
{
return $ENV:PROCESSOR_ARCHITEW6432
}
return $ENV:PROCESSOR_ARCHITECTURE
}
@@ -167,7 +195,7 @@ function Get-CLIArchitecture-From-Architecture([string]$Architecture) {
{ $_ -eq "x86" } { return "x86" }
{ $_ -eq "arm" } { return "arm" }
{ $_ -eq "arm64" } { return "arm64" }
default { throw "Architecture not supported. If you think this is a bug, report it at https://github.com/dotnet/cli/issues" }
default { throw "Architecture '$Architecture' not supported. If you think this is a bug, report it at https://github.com/dotnet/install-scripts/issues" }
}
}
@@ -228,7 +256,11 @@ function GetHTTPResponse([Uri] $Uri)
if($ProxyAddress) {
$HttpClientHandler = New-Object System.Net.Http.HttpClientHandler
$HttpClientHandler.Proxy = New-Object System.Net.WebProxy -Property @{Address=$ProxyAddress;UseDefaultCredentials=$ProxyUseDefaultCredentials}
$HttpClientHandler.Proxy = New-Object System.Net.WebProxy -Property @{
Address=$ProxyAddress;
UseDefaultCredentials=$ProxyUseDefaultCredentials;
BypassList = $ProxyBypassList;
}
$HttpClient = New-Object System.Net.Http.HttpClient -ArgumentList $HttpClientHandler
}
else {
@@ -309,14 +341,12 @@ function Parse-Jsonfile-For-Version([string]$JSonFile) {
If (-Not (Test-Path $JSonFile)) {
throw "Unable to find '$JSonFile'"
exit 0
}
try {
$JSonContent = Get-Content($JSonFile) -Raw | ConvertFrom-Json | Select-Object -expand "sdk" -ErrorAction SilentlyContinue
}
catch {
throw "Json file unreadable: '$JSonFile'"
exit 0
}
if ($JSonContent) {
try {
@@ -330,16 +360,13 @@ function Parse-Jsonfile-For-Version([string]$JSonFile) {
}
catch {
throw "Unable to parse the SDK node in '$JSonFile'"
exit 0
}
}
else {
throw "Unable to find the SDK node in '$JSonFile'"
exit 0
}
If ($Version -eq $null) {
throw "Unable to find the SDK:version node in '$JSonFile'"
exit 0
}
return $Version
}
@@ -368,17 +395,20 @@ function Get-Specific-Version-From-Version([string]$AzureFeed, [string]$Channel,
function Get-Download-Link([string]$AzureFeed, [string]$SpecificVersion, [string]$CLIArchitecture) {
Say-Invocation $MyInvocation
# If anything fails in this lookup it will default to $SpecificVersion
$SpecificProductVersion = Get-Product-Version -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion
if ($Runtime -eq "dotnet") {
$PayloadURL = "$AzureFeed/Runtime/$SpecificVersion/dotnet-runtime-$SpecificVersion-win-$CLIArchitecture.zip"
$PayloadURL = "$AzureFeed/Runtime/$SpecificVersion/dotnet-runtime-$SpecificProductVersion-win-$CLIArchitecture.zip"
}
elseif ($Runtime -eq "aspnetcore") {
$PayloadURL = "$AzureFeed/aspnetcore/Runtime/$SpecificVersion/aspnetcore-runtime-$SpecificVersion-win-$CLIArchitecture.zip"
$PayloadURL = "$AzureFeed/aspnetcore/Runtime/$SpecificVersion/aspnetcore-runtime-$SpecificProductVersion-win-$CLIArchitecture.zip"
}
elseif ($Runtime -eq "windowsdesktop") {
$PayloadURL = "$AzureFeed/Runtime/$SpecificVersion/windowsdesktop-runtime-$SpecificVersion-win-$CLIArchitecture.zip"
$PayloadURL = "$AzureFeed/Runtime/$SpecificVersion/windowsdesktop-runtime-$SpecificProductVersion-win-$CLIArchitecture.zip"
}
elseif (-not $Runtime) {
$PayloadURL = "$AzureFeed/Sdk/$SpecificVersion/dotnet-sdk-$SpecificVersion-win-$CLIArchitecture.zip"
$PayloadURL = "$AzureFeed/Sdk/$SpecificVersion/dotnet-sdk-$SpecificProductVersion-win-$CLIArchitecture.zip"
}
else {
throw "Invalid value for `$Runtime"
@@ -386,7 +416,7 @@ function Get-Download-Link([string]$AzureFeed, [string]$SpecificVersion, [string
Say-Verbose "Constructed primary named payload URL: $PayloadURL"
return $PayloadURL
return $PayloadURL, $SpecificProductVersion
}
function Get-LegacyDownload-Link([string]$AzureFeed, [string]$SpecificVersion, [string]$CLIArchitecture) {
@@ -407,6 +437,51 @@ function Get-LegacyDownload-Link([string]$AzureFeed, [string]$SpecificVersion, [
return $PayloadURL
}
function Get-Product-Version([string]$AzureFeed, [string]$SpecificVersion) {
Say-Invocation $MyInvocation
if ($Runtime -eq "dotnet") {
$ProductVersionTxtURL = "$AzureFeed/Runtime/$SpecificVersion/productVersion.txt"
}
elseif ($Runtime -eq "aspnetcore") {
$ProductVersionTxtURL = "$AzureFeed/aspnetcore/Runtime/$SpecificVersion/productVersion.txt"
}
elseif ($Runtime -eq "windowsdesktop") {
$ProductVersionTxtURL = "$AzureFeed/Runtime/$SpecificVersion/productVersion.txt"
}
elseif (-not $Runtime) {
$ProductVersionTxtURL = "$AzureFeed/Sdk/$SpecificVersion/productVersion.txt"
}
else {
throw "Invalid value '$Runtime' specified for `$Runtime"
}
Say-Verbose "Checking for existence of $ProductVersionTxtURL"
try {
$productVersionResponse = GetHTTPResponse($productVersionTxtUrl)
if ($productVersionResponse.StatusCode -eq 200) {
$productVersion = $productVersionResponse.Content.ReadAsStringAsync().Result.Trim()
if ($productVersion -ne $SpecificVersion)
{
Say "Using alternate version $productVersion found in $ProductVersionTxtURL"
}
return $productVersion
}
else {
Say-Verbose "Got StatusCode $($productVersionResponse.StatusCode) trying to get productVersion.txt at $productVersionTxtUrl, so using default value of $SpecificVersion"
$productVersion = $SpecificVersion
}
} catch {
Say-Verbose "Could not read productVersion.txt at $productVersionTxtUrl, so using default value of $SpecificVersion (Exception: '$($_.Exception.Message)' )"
$productVersion = $SpecificVersion
}
return $productVersion
}
function Get-User-Share-Path() {
Say-Invocation $MyInvocation
@@ -430,7 +505,7 @@ function Is-Dotnet-Package-Installed([string]$InstallRoot, [string]$RelativePath
Say-Invocation $MyInvocation
$DotnetPackagePath = Join-Path -Path $InstallRoot -ChildPath $RelativePathToPackage | Join-Path -ChildPath $SpecificVersion
Say-Verbose "Is-Dotnet-Package-Installed: Path to a package: $DotnetPackagePath"
Say-Verbose "Is-Dotnet-Package-Installed: DotnetPackagePath=$DotnetPackagePath"
return Test-Path $DotnetPackagePath -PathType Container
}
@@ -560,9 +635,14 @@ function Prepend-Sdk-InstallRoot-To-Path([string]$InstallRoot, [string]$BinFolde
}
}
Say "Note that the intended use of this script is for Continuous Integration (CI) scenarios, where:"
Say "- The SDK needs to be installed without user interaction and without admin rights."
Say "- The SDK installation doesn't need to persist across multiple CI runs."
Say "To set up a development environment or to run apps, use installers rather than this script. Visit https://dotnet.microsoft.com/download to get the installer.`r`n"
$CLIArchitecture = Get-CLIArchitecture-From-Architecture $Architecture
$SpecificVersion = Get-Specific-Version-From-Version -AzureFeed $AzureFeed -Channel $Channel -Version $Version -JSonFile $JSonFile
$DownloadLink = Get-Download-Link -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -CLIArchitecture $CLIArchitecture
$DownloadLink, $EffectiveVersion = Get-Download-Link -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -CLIArchitecture $CLIArchitecture
$LegacyDownloadLink = Get-LegacyDownload-Link -AzureFeed $AzureFeed -SpecificVersion $SpecificVersion -CLIArchitecture $CLIArchitecture
$InstallRoot = Resolve-Installation-Path $InstallDir
@@ -588,6 +668,11 @@ if ($DryRun) {
}
}
Say "Repeatable invocation: $RepeatableCommand"
if ($SpecificVersion -ne $EffectiveVersion)
{
Say "NOTE: Due to finding a version manifest with this runtime, it would actually install with version '$EffectiveVersion'"
}
exit 0
}
@@ -611,6 +696,12 @@ else {
throw "Invalid value for `$Runtime"
}
if ($SpecificVersion -ne $EffectiveVersion)
{
Say "Performing installation checks for effective version: $EffectiveVersion"
$SpecificVersion = $EffectiveVersion
}
# Check if the SDK version is already installed.
$isAssetInstalled = Is-Dotnet-Package-Installed -InstallRoot $InstallRoot -RelativePathToPackage $dotnetPackageRelativePath -SpecificVersion $SpecificVersion
if ($isAssetInstalled) {
@@ -663,8 +754,22 @@ if ($DownloadFailed) {
Say "Extracting zip from $DownloadLink"
Extract-Dotnet-Package -ZipPath $ZipPath -OutPath $InstallRoot
# Check if the SDK version is now installed; if not, fail the installation.
$isAssetInstalled = Is-Dotnet-Package-Installed -InstallRoot $InstallRoot -RelativePathToPackage $dotnetPackageRelativePath -SpecificVersion $SpecificVersion
# Check if the SDK version is installed; if not, fail the installation.
$isAssetInstalled = $false
# if the version contains "RTM" or "servicing"; check if a 'release-type' SDK version is installed.
if ($SpecificVersion -Match "rtm" -or $SpecificVersion -Match "servicing") {
$ReleaseVersion = $SpecificVersion.Split("-")[0]
Say-Verbose "Checking installation: version = $ReleaseVersion"
$isAssetInstalled = Is-Dotnet-Package-Installed -InstallRoot $InstallRoot -RelativePathToPackage $dotnetPackageRelativePath -SpecificVersion $ReleaseVersion
}
# Check if the SDK version is installed.
if (!$isAssetInstalled) {
Say-Verbose "Checking installation: version = $SpecificVersion"
$isAssetInstalled = Is-Dotnet-Package-Installed -InstallRoot $InstallRoot -RelativePathToPackage $dotnetPackageRelativePath -SpecificVersion $SpecificVersion
}
if (!$isAssetInstalled) {
throw "`"$assetName`" with version = $SpecificVersion failed to install with an unknown error."
}
@@ -673,5 +778,199 @@ Remove-Item $ZipPath
Prepend-Sdk-InstallRoot-To-Path -InstallRoot $InstallRoot -BinFolderRelativePath $BinFolderRelativePath
Say "Note that the script does not resolve dependencies during installation."
Say "To check the list of dependencies, go to https://docs.microsoft.com/dotnet/core/install/windows#dependencies"
Say "Installation finished"
exit 0
# SIG # Begin signature block
# MIIjlgYJKoZIhvcNAQcCoIIjhzCCI4MCAQExDzANBglghkgBZQMEAgEFADB5Bgor
# BgEEAYI3AgEEoGswaTA0BgorBgEEAYI3AgEeMCYCAwEAAAQQH8w7YFlLCE63JNLG
# KX7zUQIBAAIBAAIBAAIBAAIBADAxMA0GCWCGSAFlAwQCAQUABCA+isugNMwZSGLd
# kfBd0C2Ud//U2Nbj31s1jg3Yf9gh4KCCDYUwggYDMIID66ADAgECAhMzAAABiK9S
# 1rmSbej5AAAAAAGIMA0GCSqGSIb3DQEBCwUAMH4xCzAJBgNVBAYTAlVTMRMwEQYD
# VQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNy
# b3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25p
# bmcgUENBIDIwMTEwHhcNMjAwMzA0MTgzOTQ4WhcNMjEwMzAzMTgzOTQ4WjB0MQsw
# CQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9u
# ZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMR4wHAYDVQQDExVNaWNy
# b3NvZnQgQ29ycG9yYXRpb24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
# AQCSCNryE+Cewy2m4t/a74wZ7C9YTwv1PyC4BvM/kSWPNs8n0RTe+FvYfU+E9uf0
# t7nYlAzHjK+plif2BhD+NgdhIUQ8sVwWO39tjvQRHjP2//vSvIfmmkRoML1Ihnjs
# 9kQiZQzYRDYYRp9xSQYmRwQjk5hl8/U7RgOiQDitVHaU7BT1MI92lfZRuIIDDYBd
# vXtbclYJMVOwqZtv0O9zQCret6R+fRSGaDNfEEpcILL+D7RV3M4uaJE4Ta6KAOdv
# V+MVaJp1YXFTZPKtpjHO6d9pHQPZiG7NdC6QbnRGmsa48uNQrb6AfmLKDI1Lp31W
# MogTaX5tZf+CZT9PSuvjOCLNAgMBAAGjggGCMIIBfjAfBgNVHSUEGDAWBgorBgEE
# AYI3TAgBBggrBgEFBQcDAzAdBgNVHQ4EFgQUj9RJL9zNrPcL10RZdMQIXZN7MG8w
# VAYDVR0RBE0wS6RJMEcxLTArBgNVBAsTJE1pY3Jvc29mdCBJcmVsYW5kIE9wZXJh
# dGlvbnMgTGltaXRlZDEWMBQGA1UEBRMNMjMwMDEyKzQ1ODM4NjAfBgNVHSMEGDAW
# gBRIbmTlUAXTgqoXNzcitW2oynUClTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8v
# d3d3Lm1pY3Jvc29mdC5jb20vcGtpb3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIw
# MTEtMDctMDguY3JsMGEGCCsGAQUFBwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDov
# L3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDEx
# XzIwMTEtMDctMDguY3J0MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIB
# ACnXo8hjp7FeT+H6iQlV3CcGnkSbFvIpKYafgzYCFo3UHY1VHYJVb5jHEO8oG26Q
# qBELmak6MTI+ra3WKMTGhE1sEIlowTcp4IAs8a5wpCh6Vf4Z/bAtIppP3p3gXk2X
# 8UXTc+WxjQYsDkFiSzo/OBa5hkdW1g4EpO43l9mjToBdqEPtIXsZ7Hi1/6y4gK0P
# mMiwG8LMpSn0n/oSHGjrUNBgHJPxgs63Slf58QGBznuXiRaXmfTUDdrvhRocdxIM
# i8nXQwWACMiQzJSRzBP5S2wUq7nMAqjaTbeXhJqD2SFVHdUYlKruvtPSwbnqSRWT
# GI8s4FEXt+TL3w5JnwVZmZkUFoioQDMMjFyaKurdJ6pnzbr1h6QW0R97fWc8xEIz
# LIOiU2rjwWAtlQqFO8KNiykjYGyEf5LyAJKAO+rJd9fsYR+VBauIEQoYmjnUbTXM
# SY2Lf5KMluWlDOGVh8q6XjmBccpaT+8tCfxpaVYPi1ncnwTwaPQvVq8RjWDRB7Pa
# 8ruHgj2HJFi69+hcq7mWx5nTUtzzFa7RSZfE5a1a5AuBmGNRr7f8cNfa01+tiWjV
# Kk1a+gJUBSP0sIxecFbVSXTZ7bqeal45XSDIisZBkWb+83TbXdTGMDSUFKTAdtC+
# r35GfsN8QVy59Hb5ZYzAXczhgRmk7NyE6jD0Ym5TKiW5MIIHejCCBWKgAwIBAgIK
# YQ6Q0gAAAAAAAzANBgkqhkiG9w0BAQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNV
# BAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jv
# c29mdCBDb3Jwb3JhdGlvbjEyMDAGA1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlm
# aWNhdGUgQXV0aG9yaXR5IDIwMTEwHhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEw
# OTA5WjB+MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UE
# BxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYD
# VQQDEx9NaWNyb3NvZnQgQ29kZSBTaWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG
# 9w0BAQEFAAOCAg8AMIICCgKCAgEAq/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+la
# UKq4BjgaBEm6f8MMHt03a8YS2AvwOMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc
# 6Whe0t+bU7IKLMOv2akrrnoJr9eWWcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4D
# dato88tt8zpcoRb0RrrgOGSsbmQ1eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+
# lD3v++MrWhAfTVYoonpy4BI6t0le2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nk
# kDstrjNYxbc+/jLTswM9sbKvkjh+0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6
# A4aN91/w0FK/jJSHvMAhdCVfGCi2zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmd
# X4jiJV3TIUs+UsS1Vz8kA/DRelsv1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL
# 5zmhD+kjSbwYuER8ReTBw3J64HLnJN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zd
# sGbiwZeBe+3W7UvnSSmnEyimp31ngOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3
# T8HhhUSJxAlMxdSlQy90lfdu+HggWCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS
# 4NaIjAsCAwEAAaOCAe0wggHpMBAGCSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRI
# bmTlUAXTgqoXNzcitW2oynUClTAZBgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTAL
# BgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBD
# uRQFTuHqp8cx0SOJNDBaBgNVHR8EUzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jv
# c29mdC5jb20vcGtpL2NybC9wcm9kdWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFf
# MDNfMjIuY3JsMF4GCCsGAQUFBwEBBFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3
# dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFf
# MDNfMjIuY3J0MIGfBgNVHSAEgZcwgZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEF
# BQcCARYzaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1h
# cnljcHMuaHRtMEAGCCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkA
# YwB5AF8AcwB0AGEAdABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn
# 8oalmOBUeRou09h0ZyKbC5YR4WOSmUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7
# v0epo/Np22O/IjWll11lhJB9i0ZQVdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0b
# pdS1HXeUOeLpZMlEPXh6I/MTfaaQdION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/
# KmtYSWMfCWluWpiW5IP0wI/zRive/DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvy
# CInWH8MyGOLwxS3OW560STkKxgrCxq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBp
# mLJZiWhub6e3dMNABQamASooPoI/E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJi
# hsMdYzaXht/a8/jyFqGaJ+HNpZfQ7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYb
# BL7fQccOKO7eZS/sl/ahXJbYANahRr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbS
# oqKfenoi+kiVH6v7RyOA9Z74v2u3S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sL
# gOppO6/8MO0ETI7f33VtY5E90Z1WTk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtX
# cVZOSEXAQsmbdlsKgEhr/Xmfwb1tbWrJUnMTDXpQzTGCFWcwghVjAgEBMIGVMH4x
# CzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRt
# b25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01p
# Y3Jvc29mdCBDb2RlIFNpZ25pbmcgUENBIDIwMTECEzMAAAGIr1LWuZJt6PkAAAAA
# AYgwDQYJYIZIAWUDBAIBBQCgga4wGQYJKoZIhvcNAQkDMQwGCisGAQQBgjcCAQQw
# HAYKKwYBBAGCNwIBCzEOMAwGCisGAQQBgjcCARUwLwYJKoZIhvcNAQkEMSIEIK4I
# CDH7/r/eeMqTtDETJ67ogfneVRo0/P6ogV2vy4tXMEIGCisGAQQBgjcCAQwxNDAy
# oBSAEgBNAGkAYwByAG8AcwBvAGYAdKEagBhodHRwOi8vd3d3Lm1pY3Jvc29mdC5j
# b20wDQYJKoZIhvcNAQEBBQAEggEAOnmVmILEjI6ZiuuSOvvTvijidkBez61Vz97A
# jV3AOsfmUvLpVaTVa1Mt2iPDuq1QLqRPaT7BD8PAUwr91pYllVgEd8NqivCIaCZg
# QyIRiTmHQxbozWsLcjxMvX2VxSmNKDw7IOHzUbXtmiEGhygyZpdh/uiCj7ziSxp3
# lQBR8mUE1NL9dxaxKWLhGeORqAepw6nId9oO+mHRh4JRK7uqZOFAES7/21M9vPZi
# XYilJLgIoyMkvqYSdoouzn6+m74kgzkNkyK9GYz2mmO2BCMnai9Njze2d0+kY+37
# kt10BmJDw3FHaZ+/fH/TMTgo0ZcAOicP9ccdIh/CzzpU52o+Q6GCEvEwghLtBgor
# BgEEAYI3AwMBMYIS3TCCEtkGCSqGSIb3DQEHAqCCEsowghLGAgEDMQ8wDQYJYIZI
# AWUDBAIBBQAwggFVBgsqhkiG9w0BCRABBKCCAUQEggFAMIIBPAIBAQYKKwYBBAGE
# WQoDATAxMA0GCWCGSAFlAwQCAQUABCBSbhMJwNER+BICn3iLUnPrP8dptyUphcFC
# A/NsIgnPLwIGX4hEzP6WGBMyMDIwMTEwOTE0NDY1Mi4yMzNaMASAAgH0oIHUpIHR
# MIHOMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMH
# UmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSkwJwYDVQQL
# EyBNaWNyb3NvZnQgT3BlcmF0aW9ucyBQdWVydG8gUmljbzEmMCQGA1UECxMdVGhh
# bGVzIFRTUyBFU046MEE1Ni1FMzI5LTRENEQxJTAjBgNVBAMTHE1pY3Jvc29mdCBU
# aW1lLVN0YW1wIFNlcnZpY2Wggg5EMIIE9TCCA92gAwIBAgITMwAAAScvbqPvkagZ
# qAAAAAABJzANBgkqhkiG9w0BAQsFADB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMK
# V2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0
# IENvcnBvcmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0Eg
# MjAxMDAeFw0xOTEyMTkwMTE0NTlaFw0yMTAzMTcwMTE0NTlaMIHOMQswCQYDVQQG
# EwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwG
# A1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSkwJwYDVQQLEyBNaWNyb3NvZnQg
# T3BlcmF0aW9ucyBQdWVydG8gUmljbzEmMCQGA1UECxMdVGhhbGVzIFRTUyBFU046
# MEE1Ni1FMzI5LTRENEQxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1wIFNl
# cnZpY2UwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQD4Ad5xEZ5On0uN
# L71ng9xwoDPRKeMUyEIj5yVxPRPh5GVbU7D3pqDsoXzQMhfeRP61L1zlU1HCRS+1
# 29eo0yj1zjbAlmPAwosUgyIonesWt9E4hFlXCGUcIg5XMdvQ+Ouzk2r+awNRuk8A
# BGOa0I4VBy6zqCYHyX2pGauiB43frJSNP6pcrO0CBmpBZNjgepof5Z/50vBuJDUS
# ug6OIMQ7ZwUhSzX4bEmZUUjAycBb62dhQpGqHsXe6ypVDTgAEnGONdSBKkHiNT8H
# 0Zt2lm0vCLwHyTwtgIdi67T/LCp+X2mlPHqXsY3u72X3GYn/3G8YFCkrSc6m3b0w
# TXPd5/2fAgMBAAGjggEbMIIBFzAdBgNVHQ4EFgQU5fSWVYBfOTEkW2JTiV24WNNt
# lfIwHwYDVR0jBBgwFoAU1WM6XIoxkPNDe3xGG8UzaFqFbVUwVgYDVR0fBE8wTTBL
# oEmgR4ZFaHR0cDovL2NybC5taWNyb3NvZnQuY29tL3BraS9jcmwvcHJvZHVjdHMv
# TWljVGltU3RhUENBXzIwMTAtMDctMDEuY3JsMFoGCCsGAQUFBwEBBE4wTDBKBggr
# BgEFBQcwAoY+aHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9jZXJ0cy9NaWNU
# aW1TdGFQQ0FfMjAxMC0wNy0wMS5jcnQwDAYDVR0TAQH/BAIwADATBgNVHSUEDDAK
# BggrBgEFBQcDCDANBgkqhkiG9w0BAQsFAAOCAQEACsqNfNFVxwalZ42cEMuzZc12
# 6Nvluanx8UewDVeUQZEZHRmppMFHAzS/g6RzmxTyR2tKE3mChNGW5dTL730vEbRh
# nYRmBgiX/gT3f4AQrOPnZGXY7zszcrlbgzxpakOX+x0u4rkP3Ashh3B2CdJ11XsB
# di5PiZa1spB6U5S8D15gqTUfoIniLT4v1DBdkWExsKI1vsiFcDcjGJ4xRlMRF+fw
# 7SY0WZoOzwRzKxDTdg4DusAXpaeKbch9iithLFk/vIxQrqCr/niW8tEA+eSzeX/E
# q1D0ZyvOn4e2lTnwoJUKH6OQAWSBogyK4OCbFeJOqdKAUiBTgHKkQIYh/tbKQjCC
# BnEwggRZoAMCAQICCmEJgSoAAAAAAAIwDQYJKoZIhvcNAQELBQAwgYgxCzAJBgNV
# BAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4w
# HAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xMjAwBgNVBAMTKU1pY3Jvc29m
# dCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAyMDEwMB4XDTEwMDcwMTIxMzY1
# NVoXDTI1MDcwMTIxNDY1NVowfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hp
# bmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jw
# b3JhdGlvbjEmMCQGA1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTAw
# ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCpHQ28dxGKOiDs/BOX9fp/
# aZRrdFQQ1aUKAIKF++18aEssX8XD5WHCdrc+Zitb8BVTJwQxH0EbGpUdzgkTjnxh
# MFmxMEQP8WCIhFRDDNdNuDgIs0Ldk6zWczBXJoKjRQ3Q6vVHgc2/JGAyWGBG8lhH
# hjKEHnRhZ5FfgVSxz5NMksHEpl3RYRNuKMYa+YaAu99h/EbBJx0kZxJyGiGKr0tk
# iVBisV39dx898Fd1rL2KQk1AUdEPnAY+Z3/1ZsADlkR+79BL/W7lmsqxqPJ6Kgox
# 8NpOBpG2iAg16HgcsOmZzTznL0S6p/TcZL2kAcEgCZN4zfy8wMlEXV4WnAEFTyJN
# AgMBAAGjggHmMIIB4jAQBgkrBgEEAYI3FQEEAwIBADAdBgNVHQ4EFgQU1WM6XIox
# kPNDe3xGG8UzaFqFbVUwGQYJKwYBBAGCNxQCBAweCgBTAHUAYgBDAEEwCwYDVR0P
# BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAU1fZWy4/oolxiaNE9
# lJBb186aGMQwVgYDVR0fBE8wTTBLoEmgR4ZFaHR0cDovL2NybC5taWNyb3NvZnQu
# Y29tL3BraS9jcmwvcHJvZHVjdHMvTWljUm9vQ2VyQXV0XzIwMTAtMDYtMjMuY3Js
# MFoGCCsGAQUFBwEBBE4wTDBKBggrBgEFBQcwAoY+aHR0cDovL3d3dy5taWNyb3Nv
# ZnQuY29tL3BraS9jZXJ0cy9NaWNSb29DZXJBdXRfMjAxMC0wNi0yMy5jcnQwgaAG
# A1UdIAEB/wSBlTCBkjCBjwYJKwYBBAGCNy4DMIGBMD0GCCsGAQUFBwIBFjFodHRw
# Oi8vd3d3Lm1pY3Jvc29mdC5jb20vUEtJL2RvY3MvQ1BTL2RlZmF1bHQuaHRtMEAG
# CCsGAQUFBwICMDQeMiAdAEwAZQBnAGEAbABfAFAAbwBsAGkAYwB5AF8AUwB0AGEA
# dABlAG0AZQBuAHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQAH5ohRDeLG4Jg/gXED
# PZ2joSFvs+umzPUxvs8F4qn++ldtGTCzwsVmyWrf9efweL3HqJ4l4/m87WtUVwgr
# UYJEEvu5U4zM9GASinbMQEBBm9xcF/9c+V4XNZgkVkt070IQyK+/f8Z/8jd9Wj8c
# 8pl5SpFSAK84Dxf1L3mBZdmptWvkx872ynoAb0swRCQiPM/tA6WWj1kpvLb9BOFw
# nzJKJ/1Vry/+tuWOM7tiX5rbV0Dp8c6ZZpCM/2pif93FSguRJuI57BlKcWOdeyFt
# w5yjojz6f32WapB4pm3S4Zz5Hfw42JT0xqUKloakvZ4argRCg7i1gJsiOCC1JeVk
# 7Pf0v35jWSUPei45V3aicaoGig+JFrphpxHLmtgOR5qAxdDNp9DvfYPw4TtxCd9d
# dJgiCGHasFAeb73x4QDf5zEHpJM692VHeOj4qEir995yfmFrb3epgcunCaw5u+zG
# y9iCtHLNHfS4hQEegPsbiSpUObJb2sgNVZl6h3M7COaYLeqN4DMuEin1wC9UJyH3
# yKxO2ii4sanblrKnQqLJzxlBTeCG+SqaoxFmMNO7dDJL32N79ZmKLxvHIa9Zta7c
# RDyXUHHXodLFVeNp3lfB0d4wwP3M5k37Db9dT+mdHhk4L7zPWAUu7w2gUDXa7wkn
# HNWzfjUeCLraNtvTX4/edIhJEqGCAtIwggI7AgEBMIH8oYHUpIHRMIHOMQswCQYD
# VQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEe
# MBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSkwJwYDVQQLEyBNaWNyb3Nv
# ZnQgT3BlcmF0aW9ucyBQdWVydG8gUmljbzEmMCQGA1UECxMdVGhhbGVzIFRTUyBF
# U046MEE1Ni1FMzI5LTRENEQxJTAjBgNVBAMTHE1pY3Jvc29mdCBUaW1lLVN0YW1w
# IFNlcnZpY2WiIwoBATAHBgUrDgMCGgMVALOVuE5sgxzETO4s+poBqI6r1x8zoIGD
# MIGApH4wfDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNV
# BAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEmMCQG
# A1UEAxMdTWljcm9zb2Z0IFRpbWUtU3RhbXAgUENBIDIwMTAwDQYJKoZIhvcNAQEF
# BQACBQDjU7byMCIYDzIwMjAxMTA5MTYzOTE0WhgPMjAyMDExMTAxNjM5MTRaMHcw
# PQYKKwYBBAGEWQoEATEvMC0wCgIFAONTtvICAQAwCgIBAAICIt0CAf8wBwIBAAIC
# EcQwCgIFAONVCHICAQAwNgYKKwYBBAGEWQoEAjEoMCYwDAYKKwYBBAGEWQoDAqAK
# MAgCAQACAwehIKEKMAgCAQACAwGGoDANBgkqhkiG9w0BAQUFAAOBgQAQhyIIAC/A
# P+VJdbhL9IQgm8WTa1DmPPE+BQSuRbBy2MmzC1KostixdEkr2OaNSjcYuZBNIJgv
# vE8CWhVDD+sbBpVcOdoSfoBwHXKfvqSTiWvovoexkF0X5aon7yr3PkJ/kEqoLyUM
# xRvdWKJdHOL1sT0/aWHn048c6aGin/zc8DGCAw0wggMJAgEBMIGTMHwxCzAJBgNV
# BAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4w
# HAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xJjAkBgNVBAMTHU1pY3Jvc29m
# dCBUaW1lLVN0YW1wIFBDQSAyMDEwAhMzAAABJy9uo++RqBmoAAAAAAEnMA0GCWCG
# SAFlAwQCAQUAoIIBSjAaBgkqhkiG9w0BCQMxDQYLKoZIhvcNAQkQAQQwLwYJKoZI
# hvcNAQkEMSIEIJZkrbvF4R8oqYYpN6ZPGOj+QEZTQriEi/Yw9gW6zMqRMIH6Bgsq
# hkiG9w0BCRACLzGB6jCB5zCB5DCBvQQgG5LoSxKGHWoW/wVMlbMztlQ4upAdzEmq
# H//vLu0jPiIwgZgwgYCkfjB8MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGlu
# Z3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBv
# cmF0aW9uMSYwJAYDVQQDEx1NaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EgMjAxMAIT
# MwAAAScvbqPvkagZqAAAAAABJzAiBCDwhEViCRvqKwQV3MxociF2iGYrDP4p1BK+
# s4tStO4vSDANBgkqhkiG9w0BAQsFAASCAQAkgmDo8lVmar0ZIqTG1it3skG8PZC9
# iqEEC1vxcz8OSfsjl2QSkQ5T2+3xWpxWA4uy2+Byv0bi8EsfQEnnn4vtdthS6/kb
# vB/LLQiqoMhJ0rasf3/y/4KnQZEtztpg1+cCaNwFUgI6o+E8YEFt1frhLwFs/0WH
# 5pyBFx9ECEs0M22SLIpW13gexv9fgk6ZboIfSreAI28DLveeJpkgwggxHRpuVOVD
# 4D7QQJAvJ0VU6p+yJlbvQXR9iltwb1REhlsJ5mADJ/FkzPVX/swMSUIoyE2inlxK
# LEiPkkZYwiFYCifFYUTnQjWU1Ls0EV+ysosL+jhzCxO8S6oRdp5TAi4F
# SIG # End signature block


@@ -144,7 +144,7 @@ get_linux_platform_name() {
else
if [ -e /etc/os-release ]; then
. /etc/os-release
echo "$ID.$VERSION_ID"
echo "$ID${VERSION_ID:+.${VERSION_ID}}"
return 0
elif [ -e /etc/redhat-release ]; then
local redhatRelease=$(</etc/redhat-release)
@@ -159,6 +159,10 @@ get_linux_platform_name() {
return 1
}
is_musl_based_distro() {
(ldd --version 2>&1 || true) | grep -q musl
}
get_current_os_name() {
eval $invocation
@@ -168,15 +172,15 @@ get_current_os_name() {
return 0
elif [ "$uname" = "FreeBSD" ]; then
echo "freebsd"
return 0
return 0
elif [ "$uname" = "Linux" ]; then
local linux_platform_name
linux_platform_name="$(get_linux_platform_name)" || { echo "linux" && return 0 ; }
if [[ $linux_platform_name == "rhel.6" ]]; then
if [ "$linux_platform_name" = "rhel.6" ]; then
echo $linux_platform_name
return 0
elif [[ $linux_platform_name == alpine* ]]; then
elif is_musl_based_distro; then
echo "linux-musl"
return 0
else
@@ -202,7 +206,7 @@ get_legacy_os_name() {
else
if [ -e /etc/os-release ]; then
. /etc/os-release
os=$(get_legacy_os_name_from_platform "$ID.$VERSION_ID" || echo "")
os=$(get_legacy_os_name_from_platform "$ID${VERSION_ID:+.${VERSION_ID}}" || echo "")
if [ -n "$os" ]; then
echo "$os"
return 0
@@ -237,33 +241,6 @@ check_min_reqs() {
return 0
}
check_pre_reqs() {
eval $invocation
if [ "${DOTNET_INSTALL_SKIP_PREREQS:-}" = "1" ]; then
return 0
fi
if [ "$(uname)" = "Linux" ]; then
if [ ! -x "$(command -v ldconfig)" ]; then
echo "ldconfig is not in PATH, trying /sbin/ldconfig."
LDCONFIG_COMMAND="/sbin/ldconfig"
else
LDCONFIG_COMMAND="ldconfig"
fi
local librarypath=${LD_LIBRARY_PATH:-}
LDCONFIG_COMMAND="$LDCONFIG_COMMAND -NXv ${librarypath//:/ }"
[ -z "$($LDCONFIG_COMMAND 2>/dev/null | grep libunwind)" ] && say_warning "Unable to locate libunwind. Probable prerequisite missing; install libunwind."
[ -z "$($LDCONFIG_COMMAND 2>/dev/null | grep libssl)" ] && say_warning "Unable to locate libssl. Probable prerequisite missing; install libssl."
[ -z "$($LDCONFIG_COMMAND 2>/dev/null | grep libicu)" ] && say_warning "Unable to locate libicu. Probable prerequisite missing; install libicu."
[ -z "$($LDCONFIG_COMMAND 2>/dev/null | grep -F libcurl.so)" ] && say_warning "Unable to locate libcurl. Probable prerequisite missing; install libcurl."
fi
return 0
}
# args:
# input - $1
to_lowercase() {
@@ -360,7 +337,7 @@ get_normalized_architecture_from_architecture() {
;;
esac
say_err "Architecture \`$architecture\` not supported. If you think this is a bug, report it at https://github.com/dotnet/cli/issues"
say_err "Architecture \`$architecture\` not supported. If you think this is a bug, report it at https://github.com/dotnet/install-scripts/issues"
return 1
}
@@ -455,7 +432,6 @@ parse_jsonfile_for_version() {
sdk_list=$(echo $sdk_section | awk -F"[{}]" '{print $2}')
sdk_list=${sdk_list//[\" ]/}
sdk_list=${sdk_list//,/$'\n'}
sdk_list="$(echo -e "${sdk_list}" | tr -d '[[:space:]]')"
local version_info=""
while read -r line; do
@@ -471,6 +447,7 @@ parse_jsonfile_for_version() {
return 1
fi
unset IFS;
echo "$version_info"
return 0
}
@@ -531,17 +508,18 @@ construct_download_link() {
local channel="$2"
local normalized_architecture="$3"
local specific_version="${4//[$'\t\r\n']}"
local specific_product_version="$(get_specific_product_version "$1" "$4")"
local osname
osname="$(get_current_os_name)" || return 1
local download_link=null
if [[ "$runtime" == "dotnet" ]]; then
download_link="$azure_feed/Runtime/$specific_version/dotnet-runtime-$specific_version-$osname-$normalized_architecture.tar.gz"
download_link="$azure_feed/Runtime/$specific_version/dotnet-runtime-$specific_product_version-$osname-$normalized_architecture.tar.gz"
elif [[ "$runtime" == "aspnetcore" ]]; then
download_link="$azure_feed/aspnetcore/Runtime/$specific_version/aspnetcore-runtime-$specific_version-$osname-$normalized_architecture.tar.gz"
download_link="$azure_feed/aspnetcore/Runtime/$specific_version/aspnetcore-runtime-$specific_product_version-$osname-$normalized_architecture.tar.gz"
elif [ -z "$runtime" ]; then
download_link="$azure_feed/Sdk/$specific_version/dotnet-sdk-$specific_version-$osname-$normalized_architecture.tar.gz"
download_link="$azure_feed/Sdk/$specific_version/dotnet-sdk-$specific_product_version-$osname-$normalized_architecture.tar.gz"
else
return 1
fi
@@ -550,6 +528,50 @@ construct_download_link() {
return 0
}
# args:
# azure_feed - $1
# specific_version - $2
get_specific_product_version() {
# If we find a 'productVersion.txt' at the root of any folder, we'll use its contents
# to resolve the version of what's in the folder, superseding the specified version.
eval $invocation
local azure_feed="$1"
local specific_version="${2//[$'\t\r\n']}"
local specific_product_version=$specific_version
local download_link=null
if [[ "$runtime" == "dotnet" ]]; then
download_link="$azure_feed/Runtime/$specific_version/productVersion.txt${feed_credential}"
elif [[ "$runtime" == "aspnetcore" ]]; then
download_link="$azure_feed/aspnetcore/Runtime/$specific_version/productVersion.txt${feed_credential}"
elif [ -z "$runtime" ]; then
download_link="$azure_feed/Sdk/$specific_version/productVersion.txt${feed_credential}"
else
return 1
fi
if machine_has "curl"
then
specific_product_version=$(curl -s --fail "$download_link")
if [ $? -ne 0 ]
then
specific_product_version=$specific_version
fi
elif machine_has "wget"
then
specific_product_version=$(wget -qO- "$download_link")
if [ $? -ne 0 ]
then
specific_product_version=$specific_version
fi
fi
specific_product_version="${specific_product_version//[$'\t\r\n']}"
echo "$specific_product_version"
return 0
}
# args:
# azure_feed - $1
# channel - $2
@@ -631,7 +653,7 @@ copy_files_or_dirs_from_list() {
local osname="$(get_current_os_name)"
local override_switch=$(
if [ "$override" = false ]; then
if [[ "$osname" == "linux-musl" ]]; then
if [ "$osname" = "linux-musl" ]; then
printf -- "-u";
else
printf -- "-n";
@@ -714,11 +736,12 @@ downloadcurl() {
# Append feed_credential as late as possible before calling curl to avoid logging feed_credential
remote_path="${remote_path}${feed_credential}"
local curl_options="--retry 20 --retry-delay 2 --connect-timeout 15 -sSL -f --create-dirs "
local failed=false
if [ -z "$out_path" ]; then
curl --retry 10 -sSL -f --create-dirs "$remote_path" || failed=true
curl $curl_options "$remote_path" || failed=true
else
curl --retry 10 -sSL -f --create-dirs -o "$out_path" "$remote_path" || failed=true
curl $curl_options -o "$out_path" "$remote_path" || failed=true
fi
if [ "$failed" = true ]; then
say_verbose "Curl download failed"
@@ -734,12 +757,12 @@ downloadwget() {
# Append feed_credential as late as possible before calling wget to avoid logging feed_credential
remote_path="${remote_path}${feed_credential}"
local wget_options="--tries 20 --waitretry 2 --connect-timeout 15 "
local failed=false
if [ -z "$out_path" ]; then
wget -q --tries 10 -O - "$remote_path" || failed=true
wget -q $wget_options -O - "$remote_path" || failed=true
else
wget --tries 10 -O "$out_path" "$remote_path" || failed=true
wget $wget_options -O "$out_path" "$remote_path" || failed=true
fi
if [ "$failed" = true ]; then
say_verbose "Wget download failed"
@@ -756,6 +779,7 @@ calculate_vars() {
say_verbose "normalized_architecture=$normalized_architecture"
specific_version="$(get_specific_version_from_version "$azure_feed" "$channel" "$normalized_architecture" "$version" "$json_file")"
specific_product_version="$(get_specific_product_version "$azure_feed" "$specific_version")"
say_verbose "specific_version=$specific_version"
if [ -z "$specific_version" ]; then
say_err "Could not resolve version information."
@@ -840,13 +864,27 @@ install_dotnet() {
say "Extracting zip from $download_link"
extract_dotnet_package "$zip_path" "$install_root"
# Check if the SDK version is now installed; if not, fail the installation.
if ! is_dotnet_package_installed "$install_root" "$asset_relative_path" "$specific_version"; then
say_err "\`$asset_name\` with version = $specific_version failed to install with an unknown error."
return 1
# Check if the SDK version is installed; if not, fail the installation.
# if the version contains "RTM" or "servicing"; check if a 'release-type' SDK version is installed.
if [[ $specific_version == *"rtm"* || $specific_version == *"servicing"* ]]; then
IFS='-'
read -ra verArr <<< "$specific_version"
release_version="${verArr[0]}"
unset IFS;
say_verbose "Checking installation: version = $release_version"
if is_dotnet_package_installed "$install_root" "$asset_relative_path" "$release_version"; then
return 0
fi
fi
return 0
# Check if the standard SDK version is installed.
say_verbose "Checking installation: version = $specific_product_version"
if is_dotnet_package_installed "$install_root" "$asset_relative_path" "$specific_product_version"; then
return 0
fi
say_err "\`$asset_name\` with version = $specific_product_version failed to install with an unknown error."
return 1
}
args=("$@")
@@ -1029,6 +1067,11 @@ if [ "$no_cdn" = true ]; then
azure_feed="$uncached_feed"
fi
say "Note that the intended use of this script is for Continuous Integration (CI) scenarios, where:"
say "- The SDK needs to be installed without user interaction and without admin rights."
say "- The SDK installation doesn't need to persist across multiple CI runs."
say "To set up a development environment or to run apps, use installers rather than this script. Visit https://dotnet.microsoft.com/download to get the installer.\n"
check_min_reqs
calculate_vars
script_name=$(basename "$0")
@@ -1050,7 +1093,6 @@ if [ "$dry_run" = true ]; then
exit 0
fi
check_pre_reqs
install_dotnet
bin_path="$(get_absolute_path "$(combine_paths "$install_root" "$bin_folder_relative_path")")"
@@ -1061,4 +1103,6 @@ else
say "Binaries of dotnet can be found in $bin_path"
fi
say "Note that the script does not resolve dependencies during installation."
say "To check the list of dependencies, go to https://docs.microsoft.com/dotnet/core/install, select your operating system and check the \"Dependencies\" section."
say "Installation finished successfully."


@@ -0,0 +1,3 @@
dist/
lib/
node_modules/


@@ -0,0 +1,59 @@
{
"plugins": ["jest", "@typescript-eslint"],
"extends": ["plugin:github/es6"],
"parser": "@typescript-eslint/parser",
"parserOptions": {
"ecmaVersion": 9,
"sourceType": "module",
"project": "./tsconfig.json"
},
"rules": {
"eslint-comments/no-use": "off",
"import/no-namespace": "off",
"no-console": "off",
"no-unused-vars": "off",
"@typescript-eslint/no-unused-vars": "error",
"@typescript-eslint/explicit-member-accessibility": ["error", {"accessibility": "no-public"}],
"@typescript-eslint/no-require-imports": "error",
"@typescript-eslint/array-type": "error",
"@typescript-eslint/await-thenable": "error",
"@typescript-eslint/ban-ts-ignore": "error",
"camelcase": "off",
"@typescript-eslint/camelcase": "error",
"@typescript-eslint/class-name-casing": "error",
"@typescript-eslint/explicit-function-return-type": ["error", {"allowExpressions": true}],
"@typescript-eslint/func-call-spacing": ["error", "never"],
"@typescript-eslint/generic-type-naming": ["error", "^[A-Z][A-Za-z]*$"],
"@typescript-eslint/no-array-constructor": "error",
"@typescript-eslint/no-empty-interface": "error",
"@typescript-eslint/no-explicit-any": "error",
"@typescript-eslint/no-extraneous-class": "error",
"@typescript-eslint/no-for-in-array": "error",
"@typescript-eslint/no-inferrable-types": "error",
"@typescript-eslint/no-misused-new": "error",
"@typescript-eslint/no-namespace": "error",
"@typescript-eslint/no-non-null-assertion": "warn",
"@typescript-eslint/no-object-literal-type-assertion": "error",
"@typescript-eslint/no-unnecessary-qualifier": "error",
"@typescript-eslint/no-unnecessary-type-assertion": "error",
"@typescript-eslint/no-useless-constructor": "error",
"@typescript-eslint/no-var-requires": "error",
"@typescript-eslint/prefer-for-of": "warn",
"@typescript-eslint/prefer-function-type": "warn",
"@typescript-eslint/prefer-includes": "error",
"@typescript-eslint/prefer-interface": "error",
"@typescript-eslint/prefer-string-starts-ends-with": "error",
"@typescript-eslint/promise-function-async": "error",
"@typescript-eslint/require-array-sort-compare": "error",
"@typescript-eslint/restrict-plus-operands": "error",
"semi": "off",
"@typescript-eslint/semi": ["error", "never"],
"@typescript-eslint/type-annotation-spacing": "error",
"@typescript-eslint/unbound-method": "error"
},
"env": {
"node": true,
"es6": true,
"jest/globals": true
}
}


@@ -0,0 +1,3 @@
dist/
lib/
node_modules/


@@ -0,0 +1,11 @@
{
"printWidth": 80,
"tabWidth": 2,
"useTabs": false,
"semi": false,
"singleQuote": true,
"trailingComma": "none",
"bracketSpacing": false,
"arrowParens": "avoid",
"parser": "typescript"
}


@@ -0,0 +1 @@
To update hashFiles under `Misc/layoutbin` run `npm install && npm run all`
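For reference, a rough sketch of what that single command chains together, based on the package.json scripts shown further below (run from the hashFiles package directory; the inline comments are assumptions drawn from those scripts):
npm install        # restore dev dependencies (typescript, eslint, prettier, @zeit/ncc)
npm run build      # tsc: compile src/ into lib/
npm run format     # prettier --write **/*.ts
npm run lint       # eslint src/**/*.ts
npm run pack       # ncc build -o ../../layoutbin/hashFiles, the bundle that ends up under Misc/layoutbin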

File diff suppressed because it is too large


@@ -0,0 +1,35 @@
{
"name": "hashFiles",
"version": "1.0.0",
"description": "GitHub Actions HashFiles() expression function",
"main": "lib/hashFiles.js",
"scripts": {
"build": "tsc",
"format": "prettier --write **/*.ts",
"format-check": "prettier --check **/*.ts",
"lint": "eslint src/**/*.ts",
"pack": "ncc build -o ../../layoutbin/hashFiles",
"all": "npm run build && npm run format && npm run lint && npm run pack"
},
"repository": {
"type": "git",
"url": "git+https://github.com/actions/runner.git"
},
"keywords": [
"actions"
],
"author": "GitHub Actions",
"license": "MIT",
"dependencies": {
"@actions/glob": "^0.1.0"
},
"devDependencies": {
"@types/node": "^12.7.12",
"@typescript-eslint/parser": "^2.8.0",
"@zeit/ncc": "^0.20.5",
"eslint": "^6.8.0",
"eslint-plugin-github": "^2.0.0",
"prettier": "^1.19.1",
"typescript": "^3.6.4"
}
}


@@ -0,0 +1,55 @@
import * as glob from '@actions/glob'
import * as crypto from 'crypto'
import * as fs from 'fs'
import * as stream from 'stream'
import * as util from 'util'
import * as path from 'path'
async function run(): Promise<void> {
// arg0 -> node
// arg1 -> hashFiles.js
// env[followSymbolicLinks] = true/null
// env[patterns] -> glob patterns
let followSymbolicLinks = false
const matchPatterns = process.env.patterns || ''
if (process.env.followSymbolicLinks === 'true') {
console.log('Follow symbolic links')
followSymbolicLinks = true
}
console.log(`Match Pattern: ${matchPatterns}`)
let hasMatch = false
const githubWorkspace = process.cwd()
const result = crypto.createHash('sha256')
let count = 0
const globber = await glob.create(matchPatterns, {followSymbolicLinks})
for await (const file of globber.globGenerator()) {
console.log(file)
if (!file.startsWith(`${githubWorkspace}${path.sep}`)) {
console.log(`Ignore '${file}' since it is not under GITHUB_WORKSPACE.`)
continue
}
if (fs.statSync(file).isDirectory()) {
console.log(`Skip directory '${file}'.`)
continue
}
const hash = crypto.createHash('sha256')
const pipeline = util.promisify(stream.pipeline)
await pipeline(fs.createReadStream(file), hash)
result.write(hash.digest())
count++
if (!hasMatch) {
hasMatch = true
}
}
result.end()
if (hasMatch) {
console.log(`Find ${count} files to hash.`)
console.error(`__OUTPUT__${result.digest('hex')}__OUTPUT__`)
} else {
console.error(`__OUTPUT____OUTPUT__`)
}
}
run()
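A hypothetical manual invocation of the compiled script, assuming the lib/hashFiles.js entry point from package.json and the environment variables read above (normally the runner itself sets patterns and followSymbolicLinks when it evaluates a hashFiles() expression):
patterns='**/*.csproj' followSymbolicLinks=true node lib/hashFiles.js
# The aggregate SHA-256 digest is written to stderr between __OUTPUT__ markers;
# an empty __OUTPUT____OUTPUT__ means no files matched the pattern.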


@@ -0,0 +1,12 @@
{
"compilerOptions": {
"target": "es6", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */
"module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */
"outDir": "./lib", /* Redirect output structure to the directory. */
"rootDir": "./src", /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */
"strict": true, /* Enable all strict type-checking options. */
"noImplicitAny": true, /* Raise error on expressions and declarations with an implied 'any' type. */
"esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */
},
"exclude": ["node_modules", "**/*.test.ts"]
}


@@ -3,7 +3,7 @@ PACKAGERUNTIME=$1
PRECACHE=$2
NODE_URL=https://nodejs.org/dist
NODE12_VERSION="12.4.0"
NODE12_VERSION="12.13.1"
get_abs_path() {
# exploits the fact that pwd will print abs path when no args
@@ -137,22 +137,15 @@ if [[ "$PACKAGERUNTIME" == "osx-x64" ]]; then
fi
# Download the external tools for Linux PACKAGERUNTIMEs.
if [[ "$PACKAGERUNTIME" == "linux-x64" || "$PACKAGERUNTIME" == "rhel.6-x64" ]]; then
if [[ "$PACKAGERUNTIME" == "linux-x64" ]]; then
acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-linux-x64.tar.gz" node12 fix_nested_dir
# TODO: Repath this blob to use a consistent version format (_ vs .)
acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/12_4_0/alpine/node-v${NODE12_VERSION}-alpine.tar.gz" node12_alpine
# acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/12.13.0/alpine/x64/node-v${NODE12_VERSION}-alpine-x64.tar.gz" node12_alpine
acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/${NODE12_VERSION}/alpine/x64/node-${NODE12_VERSION}-alpine-x64.tar.gz" node12_alpine
fi
if [[ "$PACKAGERUNTIME" == "linux-arm64" ]]; then
acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-linux-arm64.tar.gz" node12 fix_nested_dir
# TODO: alpine node runtime for arm64(8)
# acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/12.13.0/alpine/arm64/node-v${NODE12_VERSION}-alpine-arm64.tar.gz" node12_alpine
fi
if [[ "$PACKAGERUNTIME" == "linux-arm" ]]; then
acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-linux-armv7l.tar.gz" node12 fix_nested_dir
# TODO: alpine node runtime for arm32(7)
# Need to set up custom gcc toolchain to cross compile on x64 ubuntu for armv7 (per https://github.com/nodejs/node/blob/master/BUILDING.md)
# acquireExternalTool "https://vstsagenttools.blob.core.windows.net/tools/nodejs/12.13.0/alpine/arm/node-v${NODE12_VERSION}-alpine-arm.tar.gz" node12_alpine
fi


@@ -23,5 +23,7 @@
<key>ACTIONS_RUNNER_SVC</key>
<string>1</string>
</dict>
<key>ProcessType</key>
<string>Interactive</string>
</dict>
</plist>


@@ -1,6 +1,7 @@
#!/bin/bash
SVC_NAME="{{SvcNameVar}}"
SVC_NAME=${SVC_NAME// /_}
SVC_DESCRIPTION="{{SvcDescription}}"
user_id=`id -u`

File diff suppressed because it is too large


@@ -9,7 +9,7 @@ fi
# Determine OS type
# Debian based OS (Debian, Ubuntu, Linux Mint) has /etc/debian_version
# Fedora based OS (Fedora, Redhat, Centos, Oracle Linux 7) has /etc/redhat-release
# Fedora based OS (Fedora, Red Hat Enterprise Linux, CentOS, Oracle Linux 7) has /etc/redhat-release
# SUSE based OS (OpenSUSE, SUSE Enterprise) has ID_LIKE=suse in /etc/os-release
function print_errormessage()
@@ -49,79 +49,77 @@ then
cat /etc/debian_version
echo "------------------------------"
# prefer apt over apt-get
command -v apt
# prefer apt-get over apt
command -v apt-get
if [ $? -eq 0 ]
then
apt update && apt install -y liblttng-ust0 libkrb5-3 zlib1g
if [ $? -ne 0 ]
then
echo "'apt' failed with exit code '$?'"
print_errormessage
exit 1
fi
# libissl version prefer: libssl1.1 -> libssl1.0.2 -> libssl1.0.0
apt install -y libssl1.1$ || apt install -y libssl1.0.2$ || apt install -y libssl1.0.0$
if [ $? -ne 0 ]
then
echo "'apt' failed with exit code '$?'"
print_errormessage
exit 1
fi
# libicu version prefer: libicu63 -> libicu60 -> libicu57 -> libicu55 -> libicu52
apt install -y libicu63 || apt install -y libicu60 || apt install -y libicu57 || apt install -y libicu55 || apt install -y libicu52
if [ $? -ne 0 ]
then
echo "'apt' failed with exit code '$?'"
print_errormessage
exit 1
fi
apt_get=apt-get
else
command -v apt-get
command -v apt
if [ $? -eq 0 ]
then
apt-get update && apt-get install -y liblttng-ust0 libkrb5-3 zlib1g
if [ $? -ne 0 ]
then
echo "'apt-get' failed with exit code '$?'"
print_errormessage
exit 1
fi
# libissl version prefer: libssl1.1 -> libssl1.0.2 -> libssl1.0.0
apt-get install -y libssl1.1$ || apt-get install -y libssl1.0.2$ || apt install -y libssl1.0.0$
if [ $? -ne 0 ]
then
echo "'apt-get' failed with exit code '$?'"
print_errormessage
exit 1
fi
# libicu version prefer: libicu63 -> libicu60 -> libicu57 -> libicu55 -> libicu52
apt-get install -y libicu63 || apt-get install -y libicu60 || apt install -y libicu57 || apt install -y libicu55 || apt install -y libicu52
if [ $? -ne 0 ]
then
echo "'apt-get' failed with exit code '$?'"
print_errormessage
exit 1
fi
apt_get=apt
else
echo "Can not find 'apt' or 'apt-get'"
echo "Found neither 'apt-get' nor 'apt'"
print_errormessage
exit 1
fi
fi
$apt_get update && $apt_get install -y liblttng-ust0 libkrb5-3 zlib1g
if [ $? -ne 0 ]
then
echo "'$apt_get' failed with exit code '$?'"
print_errormessage
exit 1
fi
apt_get_with_fallbacks() {
$apt_get install -y $1
fail=$?
if [ $fail -eq 0 ]
then
if [ "${1#"${1%?}"}" = '$' ]; then
dpkg -l "${1%?}" > /dev/null 2> /dev/null
fail=$?
fi
fi
if [ $fail -ne 0 ]
then
shift
if [ -n "$1" ]
then
apt_get_with_fallbacks "$@"
fi
fi
}
# libssl version prefer: libssl1.1 -> libssl1.0.2 -> libssl1.0.0
apt_get_with_fallbacks libssl1.1$ libssl1.0.2$ libssl1.0.0$
if [ $? -ne 0 ]
then
echo "'$apt_get' failed with exit code '$?'"
print_errormessage
exit 1
fi
# libicu version prefer: libicu66 -> libicu63 -> libicu60 -> libicu57 -> libicu55 -> libicu52
apt_get_with_fallbacks libicu66 libicu63 libicu60 libicu57 libicu55 libicu52
if [ $? -ne 0 ]
then
echo "'$apt_get' failed with exit code '$?'"
print_errormessage
exit 1
fi
elif [ -e /etc/redhat-release ]
then
echo "The current OS is Fedora based"
echo "--------Redhat Version--------"
echo "--Fedora/RHEL/CentOS Version--"
cat /etc/redhat-release
echo "------------------------------"
# use dnf on fedora
# use yum on centos and redhat
# use yum on centos and rhel
if [ -e /etc/fedora-release ]
then
command -v dnf
@@ -191,7 +189,7 @@ then
redhatRelease=$(</etc/redhat-release)
if [[ $redhatRelease == "CentOS release 6."* || $redhatRelease == "Red Hat Enterprise Linux Server release 6."* ]]
then
echo "The current OS is Red Hat Enterprise Linux 6 or Centos 6"
echo "The current OS is Red Hat Enterprise Linux 6 or CentOS 6"
# Install known dependencies, as a best effort.
# The remaining dependencies are covered by the GitHub doc that will be shown by `print_rhel6message`


@@ -0,0 +1,13 @@
const { spawn } = require('child_process');
// argv[0] = node
// argv[1] = macos-run-invoker.js
var shell = process.argv[2];
var args = process.argv.slice(3);
console.log(`::debug::macos-run-invoker: ${shell}`);
console.log(`::debug::macos-run-invoker: ${JSON.stringify(args)}`);
var launch = spawn(shell, args, { stdio: 'inherit' });
launch.on('exit', function (code) {
if (code !== 0) {
process.exit(code);
}
});


@@ -1,6 +1,7 @@
#!/bin/bash
SVC_NAME="{{SvcNameVar}}"
SVC_NAME=${SVC_NAME// /_}
SVC_DESCRIPTION="{{SvcDescription}}"
SVC_CMD=$1
@@ -62,12 +63,25 @@ function install()
sed "s/{{User}}/${run_as_user}/g; s/{{Description}}/$(echo ${SVC_DESCRIPTION} | sed -e 's/[\/&]/\\&/g')/g; s/{{RunnerRoot}}/$(echo ${RUNNER_ROOT} | sed -e 's/[\/&]/\\&/g')/g;" "${TEMPLATE_PATH}" > "${TEMP_PATH}" || failed "failed to create replacement temp file"
mv "${TEMP_PATH}" "${UNIT_PATH}" || failed "failed to copy unit file"
# Recent Fedora-based Linux (CentOS/Red Hat) has SELinux enabled by default
# We need to restore the security context on the unit file we added, otherwise systemd has no access to it.
command -v getenforce > /dev/null
if [ $? -eq 0 ]
then
selinuxEnabled=$(getenforce)
if [[ $selinuxEnabled == "Enforcing" ]]
then
# SELinux is enabled, we will need to Restore SELinux Context for the service file
restorecon -r -v "${UNIT_PATH}" || failed "failed to restore SELinux context on ${UNIT_PATH}"
fi
fi
# unit file should not be executable and world writable
chmod 664 ${UNIT_PATH} || failed "failed to set permissions on ${UNIT_PATH}"
chmod 664 "${UNIT_PATH}" || failed "failed to set permissions on ${UNIT_PATH}"
systemctl daemon-reload || failed "failed to reload daemons"
# Since we started with sudo, runsvc.sh will be owned by root. Change this to current login user.
# Since we started with sudo, runsvc.sh will be owned by root. Change this to current login user.
cp ./bin/runsvc.sh ./runsvc.sh || failed "failed to copy runsvc.sh"
chown ${run_as_uid}:${run_as_gid} ./runsvc.sh || failed "failed to set owner for runsvc.sh"
chmod 755 ./runsvc.sh || failed "failed to set permission for runsvc.sh"

src/Misc/layoutbin/update.sh.template Normal file → Executable file

@@ -28,13 +28,13 @@ date "+[%F %T-%4N] Waiting for $runnerprocessname ($runnerpid) to complete" >> "
while [ -e /proc/$runnerpid ]
do
date "+[%F %T-%4N] Process $runnerpid still running" >> "$logfile" 2>&1
ping -c 2 127.0.0.1 >nul
sleep 2
done
date "+[%F %T-%4N] Process $runnerpid finished running" >> "$logfile" 2>&1
# start re-organize folders
date "+[%F %T-%4N] Sleep 1 more second to make sure process exited" >> "$logfile" 2>&1
ping -c 2 127.0.0.1 >nul
sleep 1
# the folder structure under runner root will be
# ./bin -> bin.2.100.0 (junction folder)


@@ -3,7 +3,7 @@
user_id=`id -u`
# we want to snapshot the environment of the config user
if [ $user_id -eq 0 -a -z "$AGENT_ALLOW_RUNASROOT" ]; then
if [ $user_id -eq 0 -a -z "$RUNNER_ALLOW_RUNASROOT" ]; then
echo "Must not run with sudo"
exit 1
fi
@@ -18,24 +18,26 @@ then
exit 1
fi
message="Execute sudo ./bin/installdependencies.sh to install any missing Dotnet Core 3.0 dependencies."
ldd ./bin/libcoreclr.so | grep 'not found'
if [ $? -eq 0 ]; then
echo "Dependencies is missing for Dotnet Core 3.0"
echo "Execute ./bin/installdependencies.sh to install any missing Dotnet Core 3.0 dependencies."
echo $message
exit 1
fi
ldd ./bin/System.Security.Cryptography.Native.OpenSsl.so | grep 'not found'
if [ $? -eq 0 ]; then
echo "Dependencies is missing for Dotnet Core 3.0"
echo "Execute ./bin/installdependencies.sh to install any missing Dotnet Core 3.0 dependencies."
echo $message
exit 1
fi
ldd ./bin/System.IO.Compression.Native.so | grep 'not found'
if [ $? -eq 0 ]; then
echo "Dependencies is missing for Dotnet Core 3.0"
echo "Execute ./bin/installdependencies.sh to install any missing Dotnet Core 3.0 dependencies."
echo $message
exit 1
fi
@@ -50,10 +52,10 @@ then
fi
libpath=${LD_LIBRARY_PATH:-}
$LDCONFIG_COMMAND -NXv ${libpath//:/} 2>&1 | grep libicu >/dev/null 2>&1
$LDCONFIG_COMMAND -NXv ${libpath//:/ } 2>&1 | grep libicu >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "Libicu's dependencies is missing for Dotnet Core 3.0"
echo "Execute ./bin/installdependencies.sh to install any missing Dotnet Core 3.0 dependencies."
echo $message
exit 1
fi
fi
@@ -67,7 +69,7 @@ while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symli
[[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
cd $DIR
cd "$DIR"
source ./env.sh


@@ -9,9 +9,6 @@ varCheckList=(
'GRADLE_HOME'
'NVM_BIN'
'NVM_PATH'
'VSTS_HTTP_PROXY'
'VSTS_HTTP_PROXY_USERNAME'
'VSTS_HTTP_PROXY_PASSWORD'
'LD_LIBRARY_PATH'
'PERL5LIB'
)


@@ -2,7 +2,7 @@
# Validate not sudo
user_id=`id -u`
if [ $user_id -eq 0 -a -z "$AGENT_ALLOW_RUNASROOT" ]; then
if [ $user_id -eq 0 -a -z "$RUNNER_ALLOW_RUNASROOT" ]; then
echo "Must not run interactively with sudo"
exit 1
fi
@@ -26,8 +26,8 @@ if [[ "$1" == "localRun" ]]; then
else
"$DIR"/bin/Runner.Listener run $*
# Return code 4 means the run once agent received an update message.
# Sleep 5 seconds to wait for the update process finish and run the agent again.
# Return code 4 means the run once runner received an update message.
# Sleep 5 seconds to wait for the update process finish and run the runner again.
returnCode=$?
if [[ $returnCode == 4 ]]; then
if [ ! -x "$(command -v sleep)" ]; then


@@ -3,8 +3,6 @@
<packageSources>
<!--To inherit the global NuGet package sources remove the <clear/> line below -->
<clear />
<add key="dotnet-core" value="https://www.myget.org/F/dotnet-core/api/v3/index.json" />
<add key="dotnet-buildtools" value="https://www.myget.org/F/dotnet-buildtools/api/v3/index.json" />
<add key="api.nuget.org" value="https://api.nuget.org/v3/index.json" />
</packageSources>
</configuration>


@@ -9,26 +9,27 @@ namespace GitHub.Runner.Common
{
private static readonly EscapeMapping[] _escapeMappings = new[]
{
new EscapeMapping(token: "%", replacement: "%25"),
new EscapeMapping(token: ";", replacement: "%3B"),
new EscapeMapping(token: "\r", replacement: "%0D"),
new EscapeMapping(token: "\n", replacement: "%0A"),
new EscapeMapping(token: "]", replacement: "%5D"),
new EscapeMapping(token: "%", replacement: "%25"),
};
private static readonly EscapeMapping[] _escapeDataMappings = new[]
{
new EscapeMapping(token: "\r", replacement: "%0D"),
new EscapeMapping(token: "\n", replacement: "%0A"),
new EscapeMapping(token: "%", replacement: "%25"),
};
private static readonly EscapeMapping[] _escapePropertyMappings = new[]
{
new EscapeMapping(token: "%", replacement: "%25"),
new EscapeMapping(token: "\r", replacement: "%0D"),
new EscapeMapping(token: "\n", replacement: "%0A"),
new EscapeMapping(token: ":", replacement: "%3A"),
new EscapeMapping(token: ",", replacement: "%2C"),
new EscapeMapping(token: "%", replacement: "%25"),
};
private readonly Dictionary<string, string> _properties = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);


@@ -1,33 +0,0 @@
using System.Threading;
using System.Threading.Tasks;
namespace GitHub.Runner.Common
{
//Stephen Toub: http://blogs.msdn.com/b/pfxteam/archive/2012/02/11/10266920.aspx
public class AsyncManualResetEvent
{
private volatile TaskCompletionSource<bool> m_tcs = new TaskCompletionSource<bool>();
public Task WaitAsync() { return m_tcs.Task; }
public void Set()
{
var tcs = m_tcs;
Task.Factory.StartNew(s => ((TaskCompletionSource<bool>)s).TrySetResult(true),
tcs, CancellationToken.None, TaskCreationOptions.PreferFairness, TaskScheduler.Default);
tcs.Task.Wait();
}
public void Reset()
{
while (true)
{
var tcs = m_tcs;
if (!tcs.Task.IsCompleted ||
Interlocked.CompareExchange(ref m_tcs, new TaskCompletionSource<bool>(), tcs) == tcs)
return;
}
}
}
}


@@ -1,6 +1,8 @@
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using System;
using System.IO;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using System.Threading;
@@ -13,8 +15,8 @@ namespace GitHub.Runner.Common
[DataContract]
public sealed class RunnerSettings
{
[DataMember(EmitDefaultValue = false)]
public bool AcceptTeeEula { get; set; }
[DataMember(Name = "IsHostedServer", EmitDefaultValue = false)]
private bool? _isHostedServer;
[DataMember(EmitDefaultValue = false)]
public int AgentId { get; set; }
@@ -22,15 +24,6 @@ namespace GitHub.Runner.Common
[DataMember(EmitDefaultValue = false)]
public string AgentName { get; set; }
[DataMember(EmitDefaultValue = false)]
public string NotificationPipeName { get; set; }
[DataMember(EmitDefaultValue = false)]
public string NotificationSocketAddress { get; set; }
[DataMember(EmitDefaultValue = false)]
public bool SkipCapabilitiesScan { get; set; }
[DataMember(EmitDefaultValue = false)]
public bool SkipSessionRecover { get; set; }
@@ -51,15 +44,58 @@ namespace GitHub.Runner.Common
[DataMember(EmitDefaultValue = false)]
public string MonitorSocketAddress { get; set; }
}
[DataContract]
public sealed class RunnerRuntimeOptions
{
#if OS_WINDOWS
[DataMember(EmitDefaultValue = false)]
public bool GitUseSecureChannel { get; set; }
#endif
[IgnoreDataMember]
public bool IsHostedServer
{
get
{
// Old runners do not have this property. Hosted runners likely don't have this property either.
return _isHostedServer ?? true;
}
set
{
_isHostedServer = value;
}
}
/// <summary>
// Computed property for convenience. Can either return:
// 1. If runner was configured at the repo level, returns something like: "myorg/myrepo"
// 2. If runner was configured at the org level, returns something like: "myorg"
/// </summary>
public string RepoOrOrgName
{
get
{
Uri accountUri = new Uri(this.ServerUrl);
string repoOrOrgName = string.Empty;
if (accountUri.Host.EndsWith(".githubusercontent.com", StringComparison.OrdinalIgnoreCase))
{
Uri gitHubUrl = new Uri(this.GitHubUrl);
// Use the "NWO part" from the GitHub URL path
repoOrOrgName = gitHubUrl.AbsolutePath.Trim('/');
}
else
{
repoOrOrgName = accountUri.AbsolutePath.Split('/', StringSplitOptions.RemoveEmptyEntries).FirstOrDefault();
}
return repoOrOrgName;
}
}
[OnSerializing]
private void OnSerializing(StreamingContext context)
{
if (_isHostedServer.HasValue && _isHostedServer.Value)
{
_isHostedServer = null;
}
}
}
[ServiceLocator(Default = typeof(ConfigurationStore))]
@@ -69,14 +105,13 @@ namespace GitHub.Runner.Common
bool IsServiceConfigured();
bool HasCredentials();
CredentialData GetCredentials();
CredentialData GetMigratedCredentials();
RunnerSettings GetSettings();
void SaveCredential(CredentialData credential);
void SaveSettings(RunnerSettings settings);
void DeleteCredential();
void DeleteMigratedCredential();
void DeleteSettings();
RunnerRuntimeOptions GetRunnerRuntimeOptions();
void SaveRunnerRuntimeOptions(RunnerRuntimeOptions options);
void DeleteRunnerRuntimeOptions();
}
public sealed class ConfigurationStore : RunnerService, IConfigurationStore
@@ -84,12 +119,12 @@ namespace GitHub.Runner.Common
private string _binPath;
private string _configFilePath;
private string _credFilePath;
private string _migratedCredFilePath;
private string _serviceConfigFilePath;
private string _runtimeOptionsFilePath;
private CredentialData _creds;
private CredentialData _migratedCreds;
private RunnerSettings _settings;
private RunnerRuntimeOptions _runtimeOptions;
public override void Initialize(IHostContext hostContext)
{
@@ -110,20 +145,19 @@ namespace GitHub.Runner.Common
_credFilePath = hostContext.GetConfigFile(WellKnownConfigFile.Credentials);
Trace.Info("CredFilePath: {0}", _credFilePath);
_migratedCredFilePath = hostContext.GetConfigFile(WellKnownConfigFile.MigratedCredentials);
Trace.Info("MigratedCredFilePath: {0}", _migratedCredFilePath);
_serviceConfigFilePath = hostContext.GetConfigFile(WellKnownConfigFile.Service);
Trace.Info("ServiceConfigFilePath: {0}", _serviceConfigFilePath);
_runtimeOptionsFilePath = hostContext.GetConfigFile(WellKnownConfigFile.Options);
Trace.Info("RuntimeOptionsFilePath: {0}", _runtimeOptionsFilePath);
}
public string RootFolder { get; private set; }
public bool HasCredentials()
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
Trace.Info("HasCredentials()");
bool credsStored = (new FileInfo(_credFilePath)).Exists;
bool credsStored = (new FileInfo(_credFilePath)).Exists || (new FileInfo(_migratedCredFilePath)).Exists;
Trace.Info("stored {0}", credsStored);
return credsStored;
}
@@ -131,14 +165,13 @@ namespace GitHub.Runner.Common
public bool IsConfigured()
{
Trace.Info("IsConfigured()");
bool configured = HostContext.RunMode == RunMode.Local || (new FileInfo(_configFilePath)).Exists;
bool configured = new FileInfo(_configFilePath).Exists;
Trace.Info("IsConfigured: {0}", configured);
return configured;
}
public bool IsServiceConfigured()
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
Trace.Info("IsServiceConfigured()");
bool serviceConfigured = (new FileInfo(_serviceConfigFilePath)).Exists;
Trace.Info($"IsServiceConfigured: {serviceConfigured}");
@@ -147,7 +180,6 @@ namespace GitHub.Runner.Common
public CredentialData GetCredentials()
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
if (_creds == null)
{
_creds = IOUtil.LoadObject<CredentialData>(_credFilePath);
@@ -156,6 +188,16 @@ namespace GitHub.Runner.Common
return _creds;
}
public CredentialData GetMigratedCredentials()
{
if (_migratedCreds == null && File.Exists(_migratedCredFilePath))
{
_migratedCreds = IOUtil.LoadObject<CredentialData>(_migratedCredFilePath);
}
return _migratedCreds;
}
public RunnerSettings GetSettings()
{
if (_settings == null)
@@ -177,7 +219,6 @@ namespace GitHub.Runner.Common
public void SaveCredential(CredentialData credential)
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
Trace.Info("Saving {0} credential @ {1}", credential.Scheme, _credFilePath);
if (File.Exists(_credFilePath))
{
@@ -193,7 +234,6 @@ namespace GitHub.Runner.Common
public void SaveSettings(RunnerSettings settings)
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
Trace.Info("Saving runner settings.");
if (File.Exists(_configFilePath))
{
@@ -209,44 +249,18 @@ namespace GitHub.Runner.Common
public void DeleteCredential()
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
IOUtil.Delete(_credFilePath, default(CancellationToken));
IOUtil.Delete(_migratedCredFilePath, default(CancellationToken));
}
public void DeleteMigratedCredential()
{
IOUtil.Delete(_migratedCredFilePath, default(CancellationToken));
}
public void DeleteSettings()
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
IOUtil.Delete(_configFilePath, default(CancellationToken));
}
public RunnerRuntimeOptions GetRunnerRuntimeOptions()
{
if (_runtimeOptions == null && File.Exists(_runtimeOptionsFilePath))
{
_runtimeOptions = IOUtil.LoadObject<RunnerRuntimeOptions>(_runtimeOptionsFilePath);
}
return _runtimeOptions;
}
public void SaveRunnerRuntimeOptions(RunnerRuntimeOptions options)
{
Trace.Info("Saving runtime options.");
if (File.Exists(_runtimeOptionsFilePath))
{
// Delete the existing runtime options file first, since the file is hidden and cannot be overwritten in place.
Trace.Info("Delete existing runtime options file.");
IOUtil.DeleteFile(_runtimeOptionsFilePath);
}
IOUtil.SaveObject(options, _runtimeOptionsFilePath);
Trace.Info("Options Saved.");
File.SetAttributes(_runtimeOptionsFilePath, File.GetAttributes(_runtimeOptionsFilePath) | FileAttributes.Hidden);
}
public void DeleteRunnerRuntimeOptions()
{
IOUtil.Delete(_runtimeOptionsFilePath, default(CancellationToken));
}
}
}

View File

@@ -2,12 +2,6 @@
namespace GitHub.Runner.Common
{
public enum RunMode
{
Normal, // Keep "Normal" first (default value).
Local,
}
public enum WellKnownDirectory
{
Bin,
@@ -25,14 +19,13 @@ namespace GitHub.Runner.Common
{
Runner,
Credentials,
MigratedCredentials,
RSACredentials,
Service,
CredentialStore,
Certificates,
Proxy,
ProxyCredentials,
ProxyBypass,
Options,
SetupInfo,
}
public static class Constants
@@ -93,44 +86,22 @@ namespace GitHub.Runner.Common
//validArgs array as well present in the CommandSettings.cs
public static class Args
{
public static readonly string Agent = "agent";
public static readonly string Auth = "auth";
public static readonly string CollectionName = "collectionname";
public static readonly string DeploymentGroupName = "deploymentgroupname";
public static readonly string DeploymentPoolName = "deploymentpoolname";
public static readonly string DeploymentGroupTags = "deploymentgrouptags";
public static readonly string MachineGroupName = "machinegroupname";
public static readonly string MachineGroupTags = "machinegrouptags";
public static readonly string Matrix = "matrix";
public static readonly string Labels = "labels";
public static readonly string MonitorSocketAddress = "monitorsocketaddress";
public static readonly string NotificationPipeName = "notificationpipename";
public static readonly string NotificationSocketAddress = "notificationsocketaddress";
public static readonly string Pool = "pool";
public static readonly string ProjectName = "projectname";
public static readonly string ProxyUrl = "proxyurl";
public static readonly string ProxyUserName = "proxyusername";
public static readonly string SslCACert = "sslcacert";
public static readonly string SslClientCert = "sslclientcert";
public static readonly string SslClientCertKey = "sslclientcertkey";
public static readonly string SslClientCertArchive = "sslclientcertarchive";
public static readonly string SslClientCertPassword = "sslclientcertpassword";
public static readonly string Name = "name";
public static readonly string RunnerGroup = "runnergroup";
public static readonly string StartupType = "startuptype";
public static readonly string Url = "url";
public static readonly string UserName = "username";
public static readonly string WindowsLogonAccount = "windowslogonaccount";
public static readonly string Work = "work";
public static readonly string Yml = "yml";
// Secret args. Must be added to the "Secrets" getter as well.
public static readonly string Password = "password";
public static readonly string ProxyPassword = "proxypassword";
public static readonly string Token = "token";
public static readonly string WindowsLogonPassword = "windowslogonpassword";
public static string[] Secrets => new[]
{
Password,
ProxyPassword,
SslClientCertPassword,
Token,
WindowsLogonPassword,
};
@@ -139,7 +110,6 @@ namespace GitHub.Runner.Common
public static class Commands
{
public static readonly string Configure = "configure";
public static readonly string LocalRun = "localRun";
public static readonly string Remove = "remove";
public static readonly string Run = "run";
public static readonly string Warmup = "warmup";
@@ -149,26 +119,13 @@ namespace GitHub.Runner.Common
//validFlags array as well present in the CommandSettings.cs
public static class Flags
{
public static readonly string AcceptTeeEula = "acceptteeeula";
public static readonly string AddDeploymentGroupTags = "adddeploymentgrouptags";
public static readonly string AddMachineGroupTags = "addmachinegrouptags";
public static readonly string Commit = "commit";
public static readonly string DeploymentGroup = "deploymentgroup";
public static readonly string DeploymentPool = "deploymentpool";
public static readonly string OverwriteAutoLogon = "overwriteautologon";
public static readonly string GitUseSChannel = "gituseschannel";
public static readonly string Help = "help";
public static readonly string MachineGroup = "machinegroup";
public static readonly string Replace = "replace";
public static readonly string NoRestart = "norestart";
public static readonly string LaunchBrowser = "launchbrowser";
public static readonly string Once = "once";
public static readonly string RunAsAutoLogon = "runasautologon";
public static readonly string RunAsService = "runasservice";
public static readonly string SslSkipCertValidation = "sslskipcertvalidation";
public static readonly string Unattended = "unattended";
public static readonly string Version = "version";
public static readonly string WhatIf = "whatif";
}
}
@@ -180,6 +137,18 @@ namespace GitHub.Runner.Common
public const int RunnerUpdating = 3;
public const int RunOnceRunnerUpdating = 4;
}
public static readonly string InternalTelemetryIssueDataKey = "_internal_telemetry";
public static readonly string WorkerCrash = "WORKER_CRASH";
public static readonly string UnsupportedCommand = "UNSUPPORTED_COMMAND";
public static readonly string UnsupportedCommandMessage = "The `{0}` command is deprecated and will be disabled on November 16th. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/";
public static readonly string UnsupportedCommandMessageDisabled = "The `{0}` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/";
}
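// Illustrative use of the format strings above (hypothetical call site): the {0} placeholder
// receives the name of the blocked workflow command, e.g.
//   var warning = string.Format(UnsupportedCommandMessage, "set-env");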
public static class RunnerEvent
{
public static readonly string Register = "register";
public static readonly string Remove = "remove";
}
public static class Pipeline
@@ -193,37 +162,29 @@ namespace GitHub.Runner.Common
public static class Configuration
{
public static readonly string AAD = "AAD";
public static readonly string OAuthAccessToken = "OAuthAccessToken";
public static readonly string PAT = "PAT";
public static readonly string OAuth = "OAuth";
}
public static class Expressions
{
public static readonly string Always = "always";
public static readonly string Canceled = "canceled";
public static readonly string Cancelled = "cancelled";
public static readonly string Failed = "failed";
public static readonly string Failure = "failure";
public static readonly string Success = "success";
public static readonly string Succeeded = "succeeded";
public static readonly string SucceededOrFailed = "succeededOrFailed";
public static readonly string Variables = "variables";
}
public static class Path
{
public static readonly string ActionsDirectory = "_actions";
public static readonly string ActionManifestFile = "action.yml";
public static readonly string ActionManifestYmlFile = "action.yml";
public static readonly string ActionManifestYamlFile = "action.yaml";
public static readonly string BinDirectory = "bin";
public static readonly string DiagDirectory = "_diag";
public static readonly string ExternalsDirectory = "externals";
public static readonly string RunnerDiagnosticLogPrefix = "Runner_";
public static readonly string TempDirectory = "_temp";
public static readonly string TeeDirectory = "tee";
public static readonly string ToolDirectory = "_tool";
public static readonly string TaskJsonFile = "task.json";
public static readonly string UpdateDirectory = "_update";
public static readonly string WorkDirectory = "_work";
public static readonly string WorkerDiagnosticLogPrefix = "Worker_";
@@ -240,103 +201,24 @@ namespace GitHub.Runner.Common
//
// Keep alphabetical
//
public static readonly string AllowUnsupportedCommands = "ACTIONS_ALLOW_UNSECURE_COMMANDS";
public static readonly string RunnerDebug = "ACTIONS_RUNNER_DEBUG";
public static readonly string StepDebug = "ACTIONS_STEP_DEBUG";
}
public static class Agent
{
//
// Keep alphabetical
//
public static readonly string AcceptTeeEula = "agent.acceptteeeula";
public static readonly string AllowAllEndpoints = "agent.allowAllEndpoints"; // remove after sprint 120 or so.
public static readonly string AllowAllSecureFiles = "agent.allowAllSecureFiles"; // remove after sprint 121 or so.
public static readonly string BuildDirectory = "agent.builddirectory";
public static readonly string ContainerId = "agent.containerid";
public static readonly string ContainerNetwork = "agent.containernetwork";
public static readonly string HomeDirectory = "agent.homedirectory";
public static readonly string Id = "agent.id";
public static readonly string GitUseSChannel = "agent.gituseschannel";
public static readonly string JobName = "agent.jobname";
public static readonly string MachineName = "agent.machinename";
public static readonly string Name = "agent.name";
public static readonly string OS = "agent.os";
public static readonly string OSArchitecture = "agent.osarchitecture";
public static readonly string OSVersion = "agent.osversion";
public static readonly string ProxyUrl = "agent.proxyurl";
public static readonly string ProxyUsername = "agent.proxyusername";
public static readonly string ProxyPassword = "agent.proxypassword";
public static readonly string ProxyBypassList = "agent.proxybypasslist";
public static readonly string RetainDefaultEncoding = "agent.retainDefaultEncoding";
public static readonly string RootDirectory = "agent.RootDirectory";
public static readonly string RunMode = "agent.runmode";
public static readonly string ServerOMDirectory = "agent.ServerOMDirectory";
public static readonly string ServicePortPrefix = "agent.services";
public static readonly string SslCAInfo = "agent.cainfo";
public static readonly string SslClientCert = "agent.clientcert";
public static readonly string SslClientCertKey = "agent.clientcertkey";
public static readonly string SslClientCertArchive = "agent.clientcertarchive";
public static readonly string SslClientCertPassword = "agent.clientcertpassword";
public static readonly string SslSkipCertValidation = "agent.skipcertvalidation";
public static readonly string TempDirectory = "agent.TempDirectory";
public static readonly string ToolsDirectory = "agent.ToolsDirectory";
public static readonly string Version = "agent.version";
public static readonly string WorkFolder = "agent.workfolder";
public static readonly string WorkingDirectory = "agent.WorkingDirectory";
}
public static class Build
{
//
// Keep alphabetical
//
public static readonly string ArtifactStagingDirectory = "build.artifactstagingdirectory";
public static readonly string BinariesDirectory = "build.binariesdirectory";
public static readonly string Number = "build.buildNumber";
public static readonly string Clean = "build.clean";
public static readonly string DefinitionName = "build.definitionname";
public static readonly string GatedRunCI = "build.gated.runci";
public static readonly string GatedShelvesetName = "build.gated.shelvesetname";
public static readonly string RepoClean = "build.repository.clean";
public static readonly string RepoGitSubmoduleCheckout = "build.repository.git.submodulecheckout";
public static readonly string RepoId = "build.repository.id";
public static readonly string RepoLocalPath = "build.repository.localpath";
public static readonly string RepoName = "build.Repository.name";
public static readonly string RepoProvider = "build.repository.provider";
public static readonly string RepoTfvcWorkspace = "build.repository.tfvc.workspace";
public static readonly string RepoUri = "build.repository.uri";
public static readonly string SourceBranch = "build.sourcebranch";
public static readonly string SourceTfvcShelveset = "build.sourcetfvcshelveset";
public static readonly string SourceVersion = "build.sourceversion";
public static readonly string SourcesDirectory = "build.sourcesdirectory";
public static readonly string StagingDirectory = "build.stagingdirectory";
public static readonly string SyncSources = "build.syncSources";
}
public static class System
{
//
// Keep alphabetical
//
public static readonly string AccessToken = "system.accessToken";
public static readonly string ArtifactsDirectory = "system.artifactsdirectory";
public static readonly string CollectionId = "system.collectionid";
public static readonly string Culture = "system.culture";
public static readonly string DefaultWorkingDirectory = "system.defaultworkingdirectory";
public static readonly string DefinitionId = "system.definitionid";
public static readonly string EnableAccessToken = "system.enableAccessToken";
public static readonly string HostType = "system.hosttype";
public static readonly string PhaseDisplayName = "system.phaseDisplayName";
public static readonly string PreferGitFromPath = "system.prefergitfrompath";
public static readonly string PullRequestTargetBranchName = "system.pullrequest.targetbranch";
public static readonly string SelfManageGitCreds = "system.selfmanagegitcreds";
public static readonly string ServerType = "system.servertype";
public static readonly string TFServerUrl = "system.TeamFoundationServerUri"; // back compat variable, do not document
public static readonly string TeamProject = "system.teamproject";
public static readonly string TeamProjectId = "system.teamProjectId";
public static readonly string WorkFolder = "system.workfolder";
}
}
}

View File

@@ -56,6 +56,10 @@ namespace GitHub.Runner.Common
Add<T>(extensions, "GitHub.Runner.Worker.EndGroupCommandExtension, Runner.Worker");
Add<T>(extensions, "GitHub.Runner.Worker.EchoCommandExtension, Runner.Worker");
break;
case "GitHub.Runner.Worker.IFileCommandExtension":
Add<T>(extensions, "GitHub.Runner.Worker.AddPathFileCommand, Runner.Worker");
Add<T>(extensions, "GitHub.Runner.Worker.SetEnvFileCommand, Runner.Worker");
break;
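// Background on the two new extensions (assumed Environment Files behavior): SetEnvFileCommand and
// AddPathFileCommand process the files referenced by $GITHUB_ENV and $GITHUB_PATH, where a step
// appends lines such as MY_VAR=value (or one path per line), replacing the deprecated
// set-env/add-path workflow commands mentioned elsewhere in this change.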
default:
// This should never happen.
throw new NotSupportedException($"Unexpected extension type: '{typeof(T).FullName}'");

View File

@@ -1,31 +1,30 @@
using GitHub.Runner.Common.Util;
using System;
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.Tracing;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Reflection;
using System.Runtime.Loader;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
using System.Net.Http;
using System.Diagnostics.Tracing;
using GitHub.DistributedTask.Logging;
using System.Net.Http.Headers;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common
{
public interface IHostContext : IDisposable
{
RunMode RunMode { get; set; }
StartupType StartupType { get; set; }
CancellationToken RunnerShutdownToken { get; }
ShutdownReason RunnerShutdownReason { get; }
ISecretMasker SecretMasker { get; }
ProductInfoHeaderValue UserAgent { get; }
List<ProductInfoHeaderValue> UserAgents { get; }
RunnerWebProxy WebProxy { get; }
string GetDirectory(WellKnownDirectory directory);
string GetConfigFile(WellKnownConfigFile configFile);
Tracing GetTrace(string name);
@@ -54,25 +53,26 @@ namespace GitHub.Runner.Common
private readonly ConcurrentDictionary<Type, object> _serviceInstances = new ConcurrentDictionary<Type, object>();
private readonly ConcurrentDictionary<Type, Type> _serviceTypes = new ConcurrentDictionary<Type, Type>();
private readonly ISecretMasker _secretMasker = new SecretMasker();
private readonly ProductInfoHeaderValue _userAgent = new ProductInfoHeaderValue($"GitHubActionsRunner-{BuildConstants.RunnerPackage.PackageName}", BuildConstants.RunnerPackage.Version);
private readonly List<ProductInfoHeaderValue> _userAgents = new List<ProductInfoHeaderValue>() { new ProductInfoHeaderValue($"GitHubActionsRunner-{BuildConstants.RunnerPackage.PackageName}", BuildConstants.RunnerPackage.Version) };
private CancellationTokenSource _runnerShutdownTokenSource = new CancellationTokenSource();
private object _perfLock = new object();
private RunMode _runMode = RunMode.Normal;
private Tracing _trace;
private Tracing _vssTrace;
private Tracing _httpTrace;
private Tracing _actionsHttpTrace;
private Tracing _netcoreHttpTrace;
private ITraceManager _traceManager;
private AssemblyLoadContext _loadContext;
private IDisposable _httpTraceSubscription;
private IDisposable _diagListenerSubscription;
private StartupType _startupType;
private string _perfFile;
private RunnerWebProxy _webProxy = new RunnerWebProxy();
public event EventHandler Unloading;
public CancellationToken RunnerShutdownToken => _runnerShutdownTokenSource.Token;
public ShutdownReason RunnerShutdownReason { get; private set; }
public ISecretMasker SecretMasker => _secretMasker;
public ProductInfoHeaderValue UserAgent => _userAgent;
public List<ProductInfoHeaderValue> UserAgents => _userAgents;
public RunnerWebProxy WebProxy => _webProxy;
public HostContext(string hostType, string logFile = null)
{
// Validate args.
@@ -88,6 +88,7 @@ namespace GitHub.Runner.Common
this.SecretMasker.AddValueEncoder(ValueEncoders.JsonStringEscape);
this.SecretMasker.AddValueEncoder(ValueEncoders.UriDataEscape);
this.SecretMasker.AddValueEncoder(ValueEncoders.XmlDataEscape);
this.SecretMasker.AddValueEncoder(ValueEncoders.TrimDoubleQuotes);
// Create the trace manager.
if (string.IsNullOrEmpty(logFile))
@@ -116,8 +117,7 @@ namespace GitHub.Runner.Common
}
_trace = GetTrace(nameof(HostContext));
_vssTrace = GetTrace("GitHubActionsRunner"); // VisualStudioService
_actionsHttpTrace = GetTrace("GitHubActionsService");
// Enable Http trace
bool enableHttpTrace;
if (bool.TryParse(Environment.GetEnvironmentVariable("GITHUB_ACTIONS_RUNNER_HTTPTRACE"), out enableHttpTrace) && enableHttpTrace)
@@ -129,7 +129,7 @@ namespace GitHub.Runner.Common
_trace.Warning("** **");
_trace.Warning("*****************************************************************************************");
_httpTrace = GetTrace("HttpTrace");
_netcoreHttpTrace = GetTrace("HttpTrace");
_diagListenerSubscription = DiagnosticListener.AllListeners.Subscribe(this);
}
@@ -147,19 +147,58 @@ namespace GitHub.Runner.Common
_trace.Error(ex);
}
}
}
public RunMode RunMode
{
get
// Check and trace proxy info
if (!string.IsNullOrEmpty(WebProxy.HttpProxyAddress))
{
return _runMode;
if (string.IsNullOrEmpty(WebProxy.HttpProxyUsername) && string.IsNullOrEmpty(WebProxy.HttpProxyPassword))
{
_trace.Info($"Configuring anonymous proxy {WebProxy.HttpProxyAddress} for all HTTP requests.");
}
else
{
// Register proxy password as secret
if (!string.IsNullOrEmpty(WebProxy.HttpProxyPassword))
{
this.SecretMasker.AddValue(WebProxy.HttpProxyPassword);
}
_trace.Info($"Configuring authenticated proxy {WebProxy.HttpProxyAddress} for all HTTP requests.");
}
}
set
if (!string.IsNullOrEmpty(WebProxy.HttpsProxyAddress))
{
_trace.Info($"Set run mode: {value}");
_runMode = value;
if (string.IsNullOrEmpty(WebProxy.HttpsProxyUsername) && string.IsNullOrEmpty(WebProxy.HttpsProxyPassword))
{
_trace.Info($"Configuring anonymous proxy {WebProxy.HttpsProxyAddress} for all HTTPS requests.");
}
else
{
// Register proxy password as secret
if (!string.IsNullOrEmpty(WebProxy.HttpsProxyPassword))
{
this.SecretMasker.AddValue(WebProxy.HttpsProxyPassword);
}
_trace.Info($"Configuring authenticated proxy {WebProxy.HttpsProxyAddress} for all HTTPS requests.");
}
}
if (string.IsNullOrEmpty(WebProxy.HttpProxyAddress) && string.IsNullOrEmpty(WebProxy.HttpsProxyAddress))
{
_trace.Info($"No proxy settings were found based on environmental variables (http_proxy/https_proxy/HTTP_PROXY/HTTPS_PROXY)");
}
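// Example environment for the traces above (addresses are made up, for illustration only):
//   http_proxy=http://127.0.0.1:8888   -> traced as an anonymous proxy for HTTP requests
//   https_proxy=http://127.0.0.1:8888  -> traced as an anonymous proxy for HTTPS requests
// When RunnerWebProxy carries proxy credentials, the password is registered with the secret masker
// before the authenticated-proxy message is traced.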
var credFile = GetConfigFile(WellKnownConfigFile.Credentials);
if (File.Exists(credFile))
{
var credData = IOUtil.LoadObject<CredentialData>(credFile);
if (credData != null &&
credData.Data.TryGetValue("clientId", out var clientId))
{
_userAgents.Add(new ProductInfoHeaderValue($"RunnerId", clientId));
}
}
}
@@ -203,6 +242,7 @@ namespace GitHub.Runner.Common
case WellKnownDirectory.Tools:
// TODO: Coalesce to just check RUNNER_TOOL_CACHE when images stabilize
path = Environment.GetEnvironmentVariable("RUNNER_TOOL_CACHE") ?? Environment.GetEnvironmentVariable("RUNNER_TOOLSDIRECTORY") ?? Environment.GetEnvironmentVariable("AGENT_TOOLSDIRECTORY") ?? Environment.GetEnvironmentVariable(Constants.Variables.Agent.ToolsDirectory);
if (string.IsNullOrEmpty(path))
{
path = Path.Combine(
@@ -252,6 +292,12 @@ namespace GitHub.Runner.Common
".credentials");
break;
case WellKnownConfigFile.MigratedCredentials:
path = Path.Combine(
GetDirectory(WellKnownDirectory.Root),
".credentials_migrated");
break;
case WellKnownConfigFile.RSACredentials:
path = Path.Combine(
GetDirectory(WellKnownDirectory.Root),
@@ -282,29 +328,18 @@ namespace GitHub.Runner.Common
".certificates");
break;
case WellKnownConfigFile.Proxy:
path = Path.Combine(
GetDirectory(WellKnownDirectory.Root),
".proxy");
break;
case WellKnownConfigFile.ProxyCredentials:
path = Path.Combine(
GetDirectory(WellKnownDirectory.Root),
".proxycredentials");
break;
case WellKnownConfigFile.ProxyBypass:
path = Path.Combine(
GetDirectory(WellKnownDirectory.Root),
".proxybypass");
break;
case WellKnownConfigFile.Options:
path = Path.Combine(
GetDirectory(WellKnownDirectory.Root),
".options");
break;
case WellKnownConfigFile.SetupInfo:
path = Path.Combine(
GetDirectory(WellKnownDirectory.Root),
".setup_info");
break;
default:
throw new NotSupportedException($"Unexpected well known config file: '{configFile}'");
}
@@ -467,12 +502,12 @@ namespace GitHub.Runner.Common
void IObserver<DiagnosticListener>.OnCompleted()
{
_httpTrace.Info("DiagListeners finished transmitting data.");
_netcoreHttpTrace.Info("DiagListeners finished transmitting data.");
}
void IObserver<DiagnosticListener>.OnError(Exception error)
{
_httpTrace.Error(error);
_netcoreHttpTrace.Error(error);
}
void IObserver<DiagnosticListener>.OnNext(DiagnosticListener listener)
@@ -485,22 +520,22 @@ namespace GitHub.Runner.Common
void IObserver<KeyValuePair<string, object>>.OnCompleted()
{
_httpTrace.Info("HttpHandlerDiagnosticListener finished transmitting data.");
_netcoreHttpTrace.Info("HttpHandlerDiagnosticListener finished transmitting data.");
}
void IObserver<KeyValuePair<string, object>>.OnError(Exception error)
{
_httpTrace.Error(error);
_netcoreHttpTrace.Error(error);
}
void IObserver<KeyValuePair<string, object>>.OnNext(KeyValuePair<string, object> value)
{
_httpTrace.Info($"Trace {value.Key} event:{Environment.NewLine}{value.Value.ToString()}");
_netcoreHttpTrace.Info($"Trace {value.Key} event:{Environment.NewLine}{value.Value.ToString()}");
}
protected override void OnEventSourceCreated(EventSource source)
{
if (source.Name.Equals("Microsoft-VSS-Http"))
if (source.Name.Equals("GitHub-Actions-Http"))
{
EnableEvents(source, EventLevel.Verbose);
}
@@ -540,24 +575,24 @@ namespace GitHub.Runner.Common
{
case EventLevel.Critical:
case EventLevel.Error:
_vssTrace.Error(message);
_actionsHttpTrace.Error(message);
break;
case EventLevel.Warning:
_vssTrace.Warning(message);
_actionsHttpTrace.Warning(message);
break;
case EventLevel.Informational:
_vssTrace.Info(message);
_actionsHttpTrace.Info(message);
break;
default:
_vssTrace.Verbose(message);
_actionsHttpTrace.Verbose(message);
break;
}
}
catch (Exception ex)
{
_vssTrace.Error(ex);
_vssTrace.Info(eventData.Message);
_vssTrace.Info(string.Join(", ", eventData.Payload?.ToArray() ?? new string[0]));
_actionsHttpTrace.Error(ex);
_actionsHttpTrace.Info(eventData.Message);
_actionsHttpTrace.Info(string.Join(", ", eventData.Payload?.ToArray() ?? new string[0]));
}
}
@@ -579,10 +614,8 @@ namespace GitHub.Runner.Common
{
public static HttpClientHandler CreateHttpClientHandler(this IHostContext context)
{
HttpClientHandler clientHandler = new HttpClientHandler();
var runnerWebProxy = context.GetService<IRunnerWebProxy>();
clientHandler.Proxy = runnerWebProxy.WebProxy;
return clientHandler;
var handlerFactory = context.GetService<IHttpClientHandlerFactory>();
return handlerFactory.CreateClientHandler(context.WebProxy);
}
}

View File

@@ -0,0 +1,19 @@
using System.Net.Http;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(HttpClientHandlerFactory))]
public interface IHttpClientHandlerFactory : IRunnerService
{
HttpClientHandler CreateClientHandler(RunnerWebProxy webProxy);
}
public class HttpClientHandlerFactory : RunnerService, IHttpClientHandlerFactory
{
public HttpClientHandler CreateClientHandler(RunnerWebProxy webProxy)
{
return new HttpClientHandler() { Proxy = webProxy };
}
}
}
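A minimal sketch of how a caller could obtain a proxied handler through this factory, assuming the IHostContext.GetService wiring and WebProxy property shown elsewhere in this change; the HttpClient usage is illustrative only:
var handlerFactory = hostContext.GetService<IHttpClientHandlerFactory>();
var handler = handlerFactory.CreateClientHandler(hostContext.WebProxy);
using var client = new HttpClient(handler);
// Requests made with this client now go through the runner's configured web proxy, if any.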

View File

@@ -12,53 +12,21 @@ namespace GitHub.Runner.Common
[ServiceLocator(Default = typeof(JobNotification))]
public interface IJobNotification : IRunnerService, IDisposable
{
Task JobStarted(Guid jobId, string accessToken, Uri serverUrl);
void JobStarted(Guid jobId, string accessToken, Uri serverUrl);
Task JobCompleted(Guid jobId);
void StartClient(string pipeName, string monitorSocketAddress, CancellationToken cancellationToken);
void StartClient(string socketAddress, string monitorSocketAddress);
void StartClient(string monitorSocketAddress);
}
public sealed class JobNotification : RunnerService, IJobNotification
{
private NamedPipeClientStream _outClient;
private StreamWriter _writeStream;
private Socket _socket;
private Socket _monitorSocket;
private bool _configured = false;
private bool _useSockets = false;
private bool _isMonitorConfigured = false;
public async Task JobStarted(Guid jobId, string accessToken, Uri serverUrl)
public void JobStarted(Guid jobId, string accessToken, Uri serverUrl)
{
Trace.Info("Entering JobStarted Notification");
StartMonitor(jobId, accessToken, serverUrl);
if (_configured)
{
String message = $"Starting job: {jobId.ToString()}";
if (_useSockets)
{
try
{
Trace.Info("Writing JobStarted to socket");
_socket.Send(Encoding.UTF8.GetBytes(message));
Trace.Info("Finished JobStarted writing to socket");
}
catch (SocketException e)
{
Trace.Error($"Failed sending message \"{message}\" on socket!");
Trace.Error(e);
}
}
else
{
Trace.Info("Writing JobStarted to pipe");
await _writeStream.WriteLineAsync(message);
await _writeStream.FlushAsync();
Trace.Info("Finished JobStarted writing to pipe");
}
}
}
public async Task JobCompleted(Guid jobId)
@@ -66,95 +34,10 @@ namespace GitHub.Runner.Common
Trace.Info("Entering JobCompleted Notification");
await EndMonitor();
if (_configured)
{
String message = $"Finished job: {jobId.ToString()}";
if (_useSockets)
{
try
{
Trace.Info("Writing JobCompleted to socket");
_socket.Send(Encoding.UTF8.GetBytes(message));
Trace.Info("Finished JobCompleted writing to socket");
}
catch (SocketException e)
{
Trace.Error($"Failed sending message \"{message}\" on socket!");
Trace.Error(e);
}
}
else
{
Trace.Info("Writing JobCompleted to pipe");
await _writeStream.WriteLineAsync(message);
await _writeStream.FlushAsync();
Trace.Info("Finished JobCompleted writing to pipe");
}
}
}
public async void StartClient(string pipeName, string monitorSocketAddress, CancellationToken cancellationToken)
public void StartClient(string monitorSocketAddress)
{
if (pipeName != null && !_configured)
{
Trace.Info("Connecting to named pipe {0}", pipeName);
_outClient = new NamedPipeClientStream(".", pipeName, PipeDirection.Out, PipeOptions.Asynchronous);
await _outClient.ConnectAsync(cancellationToken);
_writeStream = new StreamWriter(_outClient, Encoding.UTF8);
_configured = true;
Trace.Info("Connection successful to named pipe {0}", pipeName);
}
ConnectMonitor(monitorSocketAddress);
}
public void StartClient(string socketAddress, string monitorSocketAddress)
{
if (!_configured)
{
try
{
string[] splitAddress = socketAddress.Split(':');
if (splitAddress.Length != 2)
{
Trace.Error("Invalid socket address {0}. Job Notification will be disabled.", socketAddress);
return;
}
IPAddress address;
try
{
address = IPAddress.Parse(splitAddress[0]);
}
catch (FormatException e)
{
Trace.Error("Invalid socket ip address {0}. Job Notification will be disabled",splitAddress[0]);
Trace.Error(e);
return;
}
int port = -1;
Int32.TryParse(splitAddress[1], out port);
if (port < IPEndPoint.MinPort || port > IPEndPoint.MaxPort)
{
Trace.Error("Invalid tcp socket port {0}. Job Notification will be disabled.", splitAddress[1]);
return;
}
_socket = new Socket(SocketType.Stream, ProtocolType.Tcp);
_socket.Connect(address, port);
Trace.Info("Connection successful to socket {0}", socketAddress);
_useSockets = true;
_configured = true;
}
catch (SocketException e)
{
Trace.Error("Connection to socket {0} failed!", socketAddress);
Trace.Error(e);
}
}
ConnectMonitor(monitorSocketAddress);
}
@@ -275,15 +158,6 @@ namespace GitHub.Runner.Common
{
if (disposing)
{
_outClient?.Dispose();
if (_socket != null)
{
_socket.Send(Encoding.UTF8.GetBytes("<EOF>"));
_socket.Shutdown(SocketShutdown.Both);
_socket = null;
}
if (_monitorSocket != null)
{
_monitorSocket.Send(Encoding.UTF8.GetBytes("<EOF>"));

View File

@@ -16,12 +16,14 @@ namespace GitHub.Runner.Common
// logging and console
Task<TaskLog> AppendLogContentAsync(Guid scopeIdentifier, string hubName, Guid planId, int logId, Stream uploadStream, CancellationToken cancellationToken);
Task AppendTimelineRecordFeedAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, Guid stepId, IList<string> lines, CancellationToken cancellationToken);
Task AppendTimelineRecordFeedAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, Guid stepId, IList<string> lines, long startLine, CancellationToken cancellationToken);
Task<TaskAttachment> CreateAttachmentAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, String type, String name, Stream uploadStream, CancellationToken cancellationToken);
Task<TaskLog> CreateLogAsync(Guid scopeIdentifier, string hubName, Guid planId, TaskLog log, CancellationToken cancellationToken);
Task<Timeline> CreateTimelineAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, CancellationToken cancellationToken);
Task<List<TimelineRecord>> UpdateTimelineRecordsAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, IEnumerable<TimelineRecord> records, CancellationToken cancellationToken);
Task RaisePlanEventAsync<T>(Guid scopeIdentifier, string hubName, Guid planId, T eventData, CancellationToken cancellationToken) where T : JobEvent;
Task<Timeline> GetTimelineAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, CancellationToken cancellationToken);
Task<ActionDownloadInfoCollection> ResolveActionDownloadInfoAsync(Guid scopeIdentifier, string hubName, Guid planId, ActionReferenceList actions, CancellationToken cancellationToken);
}
public sealed class JobServer : RunnerService, IJobServer
@@ -32,11 +34,6 @@ namespace GitHub.Runner.Common
public async Task ConnectAsync(VssConnection jobConnection)
{
if (HostContext.RunMode == RunMode.Local)
{
return;
}
_connection = jobConnection;
int attemptCount = 5;
while (!_connection.HasAuthenticated && attemptCount-- > 0)
@@ -73,90 +70,65 @@ namespace GitHub.Runner.Common
public Task<TaskLog> AppendLogContentAsync(Guid scopeIdentifier, string hubName, Guid planId, int logId, Stream uploadStream, CancellationToken cancellationToken)
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult<TaskLog>(null);
}
CheckConnection();
return _taskClient.AppendLogContentAsync(scopeIdentifier, hubName, planId, logId, uploadStream, cancellationToken: cancellationToken);
}
public Task AppendTimelineRecordFeedAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, Guid stepId, IList<string> lines, CancellationToken cancellationToken)
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.CompletedTask;
}
CheckConnection();
return _taskClient.AppendTimelineRecordFeedAsync(scopeIdentifier, hubName, planId, timelineId, timelineRecordId, stepId, lines, cancellationToken: cancellationToken);
}
public Task AppendTimelineRecordFeedAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, Guid stepId, IList<string> lines, long startLine, CancellationToken cancellationToken)
{
CheckConnection();
return _taskClient.AppendTimelineRecordFeedAsync(scopeIdentifier, hubName, planId, timelineId, timelineRecordId, stepId, lines, startLine, cancellationToken: cancellationToken);
}
public Task<TaskAttachment> CreateAttachmentAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, string type, string name, Stream uploadStream, CancellationToken cancellationToken)
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult<TaskAttachment>(null);
}
CheckConnection();
return _taskClient.CreateAttachmentAsync(scopeIdentifier, hubName, planId, timelineId, timelineRecordId, type, name, uploadStream, cancellationToken: cancellationToken);
}
public Task<TaskLog> CreateLogAsync(Guid scopeIdentifier, string hubName, Guid planId, TaskLog log, CancellationToken cancellationToken)
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult<TaskLog>(null);
}
CheckConnection();
return _taskClient.CreateLogAsync(scopeIdentifier, hubName, planId, log, cancellationToken: cancellationToken);
}
public Task<Timeline> CreateTimelineAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, CancellationToken cancellationToken)
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult<Timeline>(null);
}
CheckConnection();
return _taskClient.CreateTimelineAsync(scopeIdentifier, hubName, planId, new Timeline(timelineId), cancellationToken: cancellationToken);
}
public Task<List<TimelineRecord>> UpdateTimelineRecordsAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, IEnumerable<TimelineRecord> records, CancellationToken cancellationToken)
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult<List<TimelineRecord>>(null);
}
CheckConnection();
return _taskClient.UpdateTimelineRecordsAsync(scopeIdentifier, hubName, planId, timelineId, records, cancellationToken: cancellationToken);
}
public Task RaisePlanEventAsync<T>(Guid scopeIdentifier, string hubName, Guid planId, T eventData, CancellationToken cancellationToken) where T : JobEvent
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.CompletedTask;
}
CheckConnection();
return _taskClient.RaisePlanEventAsync(scopeIdentifier, hubName, planId, eventData, cancellationToken: cancellationToken);
}
public Task<Timeline> GetTimelineAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, CancellationToken cancellationToken)
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult<Timeline>(null);
}
CheckConnection();
return _taskClient.GetTimelineAsync(scopeIdentifier, hubName, planId, timelineId, includeRecords: true, cancellationToken: cancellationToken);
}
//-----------------------------------------------------------------
// Action download info
//-----------------------------------------------------------------
public Task<ActionDownloadInfoCollection> ResolveActionDownloadInfoAsync(Guid scopeIdentifier, string hubName, Guid planId, ActionReferenceList actions, CancellationToken cancellationToken)
{
CheckConnection();
return _taskClient.ResolveActionDownloadInfoAsync(scopeIdentifier, hubName, planId, actions, cancellationToken: cancellationToken);
}
}
}

View File

@@ -18,7 +18,7 @@ namespace GitHub.Runner.Common
event EventHandler<ThrottlingEventArgs> JobServerQueueThrottling;
Task ShutdownAsync();
void Start(Pipelines.AgentJobRequestMessage jobRequest);
void QueueWebConsoleLine(Guid stepRecordId, string line);
void QueueWebConsoleLine(Guid stepRecordId, string line, long? lineNumber = null);
void QueueFileUpload(Guid timelineId, Guid timelineRecordId, string type, string name, string path, bool deleteSource);
void QueueTimelineRecordUpdate(Guid timelineId, TimelineRecord timelineRecord);
}
@@ -63,7 +63,6 @@ namespace GitHub.Runner.Common
private Task[] _allDequeueTasks;
private readonly TaskCompletionSource<int> _jobCompletionSource = new TaskCompletionSource<int>();
private bool _queueInProcess = false;
private ITerminal _term;
public event EventHandler<ThrottlingEventArgs> JobServerQueueThrottling;
@@ -85,11 +84,6 @@ namespace GitHub.Runner.Common
public void Start(Pipelines.AgentJobRequestMessage jobRequest)
{
Trace.Entering();
if (HostContext.RunMode == RunMode.Local)
{
_term = HostContext.GetService<ITerminal>();
return;
}
if (_queueInProcess)
{
@@ -129,11 +123,6 @@ namespace GitHub.Runner.Common
// TimelineUpdate queue errors will become critical when timeline records contain output variables.
public async Task ShutdownAsync()
{
if (HostContext.RunMode == RunMode.Local)
{
return;
}
if (!_queueInProcess)
{
Trace.Info("No-op, all queue process tasks have been stopped.");
@@ -166,35 +155,14 @@ namespace GitHub.Runner.Common
Trace.Info("All queue process tasks have been stopped, and all queues are drained.");
}
public void QueueWebConsoleLine(Guid stepRecordId, string line)
public void QueueWebConsoleLine(Guid stepRecordId, string line, long? lineNumber)
{
Trace.Verbose("Enqueue web console line queue: {0}", line);
if (HostContext.RunMode == RunMode.Local)
{
if ((line ?? string.Empty).StartsWith("##[section]"))
{
Console.WriteLine("******************************************************************************");
Console.WriteLine(line.Substring("##[section]".Length));
Console.WriteLine("******************************************************************************");
}
else
{
Console.WriteLine(line);
}
return;
}
_webConsoleLineQueue.Enqueue(new ConsoleLineInfo(stepRecordId, line));
_webConsoleLineQueue.Enqueue(new ConsoleLineInfo(stepRecordId, line, lineNumber));
}
public void QueueFileUpload(Guid timelineId, Guid timelineRecordId, string type, string name, string path, bool deleteSource)
{
if (HostContext.RunMode == RunMode.Local)
{
return;
}
ArgUtil.NotEmpty(timelineId, nameof(timelineId));
ArgUtil.NotEmpty(timelineRecordId, nameof(timelineRecordId));
@@ -215,11 +183,6 @@ namespace GitHub.Runner.Common
public void QueueTimelineRecordUpdate(Guid timelineId, TimelineRecord timelineRecord)
{
if (HostContext.RunMode == RunMode.Local)
{
return;
}
ArgUtil.NotEmpty(timelineId, nameof(timelineId));
ArgUtil.NotNull(timelineRecord, nameof(timelineRecord));
ArgUtil.NotEmpty(timelineRecord.Id, nameof(timelineRecord.Id));
@@ -251,7 +214,7 @@ namespace GitHub.Runner.Common
}
// Group console lines by the timeline record of each step
Dictionary<Guid, List<string>> stepsConsoleLines = new Dictionary<Guid, List<string>>();
Dictionary<Guid, List<TimelineRecordLogLine>> stepsConsoleLines = new Dictionary<Guid, List<TimelineRecordLogLine>>();
List<Guid> stepRecordIds = new List<Guid>(); // We need to keep lines in order
int linesCounter = 0;
ConsoleLineInfo lineInfo;
@@ -259,7 +222,7 @@ namespace GitHub.Runner.Common
{
if (!stepsConsoleLines.ContainsKey(lineInfo.StepRecordId))
{
stepsConsoleLines[lineInfo.StepRecordId] = new List<string>();
stepsConsoleLines[lineInfo.StepRecordId] = new List<TimelineRecordLogLine>();
stepRecordIds.Add(lineInfo.StepRecordId);
}
@@ -269,7 +232,7 @@ namespace GitHub.Runner.Common
lineInfo.Line = $"{lineInfo.Line.Substring(0, 1024)}...";
}
stepsConsoleLines[lineInfo.StepRecordId].Add(lineInfo.Line);
stepsConsoleLines[lineInfo.StepRecordId].Add(new TimelineRecordLogLine(lineInfo.Line, lineInfo.LineNumber));
linesCounter++;
// process at most about 500 lines of web console line during regular timer dequeue task.
@@ -284,13 +247,13 @@ namespace GitHub.Runner.Common
{
// Split console lines into batches; each batch will contain at most 100 lines.
int batchCounter = 0;
List<List<string>> batchedLines = new List<List<string>>();
List<List<TimelineRecordLogLine>> batchedLines = new List<List<TimelineRecordLogLine>>();
foreach (var line in stepsConsoleLines[stepRecordId])
{
var currentBatch = batchedLines.ElementAtOrDefault(batchCounter);
if (currentBatch == null)
{
batchedLines.Add(new List<string>());
batchedLines.Add(new List<TimelineRecordLogLine>());
currentBatch = batchedLines.ElementAt(batchCounter);
}
@@ -312,7 +275,6 @@ namespace GitHub.Runner.Common
{
Trace.Info($"Skip {batchedLines.Count - 2} batches web console lines for last run");
batchedLines = batchedLines.TakeLast(2).ToList();
batchedLines[0].Insert(0, "...");
}
int errorCount = 0;
@@ -321,7 +283,15 @@ namespace GitHub.Runner.Common
try
{
// we will not requeue a failed batch, since the web console lines are time sensitive.
await _jobServer.AppendTimelineRecordFeedAsync(_scopeIdentifier, _hubName, _planId, _jobTimelineId, _jobTimelineRecordId, stepRecordId, batch, default(CancellationToken));
if (batch[0].LineNumber.HasValue)
{
await _jobServer.AppendTimelineRecordFeedAsync(_scopeIdentifier, _hubName, _planId, _jobTimelineId, _jobTimelineRecordId, stepRecordId, batch.Select(logLine => logLine.Line).ToList(), batch[0].LineNumber.Value, default(CancellationToken));
}
else
{
await _jobServer.AppendTimelineRecordFeedAsync(_scopeIdentifier, _hubName, _planId, _jobTimelineId, _jobTimelineRecordId, stepRecordId, batch.Select(logLine => logLine.Line).ToList(), default(CancellationToken));
}
if (_firstConsoleOutputs)
{
HostContext.WritePerfCounter($"WorkerJobServerQueueAppendFirstConsoleOutput_{_planId.ToString()}");
@@ -690,13 +660,15 @@ namespace GitHub.Runner.Common
internal class ConsoleLineInfo
{
public ConsoleLineInfo(Guid recordId, string line)
public ConsoleLineInfo(Guid recordId, string line, long? lineNumber)
{
this.StepRecordId = recordId;
this.Line = line;
this.LineNumber = lineNumber;
}
public Guid StepRecordId { get; set; }
public string Line { get; set; }
public long? LineNumber { get; set; }
}
}

View File

@@ -24,7 +24,6 @@ namespace GitHub.Runner.Common
private Guid _timelineId;
private Guid _timelineRecordId;
private string _pageId;
private FileStream _pageData;
private StreamWriter _pageWriter;
private int _byteCount;
@@ -40,7 +39,6 @@ namespace GitHub.Runner.Common
{
base.Initialize(hostContext);
_totalLines = 0;
_pageId = Guid.NewGuid().ToString();
_pagesFolder = Path.Combine(hostContext.GetDirectory(WellKnownDirectory.Diag), PagingFolder);
_jobServerQueue = HostContext.GetService<IJobServerQueue>();
Directory.CreateDirectory(_pagesFolder);
@@ -102,7 +100,7 @@ namespace GitHub.Runner.Common
{
EndPage();
_byteCount = 0;
_dataFileName = Path.Combine(_pagesFolder, $"{_pageId}_{++_pageCount}.log");
_dataFileName = Path.Combine(_pagesFolder, $"{_timelineId}_{_timelineRecordId}_{++_pageCount}.log");
_pageData = new FileStream(_dataFileName, FileMode.CreateNew);
_pageWriter = new StreamWriter(_pageData, System.Text.Encoding.UTF8);
}

View File

@@ -1,9 +1,9 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.0</TargetFramework>
<TargetFramework>netcoreapp3.1</TargetFramework>
<OutputType>Library</OutputType>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;rhel.6-x64;osx-x64</RuntimeIdentifiers>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64</RuntimeIdentifiers>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<AssetTargetFallback>portable-net45+win8</AssetTargetFallback>
<NoWarn>NU1701;NU1603</NoWarn>

View File

@@ -1,231 +0,0 @@
using System;
using GitHub.Runner.Common.Util;
using System.IO;
using System.Runtime.Serialization;
using GitHub.Services.Common;
using System.Security.Cryptography.X509Certificates;
using System.Net;
using System.Net.Security;
using System.Net.Http;
using GitHub.Services.WebApi;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(RunnerCertificateManager))]
public interface IRunnerCertificateManager : IRunnerService
{
bool SkipServerCertificateValidation { get; }
string CACertificateFile { get; }
string ClientCertificateFile { get; }
string ClientCertificatePrivateKeyFile { get; }
string ClientCertificateArchiveFile { get; }
string ClientCertificatePassword { get; }
IVssClientCertificateManager VssClientCertificateManager { get; }
}
public class RunnerCertificateManager : RunnerService, IRunnerCertificateManager
{
private RunnerClientCertificateManager _runnerClientCertificateManager = new RunnerClientCertificateManager();
public bool SkipServerCertificateValidation { private set; get; }
public string CACertificateFile { private set; get; }
public string ClientCertificateFile { private set; get; }
public string ClientCertificatePrivateKeyFile { private set; get; }
public string ClientCertificateArchiveFile { private set; get; }
public string ClientCertificatePassword { private set; get; }
public IVssClientCertificateManager VssClientCertificateManager => _runnerClientCertificateManager;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
LoadCertificateSettings();
}
// This should only be called from config
public void SetupCertificate(bool skipCertValidation, string caCert, string clientCert, string clientCertPrivateKey, string clientCertArchive, string clientCertPassword)
{
Trace.Info("Setup runner certificate setting base on configuration inputs.");
if (skipCertValidation)
{
Trace.Info("Ignore SSL server certificate validation error");
SkipServerCertificateValidation = true;
VssClientHttpRequestSettings.Default.ServerCertificateValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator;
}
if (!string.IsNullOrEmpty(caCert))
{
ArgUtil.File(caCert, nameof(caCert));
Trace.Info($"Self-Signed CA '{caCert}'");
}
if (!string.IsNullOrEmpty(clientCert))
{
ArgUtil.File(clientCert, nameof(clientCert));
ArgUtil.File(clientCertPrivateKey, nameof(clientCertPrivateKey));
ArgUtil.File(clientCertArchive, nameof(clientCertArchive));
Trace.Info($"Client cert '{clientCert}'");
Trace.Info($"Client cert private key '{clientCertPrivateKey}'");
Trace.Info($"Client cert archive '{clientCertArchive}'");
}
CACertificateFile = caCert;
ClientCertificateFile = clientCert;
ClientCertificatePrivateKeyFile = clientCertPrivateKey;
ClientCertificateArchiveFile = clientCertArchive;
ClientCertificatePassword = clientCertPassword;
_runnerClientCertificateManager.AddClientCertificate(ClientCertificateArchiveFile, ClientCertificatePassword);
}
// This should only be called from config
public void SaveCertificateSetting()
{
string certSettingFile = HostContext.GetConfigFile(WellKnownConfigFile.Certificates);
IOUtil.DeleteFile(certSettingFile);
var setting = new RunnerCertificateSetting();
if (SkipServerCertificateValidation)
{
Trace.Info($"Store Skip ServerCertificateValidation setting to '{certSettingFile}'");
setting.SkipServerCertValidation = true;
}
if (!string.IsNullOrEmpty(CACertificateFile))
{
Trace.Info($"Store CA cert setting to '{certSettingFile}'");
setting.CACert = CACertificateFile;
}
if (!string.IsNullOrEmpty(ClientCertificateFile) &&
!string.IsNullOrEmpty(ClientCertificatePrivateKeyFile) &&
!string.IsNullOrEmpty(ClientCertificateArchiveFile))
{
Trace.Info($"Store client cert settings to '{certSettingFile}'");
setting.ClientCert = ClientCertificateFile;
setting.ClientCertPrivatekey = ClientCertificatePrivateKeyFile;
setting.ClientCertArchive = ClientCertificateArchiveFile;
if (!string.IsNullOrEmpty(ClientCertificatePassword))
{
string lookupKey = Guid.NewGuid().ToString("D").ToUpperInvariant();
Trace.Info($"Store client cert private key password with lookup key {lookupKey}");
var credStore = HostContext.GetService<IRunnerCredentialStore>();
credStore.Write($"GITHUB_ACTIONS_RUNNER_CLIENT_CERT_PASSWORD_{lookupKey}", "GitHub", ClientCertificatePassword);
setting.ClientCertPasswordLookupKey = lookupKey;
}
}
if (SkipServerCertificateValidation ||
!string.IsNullOrEmpty(CACertificateFile) ||
!string.IsNullOrEmpty(ClientCertificateFile))
{
IOUtil.SaveObject(setting, certSettingFile);
File.SetAttributes(certSettingFile, File.GetAttributes(certSettingFile) | FileAttributes.Hidden);
}
}
// This should only be called from unconfig
public void DeleteCertificateSetting()
{
string certSettingFile = HostContext.GetConfigFile(WellKnownConfigFile.Certificates);
if (File.Exists(certSettingFile))
{
Trace.Info($"Load runner certificate setting from '{certSettingFile}'");
var certSetting = IOUtil.LoadObject<RunnerCertificateSetting>(certSettingFile);
if (certSetting != null && !string.IsNullOrEmpty(certSetting.ClientCertPasswordLookupKey))
{
Trace.Info("Delete client cert private key password from credential store.");
var credStore = HostContext.GetService<IRunnerCredentialStore>();
credStore.Delete($"GITHUB_ACTIONS_RUNNER_CLIENT_CERT_PASSWORD_{certSetting.ClientCertPasswordLookupKey}");
}
Trace.Info($"Delete cert setting file: {certSettingFile}");
IOUtil.DeleteFile(certSettingFile);
}
}
public void LoadCertificateSettings()
{
string certSettingFile = HostContext.GetConfigFile(WellKnownConfigFile.Certificates);
if (File.Exists(certSettingFile))
{
Trace.Info($"Load runner certificate setting from '{certSettingFile}'");
var certSetting = IOUtil.LoadObject<RunnerCertificateSetting>(certSettingFile);
ArgUtil.NotNull(certSetting, nameof(RunnerCertificateSetting));
if (certSetting.SkipServerCertValidation)
{
Trace.Info("Ignore SSL server certificate validation error");
SkipServerCertificateValidation = true;
VssClientHttpRequestSettings.Default.ServerCertificateValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator;
}
if (!string.IsNullOrEmpty(certSetting.CACert))
{
// make sure all settings file exist
ArgUtil.File(certSetting.CACert, nameof(certSetting.CACert));
Trace.Info($"CA '{certSetting.CACert}'");
CACertificateFile = certSetting.CACert;
}
if (!string.IsNullOrEmpty(certSetting.ClientCert))
{
// make sure all settings file exist
ArgUtil.File(certSetting.ClientCert, nameof(certSetting.ClientCert));
ArgUtil.File(certSetting.ClientCertPrivatekey, nameof(certSetting.ClientCertPrivatekey));
ArgUtil.File(certSetting.ClientCertArchive, nameof(certSetting.ClientCertArchive));
Trace.Info($"Client cert '{certSetting.ClientCert}'");
Trace.Info($"Client cert private key '{certSetting.ClientCertPrivatekey}'");
Trace.Info($"Client cert archive '{certSetting.ClientCertArchive}'");
ClientCertificateFile = certSetting.ClientCert;
ClientCertificatePrivateKeyFile = certSetting.ClientCertPrivatekey;
ClientCertificateArchiveFile = certSetting.ClientCertArchive;
if (!string.IsNullOrEmpty(certSetting.ClientCertPasswordLookupKey))
{
var credStore = HostContext.GetService<IRunnerCredentialStore>();
ClientCertificatePassword = credStore.Read($"GITHUB_ACTIONS_RUNNER_CLIENT_CERT_PASSWORD_{certSetting.ClientCertPasswordLookupKey}").Password;
HostContext.SecretMasker.AddValue(ClientCertificatePassword);
}
_runnerClientCertificateManager.AddClientCertificate(ClientCertificateArchiveFile, ClientCertificatePassword);
}
}
else
{
Trace.Info("No certificate setting found.");
}
}
}
[DataContract]
internal class RunnerCertificateSetting
{
[DataMember]
public bool SkipServerCertValidation { get; set; }
[DataMember]
public string CACert { get; set; }
[DataMember]
public string ClientCert { get; set; }
[DataMember]
public string ClientCertPrivatekey { get; set; }
[DataMember]
public string ClientCertArchive { get; set; }
[DataMember]
public string ClientCertPasswordLookupKey { get; set; }
}
}

View File

@@ -1,948 +0,0 @@
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Net;
using System.Runtime.InteropServices;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
using GitHub.Runner.Common.Util;
using Newtonsoft.Json;
using System.IO;
using System.Runtime.Serialization;
using System.Security.Cryptography;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common
{
// The purpose of this class is to store the user's credential during runner configuration and retrieve the credential back at runtime.
#if OS_WINDOWS
[ServiceLocator(Default = typeof(WindowsRunnerCredentialStore))]
#elif OS_OSX
[ServiceLocator(Default = typeof(MacOSRunnerCredentialStore))]
#else
[ServiceLocator(Default = typeof(LinuxRunnerCredentialStore))]
#endif
public interface IRunnerCredentialStore : IRunnerService
{
NetworkCredential Write(string target, string username, string password);
// Throws an exception when the target is not found in the credential store.
NetworkCredential Read(string target);
// Throws an exception when the target is not found in the credential store.
void Delete(string target);
}
#if OS_WINDOWS
// Windows credential store is per user.
// This is a limitation when the user configures the runner to run as a Windows service and the user's current login account differs from the account the service runs as.
// Ex: I log in to the box as domain\admin, configure the runner as a Windows service, and run it as domain\buildserver;
// domain\buildserver won't read the stored credential from domain\admin's Windows credential store.
// To work around this limitation:
// Anytime we try to save a credential:
// 1. store it into the current user's Windows credential store
// 2. use DP-API to do a machine-level encryption and store the encrypted content on disk.
// The first time we try to read the credential:
// 1. read from the current user's Windows credential store; delete the DP-API encrypted backup content on disk if the Windows credential store read succeeds.
// 2. if the credential is not found in the current user's Windows credential store, read from the DP-API encrypted backup content on disk,
// write the credential back to the current user's Windows credential store, and delete the backup on disk.
public sealed class WindowsRunnerCredentialStore : RunnerService, IRunnerCredentialStore
{
private string _credStoreFile;
private Dictionary<string, string> _credStore;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
_credStoreFile = hostContext.GetConfigFile(WellKnownConfigFile.CredentialStore);
if (File.Exists(_credStoreFile))
{
_credStore = IOUtil.LoadObject<Dictionary<string, string>>(_credStoreFile);
}
else
{
_credStore = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
}
}
public NetworkCredential Write(string target, string username, string password)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
ArgUtil.NotNullOrEmpty(username, nameof(username));
ArgUtil.NotNullOrEmpty(password, nameof(password));
// save to .credential_store file first, then Windows credential store
string usernameBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(username));
string passwordBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(password));
// Base64Username:Base64Password -> DP-API machine level encrypt -> Base64Encoding
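// DataProtectionScope.LocalMachine means any account on this machine can decrypt the backup, which lets a different service run-as account recover the credential.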
string encryptedUsernamePassword = Convert.ToBase64String(ProtectedData.Protect(Encoding.UTF8.GetBytes($"{usernameBase64}:{passwordBase64}"), null, DataProtectionScope.LocalMachine));
Trace.Info($"Credentials for '{target}' written to credential store file.");
_credStore[target] = encryptedUsernamePassword;
// save to .credential_store file
SyncCredentialStoreFile();
// save to Windows Credential Store
return WriteInternal(target, username, password);
}
public NetworkCredential Read(string target)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
IntPtr credPtr = IntPtr.Zero;
try
{
if (CredRead(target, CredentialType.Generic, 0, out credPtr))
{
Credential credStruct = (Credential)Marshal.PtrToStructure(credPtr, typeof(Credential));
int passwordLength = (int)credStruct.CredentialBlobSize;
string password = passwordLength > 0 ? Marshal.PtrToStringUni(credStruct.CredentialBlob, passwordLength / sizeof(char)) : String.Empty;
string username = Marshal.PtrToStringUni(credStruct.UserName);
Trace.Info($"Credentials for '{target}' read from windows credential store.");
// delete from .credential_store file since we are able to read it from windows credential store
if (_credStore.Remove(target))
{
Trace.Info($"Delete credentials for '{target}' from credential store file.");
SyncCredentialStoreFile();
}
return new NetworkCredential(username, password);
}
else
{
// Can't read from Windows Credential Store, fall back to .credential_store file
if (_credStore.ContainsKey(target) && !string.IsNullOrEmpty(_credStore[target]))
{
Trace.Info($"Credentials for '{target}' read from credential store file.");
// Base64Decode -> DP-API machine level decrypt -> Base64Username:Base64Password -> Base64Decode
string decryptedUsernamePassword = Encoding.UTF8.GetString(ProtectedData.Unprotect(Convert.FromBase64String(_credStore[target]), null, DataProtectionScope.LocalMachine));
string[] credential = decryptedUsernamePassword.Split(':');
if (credential.Length == 2 && !string.IsNullOrEmpty(credential[0]) && !string.IsNullOrEmpty(credential[1]))
{
string username = Encoding.UTF8.GetString(Convert.FromBase64String(credential[0]));
string password = Encoding.UTF8.GetString(Convert.FromBase64String(credential[1]));
// store back to windows credential store for current user
NetworkCredential creds = WriteInternal(target, username, password);
// delete from .credential_store file since we are able to write the credential to windows credential store for current user.
if (_credStore.Remove(target))
{
Trace.Info($"Delete credentials for '{target}' from credential store file.");
SyncCredentialStoreFile();
}
return creds;
}
else
{
throw new ArgumentOutOfRangeException(nameof(decryptedUsernamePassword));
}
}
throw new Win32Exception(Marshal.GetLastWin32Error(), $"CredRead threw an error for '{target}'");
}
}
finally
{
if (credPtr != IntPtr.Zero)
{
CredFree(credPtr);
}
}
}
public void Delete(string target)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
// remove from .credential_store file
if (_credStore.Remove(target))
{
Trace.Info($"Delete credentials for '{target}' from credential store file.");
SyncCredentialStoreFile();
}
// remove from windows credential store
if (!CredDelete(target, CredentialType.Generic, 0))
{
throw new Win32Exception(Marshal.GetLastWin32Error(), $"Failed to delete credentials for {target}");
}
else
{
Trace.Info($"Credentials for '{target}' deleted from windows credential store.");
}
}
private NetworkCredential WriteInternal(string target, string username, string password)
{
// save to Windows Credential Store
Credential credential = new Credential()
{
Type = CredentialType.Generic,
Persist = (UInt32)CredentialPersist.LocalMachine,
TargetName = Marshal.StringToCoTaskMemUni(target),
UserName = Marshal.StringToCoTaskMemUni(username),
CredentialBlob = Marshal.StringToCoTaskMemUni(password),
CredentialBlobSize = (UInt32)Encoding.Unicode.GetByteCount(password),
AttributeCount = 0,
Comment = IntPtr.Zero,
Attributes = IntPtr.Zero,
TargetAlias = IntPtr.Zero
};
try
{
if (CredWrite(ref credential, 0))
{
Trace.Info($"Credentials for '{target}' written to windows credential store.");
return new NetworkCredential(username, password);
}
else
{
int error = Marshal.GetLastWin32Error();
throw new Win32Exception(error, "Failed to write credentials");
}
}
finally
{
if (credential.CredentialBlob != IntPtr.Zero)
{
Marshal.FreeCoTaskMem(credential.CredentialBlob);
}
if (credential.TargetName != IntPtr.Zero)
{
Marshal.FreeCoTaskMem(credential.TargetName);
}
if (credential.UserName != IntPtr.Zero)
{
Marshal.FreeCoTaskMem(credential.UserName);
}
}
}
private void SyncCredentialStoreFile()
{
Trace.Info("Sync in-memory credential store with credential store file.");
// delete the cred store file first anyway, since it's a readonly file.
IOUtil.DeleteFile(_credStoreFile);
// delete cred store file when all creds gone
if (_credStore.Count == 0)
{
return;
}
else
{
IOUtil.SaveObject(_credStore, _credStoreFile);
File.SetAttributes(_credStoreFile, File.GetAttributes(_credStoreFile) | FileAttributes.Hidden);
}
}
[DllImport("Advapi32.dll", EntryPoint = "CredDeleteW", CharSet = CharSet.Unicode, SetLastError = true)]
internal static extern bool CredDelete(string target, CredentialType type, int reservedFlag);
[DllImport("Advapi32.dll", EntryPoint = "CredReadW", CharSet = CharSet.Unicode, SetLastError = true)]
internal static extern bool CredRead(string target, CredentialType type, int reservedFlag, out IntPtr CredentialPtr);
[DllImport("Advapi32.dll", EntryPoint = "CredWriteW", CharSet = CharSet.Unicode, SetLastError = true)]
internal static extern bool CredWrite([In] ref Credential userCredential, [In] UInt32 flags);
[DllImport("Advapi32.dll", EntryPoint = "CredFree", SetLastError = true)]
internal static extern bool CredFree([In] IntPtr cred);
internal enum CredentialPersist : UInt32
{
Session = 0x01,
LocalMachine = 0x02
}
internal enum CredentialType : uint
{
Generic = 0x01,
DomainPassword = 0x02,
DomainCertificate = 0x03
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
internal struct Credential
{
public UInt32 Flags;
public CredentialType Type;
public IntPtr TargetName;
public IntPtr Comment;
public System.Runtime.InteropServices.ComTypes.FILETIME LastWritten;
public UInt32 CredentialBlobSize;
public IntPtr CredentialBlob;
public UInt32 Persist;
public UInt32 AttributeCount;
public IntPtr Attributes;
public IntPtr TargetAlias;
public IntPtr UserName;
}
}
#elif OS_OSX
public sealed class MacOSRunnerCredentialStore : RunnerService, IRunnerCredentialStore
{
private const string _osxRunnerCredStoreKeyChainName = "_GITHUB_ACTIONS_RUNNER_CREDSTORE_INTERNAL_";
// Keychain requires a password, but this is not intended to add security
private const string _osxRunnerCredStoreKeyChainPassword = "C46F23C36AF94B72B1EAEE32C68670A0";
private string _securityUtil;
private string _runnerCredStoreKeyChain;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
_securityUtil = WhichUtil.Which("security", true, Trace);
_runnerCredStoreKeyChain = hostContext.GetConfigFile(WellKnownConfigFile.CredentialStore);
// Create the osx keychain if it doesn't exist.
if (!File.Exists(_runnerCredStoreKeyChain))
{
List<string> securityOut = new List<string>();
List<string> securityError = new List<string>();
object outputLock = new object();
using (var p = HostContext.CreateService<IProcessInvoker>())
{
p.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stdout)
{
if (!string.IsNullOrEmpty(stdout.Data))
{
lock (outputLock)
{
securityOut.Add(stdout.Data);
}
}
};
p.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stderr)
{
if (!string.IsNullOrEmpty(stderr.Data))
{
lock (outputLock)
{
securityError.Add(stderr.Data);
}
}
};
// make sure 'security' has access to the key so we won't get prompted at runtime.
int exitCode = p.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Root),
fileName: _securityUtil,
arguments: $"create-keychain -p {_osxRunnerCredStoreKeyChainPassword} \"{_runnerCredStoreKeyChain}\"",
environment: null,
cancellationToken: CancellationToken.None).GetAwaiter().GetResult();
if (exitCode == 0)
{
Trace.Info($"Successfully create-keychain for {_runnerCredStoreKeyChain}");
}
else
{
if (securityOut.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityOut));
}
if (securityError.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityError));
}
throw new InvalidOperationException($"'security create-keychain' failed with exit code {exitCode}.");
}
}
}
else
{
// Try unlocking and locking the keychain to make sure it's still in a good state
UnlockKeyChain();
LockKeyChain();
}
}
public NetworkCredential Write(string target, string username, string password)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
ArgUtil.NotNullOrEmpty(username, nameof(username));
ArgUtil.NotNullOrEmpty(password, nameof(password));
try
{
UnlockKeyChain();
// base64encode(username) + ':' + base64encode(password)
// The macOS keychain requires both -s target and -a username to retrieve a password,
// so we treat the username and password together as the 'secret' stored in the keychain.
string usernameBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(username));
string passwordBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(password));
string secretForKeyChain = $"{usernameBase64}:{passwordBase64}";
List<string> securityOut = new List<string>();
List<string> securityError = new List<string>();
object outputLock = new object();
using (var p = HostContext.CreateService<IProcessInvoker>())
{
p.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stdout)
{
if (!string.IsNullOrEmpty(stdout.Data))
{
lock (outputLock)
{
securityOut.Add(stdout.Data);
}
}
};
p.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stderr)
{
if (!string.IsNullOrEmpty(stderr.Data))
{
lock (outputLock)
{
securityError.Add(stderr.Data);
}
}
};
// make sure 'security' has access to the key (via -T) so we won't get prompted at runtime.
int exitCode = p.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Root),
fileName: _securityUtil,
arguments: $"add-generic-password -s {target} -a GITHUBACTIONSRUNNER -w {secretForKeyChain} -T \"{_securityUtil}\" \"{_runnerCredStoreKeyChain}\"",
environment: null,
cancellationToken: CancellationToken.None).GetAwaiter().GetResult();
if (exitCode == 0)
{
Trace.Info($"Successfully add-generic-password for {target} (GITHUBACTIONSRUNNER)");
}
else
{
if (securityOut.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityOut));
}
if (securityError.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityError));
}
throw new InvalidOperationException($"'security add-generic-password' failed with exit code {exitCode}.");
}
}
return new NetworkCredential(username, password);
}
finally
{
LockKeyChain();
}
}
public NetworkCredential Read(string target)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
try
{
UnlockKeyChain();
string username;
string password;
List<string> securityOut = new List<string>();
List<string> securityError = new List<string>();
object outputLock = new object();
using (var p = HostContext.CreateService<IProcessInvoker>())
{
p.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stdout)
{
if (!string.IsNullOrEmpty(stdout.Data))
{
lock (outputLock)
{
securityOut.Add(stdout.Data);
}
}
};
p.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stderr)
{
if (!string.IsNullOrEmpty(stderr.Data))
{
lock (outputLock)
{
securityError.Add(stderr.Data);
}
}
};
int exitCode = p.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Root),
fileName: _securityUtil,
arguments: $"find-generic-password -s {target} -a GITHUBACTIONSRUNNER -w -g \"{_runnerCredStoreKeyChain}\"",
environment: null,
cancellationToken: CancellationToken.None).GetAwaiter().GetResult();
if (exitCode == 0)
{
string keyChainSecret = securityOut.First();
string[] secrets = keyChainSecret.Split(':');
if (secrets.Length == 2 && !string.IsNullOrEmpty(secrets[0]) && !string.IsNullOrEmpty(secrets[1]))
{
Trace.Info($"Successfully find-generic-password for {target} (GITHUBACTIONSRUNNER)");
username = Encoding.UTF8.GetString(Convert.FromBase64String(secrets[0]));
password = Encoding.UTF8.GetString(Convert.FromBase64String(secrets[1]));
return new NetworkCredential(username, password);
}
else
{
throw new ArgumentOutOfRangeException(nameof(keyChainSecret));
}
}
else
{
if (securityOut.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityOut));
}
if (securityError.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityError));
}
throw new InvalidOperationException($"'security find-generic-password' failed with exit code {exitCode}.");
}
}
}
finally
{
LockKeyChain();
}
}
public void Delete(string target)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
try
{
UnlockKeyChain();
List<string> securityOut = new List<string>();
List<string> securityError = new List<string>();
object outputLock = new object();
using (var p = HostContext.CreateService<IProcessInvoker>())
{
p.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stdout)
{
if (!string.IsNullOrEmpty(stdout.Data))
{
lock (outputLock)
{
securityOut.Add(stdout.Data);
}
}
};
p.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stderr)
{
if (!string.IsNullOrEmpty(stderr.Data))
{
lock (outputLock)
{
securityError.Add(stderr.Data);
}
}
};
int exitCode = p.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Root),
fileName: _securityUtil,
arguments: $"delete-generic-password -s {target} -a GITHUBACTIONSRUNNER \"{_runnerCredStoreKeyChain}\"",
environment: null,
cancellationToken: CancellationToken.None).GetAwaiter().GetResult();
if (exitCode == 0)
{
Trace.Info($"Successfully delete-generic-password for {target} (GITHUBACTIONSRUNNER)");
}
else
{
if (securityOut.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityOut));
}
if (securityError.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityError));
}
throw new InvalidOperationException($"'security delete-generic-password' failed with exit code {exitCode}.");
}
}
}
finally
{
LockKeyChain();
}
}
private void UnlockKeyChain()
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(_securityUtil, nameof(_securityUtil));
ArgUtil.NotNullOrEmpty(_runnerCredStoreKeyChain, nameof(_runnerCredStoreKeyChain));
List<string> securityOut = new List<string>();
List<string> securityError = new List<string>();
object outputLock = new object();
using (var p = HostContext.CreateService<IProcessInvoker>())
{
p.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stdout)
{
if (!string.IsNullOrEmpty(stdout.Data))
{
lock (outputLock)
{
securityOut.Add(stdout.Data);
}
}
};
p.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stderr)
{
if (!string.IsNullOrEmpty(stderr.Data))
{
lock (outputLock)
{
securityError.Add(stderr.Data);
}
}
};
// make sure 'security' has access to the key so we won't get prompted at runtime.
int exitCode = p.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Root),
fileName: _securityUtil,
arguments: $"unlock-keychain -p {_osxRunnerCredStoreKeyChainPassword} \"{_runnerCredStoreKeyChain}\"",
environment: null,
cancellationToken: CancellationToken.None).GetAwaiter().GetResult();
if (exitCode == 0)
{
Trace.Info($"Successfully unlock-keychain for {_runnerCredStoreKeyChain}");
}
else
{
if (securityOut.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityOut));
}
if (securityError.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityError));
}
throw new InvalidOperationException($"'security unlock-keychain' failed with exit code {exitCode}.");
}
}
}
private void LockKeyChain()
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(_securityUtil, nameof(_securityUtil));
ArgUtil.NotNullOrEmpty(_runnerCredStoreKeyChain, nameof(_runnerCredStoreKeyChain));
List<string> securityOut = new List<string>();
List<string> securityError = new List<string>();
object outputLock = new object();
using (var p = HostContext.CreateService<IProcessInvoker>())
{
p.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stdout)
{
if (!string.IsNullOrEmpty(stdout.Data))
{
lock (outputLock)
{
securityOut.Add(stdout.Data);
}
}
};
p.ErrorDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stderr)
{
if (!string.IsNullOrEmpty(stderr.Data))
{
lock (outputLock)
{
securityError.Add(stderr.Data);
}
}
};
// make sure 'security' has access to the key so we won't get prompted at runtime.
int exitCode = p.ExecuteAsync(workingDirectory: HostContext.GetDirectory(WellKnownDirectory.Root),
fileName: _securityUtil,
arguments: $"lock-keychain \"{_runnerCredStoreKeyChain}\"",
environment: null,
cancellationToken: CancellationToken.None).GetAwaiter().GetResult();
if (exitCode == 0)
{
Trace.Info($"Successfully lock-keychain for {_runnerCredStoreKeyChain}");
}
else
{
if (securityOut.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityOut));
}
if (securityError.Count > 0)
{
Trace.Error(string.Join(Environment.NewLine, securityError));
}
throw new InvalidOperationException($"'security lock-keychain' failed with exit code {exitCode}.");
}
}
}
}
#else
public sealed class LinuxRunnerCredentialStore : RunnerService, IRunnerCredentialStore
{
// 'ghrunner' 128 bits iv
private readonly byte[] iv = new byte[] { 0x67, 0x68, 0x72, 0x75, 0x6e, 0x6e, 0x65, 0x72, 0x67, 0x68, 0x72, 0x75, 0x6e, 0x6e, 0x65, 0x72 };
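// The IV is the ASCII bytes of "ghrunner" repeated twice (16 bytes); a fixed IV means identical secrets produce identical ciphertext.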
// 256 bits key
private byte[] _symmetricKey;
private string _credStoreFile;
private Dictionary<string, Credential> _credStore;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
_credStoreFile = hostContext.GetConfigFile(WellKnownConfigFile.CredentialStore);
if (File.Exists(_credStoreFile))
{
_credStore = IOUtil.LoadObject<Dictionary<string, Credential>>(_credStoreFile);
}
else
{
_credStore = new Dictionary<string, Credential>(StringComparer.OrdinalIgnoreCase);
}
string machineId;
if (File.Exists("/etc/machine-id"))
{
// try to use machine-id as the encryption key
// this helps avoid accidental information disclosure, but isn't intended for true security
machineId = File.ReadAllLines("/etc/machine-id").FirstOrDefault();
Trace.Info($"machine-id length {machineId?.Length ?? 0}.");
// machine-id is empty or not 32 characters, so it can't be used as a 256-bit key
if (string.IsNullOrEmpty(machineId) || machineId.Length != 32)
{
Trace.Warning("Can not get valid machine id from '/etc/machine-id'.");
machineId = "43e7fe5da07740cf914b90f1dac51c2a";
}
}
else
{
// /etc/machine-id doesn't exist
Trace.Warning("/etc/machine-id doesn't exist.");
machineId = "43e7fe5da07740cf914b90f1dac51c2a";
}
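// Each of the 32 machine-id characters becomes one byte, yielding a 32-byte (256-bit) AES key.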
List<byte> keyBuilder = new List<byte>();
foreach (var c in machineId)
{
keyBuilder.Add(Convert.ToByte(c));
}
_symmetricKey = keyBuilder.ToArray();
}
public NetworkCredential Write(string target, string username, string password)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
ArgUtil.NotNullOrEmpty(username, nameof(username));
ArgUtil.NotNullOrEmpty(password, nameof(password));
Trace.Info($"Store credential for '{target}' to cred store.");
Credential cred = new Credential(username, Encrypt(password));
_credStore[target] = cred;
SyncCredentialStoreFile();
return new NetworkCredential(username, password);
}
public NetworkCredential Read(string target)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
Trace.Info($"Read credential for '{target}' from cred store.");
if (_credStore.ContainsKey(target))
{
Credential cred = _credStore[target];
if (!string.IsNullOrEmpty(cred.UserName) && !string.IsNullOrEmpty(cred.Password))
{
Trace.Info($"Return credential for '{target}' from cred store.");
return new NetworkCredential(cred.UserName, Decrypt(cred.Password));
}
}
throw new KeyNotFoundException(target);
}
public void Delete(string target)
{
Trace.Entering();
ArgUtil.NotNullOrEmpty(target, nameof(target));
if (_credStore.ContainsKey(target))
{
Trace.Info($"Delete credential for '{target}' from cred store.");
_credStore.Remove(target);
SyncCredentialStoreFile();
}
else
{
throw new KeyNotFoundException(target);
}
}
private void SyncCredentialStoreFile()
{
Trace.Entering();
Trace.Info("Sync in-memory credential store with credential store file.");
// delete cred store file when all creds gone
if (_credStore.Count == 0)
{
IOUtil.DeleteFile(_credStoreFile);
return;
}
if (!File.Exists(_credStoreFile))
{
CreateCredentialStoreFile();
}
IOUtil.SaveObject(_credStore, _credStoreFile);
}
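// Aes.Create() defaults to CBC mode with PKCS7 padding; the key is the machine-id derived value above and the IV is the fixed 'ghrunner' value.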
private string Encrypt(string secret)
{
using (Aes aes = Aes.Create())
{
aes.Key = _symmetricKey;
aes.IV = iv;
// Create an encryptor to perform the stream transform.
ICryptoTransform encryptor = aes.CreateEncryptor();
// Create the streams used for encryption.
using (MemoryStream msEncrypt = new MemoryStream())
{
using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
{
using (StreamWriter swEncrypt = new StreamWriter(csEncrypt))
{
swEncrypt.Write(secret);
}
return Convert.ToBase64String(msEncrypt.ToArray());
}
}
}
}
private string Decrypt(string encryptedText)
{
using (Aes aes = Aes.Create())
{
aes.Key = _symmetricKey;
aes.IV = iv;
// Create a decryptor to perform the stream transform.
ICryptoTransform decryptor = aes.CreateDecryptor();
// Create the streams used for decryption.
using (MemoryStream msDecrypt = new MemoryStream(Convert.FromBase64String(encryptedText)))
{
using (CryptoStream csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Read))
{
using (StreamReader srDecrypt = new StreamReader(csDecrypt))
{
// Read the decrypted bytes from the decrypting stream and place them in a string.
return srDecrypt.ReadToEnd();
}
}
}
}
}
private void CreateCredentialStoreFile()
{
File.WriteAllText(_credStoreFile, "");
File.SetAttributes(_credStoreFile, File.GetAttributes(_credStoreFile) | FileAttributes.Hidden);
// Try to lock down the credential store file to the owner (chmod 600: owner read/write only)
var chmodPath = WhichUtil.Which("chmod", trace: Trace);
if (!String.IsNullOrEmpty(chmodPath))
{
var arguments = $"600 {new FileInfo(_credStoreFile).FullName}";
using (var invoker = HostContext.CreateService<IProcessInvoker>())
{
var exitCode = invoker.ExecuteAsync(HostContext.GetDirectory(WellKnownDirectory.Root), chmodPath, arguments, null, default(CancellationToken)).GetAwaiter().GetResult();
if (exitCode == 0)
{
Trace.Info("Successfully set permissions for credentials store file {0}", _credStoreFile);
}
else
{
Trace.Warning("Unable to successfully set permissions for credentials store file {0}. Received exit code {1} from {2}", _credStoreFile, exitCode, chmodPath);
}
}
}
else
{
Trace.Warning("Unable to locate chmod to set permissions for credentials store file {0}.", _credStoreFile);
}
}
}
[DataContract]
internal class Credential
{
public Credential()
{ }
public Credential(string userName, string password)
{
UserName = userName;
Password = password;
}
[DataMember(IsRequired = true)]
public string UserName { get; set; }
[DataMember(IsRequired = true)]
public string Password { get; set; }
}
#endif
}


@@ -41,7 +41,7 @@ namespace GitHub.Runner.Common
// job request
Task<TaskAgentJobRequest> GetAgentRequestAsync(int poolId, long requestId, CancellationToken cancellationToken);
Task<TaskAgentJobRequest> RenewAgentRequestAsync(int poolId, long requestId, Guid lockToken, CancellationToken cancellationToken);
Task<TaskAgentJobRequest> RenewAgentRequestAsync(int poolId, long requestId, Guid lockToken, string orchestrationId, CancellationToken cancellationToken);
Task<TaskAgentJobRequest> FinishAgentRequestAsync(int poolId, long requestId, Guid lockToken, DateTime finishTime, TaskResult result, CancellationToken cancellationToken);
// agent package
@@ -66,11 +66,6 @@ namespace GitHub.Runner.Common
public async Task ConnectAsync(Uri serverUrl, VssCredentials credentials)
{
if (HostContext.RunMode == RunMode.Local)
{
return;
}
var createGenericConnection = EstablishVssConnection(serverUrl, credentials, TimeSpan.FromSeconds(100));
var createMessageConnection = EstablishVssConnection(serverUrl, credentials, TimeSpan.FromSeconds(60));
var createRequestConnection = EstablishVssConnection(serverUrl, credentials, TimeSpan.FromSeconds(60));
@@ -301,31 +296,20 @@ namespace GitHub.Runner.Common
// JobRequest
//-----------------------------------------------------------------
public Task<TaskAgentJobRequest> RenewAgentRequestAsync(int poolId, long requestId, Guid lockToken, CancellationToken cancellationToken = default(CancellationToken))
public Task<TaskAgentJobRequest> RenewAgentRequestAsync(int poolId, long requestId, Guid lockToken, string orchestrationId = null, CancellationToken cancellationToken = default(CancellationToken))
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult(JsonUtility.FromString<TaskAgentJobRequest>("{ lockedUntil: \"" + DateTime.Now.Add(TimeSpan.FromMinutes(5)).ToString("u") + "\" }"));
}
CheckConnection(RunnerConnectionType.JobRequest);
return _requestTaskAgentClient.RenewAgentRequestAsync(poolId, requestId, lockToken, cancellationToken: cancellationToken);
return _requestTaskAgentClient.RenewAgentRequestAsync(poolId, requestId, lockToken, orchestrationId: orchestrationId, cancellationToken: cancellationToken);
}
public Task<TaskAgentJobRequest> FinishAgentRequestAsync(int poolId, long requestId, Guid lockToken, DateTime finishTime, TaskResult result, CancellationToken cancellationToken = default(CancellationToken))
{
if (HostContext.RunMode == RunMode.Local)
{
return Task.FromResult<TaskAgentJobRequest>(null);
}
CheckConnection(RunnerConnectionType.JobRequest);
return _requestTaskAgentClient.FinishAgentRequestAsync(poolId, requestId, lockToken, finishTime, result, cancellationToken: cancellationToken);
}
public Task<TaskAgentJobRequest> GetAgentRequestAsync(int poolId, long requestId, CancellationToken cancellationToken = default(CancellationToken))
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
CheckConnection(RunnerConnectionType.JobRequest);
return _requestTaskAgentClient.GetAgentRequestAsync(poolId, requestId, cancellationToken: cancellationToken);
}
@@ -335,7 +319,6 @@ namespace GitHub.Runner.Common
//-----------------------------------------------------------------
public Task<List<PackageMetadata>> GetPackagesAsync(string packageType, string platform, int top, CancellationToken cancellationToken)
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
CheckConnection(RunnerConnectionType.Generic);
return _genericTaskAgentClient.GetPackagesAsync(packageType, platform, top, cancellationToken: cancellationToken);
}
@@ -351,5 +334,20 @@ namespace GitHub.Runner.Common
CheckConnection(RunnerConnectionType.Generic);
return _genericTaskAgentClient.UpdateAgentUpdateStateAsync(agentPoolId, agentId, currentState);
}
//-----------------------------------------------------------------
// Runner Auth Url
//-----------------------------------------------------------------
public Task<string> GetRunnerAuthUrlAsync(int runnerPoolId, int runnerId)
{
CheckConnection(RunnerConnectionType.MessageQueue);
return _messageTaskAgentClient.GetAgentAuthUrlAsync(runnerPoolId, runnerId);
}
public Task ReportRunnerAuthUrlErrorAsync(int runnerPoolId, int runnerId, string error)
{
CheckConnection(RunnerConnectionType.MessageQueue);
return _messageTaskAgentClient.ReportAgentAuthUrlMigrationErrorAsync(runnerPoolId, runnerId, error);
}
}
}


@@ -1,196 +0,0 @@
using GitHub.Runner.Common.Util;
using System;
using System.Linq;
using System.Net;
using System.IO;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(RunnerWebProxy))]
public interface IRunnerWebProxy : IRunnerService
{
string ProxyAddress { get; }
string ProxyUsername { get; }
string ProxyPassword { get; }
List<string> ProxyBypassList { get; }
IWebProxy WebProxy { get; }
}
public class RunnerWebProxy : RunnerService, IRunnerWebProxy
{
private readonly List<Regex> _regExBypassList = new List<Regex>();
private readonly List<string> _bypassList = new List<string>();
private RunnerWebProxyCore _runnerWebProxy = new RunnerWebProxyCore();
public string ProxyAddress { get; private set; }
public string ProxyUsername { get; private set; }
public string ProxyPassword { get; private set; }
public List<string> ProxyBypassList => _bypassList;
public IWebProxy WebProxy => _runnerWebProxy;
public override void Initialize(IHostContext context)
{
base.Initialize(context);
LoadProxySetting();
}
// This should only be called from config
public void SetupProxy(string proxyAddress, string proxyUsername, string proxyPassword)
{
ArgUtil.NotNullOrEmpty(proxyAddress, nameof(proxyAddress));
Trace.Info($"Update proxy setting from '{ProxyAddress ?? string.Empty}' to'{proxyAddress}'");
ProxyAddress = proxyAddress;
ProxyUsername = proxyUsername;
ProxyPassword = proxyPassword;
if (string.IsNullOrEmpty(ProxyUsername) || string.IsNullOrEmpty(ProxyPassword))
{
Trace.Info($"Config proxy use DefaultNetworkCredentials.");
}
else
{
Trace.Info($"Config authentication proxy as: {ProxyUsername}.");
}
_runnerWebProxy.Update(ProxyAddress, ProxyUsername, ProxyPassword, ProxyBypassList);
}
// This should only be called from config
public void SaveProxySetting()
{
if (!string.IsNullOrEmpty(ProxyAddress))
{
string proxyConfigFile = HostContext.GetConfigFile(WellKnownConfigFile.Proxy);
IOUtil.DeleteFile(proxyConfigFile);
Trace.Info($"Store proxy configuration to '{proxyConfigFile}' for proxy '{ProxyAddress}'");
File.WriteAllText(proxyConfigFile, ProxyAddress);
File.SetAttributes(proxyConfigFile, File.GetAttributes(proxyConfigFile) | FileAttributes.Hidden);
string proxyCredFile = HostContext.GetConfigFile(WellKnownConfigFile.ProxyCredentials);
IOUtil.DeleteFile(proxyCredFile);
if (!string.IsNullOrEmpty(ProxyUsername) && !string.IsNullOrEmpty(ProxyPassword))
{
string lookupKey = Guid.NewGuid().ToString("D").ToUpperInvariant();
Trace.Info($"Store proxy credential lookup key '{lookupKey}' to '{proxyCredFile}'");
File.WriteAllText(proxyCredFile, lookupKey);
File.SetAttributes(proxyCredFile, File.GetAttributes(proxyCredFile) | FileAttributes.Hidden);
var credStore = HostContext.GetService<IRunnerCredentialStore>();
credStore.Write($"GITHUB_ACTIONS_RUNNER_PROXY_{lookupKey}", ProxyUsername, ProxyPassword);
}
}
else
{
Trace.Info("No proxy configuration exist.");
}
}
// This should only be called from unconfig
public void DeleteProxySetting()
{
string proxyCredFile = HostContext.GetConfigFile(WellKnownConfigFile.ProxyCredentials);
if (File.Exists(proxyCredFile))
{
Trace.Info("Delete proxy credential from credential store.");
string lookupKey = File.ReadAllLines(proxyCredFile).FirstOrDefault();
if (!string.IsNullOrEmpty(lookupKey))
{
var credStore = HostContext.GetService<IRunnerCredentialStore>();
credStore.Delete($"GITHUB_ACTIONS_RUNNER_PROXY_{lookupKey}");
}
Trace.Info($"Delete .proxycredentials file: {proxyCredFile}");
IOUtil.DeleteFile(proxyCredFile);
}
string proxyBypassFile = HostContext.GetConfigFile(WellKnownConfigFile.ProxyBypass);
if (File.Exists(proxyBypassFile))
{
Trace.Info($"Delete .proxybypass file: {proxyBypassFile}");
IOUtil.DeleteFile(proxyBypassFile);
}
string proxyConfigFile = HostContext.GetConfigFile(WellKnownConfigFile.Proxy);
Trace.Info($"Delete .proxy file: {proxyConfigFile}");
IOUtil.DeleteFile(proxyConfigFile);
}
private void LoadProxySetting()
{
string proxyConfigFile = HostContext.GetConfigFile(WellKnownConfigFile.Proxy);
if (File.Exists(proxyConfigFile))
{
// we expect the first line of the file to be the proxy url
Trace.Verbose($"Try to read proxy setting from file: {proxyConfigFile}.");
ProxyAddress = File.ReadLines(proxyConfigFile).FirstOrDefault() ?? string.Empty;
ProxyAddress = ProxyAddress.Trim();
Trace.Verbose($"{ProxyAddress}");
}
if (!string.IsNullOrEmpty(ProxyAddress) && !Uri.IsWellFormedUriString(ProxyAddress, UriKind.Absolute))
{
Trace.Info($"The proxy url is not a well formed absolute uri string: {ProxyAddress}.");
ProxyAddress = string.Empty;
}
if (!string.IsNullOrEmpty(ProxyAddress))
{
Trace.Info($"Config proxy at: {ProxyAddress}.");
string proxyCredFile = HostContext.GetConfigFile(WellKnownConfigFile.ProxyCredentials);
if (File.Exists(proxyCredFile))
{
string lookupKey = File.ReadAllLines(proxyCredFile).FirstOrDefault();
if (!string.IsNullOrEmpty(lookupKey))
{
var credStore = HostContext.GetService<IRunnerCredentialStore>();
var proxyCred = credStore.Read($"GITHUB_ACTIONS_RUNNER_PROXY_{lookupKey}");
ProxyUsername = proxyCred.UserName;
ProxyPassword = proxyCred.Password;
}
}
if (!string.IsNullOrEmpty(ProxyPassword))
{
HostContext.SecretMasker.AddValue(ProxyPassword);
}
if (string.IsNullOrEmpty(ProxyUsername) || string.IsNullOrEmpty(ProxyPassword))
{
Trace.Info($"Config proxy use DefaultNetworkCredentials.");
}
else
{
Trace.Info($"Config authentication proxy as: {ProxyUsername}.");
}
string proxyBypassFile = HostContext.GetConfigFile(WellKnownConfigFile.ProxyBypass);
if (File.Exists(proxyBypassFile))
{
Trace.Verbose($"Try read proxy bypass list from file: {proxyBypassFile}.");
foreach (string bypass in File.ReadAllLines(proxyBypassFile))
{
if (string.IsNullOrWhiteSpace(bypass))
{
continue;
}
else
{
Trace.Info($"Bypass proxy for: {bypass}.");
ProxyBypassList.Add(bypass.Trim());
}
}
}
_runnerWebProxy.Update(ProxyAddress, ProxyUsername, ProxyPassword, ProxyBypassList);
}
else
{
Trace.Info($"No proxy setting found.");
}
}
}
}


@@ -96,13 +96,14 @@ namespace GitHub.Runner.Common
Trace.Info($"WRITE: {message}");
if (!Silent)
{
if(colorCode != null)
if (colorCode != null)
{
Console.ForegroundColor = colorCode.Value;
Console.Write(message);
Console.ResetColor();
}
else {
else
{
Console.Write(message);
}
}
@@ -120,13 +121,14 @@ namespace GitHub.Runner.Common
Trace.Info($"WRITE LINE: {line}");
if (!Silent)
{
if(colorCode != null)
if (colorCode != null)
{
Console.ForegroundColor = colorCode.Value;
Console.WriteLine(line);
Console.ResetColor();
}
else {
else
{
Console.WriteLine(line);
}
}


@@ -50,6 +50,13 @@ namespace GitHub.Runner.Common
public void Error(Exception exception)
{
Trace(TraceEventType.Error, exception.ToString());
var innerEx = exception.InnerException;
while (innerEx != null)
{
Trace(TraceEventType.Error, "#####################################################");
Trace(TraceEventType.Error, innerEx.ToString());
innerEx = innerEx.InnerException;
}
}
// Do not remove the non-format overload.


@@ -0,0 +1,51 @@
using System;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Runner.Sdk;
using GitHub.Runner.Common;
namespace GitHub.Runner.Common.Util
{
public static class EncodingUtil
{
public static async Task SetEncoding(IHostContext hostContext, Tracing trace, CancellationToken cancellationToken)
{
#if OS_WINDOWS
try
{
if (Console.InputEncoding.CodePage != 65001)
{
using (var p = hostContext.CreateService<IProcessInvoker>())
{
// Use UTF8 code page
int exitCode = await p.ExecuteAsync(workingDirectory: hostContext.GetDirectory(WellKnownDirectory.Work),
fileName: WhichUtil.Which("chcp", true, trace),
arguments: "65001",
environment: null,
requireExitCodeZero: false,
outputEncoding: null,
killProcessOnCancel: false,
redirectStandardIn: null,
inheritConsoleHandler: true,
cancellationToken: cancellationToken);
if (exitCode == 0)
{
trace.Info("Successfully returned to code page 65001 (UTF8)");
}
else
{
trace.Warning($"'chcp 65001' failed with exit code {exitCode}");
}
}
}
}
catch (Exception ex)
{
trace.Warning($"'chcp 65001' failed with exception {ex.Message}");
}
#endif
// Dummy await to suppress compiler warning CS1998: "This async method lacks 'await' operators and will run synchronously..."
await Task.CompletedTask;
}
}
}


@@ -28,34 +28,21 @@ namespace GitHub.Runner.Listener
private readonly string[] validFlags =
{
Constants.Runner.CommandLine.Flags.Commit,
#if OS_WINDOWS
Constants.Runner.CommandLine.Flags.GitUseSChannel,
#endif
Constants.Runner.CommandLine.Flags.Help,
Constants.Runner.CommandLine.Flags.Replace,
Constants.Runner.CommandLine.Flags.RunAsService,
Constants.Runner.CommandLine.Flags.Once,
Constants.Runner.CommandLine.Flags.SslSkipCertValidation,
Constants.Runner.CommandLine.Flags.Unattended,
Constants.Runner.CommandLine.Flags.Version
};
private readonly string[] validArgs =
{
Constants.Runner.CommandLine.Args.Agent,
Constants.Runner.CommandLine.Args.Auth,
Constants.Runner.CommandLine.Args.Labels,
Constants.Runner.CommandLine.Args.MonitorSocketAddress,
Constants.Runner.CommandLine.Args.NotificationPipeName,
Constants.Runner.CommandLine.Args.Password,
Constants.Runner.CommandLine.Args.Pool,
Constants.Runner.CommandLine.Args.ProxyPassword,
Constants.Runner.CommandLine.Args.ProxyUrl,
Constants.Runner.CommandLine.Args.ProxyUserName,
Constants.Runner.CommandLine.Args.SslCACert,
Constants.Runner.CommandLine.Args.SslClientCert,
Constants.Runner.CommandLine.Args.SslClientCertKey,
Constants.Runner.CommandLine.Args.SslClientCertArchive,
Constants.Runner.CommandLine.Args.SslClientCertPassword,
Constants.Runner.CommandLine.Args.Name,
Constants.Runner.CommandLine.Args.RunnerGroup,
Constants.Runner.CommandLine.Args.StartupType,
Constants.Runner.CommandLine.Args.Token,
Constants.Runner.CommandLine.Args.Url,
@@ -77,9 +64,6 @@ namespace GitHub.Runner.Listener
public bool Unattended => TestFlag(Constants.Runner.CommandLine.Flags.Unattended);
public bool Version => TestFlag(Constants.Runner.CommandLine.Flags.Version);
#if OS_WINDOWS
public bool GitUseSChannel => TestFlag(Constants.Runner.CommandLine.Flags.GitUseSChannel);
#endif
public bool RunOnce => TestFlag(Constants.Runner.CommandLine.Flags.Once);
// Constructor.
@@ -164,25 +148,9 @@ namespace GitHub.Runner.Listener
defaultValue: false);
}
public bool GetAutoLaunchBrowser()
{
return TestFlagOrPrompt(
name: Constants.Runner.CommandLine.Flags.LaunchBrowser,
description: "Would you like to launch your browser for AAD Device Code Flow? (Y/N)",
defaultValue: true);
}
//
// Args.
//
public string GetAgentName()
{
return GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.Agent,
description: "Enter the name of runner:",
defaultValue: Environment.MachineName ?? "myagent",
validator: Validators.NonEmptyValidator);
}
public string GetAuth(string defaultValue)
{
return GetArgOrPrompt(
@@ -192,21 +160,21 @@ namespace GitHub.Runner.Listener
validator: Validators.AuthSchemeValidator);
}
public string GetPassword()
public string GetRunnerName()
{
return GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.Password,
description: "What is your GitHub password?",
defaultValue: string.Empty,
name: Constants.Runner.CommandLine.Args.Name,
description: "Enter the name of runner:",
defaultValue: Environment.MachineName ?? "myrunner",
validator: Validators.NonEmptyValidator);
}
public string GetPool()
public string GetRunnerGroupName(string defaultPoolName = null)
{
return GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.Pool,
description: "Enter the name of your runner pool:",
defaultValue: "default",
name: Constants.Runner.CommandLine.Args.RunnerGroup,
description: "Enter the name of the runner group to add this runner to:",
defaultValue: defaultPoolName ?? "default",
validator: Validators.NonEmptyValidator);
}
@@ -214,7 +182,7 @@ namespace GitHub.Runner.Listener
{
return GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.Token,
description: "Enter your personal access token:",
description: "What is your pool admin oauth access token?",
defaultValue: string.Empty,
validator: Validators.NonEmptyValidator);
}
@@ -223,7 +191,16 @@ namespace GitHub.Runner.Listener
{
return GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.Token,
description: "Enter runner register token:",
description: "What is your runner register token?",
defaultValue: string.Empty,
validator: Validators.NonEmptyValidator);
}
public string GetRunnerDeletionToken()
{
return GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.Token,
description: "Enter runner remove token:",
defaultValue: string.Empty,
validator: Validators.NonEmptyValidator);
}
@@ -244,15 +221,6 @@ namespace GitHub.Runner.Listener
validator: Validators.ServerUrlValidator);
}
public string GetUserName()
{
return GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.UserName,
description: "What is your GitHub username?",
defaultValue: string.Empty,
validator: Validators.NonEmptyValidator);
}
public string GetWindowsLogonAccount(string defaultValue, string descriptionMsg)
{
return GetArgOrPrompt(
@@ -285,65 +253,28 @@ namespace GitHub.Runner.Listener
return GetArg(Constants.Runner.CommandLine.Args.MonitorSocketAddress);
}
public string GetNotificationPipeName()
{
return GetArg(Constants.Runner.CommandLine.Args.NotificationPipeName);
}
public string GetNotificationSocketAddress()
{
return GetArg(Constants.Runner.CommandLine.Args.NotificationSocketAddress);
}
// This is used to find out where Runner.Listener.exe was launched from at run time
public string GetStartupType()
{
return GetArg(Constants.Runner.CommandLine.Args.StartupType);
}
public string GetProxyUrl()
public ISet<string> GetLabels()
{
return GetArg(Constants.Runner.CommandLine.Args.ProxyUrl);
}
var labelSet = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
string labels = GetArgOrPrompt(
name: Constants.Runner.CommandLine.Args.Labels,
description: $"This runner will have the following labels: 'self-hosted', '{VarUtil.OS}', '{VarUtil.OSArchitecture}' \nEnter any additional labels (ex. label-1,label-2):",
defaultValue: string.Empty,
validator: Validators.LabelsValidator,
isOptional: true);
public string GetProxyUserName()
{
return GetArg(Constants.Runner.CommandLine.Args.ProxyUserName);
}
if (!string.IsNullOrEmpty(labels))
{
labelSet = labels.Split(',').Where(x => !string.IsNullOrEmpty(x)).ToHashSet<string>(StringComparer.OrdinalIgnoreCase);
}
public string GetProxyPassword()
{
return GetArg(Constants.Runner.CommandLine.Args.ProxyPassword);
}
public bool GetSkipCertificateValidation()
{
return TestFlag(Constants.Runner.CommandLine.Flags.SslSkipCertValidation);
}
public string GetCACertificate()
{
return GetArg(Constants.Runner.CommandLine.Args.SslCACert);
}
public string GetClientCertificate()
{
return GetArg(Constants.Runner.CommandLine.Args.SslClientCert);
}
public string GetClientCertificatePrivateKey()
{
return GetArg(Constants.Runner.CommandLine.Args.SslClientCertKey);
}
public string GetClientCertificateArchrive()
{
return GetArg(Constants.Runner.CommandLine.Args.SslClientCertArchive);
}
public string GetClientCertificatePassword()
{
return GetArg(Constants.Runner.CommandLine.Args.SslClientCertPassword);
return labelSet;
}
//
@@ -377,7 +308,8 @@ namespace GitHub.Runner.Listener
string name,
string description,
string defaultValue,
Func<string, bool> validator)
Func<string, bool> validator,
bool isOptional = false)
{
// Check for the arg in the command line parser.
ArgUtil.NotNull(validator, nameof(validator));
@@ -388,7 +320,7 @@ namespace GitHub.Runner.Listener
if (!string.IsNullOrEmpty(result))
{
// After read the arg from input commandline args, remove it from Arg dictionary,
// This will help if bad arg value passed through CommandLine arg, when ConfigurationManager ask CommandSetting the second time,
// This will help if bad arg value passed through CommandLine arg, when ConfigurationManager ask CommandSetting the second time,
// It will prompt for input instead of continue use the bad input.
_trace.Info($"Remove {name} from Arg dictionary.");
RemoveArg(name);
@@ -408,7 +340,8 @@ namespace GitHub.Runner.Listener
secret: Constants.Runner.CommandLine.Args.Secrets.Any(x => string.Equals(x, name, StringComparison.OrdinalIgnoreCase)),
defaultValue: defaultValue,
validator: validator,
unattended: Unattended);
unattended: Unattended,
isOptional: isOptional);
}
private string GetEnvArg(string name)


@@ -1,19 +1,18 @@
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Services.OAuth;
using GitHub.Services.WebApi;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Threading.Tasks;
using System.Runtime.InteropServices;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Runtime.InteropServices;
using System.Security.Cryptography;
using System.Threading.Tasks;
namespace GitHub.Runner.Listener.Configuration
{
@@ -79,92 +78,25 @@ namespace GitHub.Runner.Listener.Configuration
_term.WriteLine("| |", ConsoleColor.White);
_term.WriteLine("--------------------------------------------------------------------------------", ConsoleColor.White);
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
Trace.Info(nameof(ConfigureAsync));
if (IsConfigured())
{
throw new InvalidOperationException("Cannot configure the runner because it is already configured. To reconfigure the runner, run 'config.cmd remove' or './config.sh remove' first.");
}
// Populate proxy setting from commandline args
var runnerProxy = HostContext.GetService<IRunnerWebProxy>();
bool saveProxySetting = false;
string proxyUrl = command.GetProxyUrl();
if (!string.IsNullOrEmpty(proxyUrl))
{
if (!Uri.IsWellFormedUriString(proxyUrl, UriKind.Absolute))
{
throw new ArgumentOutOfRangeException(nameof(proxyUrl));
}
Trace.Info("Reset proxy base on commandline args.");
string proxyUserName = command.GetProxyUserName();
string proxyPassword = command.GetProxyPassword();
(runnerProxy as RunnerWebProxy).SetupProxy(proxyUrl, proxyUserName, proxyPassword);
saveProxySetting = true;
}
// Populate cert setting from commandline args
var runnerCertManager = HostContext.GetService<IRunnerCertificateManager>();
bool saveCertSetting = false;
bool skipCertValidation = command.GetSkipCertificateValidation();
string caCert = command.GetCACertificate();
string clientCert = command.GetClientCertificate();
string clientCertKey = command.GetClientCertificatePrivateKey();
string clientCertArchive = command.GetClientCertificateArchrive();
string clientCertPassword = command.GetClientCertificatePassword();
// We require all certificate files to be under the agent root
// so we can set the ACL correctly when configuring as a service.
if (!string.IsNullOrEmpty(caCert))
{
caCert = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Root), caCert);
ArgUtil.File(caCert, nameof(caCert));
}
if (!string.IsNullOrEmpty(clientCert) &&
!string.IsNullOrEmpty(clientCertKey) &&
!string.IsNullOrEmpty(clientCertArchive))
{
// Ensure all client cert pieces are there.
clientCert = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Root), clientCert);
clientCertKey = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Root), clientCertKey);
clientCertArchive = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Root), clientCertArchive);
ArgUtil.File(clientCert, nameof(clientCert));
ArgUtil.File(clientCertKey, nameof(clientCertKey));
ArgUtil.File(clientCertArchive, nameof(clientCertArchive));
}
else if (!string.IsNullOrEmpty(clientCert) ||
!string.IsNullOrEmpty(clientCertKey) ||
!string.IsNullOrEmpty(clientCertArchive))
{
// Print out which args are missing.
ArgUtil.NotNullOrEmpty(Constants.Runner.CommandLine.Args.SslClientCert, Constants.Runner.CommandLine.Args.SslClientCert);
ArgUtil.NotNullOrEmpty(Constants.Runner.CommandLine.Args.SslClientCertKey, Constants.Runner.CommandLine.Args.SslClientCertKey);
ArgUtil.NotNullOrEmpty(Constants.Runner.CommandLine.Args.SslClientCertArchive, Constants.Runner.CommandLine.Args.SslClientCertArchive);
}
if (skipCertValidation || !string.IsNullOrEmpty(caCert) || !string.IsNullOrEmpty(clientCert))
{
Trace.Info("Reset runner cert setting base on commandline args.");
(runnerCertManager as RunnerCertificateManager).SetupCertificate(skipCertValidation, caCert, clientCert, clientCertKey, clientCertArchive, clientCertPassword);
saveCertSetting = true;
}
RunnerSettings runnerSettings = new RunnerSettings();
bool isHostedServer = false;
// Loop getting url and creds until you can connect
ICredentialProvider credProvider = null;
VssCredentials creds = null;
_term.WriteSection("Authentication");
while (true)
{
// Get the URL
// When testing against a dev deployment of Actions Service, set this environment variable
var useDevActionsServiceUrl = Environment.GetEnvironmentVariable("USE_DEV_ACTIONS_SERVICE_URL");
var inputUrl = command.GetUrl();
if (!inputUrl.Contains("github.com", StringComparison.OrdinalIgnoreCase) &&
!inputUrl.Contains("github.localhost", StringComparison.OrdinalIgnoreCase))
if (inputUrl.Contains("codedev.ms", StringComparison.OrdinalIgnoreCase)
|| useDevActionsServiceUrl != null)
{
runnerSettings.ServerUrl = inputUrl;
// Get the credentials
@@ -176,7 +108,7 @@ namespace GitHub.Runner.Listener.Configuration
{
runnerSettings.GitHubUrl = inputUrl;
var githubToken = command.GetRunnerRegisterToken();
GitHubAuthResult authResult = await GetTenantCredential(inputUrl, githubToken);
GitHubAuthResult authResult = await GetTenantCredential(inputUrl, githubToken, Constants.RunnerEvent.Register);
runnerSettings.ServerUrl = authResult.TenantUrl;
creds = authResult.ToVssCredentials();
Trace.Info("cred retrieved via GitHub auth");
@@ -185,7 +117,20 @@ namespace GitHub.Runner.Listener.Configuration
try
{
// Determine the service deployment type based on connection data. (Hosted/OnPremises)
isHostedServer = await IsHostedServer(runnerSettings.ServerUrl, creds);
runnerSettings.IsHostedServer = runnerSettings.GitHubUrl == null || IsHostedServer(new UriBuilder(runnerSettings.GitHubUrl));
// Warn if the Actions server url and the GHES server url have different hosts
if (!runnerSettings.IsHostedServer)
{
// Example actionsServerUrl is https://my-ghes/_services/pipelines/[...]
// Example githubServerUrl is https://my-ghes
var actionsServerUrl = new Uri(runnerSettings.ServerUrl);
var githubServerUrl = new Uri(runnerSettings.GitHubUrl);
if (!string.Equals(actionsServerUrl.Authority, githubServerUrl.Authority, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"GitHub Actions is not properly configured in GHES. GHES url: {runnerSettings.GitHubUrl}, Actions url: {runnerSettings.ServerUrl}.");
}
}
// Validate can connect.
await _runnerServer.ConnectAsync(new Uri(runnerSettings.ServerUrl), creds);
@@ -214,17 +159,34 @@ namespace GitHub.Runner.Listener.Configuration
_term.WriteSection("Runner Registration");
//Get all the agent pools, and select the first private pool
// If we have more than one runner group available, allow the user to specify which one to be added into
string poolName = null;
TaskAgentPool agentPool = null;
List<TaskAgentPool> agentPools = await _runnerServer.GetAgentPoolsAsync();
TaskAgentPool agentPool = agentPools?.Where(x => x.IsHosted == false).FirstOrDefault();
TaskAgentPool defaultPool = agentPools?.Where(x => x.IsInternal).FirstOrDefault();
if (agentPool == null)
if (agentPools?.Where(x => !x.IsHosted).Count() > 1)
{
throw new TaskAgentPoolNotFoundException($"Could not find any private pool. Contact support.");
poolName = command.GetRunnerGroupName(defaultPool?.Name);
_term.WriteLine();
agentPool = agentPools.Where(x => string.Equals(poolName, x.Name, StringComparison.OrdinalIgnoreCase) && !x.IsHosted).FirstOrDefault();
}
else
{
Trace.Info("Found a private pool with id {1} and name {2}", agentPool.Id, agentPool.Name);
agentPool = defaultPool;
}
if (agentPool == null && poolName == null)
{
throw new TaskAgentPoolNotFoundException($"Could not find any self-hosted runner groups. Contact support.");
}
else if (agentPool == null && poolName != null)
{
throw new TaskAgentPoolNotFoundException($"Could not find any self-hosted runner group named \"{poolName}\".");
}
else
{
Trace.Info("Found a self-hosted runner group with id {1} and name {2}", agentPool.Id, agentPool.Name);
runnerSettings.PoolId = agentPool.Id;
runnerSettings.PoolName = agentPool.Name;
}
@@ -232,10 +194,13 @@ namespace GitHub.Runner.Listener.Configuration
TaskAgent agent;
while (true)
{
runnerSettings.AgentName = command.GetAgentName();
runnerSettings.AgentName = command.GetRunnerName();
_term.WriteLine();
var userLabels = command.GetLabels();
_term.WriteLine();
var agents = await _runnerServer.GetAgentsAsync(runnerSettings.PoolId, runnerSettings.AgentName);
Trace.Verbose("Returns {0} agents", agents.Count);
agent = agents.FirstOrDefault();
@@ -245,7 +210,7 @@ namespace GitHub.Runner.Listener.Configuration
if (command.GetReplace())
{
// Update existing agent with new PublicKey, agent version.
agent = UpdateExistingAgent(agent, publicKey);
agent = UpdateExistingAgent(agent, publicKey, userLabels);
try
{
@@ -262,13 +227,13 @@ namespace GitHub.Runner.Listener.Configuration
else if (command.Unattended)
{
// if not replace and it is unattended config.
throw new TaskAgentExistsException($"Pool {runnerSettings.PoolId} already contains a runner with name {runnerSettings.AgentName}.");
throw new TaskAgentExistsException($"A runner exists with the same name {runnerSettings.AgentName}.");
}
}
else
{
// Create a new agent.
agent = CreateNewAgent(runnerSettings.AgentName, publicKey);
// Create a new agent.
agent = CreateNewAgent(runnerSettings.AgentName, publicKey, userLabels);
try
{
@@ -286,44 +251,11 @@ namespace GitHub.Runner.Listener.Configuration
// Add Agent Id to settings
runnerSettings.AgentId = agent.Id;
// respect the serverUrl resolved by the server,
// in case the agent was configured using a collection url instead of an account url.
string agentServerUrl;
if (agent.Properties.TryGetValidatedValue<string>("ServerUrl", out agentServerUrl) &&
!string.IsNullOrEmpty(agentServerUrl))
{
Trace.Info($"Agent server url resolve by server: '{agentServerUrl}'.");
// we need make sure the Schema/Host/Port component of the url remain the same.
UriBuilder inputServerUrl = new UriBuilder(runnerSettings.ServerUrl);
UriBuilder serverReturnedServerUrl = new UriBuilder(agentServerUrl);
if (Uri.Compare(inputServerUrl.Uri, serverReturnedServerUrl.Uri, UriComponents.SchemeAndServer, UriFormat.Unescaped, StringComparison.OrdinalIgnoreCase) != 0)
{
inputServerUrl.Path = serverReturnedServerUrl.Path;
Trace.Info($"Replace server returned url's scheme://host:port component with user input server url's scheme://host:port: '{inputServerUrl.Uri.AbsoluteUri}'.");
runnerSettings.ServerUrl = inputServerUrl.Uri.AbsoluteUri;
}
else
{
runnerSettings.ServerUrl = agentServerUrl;
}
}
// See if the server supports our OAuth key exchange for credentials
if (agent.Authorization != null &&
agent.Authorization.ClientId != Guid.Empty &&
agent.Authorization.AuthorizationUrl != null)
{
UriBuilder configServerUrl = new UriBuilder(runnerSettings.ServerUrl);
UriBuilder oauthEndpointUrlBuilder = new UriBuilder(agent.Authorization.AuthorizationUrl);
if (!isHostedServer && Uri.Compare(configServerUrl.Uri, oauthEndpointUrlBuilder.Uri, UriComponents.SchemeAndServer, UriFormat.Unescaped, StringComparison.OrdinalIgnoreCase) != 0)
{
oauthEndpointUrlBuilder.Scheme = configServerUrl.Scheme;
oauthEndpointUrlBuilder.Host = configServerUrl.Host;
oauthEndpointUrlBuilder.Port = configServerUrl.Port;
Trace.Info($"Set oauth endpoint url's scheme://host:port component to match runner configure url's scheme://host:port: '{oauthEndpointUrlBuilder.Uri.AbsoluteUri}'.");
}
var credentialData = new CredentialData
{
Scheme = Constants.Configuration.OAuth,
@@ -331,7 +263,6 @@ namespace GitHub.Runner.Listener.Configuration
{
{ "clientId", agent.Authorization.ClientId.ToString("D") },
{ "authorizationUrl", agent.Authorization.AuthorizationUrl.AbsoluteUri },
{ "oauthEndpointUrl", oauthEndpointUrlBuilder.Uri.AbsoluteUri },
},
};
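The truncated hunk above drops the oauthEndpointUrl entry from the stored credential data. Assuming the collection being initialized is CredentialData.Data (the property used elsewhere in this diff), the full construction now looks roughly like this sketch:
// Sketch of the simplified credential payload written during configuration;
// the oauthEndpointUrl entry is no longer persisted.
var credentialData = new CredentialData
{
    Scheme = Constants.Configuration.OAuth,
    Data =
    {
        { "clientId", agent.Authorization.ClientId.ToString("D") },
        { "authorizationUrl", agent.Authorization.AuthorizationUrl.AbsoluteUri },
    },
};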
@@ -344,19 +275,22 @@ namespace GitHub.Runner.Listener.Configuration
throw new NotSupportedException("Message queue listen OAuth token.");
}
// Testing agent connection, detect any protential connection issue, like local clock skew that cause OAuth token expired.
// Testing agent connection, detect any potential connection issue, like local clock skew that cause OAuth token expired.
var credMgr = HostContext.GetService<ICredentialManager>();
VssCredentials credential = credMgr.LoadCredentials();
try
{
await _runnerServer.ConnectAsync(new Uri(runnerSettings.ServerUrl), credential);
// ConnectAsync() hits _apis/connectionData which is an anonymous endpoint
// Need to hit an authenticate endpoint to trigger OAuth token exchange.
await _runnerServer.GetAgentPoolsAsync();
_term.WriteSuccessMessage("Runner connection is good");
}
catch (VssOAuthTokenRequestException ex) when (ex.Message.Contains("Current server time is"))
{
// there are two exception messages the server sends that indicate clock skew.
// 1. The bearer token expired on {jwt.ValidTo}. Current server time is {DateTime.UtcNow}.
// 2. The bearer token is not valid until {jwt.ValidFrom}. Current server time is {DateTime.UtcNow}.
// 2. The bearer token is not valid until {jwt.ValidFrom}. Current server time is {DateTime.UtcNow}.
Trace.Error("Catch exception during test agent connection.");
Trace.Error(ex);
throw new Exception("The local machine's clock may be out of sync with the server time by more than five minutes. Please sync your clock with your domain or internet time and try again.");
@@ -367,46 +301,14 @@ namespace GitHub.Runner.Listener.Configuration
// We will Combine() what's stored with root. Defaults to a relative path string.
runnerSettings.WorkFolder = command.GetWork();
// notificationPipeName for Hosted agent provisioner.
runnerSettings.NotificationPipeName = command.GetNotificationPipeName();
runnerSettings.MonitorSocketAddress = command.GetMonitorSocketAddress();
runnerSettings.NotificationSocketAddress = command.GetNotificationSocketAddress();
_store.SaveSettings(runnerSettings);
if (saveProxySetting)
{
Trace.Info("Save proxy setting to disk.");
(runnerProxy as RunnerWebProxy).SaveProxySetting();
}
if (saveCertSetting)
{
Trace.Info("Save agent cert setting to disk.");
(runnerCertManager as RunnerCertificateManager).SaveCertificateSetting();
}
_term.WriteLine();
_term.WriteSuccessMessage("Settings Saved.");
_term.WriteLine();
bool saveRuntimeOptions = false;
var runtimeOptions = new RunnerRuntimeOptions();
#if OS_WINDOWS
if (command.GitUseSChannel)
{
saveRuntimeOptions = true;
runtimeOptions.GitUseSecureChannel = true;
}
#endif
if (saveRuntimeOptions)
{
Trace.Info("Save agent runtime options to disk.");
_store.SaveRunnerRuntimeOptions(runtimeOptions);
}
#if OS_WINDOWS
// config windows service
bool runAsService = command.GetRunAsService();
@@ -426,7 +328,6 @@ namespace GitHub.Runner.Listener.Configuration
public async Task UnconfigureAsync(CommandSettings command)
{
ArgUtil.Equal(RunMode.Normal, HostContext.RunMode, nameof(HostContext.RunMode));
string currentAction = string.Empty;
_term.WriteSection("Runner removal");
@@ -472,14 +373,13 @@ namespace GitHub.Runner.Listener.Configuration
}
else
{
var githubToken = command.GetToken();
GitHubAuthResult authResult = await GetTenantCredential(settings.GitHubUrl, githubToken);
var githubToken = command.GetRunnerDeletionToken();
GitHubAuthResult authResult = await GetTenantCredential(settings.GitHubUrl, githubToken, Constants.RunnerEvent.Remove);
creds = authResult.ToVssCredentials();
Trace.Info("cred retrieved via GitHub auth");
}
// Determine the service deployment type based on connection data. (Hosted/OnPremises)
bool isHostedServer = await IsHostedServer(settings.ServerUrl, creds);
await _runnerServer.ConnectAsync(new Uri(settings.ServerUrl), creds);
var agents = await _runnerServer.GetAgentsAsync(settings.PoolId, settings.AgentName);
@@ -502,7 +402,7 @@ namespace GitHub.Runner.Listener.Configuration
_term.WriteLine("Cannot connect to server, because config files are missing. Skipping removing runner from the server.");
}
//delete credential config files
//delete credential config files
currentAction = "Removing .credentials";
if (hasCredentials)
{
@@ -516,19 +416,10 @@ namespace GitHub.Runner.Listener.Configuration
_term.WriteLine("Does not exist. Skipping " + currentAction);
}
//delete settings config file
//delete settings config file
currentAction = "Removing .runner";
if (isConfigured)
{
// delete proxy setting
(HostContext.GetService<IRunnerWebProxy>() as RunnerWebProxy).DeleteProxySetting();
// delete agent cert setting
(HostContext.GetService<IRunnerCertificateManager>() as RunnerCertificateManager).DeleteCertificateSetting();
// delete agent runtime option
_store.DeleteRunnerRuntimeOptions();
_store.DeleteSettings();
_term.WriteSuccessMessage("Removed .runner");
}
@@ -551,7 +442,7 @@ namespace GitHub.Runner.Listener.Configuration
Trace.Info(nameof(GetCredentialProvider));
var credentialManager = HostContext.GetService<ICredentialManager>();
string authType = command.GetAuth(defaultValue: Constants.Configuration.AAD);
string authType = command.GetAuth(defaultValue: Constants.Configuration.OAuthAccessToken);
// Create the credential.
Trace.Info("Creating credential for auth: {0}", authType);
@@ -566,7 +457,7 @@ namespace GitHub.Runner.Listener.Configuration
}
private TaskAgent UpdateExistingAgent(TaskAgent agent, RSAParameters publicKey)
private TaskAgent UpdateExistingAgent(TaskAgent agent, RSAParameters publicKey, ISet<string> userLabels)
{
ArgUtil.NotNull(agent, nameof(agent));
agent.Authorization = new TaskAgentAuthorization
@@ -574,18 +465,25 @@ namespace GitHub.Runner.Listener.Configuration
PublicKey = new TaskAgentPublicKey(publicKey.Exponent, publicKey.Modulus),
};
// update - update instead of delete so we don't lose user capabilities etc...
// update should replace the existing labels
agent.Version = BuildConstants.RunnerPackage.Version;
agent.OSDescription = RuntimeInformation.OSDescription;
agent.Labels.Add("self-hosted");
agent.Labels.Add(VarUtil.OS);
agent.Labels.Add(VarUtil.OSArchitecture);
agent.Labels.Clear();
agent.Labels.Add(new AgentLabel("self-hosted", LabelType.System));
agent.Labels.Add(new AgentLabel(VarUtil.OS, LabelType.System));
agent.Labels.Add(new AgentLabel(VarUtil.OSArchitecture, LabelType.System));
foreach (var userLabel in userLabels)
{
agent.Labels.Add(new AgentLabel(userLabel, LabelType.User));
}
return agent;
}
private TaskAgent CreateNewAgent(string agentName, RSAParameters publicKey)
private TaskAgent CreateNewAgent(string agentName, RSAParameters publicKey, ISet<string> userLabels)
{
TaskAgent agent = new TaskAgent(agentName)
{
@@ -598,45 +496,51 @@ namespace GitHub.Runner.Listener.Configuration
OSDescription = RuntimeInformation.OSDescription,
};
agent.Labels.Add("self-hosted");
agent.Labels.Add(VarUtil.OS);
agent.Labels.Add(VarUtil.OSArchitecture);
agent.Labels.Add(new AgentLabel("self-hosted", LabelType.System));
agent.Labels.Add(new AgentLabel(VarUtil.OS, LabelType.System));
agent.Labels.Add(new AgentLabel(VarUtil.OSArchitecture, LabelType.System));
foreach (var userLabel in userLabels)
{
agent.Labels.Add(new AgentLabel(userLabel, LabelType.User));
}
return agent;
}
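UpdateExistingAgent and CreateNewAgent now build the same label set. A hypothetical helper (not present in this diff) that captures the shared pattern, using the AgentLabel and LabelType names from the hunks above:
// Hypothetical consolidation of the duplicated label logic; Clear() is a
// no-op for a freshly created agent and a full reset for an existing one.
private static void ApplyLabels(TaskAgent agent, ISet<string> userLabels)
{
    agent.Labels.Clear();
    agent.Labels.Add(new AgentLabel("self-hosted", LabelType.System));
    agent.Labels.Add(new AgentLabel(VarUtil.OS, LabelType.System));
    agent.Labels.Add(new AgentLabel(VarUtil.OSArchitecture, LabelType.System));
    foreach (var userLabel in userLabels)
    {
        agent.Labels.Add(new AgentLabel(userLabel, LabelType.User));
    }
}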
private async Task<bool> IsHostedServer(string serverUrl, VssCredentials credentials)
private bool IsHostedServer(UriBuilder gitHubUrl)
{
// Determine the service deployment type based on connection data. (Hosted/OnPremises)
var locationServer = HostContext.GetService<ILocationServer>();
VssConnection connection = VssUtil.CreateConnection(new Uri(serverUrl), credentials);
await locationServer.ConnectAsync(connection);
try
{
var connectionData = await locationServer.GetConnectionDataAsync();
Trace.Info($"Server deployment type: {connectionData.DeploymentType}");
return connectionData.DeploymentType.HasFlag(DeploymentFlags.Hosted);
}
catch (Exception ex)
{
// Since the DeploymentType is Enum, deserialization exception means there is a new Enum member been added.
// It's more likely to be Hosted since OnPremises is always behind and customer can update their agent if are on-prem
Trace.Error(ex);
return true;
}
return string.Equals(gitHubUrl.Host, "github.com", StringComparison.OrdinalIgnoreCase) ||
string.Equals(gitHubUrl.Host, "www.github.com", StringComparison.OrdinalIgnoreCase) ||
string.Equals(gitHubUrl.Host, "github.localhost", StringComparison.OrdinalIgnoreCase);
}
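Detecting a hosted server is now a plain host-name comparison instead of a connection-data round trip; illustrative outcomes (the Enterprise host and org/repo names below are made-up examples):
// IsHostedServer(new UriBuilder("https://github.com/octo-org/octo-repo"))  -> true
// IsHostedServer(new UriBuilder("https://www.github.com/octo-org"))        -> true
// IsHostedServer(new UriBuilder("https://github.localhost/octo-org"))      -> true
// IsHostedServer(new UriBuilder("https://ghes.example.com/octo-org"))      -> false (GitHub Enterprise Server)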
private async Task<GitHubAuthResult> GetTenantCredential(string githubUrl, string githubToken)
private async Task<GitHubAuthResult> GetTenantCredential(string githubUrl, string githubToken, string runnerEvent)
{
var gitHubUrl = new UriBuilder(githubUrl);
var githubApiUrl = $"https://api.{gitHubUrl.Host}/repos/{gitHubUrl.Path.Trim('/')}/actions-runners/registration";
var githubApiUrl = "";
var gitHubUrlBuilder = new UriBuilder(githubUrl);
if (IsHostedServer(gitHubUrlBuilder))
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://api.{gitHubUrlBuilder.Host}/actions/runner-registration";
}
else
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://{gitHubUrlBuilder.Host}/api/v3/actions/runner-registration";
}
using (var httpClientHandler = HostContext.CreateHttpClientHandler())
using (var httpClient = new HttpClient(httpClientHandler))
{
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("RemoteAuth", githubToken);
httpClient.DefaultRequestHeaders.UserAgent.Add(HostContext.UserAgent);
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/vnd.github.shuri-preview+json"));
var response = await httpClient.PostAsync(githubApiUrl, new StringContent("", null, "application/json"));
httpClient.DefaultRequestHeaders.UserAgent.AddRange(HostContext.UserAgents);
var bodyObject = new Dictionary<string, string>()
{
{"url", githubUrl},
{"runner_event", runnerEvent}
};
var response = await httpClient.PostAsync(githubApiUrl, new StringContent(StringUtil.ConvertToJson(bodyObject), null, "application/json"));
if (response.IsSuccessStatusCode)
{
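Putting the GetTenantCredential pieces together: the runner now registers against one of two endpoints and posts a small JSON body. A sketch with example hosts; the registration-time runner_event value is not shown in this hunk, only Constants.RunnerEvent.Remove for removal appears above:
// github.com / www.github.com / github.localhost:
//   POST https://api.github.com/actions/runner-registration
// GitHub Enterprise Server (example host):
//   POST https://ghes.example.com/api/v3/actions/runner-registration
// Header: Authorization: RemoteAuth <registration or removal token>
// Body:   { "url": "<configured GitHub URL>", "runner_event": "<Constants.RunnerEvent value, e.g. Remove>" }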

View File

@@ -20,8 +20,6 @@ namespace GitHub.Runner.Listener.Configuration
{
public static readonly Dictionary<string, Type> CredentialTypes = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase)
{
{ Constants.Configuration.AAD, typeof(AadDeviceCodeAccessToken)},
{ Constants.Configuration.PAT, typeof(PersonalAccessToken)},
{ Constants.Configuration.OAuth, typeof(OAuthCredential)},
{ Constants.Configuration.OAuthAccessToken, typeof(OAuthAccessTokenCredential)},
};
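With the AAD and PAT entries removed, the table maps exactly two scheme names to provider types. GetCredentialProvider's body is not part of this diff; a minimal sketch of how such a table is commonly consumed, with the reflection call being an assumption:
// Sketch only: resolve a provider from the table above.
public static ICredentialProvider GetCredentialProvider(string scheme)
{
    if (!CredentialTypes.TryGetValue(scheme, out Type providerType))
    {
        throw new ArgumentException($"Unsupported credential scheme '{scheme}'.");
    }
    return (ICredentialProvider)Activator.CreateInstance(providerType);
}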
@@ -52,6 +50,18 @@ namespace GitHub.Runner.Listener.Configuration
}
CredentialData credData = store.GetCredentials();
var migratedCred = store.GetMigratedCredentials();
if (migratedCred != null)
{
credData = migratedCred;
// Re-write .credentials with Token URL
store.SaveCredential(credData);
// Delete .credentials_migrated
store.DeleteMigratedCredential();
}
ICredentialProvider credProv = GetCredentialProvider(credData.Scheme);
credProv.CredentialData = credData;
@@ -80,7 +90,7 @@ namespace GitHub.Runner.Listener.Configuration
if (string.Equals(TokenSchema, "OAuthAccessToken", StringComparison.OrdinalIgnoreCase))
{
return new VssCredentials(null, new VssOAuthAccessTokenCredential(Token), CredentialPromptType.DoNotPrompt);
return new VssCredentials(new VssOAuthAccessTokenCredential(Token), CredentialPromptType.DoNotPrompt);
}
else
{

View File

@@ -1,13 +1,5 @@
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using GitHub.Runner.Common.Util;
using GitHub.Services.Client;
using GitHub.Services.Common;
using GitHub.Services.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;
using GitHub.Services.OAuth;
@@ -37,125 +29,6 @@ namespace GitHub.Runner.Listener.Configuration
public abstract void EnsureCredential(IHostContext context, CommandSettings command, string serverUrl);
}
public sealed class AadDeviceCodeAccessToken : CredentialProvider
{
private string _azureDevOpsClientId = "97877f11-0fc6-4aee-b1ff-febb0519dd00";
public override Boolean RequireInteractive => true;
public AadDeviceCodeAccessToken() : base(Constants.Configuration.AAD) { }
public override VssCredentials GetVssCredentials(IHostContext context)
{
ArgUtil.NotNull(context, nameof(context));
Tracing trace = context.GetTrace(nameof(AadDeviceCodeAccessToken));
trace.Info(nameof(GetVssCredentials));
ArgUtil.NotNull(CredentialData, nameof(CredentialData));
CredentialData.Data.TryGetValue(Constants.Runner.CommandLine.Args.Url, out string serverUrl);
ArgUtil.NotNullOrEmpty(serverUrl, nameof(serverUrl));
var tenantAuthorityUrl = GetTenantAuthorityUrl(context, serverUrl);
if (tenantAuthorityUrl == null)
{
throw new NotSupportedException($"'{serverUrl}' is not backed by Azure Active Directory.");
}
LoggerCallbackHandler.LogCallback = ((LogLevel level, string message, bool containsPii) =>
{
switch (level)
{
case LogLevel.Information:
trace.Info(message);
break;
case LogLevel.Error:
trace.Error(message);
break;
case LogLevel.Warning:
trace.Warning(message);
break;
default:
trace.Verbose(message);
break;
}
});
LoggerCallbackHandler.UseDefaultLogging = false;
AuthenticationContext ctx = new AuthenticationContext(tenantAuthorityUrl.AbsoluteUri);
var queryParameters = $"redirect_uri={Uri.EscapeDataString(new Uri(serverUrl).GetLeftPart(UriPartial.Authority))}";
DeviceCodeResult codeResult = ctx.AcquireDeviceCodeAsync("https://management.core.windows.net/", _azureDevOpsClientId, queryParameters).GetAwaiter().GetResult();
var term = context.GetService<ITerminal>();
term.WriteLine($"Please finish AAD device code flow in browser ({codeResult.VerificationUrl}), user code: {codeResult.UserCode}");
if (string.Equals(CredentialData.Data[Constants.Runner.CommandLine.Flags.LaunchBrowser], bool.TrueString, StringComparison.OrdinalIgnoreCase))
{
try
{
#if OS_WINDOWS
Process.Start(new ProcessStartInfo() { FileName = codeResult.VerificationUrl, UseShellExecute = true });
#elif OS_LINUX
Process.Start(new ProcessStartInfo() { FileName = "xdg-open", Arguments = codeResult.VerificationUrl });
#else
Process.Start(new ProcessStartInfo() { FileName = "open", Arguments = codeResult.VerificationUrl });
#endif
}
catch (Exception ex)
{
// not able to open browser, ex: xdg-open/open is not installed.
trace.Error(ex);
term.WriteLine($"Fail to open browser. {codeResult.Message}");
}
}
AuthenticationResult authResult = ctx.AcquireTokenByDeviceCodeAsync(codeResult).GetAwaiter().GetResult();
ArgUtil.NotNull(authResult, nameof(authResult));
trace.Info($"receive AAD auth result with {authResult.AccessTokenType} token");
var aadCred = new VssAadCredential(new VssAadToken(authResult));
VssCredentials creds = new VssCredentials(null, aadCred, CredentialPromptType.DoNotPrompt);
trace.Info("cred created");
return creds;
}
public override void EnsureCredential(IHostContext context, CommandSettings command, string serverUrl)
{
ArgUtil.NotNull(context, nameof(context));
Tracing trace = context.GetTrace(nameof(AadDeviceCodeAccessToken));
trace.Info(nameof(EnsureCredential));
ArgUtil.NotNull(command, nameof(command));
CredentialData.Data[Constants.Runner.CommandLine.Args.Url] = serverUrl;
CredentialData.Data[Constants.Runner.CommandLine.Flags.LaunchBrowser] = command.GetAutoLaunchBrowser().ToString();
}
private Uri GetTenantAuthorityUrl(IHostContext context, string serverUrl)
{
using (var client = new HttpClient(context.CreateHttpClientHandler()))
{
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Add("X-TFS-FedAuthRedirect", "Suppress");
client.DefaultRequestHeaders.UserAgent.Clear();
client.DefaultRequestHeaders.UserAgent.AddRange(VssClientHttpRequestSettings.Default.UserAgent);
var requestMessage = new HttpRequestMessage(HttpMethod.Head, $"{serverUrl.Trim('/')}/_apis/connectiondata");
var response = client.SendAsync(requestMessage).GetAwaiter().GetResult();
// Get the tenant from the Login URL, MSA backed accounts will not return `Bearer` www-authenticate header.
var bearerResult = response.Headers.WwwAuthenticate.Where(p => p.Scheme.Equals("Bearer", StringComparison.OrdinalIgnoreCase)).FirstOrDefault();
if (bearerResult != null && bearerResult.Parameter.StartsWith("authorization_uri=", StringComparison.OrdinalIgnoreCase))
{
var authorizationUri = bearerResult.Parameter.Substring("authorization_uri=".Length);
if (Uri.TryCreate(authorizationUri, UriKind.Absolute, out Uri aadTenantUrl))
{
return aadTenantUrl;
}
}
return null;
}
}
}
public sealed class OAuthAccessTokenCredential : CredentialProvider
{
public OAuthAccessTokenCredential() : base(Constants.Configuration.OAuthAccessToken) { }
@@ -175,7 +48,7 @@ namespace GitHub.Runner.Listener.Configuration
ArgUtil.NotNullOrEmpty(token, nameof(token));
trace.Info("token retrieved: {0} chars", token.Length);
VssCredentials creds = new VssCredentials(null, new VssOAuthAccessTokenCredential(token), CredentialPromptType.DoNotPrompt);
VssCredentials creds = new VssCredentials(new VssOAuthAccessTokenCredential(token), CredentialPromptType.DoNotPrompt);
trace.Info("cred created");
return creds;
@@ -190,42 +63,4 @@ namespace GitHub.Runner.Listener.Configuration
CredentialData.Data[Constants.Runner.CommandLine.Args.Token] = command.GetToken();
}
}
public sealed class PersonalAccessToken : CredentialProvider
{
public PersonalAccessToken() : base(Constants.Configuration.PAT) { }
public override VssCredentials GetVssCredentials(IHostContext context)
{
ArgUtil.NotNull(context, nameof(context));
Tracing trace = context.GetTrace(nameof(PersonalAccessToken));
trace.Info(nameof(GetVssCredentials));
ArgUtil.NotNull(CredentialData, nameof(CredentialData));
string token;
if (!CredentialData.Data.TryGetValue(Constants.Runner.CommandLine.Args.Token, out token))
{
token = null;
}
ArgUtil.NotNullOrEmpty(token, nameof(token));
trace.Info("token retrieved: {0} chars", token.Length);
// PAT uses a basic credential
VssBasicCredential basicCred = new VssBasicCredential("ActionsRunner", token);
VssCredentials creds = new VssCredentials(null, basicCred, CredentialPromptType.DoNotPrompt);
trace.Info("cred created");
return creds;
}
public override void EnsureCredential(IHostContext context, CommandSettings command, string serverUrl)
{
ArgUtil.NotNull(context, nameof(context));
Tracing trace = context.GetTrace(nameof(PersonalAccessToken));
trace.Info(nameof(EnsureCredential));
ArgUtil.NotNull(command, nameof(command));
CredentialData.Data[Constants.Runner.CommandLine.Args.Token] = command.GetToken();
}
}
}

View File

@@ -6,7 +6,7 @@ using GitHub.Runner.Common;
namespace GitHub.Runner.Listener.Configuration
{
/// <summary>
/// Manages an RSA key for the agent using the most appropriate store for the target platform.
/// Manages an RSA key for the runner using the most appropriate store for the target platform.
/// </summary>
#if OS_WINDOWS
[ServiceLocator(Default = typeof(RSAEncryptedFileKeyManager))]
@@ -16,10 +16,10 @@ namespace GitHub.Runner.Listener.Configuration
public interface IRSAKeyManager : IRunnerService
{
/// <summary>
/// Creates a new <c>RSACryptoServiceProvider</c> instance for the current agent. If a key file is found then the current
/// Creates a new <c>RSACryptoServiceProvider</c> instance for the current runner. If a key file is found then the current
/// key is returned to the caller.
/// </summary>
/// <returns>An <c>RSACryptoServiceProvider</c> instance representing the key for the agent</returns>
/// <returns>An <c>RSACryptoServiceProvider</c> instance representing the key for the runner</returns>
RSACryptoServiceProvider CreateKey();
/// <summary>
@@ -30,7 +30,7 @@ namespace GitHub.Runner.Listener.Configuration
/// <summary>
/// Gets the <c>RSACryptoServiceProvider</c> instance currently stored by the key manager.
/// </summary>
/// <returns>An <c>RSACryptoServiceProvider</c> instance representing the key for the agent</returns>
/// <returns>An <c>RSACryptoServiceProvider</c> instance representing the key for the runner</returns>
/// <exception cref="CryptographicException">No key exists in the store</exception>
RSACryptoServiceProvider GetKey();
}

View File

@@ -447,7 +447,7 @@ namespace GitHub.Runner.Listener.Configuration
{
Trace.Entering();
string agentServiceExecutable = "\"" + Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Bin), WindowsServiceControlManager.WindowsServiceControllerName) + "\"";
string runnerServiceExecutable = "\"" + Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Bin), WindowsServiceControlManager.WindowsServiceControllerName) + "\"";
IntPtr scmHndl = IntPtr.Zero;
IntPtr svcHndl = IntPtr.Zero;
IntPtr tmpBuf = IntPtr.Zero;
@@ -468,7 +468,7 @@ namespace GitHub.Runner.Listener.Configuration
};
processInvoker.ExecuteAsync(workingDirectory: string.Empty,
fileName: agentServiceExecutable,
fileName: runnerServiceExecutable,
arguments: "init",
environment: null,
requireExitCodeZero: true,
@@ -490,7 +490,7 @@ namespace GitHub.Runner.Listener.Configuration
SERVICE_WIN32_OWN_PROCESS,
ServiceBootFlag.AutoStart,
ServiceError.Normal,
agentServiceExecutable,
runnerServiceExecutable,
null,
IntPtr.Zero,
null,
@@ -678,6 +678,17 @@ namespace GitHub.Runner.Listener.Configuration
if (service != null)
{
service.Start();
try
{
_term.WriteLine("Waiting for service to start...");
service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(60));
}
catch (System.ServiceProcess.TimeoutException)
{
throw new InvalidOperationException($"Cannot start the service {serviceName} in a timely fashion.");
}
_term.WriteLine($"Service {serviceName} started successfully");
}
else

View File

@@ -1,6 +1,5 @@
using System;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Services.OAuth;
@@ -29,7 +28,7 @@ namespace GitHub.Runner.Listener.Configuration
var authorizationUrl = this.CredentialData.Data.GetValueOrDefault("authorizationUrl", null);
// For back compat with a .credentials file that doesn't have an 'oauthEndpointUrl' section
var oathEndpointUrl = this.CredentialData.Data.GetValueOrDefault("oauthEndpointUrl", authorizationUrl);
var oauthEndpointUrl = this.CredentialData.Data.GetValueOrDefault("oauthEndpointUrl", authorizationUrl);
ArgUtil.NotNullOrEmpty(clientId, nameof(clientId));
ArgUtil.NotNullOrEmpty(authorizationUrl, nameof(authorizationUrl));
@@ -39,11 +38,11 @@ namespace GitHub.Runner.Listener.Configuration
var keyManager = context.GetService<IRSAKeyManager>();
var signingCredentials = VssSigningCredentials.Create(() => keyManager.GetKey());
var clientCredential = new VssOAuthJwtBearerClientCredential(clientId, authorizationUrl, signingCredentials);
var agentCredential = new VssOAuthCredential(new Uri(oathEndpointUrl, UriKind.Absolute), VssOAuthGrant.ClientCredentials, clientCredential);
var agentCredential = new VssOAuthCredential(new Uri(oauthEndpointUrl, UriKind.Absolute), VssOAuthGrant.ClientCredentials, clientCredential);
// Construct a credentials cache with a single OAuth credential for communication. The windows credential
// is explicitly set to null to ensure we never do that negotiation.
return new VssCredentials(null, agentCredential, CredentialPromptType.DoNotPrompt);
return new VssCredentials(agentCredential, CredentialPromptType.DoNotPrompt);
}
}
}
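Two things follow from this file and the configuration change earlier in the diff: the configuration side stopped writing an oauthEndpointUrl entry, so the GetValueOrDefault fallback to authorizationUrl now covers newly configured runners as well as old .credentials files, and the VssCredentials call drops the explicit null Windows-credential argument. A behavior sketch of the fallback:
// Fallback behavior sketch:
//   .credentials with an "oauthEndpointUrl" entry (runners configured before this change) -> that URL is the token endpoint
//   .credentials without the entry (as written after this change)                         -> authorizationUrl doubles as the token endpoint
var oauthEndpointUrl = CredentialData.Data.GetValueOrDefault("oauthEndpointUrl", authorizationUrl);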

View File

@@ -12,8 +12,8 @@ namespace GitHub.Runner.Listener.Configuration
public class OsxServiceControlManager : ServiceControlManager, ILinuxServiceControlManager
{
// This is the name you would see when you do `systemctl list-units | grep runner`
private const string _svcNamePattern = "actions.runner.{0}.{1}.{2}";
private const string _svcDisplayPattern = "GitHub Actions Runner ({0}.{1}.{2})";
private const string _svcNamePattern = "actions.runner.{0}.{1}";
private const string _svcDisplayPattern = "GitHub Actions Runner ({0}.{1})";
private const string _shTemplate = "darwin.svc.sh.template";
private const string _svcShName = "svc.sh";

View File

@@ -20,7 +20,8 @@ namespace GitHub.Runner.Listener.Configuration
bool secret,
string defaultValue,
Func<String, bool> validator,
bool unattended);
bool unattended,
bool isOptional = false);
}
public sealed class PromptManager : RunnerService, IPromptManager
@@ -56,7 +57,8 @@ namespace GitHub.Runner.Listener.Configuration
bool secret,
string defaultValue,
Func<string, bool> validator,
bool unattended)
bool unattended,
bool isOptional = false)
{
Trace.Info(nameof(ReadValue));
ArgUtil.NotNull(validator, nameof(validator));
@@ -70,6 +72,10 @@ namespace GitHub.Runner.Listener.Configuration
{
return defaultValue;
}
else if (isOptional)
{
return string.Empty;
}
// Otherwise throw.
throw new Exception($"Invalid configuration provided for {argName}. Terminating unattended configuration.");
@@ -85,18 +91,28 @@ namespace GitHub.Runner.Listener.Configuration
{
_terminal.Write($"[press Enter for {defaultValue}] ");
}
else if (isOptional){
_terminal.Write($"[press Enter to skip] ");
}
// Read and trim the value.
value = secret ? _terminal.ReadSecret() : _terminal.ReadLine();
value = value?.Trim() ?? string.Empty;
// Return the default if not specified.
if (string.IsNullOrEmpty(value) && !string.IsNullOrEmpty(defaultValue))
if (string.IsNullOrEmpty(value))
{
Trace.Info($"Falling back to the default: '{defaultValue}'");
return defaultValue;
if (!string.IsNullOrEmpty(defaultValue))
{
Trace.Info($"Falling back to the default: '{defaultValue}'");
return defaultValue;
}
else if (isOptional)
{
return string.Empty;
}
}
// Return the value if it is not empty and it is valid.
// Otherwise try the loop again.
if (!string.IsNullOrEmpty(value))
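The new isOptional flag changes what happens when no value is available; a summary of the branches shown above, assuming the unattended path earlier in ReadValue returns the default when one is set:
// unattended, defaultValue set                  -> returns defaultValue
// unattended, no default, isOptional == true    -> returns string.Empty
// unattended, no default, isOptional == false   -> throws and terminates configuration
// interactive, empty input, default set         -> returns defaultValue ("[press Enter for {defaultValue}]")
// interactive, empty input, isOptional == true  -> returns string.Empty ("[press Enter to skip]")
// interactive, empty input, neither             -> re-prompts until the validator accepts a value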

View File

@@ -1,5 +1,6 @@
using System;
using System.Linq;
using System.Text.RegularExpressions;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
@@ -37,25 +38,38 @@ namespace GitHub.Runner.Listener.Configuration
serviceName = string.Empty;
serviceDisplayName = string.Empty;
Uri accountUri = new Uri(settings.ServerUrl);
string accountName = string.Empty;
if (accountUri.Host.EndsWith(".githubusercontent.com", StringComparison.OrdinalIgnoreCase))
if (string.IsNullOrEmpty(settings.RepoOrOrgName))
{
accountName = accountUri.AbsolutePath.Split('/', StringSplitOptions.RemoveEmptyEntries).FirstOrDefault();
}
else
{
accountName = accountUri.Host.Split('.').FirstOrDefault();
throw new InvalidOperationException($"Cannot find GitHub repository/organization name from server url: '{settings.ServerUrl}'");
}
if (string.IsNullOrEmpty(accountName))
// For the service name, replace any characters outside of the alpha-numeric set and ".", "_", "-" with "-"
Regex regex = new Regex(@"[^0-9a-zA-Z._\-]");
string repoOrOrgName = regex.Replace(settings.RepoOrOrgName, "-");
serviceName = StringUtil.Format(serviceNamePattern, repoOrOrgName, settings.AgentName);
if (serviceName.Length > 80)
{
throw new InvalidOperationException($"Cannot find GitHub organization name from server url: '{settings.ServerUrl}'");
Trace.Verbose($"Calculated service name is too long (> 80 chars). Trying again by calculating a shorter name.");
int exceededCharLength = serviceName.Length - 80;
string repoOrOrgNameSubstring = StringUtil.SubstringPrefix(repoOrOrgName, 45);
exceededCharLength -= repoOrOrgName.Length - repoOrOrgNameSubstring.Length;
string runnerNameSubstring = settings.AgentName;
// Only trim runner name if it's really necessary
if (exceededCharLength > 0)
{
runnerNameSubstring = StringUtil.SubstringPrefix(settings.AgentName, settings.AgentName.Length - exceededCharLength);
}
serviceName = StringUtil.Format(serviceNamePattern, repoOrOrgNameSubstring, runnerNameSubstring);
}
serviceName = StringUtil.Format(serviceNamePattern, accountName, settings.PoolName, settings.AgentName);
serviceDisplayName = StringUtil.Format(serviceDisplayNamePattern, accountName, settings.PoolName, settings.AgentName);
serviceDisplayName = StringUtil.Format(serviceDisplayNamePattern, repoOrOrgName, settings.AgentName);
Trace.Info($"Service name '{serviceName}' display name '{serviceDisplayName}' will be used for service configuration.");
}
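The 80-character cap is enforced by shortening the sanitized repo/org segment to at most 45 characters first and only then trimming the runner name. A worked example with hypothetical lengths, assuming the systemd pattern "actions.runner.{0}.{1}.service":
// repo/org segment: 60 chars after sanitizing, runner name: 30 chars
// "actions.runner." (15) + 60 + "." (1) + 30 + ".service" (8) = 114 chars -> 34 over the cap
// repo/org cut to its first 45 chars                                     -> saves 15, still 19 over
// runner name cut to its first 30 - 19 = 11 chars                        -> 15 + 45 + 1 + 11 + 8 = 80 chars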

View File

@@ -13,8 +13,8 @@ namespace GitHub.Runner.Listener.Configuration
public class SystemDControlManager : ServiceControlManager, ILinuxServiceControlManager
{
// This is the name you would see when you do `systemctl list-units | grep runner`
private const string _svcNamePattern = "actions.runner.{0}.{1}.{2}.service";
private const string _svcDisplayPattern = "GitHub Actions Runner ({0}.{1}.{2})";
private const string _svcNamePattern = "actions.runner.{0}.{1}.service";
private const string _svcDisplayPattern = "GitHub Actions Runner ({0}.{1})";
private const string _shTemplate = "systemd.svc.sh.template";
private const string _shName = "svc.sh";

Some files were not shown because too many files have changed in this diff.