Compare commits

..

147 Commits

Author SHA1 Message Date
Joanna Krzek-Lubowiecka
a3c99852af Add flag to control trace writer logging behaviour 2023-06-13 09:13:43 +00:00
JoannaaKL
b697b12601 Plural 2023-06-02 14:14:09 +00:00
JoannaaKL
3a2817a9ae Update Constants.cs 2023-06-02 13:30:15 +02:00
JoannaaKL
393dd9436e Update Constants.cs 2023-06-02 13:29:50 +02:00
JoannaaKL
2be59d6c10 Use consistent forms 2023-06-02 11:24:01 +00:00
JoannaaKL
4fa33ad4ec Extract to method 2023-06-02 11:13:06 +00:00
JoannaaKL
d056e8fa08 Log template errors as debug messages under feature flag 2023-06-02 11:13:06 +00:00
JoannaaKL
7c1ea89b87 Add test for old behaviour and remove unused method 2023-06-02 11:13:06 +00:00
JoannaaKL
238443852a wip 2023-06-02 11:13:05 +00:00
Nikola Jokic
3a1376f90e Fix uses: docker://image:tag steps when container hook is used (#2626)
* Fix `uses: docker://image:tag` steps when container hook is used

* Update src/Runner.Worker/ActionManager.cs

---------

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2023-06-02 11:01:59 +00:00
JoannaaKL
50b3edff3c Revert "Dont log error twice in ExecutionContext on template error (#2634)"
This reverts commit 48cbee08f9.
2023-06-01 06:46:10 +00:00
Nikola Jokic
58f7a379a1 Filter out empty arguments in container hooks (#2633) 2023-05-31 16:58:48 +02:00
Luke Tomlinson
e13627df81 Move Using V2 Flow log to Trace (#2635) 2023-05-31 10:40:34 -04:00
JoannaaKL
48cbee08f9 Dont log error twice in ExecutionContext on template error (#2634) 2023-05-31 16:33:36 +02:00
Philip Harrison
21b49c542c Set runner environment in context and env (#2518)
* Set runner environment in runner context and env

Extract runner_environment from the global context and expose in the
`github.runner` context and env as `RUNNER_ENVIRONMENT`.
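Illustrative usage, assuming the value is visible to step scripts as plain $RUNNER_ENVIRONMENT; the "github-hosted"/"self-hosted" values shown are assumptions, not taken from this PR:

# Sketch only: branch on the new variable inside a step script.
if [ "$RUNNER_ENVIRONMENT" = "github-hosted" ]; then
  echo "running on a GitHub-hosted runner"
else
  echo "running on a self-hosted runner (RUNNER_ENVIRONMENT=$RUNNER_ENVIRONMENT)"
fi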

Signed-off-by: Philip Harrison <philip@mailharrison.com>

* encoding.

---------

Signed-off-by: Philip Harrison <philip@mailharrison.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-05-26 13:08:41 -04:00
Bassem Dghaidi
8db8bbe13a Update container-hooks to 0.3.2 (#2618) 2023-05-23 09:35:30 -04:00
Vallie Joseph
49b04976f4 Adding Consistency to 'Failed To Resolve Action Download Info' Infrastructure Error Flagging (#2488)
* adding extra catch for download failure in composite actions

* Adding infra error

* Adding error handling centralizing

* updating try catch bubbling

* cleaning up commits

* cleaning up commits

* cleaning up commits

* updating bubbler

* cleaning up test files

* Fixing linting errors

* updating exception bubble

* reverting composite

* updating catch to not exclude other exceptions

* removing unneeded import

* Update src/Runner.Worker/ActionRunner.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Runner.Worker/ActionManager.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Runner.Worker/ActionManager.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Runner.Worker/ActionManager.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Runner.Worker/ActionManager.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* moving download out of for loop; reverting exception wrap

* Update src/Runner.Worker/ActionManager.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Adding blank lines back

* Adding blank lines back

* removing unneeded catch for download failure

* adding var back for consistency

* formatting clean

---------

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-05-11 14:57:24 -04:00
Gabriel
eeb0cf6f1e Add --no-default-labels config option to self-hosted runners (#2443)
* Add --no-default-labels option

Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>

* Add tests

Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>

* .

---------

Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-05-11 12:37:56 -04:00
John Wesley Walker III
f8a28c3c4e [1742] Ensure multiple composite annotations are correctly written. (#2311)
* [1742] Ensure multiple composite annotations are correctly written.

This implementation uses a collector pattern to allow embedded ExecutionContexts to stash Issue objects for later processing by a non-embedded ancestor ExecutionContext.

Also:
 - Provide explicit constructor implementations for ExecutionContext
 - Leverage explicit constructors to solidify immutability of several ExecutionContext class members.
 - Fixed erroneous call to ExecutionContext.Complete in CompositeActionHandler.cs
 - Use a consistent timestamp for FinishTime in ExecutionContext::Complete

* Ensure collected issues are processed only by a non-embedded ExecutionContext.
This was already implicit.  Now, just making it explicit.

* Provide a clear mechanism that allows callers to opt-in/opt-out of ExecutionContext::AddIssue's logging behavior.
* Addressed deserialization inconsistencies in TimelineRecord.cs
* Added TimelineRecord unit tests.
* Refined unit tests related to TimelineRecord::Variables case-insensitivity
* Add a unit test that verifies ExecutionContextLogOptions::LogMessageOverride has the desired effect.
* Responded to PR feedback.
* Don't allow embedded ExecutionContexts to add Issues to a TimelineRecord
2023-05-10 16:24:02 +02:00
Tingluo Huang
1bc14f0607 Trace WebSocket exception into verbose level to reduce noise in diag log. (#2591) 2023-05-08 18:54:34 -04:00
John Hernley
22d1938ac4 Resolve Actions Directly From Launch for Run Service Jobs (#2529)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-05-03 16:04:21 -04:00
John Wesley Walker III
229b9b8ecc Remove Temporary Serialization Shim (#2549) 2023-05-03 15:29:19 +02:00
Yashwanth Anantharaju
896152d78e send annotations to run-service (#2574)
* send annotations to run-service

* skip message deletion

* actually don't skip deletion

* enum as numbers

* fix enum

* linting

* remove unnecessary file

* feedback
2023-05-01 08:33:03 -04:00
Yang Cao
8d74a9ead6 Fix null guard bug (#2576) 2023-04-28 20:56:10 +00:00
Per Lundberg
77b8586a03 contribute.md: Fix link to style guidelines (#2560) 2023-04-27 16:19:56 -04:00
Yashwanth Anantharaju
c8c47d4f27 handle conflict errors from run service (#2570)
* handle conflict errors from run service

* nit: formatting

* fix formatting
2023-04-27 13:59:12 -04:00
Tingluo Huang
58f3ff55aa Prepare 2.304.0 runner release. (#2569) 2023-04-26 15:48:13 -04:00
Tingluo Huang
6353ac84d7 Add orchestrationId to useragent for better correlation. (#2568)
* Add orchestrationId to useragent for better correlation.

* .
2023-04-26 19:15:19 +00:00
Yang Cao
ad9a4a45d1 Bypass job server when run service is enabled (#2516)
* Only upload to Results with new job message type

* No need to have separate websocketFeedServer

* Linting fix

* Update src/Runner.Common/JobServerQueue.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* add connection timeout

* Consolidate initializing webclient to result client

* Retry websocket delivery for console logs

* Linter fix

* Do not give up for good, reconnect again in 10 minutes

* Has to reset delivered

* Only first time retry 3 times to connect to websocket

---------

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-26 12:51:56 -04:00
John Wesley Walker III
a41397ae93 Rename AcquireJobRequest::StreamID to AcquireJobRequest::JobMessageID (#2547)
* Rename AcquireJobRequest::StreamID to AcquireJobRequest::JobMessageID to match corresponding server-side update.
* appeased the linter
* Added unit tests to prove AcquireJobRequest serialization/deserialization is as-expected.
* Distinguish unit test variations.
* Incorporated PR Feedback.
2023-04-26 18:31:41 +02:00
Erez Testiler
c4d41e95cb Add *.ghe.localhost domains to hosted server check (#2536) 2023-04-24 12:52:28 -04:00
Maggie Spletzer
af6ed41bcb Adding curl --retry-all-errors for external tool downloads (#2557)
* adding retry-all-errors based on curl version >= 7.71.0

* adding logging

* fixing conditional logic
2023-04-20 21:01:11 +00:00
Tingluo Huang
e8b2380a20 Limit the time we wait for waiting websocket to connect. (#2554) 2023-04-19 10:20:00 -04:00
Maggie Spletzer
38ab9dedf4 Adding curl retry for external tool downloads (#2552)
* adding retry for flaky downloads

* adding comment

* Update src/Misc/externals.sh

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

---------

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-18 18:31:46 +00:00
Yashwanth Anantharaju
c7629700ad Handle non success raw http responses (#2541)
* handle non success responses

* fix format

* nits

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

---------

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-04-14 16:17:20 -04:00
Yang Cao
b9a0b5dba9 If conclusion is not set, we cannot get default value (#2535) 2023-04-13 13:03:57 -04:00
Yang Cao
766cefe599 Also send in conclusion for steps (#2531) 2023-04-13 15:19:00 +02:00
Eli Entelis
2ecd7d2fc6 which handles broken symlink & unit test added (#2150) (#2196)
* which handles broken symlink & unit test added (#2150)

* Update src/Runner.Sdk/Util/WhichUtil.cs

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>

* fix pr comments

* trace log added, in case we find a broken symlink

* add check in case the target of the symlink is a relative path; added tests that check symlinks targeting a full path and a relative path

* fix tests

* fix tests for linux

---------

Co-authored-by: Eli Entelis <eli.entelis@hexagon.com>
Co-authored-by: Eli Entelis <42981948+Eliminator1999@users.noreply.github.com>
Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2023-04-06 22:15:11 -04:00
Yang Cao
0484afeec7 Update Runner to send step updates to Results (#2510)
* Also send Steps update to Results service

* Refactor to separate results server from current job server

* If hit any error while uploading to Results, skip Results upload

* Add proxy authentication and buffer request for WinHttpHandler

* Remove unnecessary null guard

* Also send Results telemetry when step update fails

* IResultsServer is not disposable
2023-04-03 16:31:13 -04:00
Nikola Jokic
1ceb1a67f2 Guard against NullReference while creating HostContext (#2343)
* Guard against NullReference while creating HostContext

* Throw on IOUtil loading object if content is empty or object is null

* removed unused dependency

* Changed exceptions to ArgumentNullException and ArgumentException

* fixed my mistake on LoadObject

* fixed tests

* trigger workflows

* trigger workflows

* encode files using utf8
2023-03-30 15:39:09 +02:00
Luke Tomlinson
9f778b814d Register Runners against V2 servers (#2505)
* Parse runners and send publicKey

* wip

* Fix conflicts

* Cleanup

* Cleanup

* fix test

* fix test

* Add trace for broker message listener

* Feedback

* refactor

* remove dead code

* Remove old comment
2023-03-28 15:45:00 -04:00
Luke Tomlinson
92258f9ea1 Listen directly to Broker for messages (#2500) 2023-03-24 14:00:34 -04:00
Bassem Dghaidi
74eeb82684 ADR: Runner Image Tags (#2494)
* WIP

* WIP

* Add context

* Add 2494-runner-image-tags ADR draft

* Fix ADR title

* Add more information to option 2

* Add decision

* Fix status
2023-03-23 12:09:10 -04:00
Matisse Hack
0e7ca9aedb Fix JIT configurations on Windows (#2497)
* Fix JIT configurations on Windows

* Update src/Runner.Listener/Runner.cs
2023-03-21 15:04:50 -04:00
Tatyana Kostromskaya
bb7b1e8259 Update runner to handle Dotcom/runner-admin based registration flow (#2487) 2023-03-21 16:05:32 +01:00
Tingluo Huang
440c81b770 Bump container hooks version to 0.3.1 (#2496) 2023-03-20 14:53:47 +00:00
Bassem Dghaidi
9958fc0374 Update container hooks version to 0.3.0 (#2492) 2023-03-17 06:00:55 -04:00
Tingluo Huang
81b07eb1c4 Prepare runner release 2.303.0 (#2482) 2023-03-10 10:45:56 +00:00
Jongwoo Han
514ecec5a3 Replace deprecated command with environment file (#2429)
Signed-off-by: jongwooo <jongwooo.han@gmail.com>
2023-03-09 22:09:05 -05:00
Ajay
128b212b13 Support matrix context in output keys (#2477) 2023-03-09 21:30:28 -05:00
Nikola Jokic
2dfa28e6e0 Add update certificates to ./run.sh if RUNNER_UPDATE_CA_CERTS env is set (#2471)
* Included entrypoint that will update certs and run ./run.sh

* update ca if RUNNER_UPDATE_CA env is set

* changed env variable to RUNNER_UPDATE_TRUST_STORE

* moved entrypoint to be run.sh, removed Dockerfile entrypoint, added envvar that will update certs

* Update src/Misc/layoutroot/run.sh

Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>

* Update src/Misc/layoutroot/run.sh

Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>

* Update src/Misc/layoutroot/run.sh

Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>

* Update src/Misc/layoutroot/run.sh

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Misc/layoutroot/run.sh

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* removed doc comment on func

---------

Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-03-08 12:29:55 -05:00
Nikola Jokic
fd96246580 Exit on deprecation error (#2299)
* terminate the runner on deprecation message

* added TaskAgentVersion exception to catch deprecation

* AccessDenied exception with inner exception InvalidTaskAgent

* Access denied exception in program and in message listener

* Fixed copy

* remove trace message from message listener
2023-03-07 14:09:10 +01:00
Ferenc Hammerl
8ef48200b4 Bypass all proxies for all hosts if no_proxy='*' is set (#2395)
* Bypass top level domain even if no_proxy specified it with leading '.'

E.g. no_proxy='.github.com' will now bypass github.com.

* Bypass proxy on all hosts when no_proxy is * (wildcard); a usage sketch follows this list

* Undo '.' stripping

* Simplify unit tests

* Respect wildcard even if it's one of many no_proxy items
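Illustrative sketch of the behaviour described above (example.com is a placeholder host):

# Wildcard alone: the runner bypasses the proxy for every host.
export no_proxy='*'
# The wildcard is still honoured when it is one of several entries.
export no_proxy='example.com,*'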
2023-03-06 11:01:45 +01:00
Tingluo Huang
d61b27b839 Change runner image to make user/folder align with ubuntu-latest hosted runner. (#2469) 2023-03-02 13:42:23 -05:00
Nikola Jokic
542e8a3c98 Runner service exit after consecutive re-try exits (#2426)
* Runner service exit after consecutive re-try exits

* Rename failure counts and include reset count on runner listening for jobs

* Changed from graceful shutdown to stopping=true
2023-02-28 16:00:36 +01:00
Yashwanth Anantharaju
e8975514fd call run service renewjob (#2461)
* call run service renewjob

* format

* formatting

* make it private and expose internals

* lint

* fix exception class

* lint

* fix test as well
2023-02-27 16:50:28 +00:00
Yang Cao
0befa62f64 Add job log upload support (#2447)
* Refactor and add job log upload support

* Rename method to be consistent
2023-02-21 09:55:47 -05:00
Yang Cao
aaf02ab34c Properly guard upload (#2439)
* Revert "Revert "Uploading step logs to Results as well  (#2422)" (#2437)"

This reverts commit 8c096baf49.

* Properly guard the upload to results feature

* Delete skipped file if deletesource is true
2023-02-17 10:31:41 -05:00
JoannaaKL
02c9d1c704 Don't disable lint errors (#2436)
* Update lint.yml

Don't ignore the formatting errors

* Add formatting made by @cory-miller

* Use dotnet format

* Format only modified files

* Add instruction to contribute.md

* Use git status instead of git diff
2023-02-16 14:33:03 +01:00
Tingluo Huang
982784d704 Wait for docker based on env RUNNER_WAIT_FOR_DOCKER_IN_SECONDS. (#2440) 2023-02-15 16:05:36 -05:00
Yang Cao
8c096baf49 Revert "Uploading step logs to Results as well (#2422)" (#2437)
This reverts commit e979331be4.
2023-02-15 11:21:32 -05:00
JoannaaKL
8d6972e38b Don't add Needs constant twice 2023-02-15 10:50:22 +00:00
Tingluo Huang
1ab35b0938 Use v2 version based on https://github.blog/changelog/2023-01-18-code-scanning-codeql-action-v1-is-now-deprecated/ (#2434) 2023-02-14 16:01:52 +00:00
Tingluo Huang
f86e968d38 Prepare runner release 2.302.0 (#2433) 2023-02-14 09:50:09 -05:00
Yang Cao
e979331be4 Uploading step logs to Results as well (#2422)
* Rename queue to results queue

* Add results contracts

* Add Results logging handling

* Adding calls to create and finalize append blob

* Modifications for azurite upload

* Only call upload complete on final section and remove size

* Make method specific to step log so we can support job log later

* Change contract for results

* Add totalline count to the result log upload file

* Actually pass lineCount to Results Service

* Fix typos

* Code cleanup

* Fixing typos

* Apply suggestions from code review

Co-authored-by: Konrad Pabjan <konradpabjan@github.com>

---------

Co-authored-by: Brittany Ellich <brittanyellich@github.com>
Co-authored-by: Konrad Pabjan <konradpabjan@github.com>
2023-02-13 13:18:56 -05:00
Ferenc Hammerl
97195bad58 Replace '(' and ')' with '[' and ']' from OS.Description so it doesn't fail User-Agent header validation (#2288)
* Sanitize OS Desc for UserAgents

* Only drop brackets if needed, refactoring

* Add missing ')'

* Readd missing brackets around '(header)'

* Add comments

* Use bracket solution from SDK

* Rename tests
2023-02-08 17:42:27 +01:00
Tingluo Huang
6d1d2460ac Add docker cli to the runner image. (#2425) 2023-02-08 09:21:02 -05:00
Yashwanth Anantharaju
67356a3305 Run service: send more stuff as part of job completed (#2423)
* send more stuff as part of job completed

* feedback

* set only once

* feedback

* feedback

* fix test

* feedback

* nit: spacing

* nit: line

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

---------

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-02-07 20:10:53 +00:00
John Wesley Walker III
9a228e52e9 Defer evaluation of a step's DisplayName until its condition is evaluated. (#2313)
* Defer evaluation of a step's DisplayName until its condition is evaluated.
* Formalize TryUpdateDisplayName and EvaluateDisplayName as members of interface `IStep` (#2374)
2023-02-07 11:42:30 +01:00
Erez Testiler
3cd76671dd Add support for ghe.com domain (#2420) 2023-02-06 17:16:38 -05:00
Yashwanth Anantharaju
e6e5f36dd0 start calling run service for job completion (#2412)
* start calling run service for job completion

* cleanup

* nit: lines

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* clean up

* give sanity back to thboop

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* add clean up back

* clean up

* clean up more

* oops

* copied from existing, but :thumb:

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

---------

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
2023-02-01 21:18:31 +00:00
Yashwanth Anantharaju
24a27efd4f fix small bug (#2396) 2023-01-30 10:00:31 -05:00
Tingluo Huang
ca7be16dd3 Bump dotnet sdk to latest version. (#2392)
* Bump dotnet sdk to latest version.

* .

* .

* .

* .

* .

* .

* .

* .
2023-01-23 13:07:49 -05:00
Tingluo Huang
f1c57ac0ef Bump runner version to match the released runner. (#2385)
* Bump runner version to match the released runner.

* .
2023-01-19 00:40:33 +00:00
Tingluo Huang
8581a041a5 Revert "split by regex (#2333)" (#2383)
This reverts commit 72830cfc12.
2023-01-19 00:32:24 +00:00
Tingluo Huang
6412390a22 Prepare 2.301.0 runner release. (#2382)
* Prepare 2.301.0 runner release.

* Update releaseNote.md

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update releaseNote.md

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update releaseNote.md

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update releaseNote.md

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update releaseNote.md

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update releaseNote.md

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>

* Update releaseNote.md

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
2023-01-18 15:03:24 -05:00
John Sudol
7306014861 Update Node dependencies (#2381)
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2023-01-18 14:19:28 -05:00
John Sudol
d6f8633efc new option to remove local config files (#2367) 2023-01-18 11:28:43 -05:00
Cory Miller
130f6788d5 Add a disclaimer for which runner version is available to a given tenant (#2362)
* Add a disclaimer for which runner version is available to a given tenant

* Update releaseNote.md
2023-01-18 10:41:53 -05:00
John Sudol
9b390e0531 update node to 16.16.0 (#2371) 2023-01-18 10:41:15 -05:00
yujincat
a7101008a2 Show more information in the runner log (#2377)
* fix typo

* add workflow ref in the log

* show job name for all jobs

* update ref

* reflect the feedback

* fix a small bug
2023-01-18 10:40:35 -05:00
Tingluo Huang
4a6630531b Allow provide extra User-Agent for better correlation. (#2370) 2023-01-16 10:18:55 -05:00
Yang Cao
caec043085 Always upload to avoid issues (#2334)
* Remove unnecessary timelineId and timelineRecordId and use Guid stepId

* Log upload error to kusto

* Remove try-catch

* Using a well-known telemetry record to avoid replacing issues

* fix Guid format
2022-12-28 11:56:53 -05:00
Tingluo Huang
a1244d2269 Add Header/Footer to multi-line message in StdoutTraceListener. (#2336) 2022-12-22 10:38:29 -05:00
Nikola Jokic
332b97f838 Treat jitconfig as secret. (#2335)
Co-authored-by: TingluoHuang <TingluoHuang@github.com>
2022-12-21 13:30:22 -05:00
Stefan Ruvceski
72830cfc12 split by regex (#2333)
* split by regex

* pr fix

* adding tests

* test fix
2022-12-20 14:28:29 +01:00
Tingluo Huang
29a28a870f Make runner image print diag log to STDOUT. (#2331) 2022-12-19 16:57:16 -05:00
Tingluo Huang
0dd7a113f1 Log GitHub RequestId for better traceability. (#2332) 2022-12-19 19:46:29 +00:00
TingluoHuang
83b8baa45e Bump runner version to 2.300.2 to match released version. 2022-12-19 14:22:51 -05:00
Ferenc Hammerl
d5e566ad17 Release notes for 2.300.1 (#2326)
* Update runnerversion

* Update releaseNote.md
2022-12-19 13:08:36 -05:00
Bethany
64381cca6a Re-add file size check prior to reading file (#2321) (#2330)
* Re-add file size check prior to reading file

* Remove redundant file size check
2022-12-19 12:56:28 -05:00
Stefan Ruvceski
f1b1532f32 set env in ProcessInvoker sanitized (#2280)
* set env in ProcessInvoker sanitized
2022-12-19 15:01:53 +01:00
Nikola Jokic
04761e5353 Initialize container manager based on whether the ContainerHooksPath is set (#2317)
* Added tests around checking if correct manager's Initialize method has been called

* repaired missing initialization on container action handler
2022-12-16 15:40:49 +01:00
Ava Stancu
f9e2fa939c Updated contact links for feature requests (#2314)
Users need to use the Github Community feedback page for all feature/enhancement requests.
2022-12-15 17:17:41 +02:00
Ferenc Hammerl
92acb625fb Update Dockerfile (#2315) 2022-12-15 15:44:07 +01:00
Ava Stancu
6b9e8a6be4 prepare release notes for 2.300.0 (#2312)
* Update runner version

* Update releaseNote.md

* Update releaseNote.md

* Update releaseNote.md

Co-authored-by: JoannaaKL <joannaakl@github.com>

* Update releaseNote.md

* Update releaseNote.md

Co-authored-by: JoannaaKL <joannaakl@github.com>
Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2022-12-14 10:33:02 +02:00
Brittany Ellich
f41f5d259d Use results for uploading step summaries (#2301)
* Use results service for uploading step summaries

* Use results summary over generic results naming convention

* Apply suggestions from code review

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Addressing feedback

* Fix merge issue

* Remove empty line

* Update Results json objects to use snake case

* Adding the reference

Co-authored-by: Yang Cao <yacaovsnc@github.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2022-12-14 09:28:33 +01:00
Ava Stancu
369a4eccad Made worker logs available to stdout (#2307)
* Made worker logs available to stdout

* Log Worker Standard out line by line

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2022-12-08 16:23:52 -05:00
Ava Stancu
088981a372 Listener stdout logging (#2291)
* Added env variable to control whether the terminal is silent

* Log to stdout if PrintLogToStdout is enabled

* Extracted console logging to stdouttracelistener

* Remove useless usings

* Rewrite TraceListener as superclass

* Only print to stdout if env is set

* Add comment for Console.Out

* Format Listener

* Revert var name in terminal

* Check env in hostcontext instead of Tracing constructor

* Remove superclass & dupe logging code

* Log hostType

* Readonly '_' prefix 'hostType'

* Fix test

* Revert Terminal change

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
2022-12-06 16:16:00 +01:00
Nikola Jokic
852a80fcbd Return exit code when MANUALLY_TRAP_SIG is exported (#2285) 2022-11-28 11:33:01 -05:00
Ferenc Hammerl
63640e91fa Backfill notes and version from 'Release 2.299.1 runner.' (#2277)
* Release 2.299.1 runner.

* Fix typo in releaseNote

Co-authored-by: TingluoHuang <TingluoHuang@github.com>
2022-11-22 15:05:47 +01:00
Nikola Jokic
9122fe7e10 added to lowercase on setting github.action_status (#1944) 2022-11-22 15:05:32 +01:00
Nikola Jokic
6b8452170a (delete.sh) Logging repaired and made runner_name optional, defaulting to hostname (#1871)
* Logging repaired and made runner_name optional, defaulting to hostname

* Update scripts/delete.sh

Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>

Co-authored-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2022-11-22 14:47:22 +01:00
Nikola Jokic
cc49e65356 added replace to allow create latest svc to apply --replace flag (#2273) 2022-11-22 14:46:11 +01:00
Tatyana Kostromskaya
1632e4a343 Support runner upgrade messages (#2231)
Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
2022-11-21 16:17:47 +01:00
Ava Stancu
b465102e7f Updated info on process for requesting features and enhancements (#2259) 2022-11-18 16:07:59 +01:00
Amit Rathi
98c857b927 expose github.actor_id, github.workflow_ref & github.workflow_sha as environment variable (#2249)
* expose workflow refs/sha as environment variables (usage sketch after this list)

* fixes environment variable ordering

* job_workflow_ref/sha aren't available in gh ctx
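A hypothetical step script printing the newly exposed values; the uppercase environment-variable names are the conventional mapping assumed here, not spelled out in this PR:

# Sketch only: echo the values exposed by this change.
echo "actor id:     $GITHUB_ACTOR_ID"
echo "workflow ref: $GITHUB_WORKFLOW_REF"
echo "workflow sha: $GITHUB_WORKFLOW_SHA"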
2022-11-17 11:11:52 -05:00
Chris Patterson
dda53af485 Small change to Node.js 12 deprecation message (#2262)
* Small change to Node.js 12 deprecation message

* Update src/Runner.Common/Constants.cs
2022-11-16 16:25:59 +01:00
Tingluo Huang
c0bc4c02f8 Add RUNNER_ALLOW_RUNASROOT=1 to dockerfile (#2254) 2022-11-15 12:38:11 -05:00
Nikola Jokic
c6630ce285 Dockerfile and workflow change for runner image (#2250)
* Workflow

* add back github-token to id:image

* added handle to image name

* removed core require

* multi line added :latest

* added label

* added repository_owner in IMAGE_NAME

* with the release

* release

* markdown label description

* Remove markdown description

* Remove double quotes in labels

* Reverted back releaseVersion
2022-11-10 12:08:13 -05:00
Tingluo Huang
40ed7f8a40 Forward parameters into run() func in run.sh. (#2240) 2022-11-02 18:13:40 -04:00
Cory Miller
7f5067a8b5 prepare release notes for 2.299.0 (#2239) 2022-11-02 18:55:03 +00:00
Tingluo Huang
4adaf9c1e6 Use Global.Variables instead of JobContext and include action path/ref in the message. (#2214)
* Use Global.Variables instead of JobContext and include action path/ref in the message.

* encoding

* .
2022-11-02 09:40:19 -04:00
Nikola Jokic
d301c06a7e run.sh installs SIGINT and SIGTERM traps to gracefully stop runner (#2233)
* run.sh installs SIGINT and SIGTERM traps to gracefully stop runner

* replaced trap to force wait for child processes and to send kill to run-helper instead of Runner.Listener

* run with signal handling if RUNNER_MANUALLY_TRAP_SIG is set (see the trap sketch after this list)

* Update src/Misc/layoutroot/run.sh
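A simplified trap sketch of the pattern these bullets describe; this is not the shipped run.sh, and the helper script name and variables are assumptions:

# Start the helper in the background, forward SIGINT/SIGTERM to it, and wait,
# so the runner stops gracefully and the child's exit code is preserved.
./run-helper.sh "$@" &
child=$!
trap 'kill -INT "$child" 2>/dev/null' INT
trap 'kill -TERM "$child" 2>/dev/null' TERM
wait "$child"
exit $?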

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2022-11-02 09:28:14 -04:00
JoannaaKL
3e196355de Allow the container.image to be empty (#2235)
* Allow container.image to be null/empty

* Remove unnecessary workflow type

* Update src/Sdk/DTPipelines/Pipelines/ObjectTemplating/PipelineTemplateConverter.cs

Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>

* Rename ConvertToJobContainer to ConvertToContainer
It converts serviceContainers too

* Remove unused parameter
AllowExpressions was always false

* Allow service in yaml with empty string for value

* Don't attempt to start services without an image

* Improve error messages

* Revert "Remove unused parameter"

This reverts commit f4ab2d50d0.

* Revert "Rename ConvertToJobContainer to ConvertToContainer"

This reverts commit ffc62f8343.

* Fix revert merge conflict

* Removed info logs

* Added info log for services without container images

Co-authored-by: moonblade <moonblade168@gmail.com>
Co-authored-by: Ferenc Hammerl <31069338+fhammerl@users.noreply.github.com>
Co-authored-by: Ava Stancu <avastancu@github.com>
2022-11-02 11:27:37 +01:00
Tauhid Anjum
dad7ad0384 Setting debug using GitHub Action variables (#2234) 2022-10-31 09:51:33 -04:00
Cory Miller
b18bda773f Add generateServiceConfig option for configure command (#2226)
For runners that are already configured but need to add systemd services after the fact, a new command option is added to generate the service config file only. Once generated, ./svc.sh install and other commands will be available.

This also adds support for falling back to the tenant name in the Actions URL in cases when the GitHub URL is not provided, such as for our hosted larger runners.
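A hedged usage sketch; the flag spelling is assumed from the PR title and the commands are illustrative:

# Generate only the service config for an already-configured runner,
# then the usual service commands become available.
./config.sh --generateServiceConfig
sudo ./svc.sh install
sudo ./svc.sh start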
2022-10-26 08:48:23 -04:00
Cory Miller
3fc993da59 Disable linter errors (#2216) 2022-10-20 18:59:39 -04:00
Cory Miller
5421fe3f71 Add linter workflow (#2213) 2022-10-18 12:14:27 -04:00
Cory Miller
b87b4aac5c Fix IDE0090 (#2211) 2022-10-18 10:54:08 -04:00
Ferenc Hammerl
daba735b52 Add runner devcontainer (#2187)
Add runner devcontainer.json definition. Sets up dotnet, docker-in-docker, omnisharp and some vscode extensions
2022-10-17 14:03:24 +02:00
Ben Sherman
46ce960fd2 Fix markup for support link (#2114) 2022-10-13 16:56:10 +02:00
Ferenc Hammerl
f4b7f91c21 Allow '--disableupdate' in create-latest-svc.sh (#2201)
* Allow '--disableupdate' in create-latest-svc.sh

* Echo '--disableupdate' when used
2022-10-13 12:40:12 +02:00
Ava Stancu
ff65183e43 Avastancu/joannaakl/service container error log (#2110) (#2182)
* Avastancu/joannaakl/service container error log (#2110)

* adding support for a service container docker logs

* Adding Unit test to ContainerOperationProvider

* Adding another test to ContainerOperationProvider

* placed the docker logs output in dedicated ##group section

* Removed the exception thrown if the service container was not healthy

* Removed duplicated logging to the executionContext

* Updated the container logs sub-section message

* Print service containers only if they were healthy
Unhealthy service logs are printed in ContainerHealthCheckLogs called prior to this step.

* Removed recently added method to inspect docker logs
The method was doing the same thing as the existing DockerLogs method.

* Added execution context error
This will make a failed health check more visible in the UI without disrupting the execution of the program.

* Removing the section 'Waiting for all services to be ready'

Since nested subsections are not being displayed properly and we already need one subsection per service error.

* Update src/Runner.Worker/Container/DockerCommandManager.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/TestHostContext.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Change the logic for printing Service Containers logs

Service container logs will be printed in the 'Start containers' section only if there is an error.
Healthy services will have their logs printed in the 'Stop Containers' section.

* Removed unused import

* Added back section group.

* Moved service containers error logs to separate group sections

* Removed the test testing the old logic flow.

* Remove unnecessary 'IsAnyUnhealthy' flag

* Remove printHello() function

* Add newline to TestHostContext

* Remove unnecessary field 'UnhealthyContainers'

* Rename boolean flag indicating service container failure

* Refactor healthcheck logic to separate method to enable unit testing.

* Remove the default value for bool variable

* Update src/Runner.Worker/ContainerOperationProvider.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Runner.Worker/ContainerOperationProvider.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Rename Healthcheck back to ContainerHealthcheck

* Make test sequential

* Unextract the container error logs method

* remove test asserting thrown exception

* Add configure await

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Add back test asserting exception

* Check service exit code if there is no healthcheck configured

* Remove unnecessary healthcheck for healthy service container

* Revert "Check service exit code if there is no healthcheck configured"

This reverts commit fec24e8341.

Co-authored-by: Ava S <avastancu@github.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Do not fail service containers without the healthcheck

Co-authored-by: JoannaaKL <joannaakl@github.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2022-10-11 12:09:24 +02:00
Ferenc Hammerl
0f13055428 Backfill notes from release branch 2.298.2 (wrong branch name) (#2180)
* Update releaseNote.md

* Update runnerversion
2022-10-07 16:05:56 +02:00
Yashwanth Anantharaju
252f4de577 changes to support specific run service URL (#2158)
* changes to support run service url

* feedback
2022-10-06 10:28:32 -04:00
Robin Neatherway
b6a46f2114 Correct grammar in archive extraction error message (#2184) 2022-10-06 08:20:30 -04:00
Ferenc Hammerl
2145432f81 Revert "Avastancu/joannaakl/service container error log (#2110)" (#2179)
This reverts commit 949269104d.
2022-10-05 15:00:22 +00:00
Ferenc Hammerl
86d0ee8389 Backport 2.298.1 (#2175)
* Update releaseNote.md

* Update runnerversion
2022-10-04 15:20:35 +00:00
Ferenc Hammerl
1379ed2c72 Fix incorrect template vars to show SHA for WIN-ARM64 (#2171) 2022-10-04 17:05:18 +02:00
Francesco Renzi
4935be5526 Prepare release notes for v2.298.0 (#2169) 2022-10-04 12:09:58 +00:00
Francesco Renzi
920fba93dc Add warning for users using deprecated commands (#2164) 2022-10-04 12:14:22 +01:00
JoannaaKL
949269104d Avastancu/joannaakl/service container error log (#2110)
* adding support for a service container docker logs

* Adding Unit test to ContainerOperationProvider

* Adding another test to ContainerOperationProvider

* placed the docker logs output in dedicated ##group section

* Removed the exception thrown if the service container was not healthy

* Removed duplicated logging to the executionContext

* Updated the container logs sub-section message

* Print service containers only if they were healthy
Unhealthy service logs are printed in ContainerHealthCheckLogs called prior to this step.

* Removed recently added method to inspect docker logs
The method was doing the same thing as the existing DockerLogs method.

* Added execution context error
This will make a failed health check more visible in the UI without disrupting the execution of the program.

* Removing the section 'Waiting for all services to be ready'

Since nested subsections are not being displayed properly and we already need one subsection per service error.

* Update src/Runner.Worker/Container/DockerCommandManager.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/TestHostContext.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Change the logic for printing Service Containers logs

Service container logs will be printed in the 'Start containers' section only if there is an error.
Healthy services will have their logs printed in the 'Stop Containers' section.

* Removed unused import

* Added back section group.

* Moved service containers error logs to separate group sections

* Removed the test testing the old logic flow.

* Remove unnecessary 'IsAnyUnhealthy' flag

* Remove printHello() function

* Add newline to TestHostContext

* Remove unnecessary field 'UnhealthyContainers'

* Rename boolean flag indicating service container failure

* Refactor healthcheck logic to separate method to enable unit testing.

* Remove the default value for bool variable

* Update src/Runner.Worker/ContainerOperationProvider.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Runner.Worker/ContainerOperationProvider.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Rename Healthcheck back to ContainerHealthcheck

* Make test sequential

* Unextract the container error logs method

* remove test asserting thrown exception

* Add configure await

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Update src/Test/L0/Worker/ContainerOperationProviderL0.cs

Co-authored-by: Tingluo Huang <tingluohuang@github.com>

* Add back test asserting exception

* Check service exit code if there is no healthcheck configured

* Remove unnecessary healthcheck for healthy service container

* Revert "Check service exit code if there is no healthcheck configured"

This reverts commit fec24e8341.

Co-authored-by: Ava S <avastancu@github.com>
Co-authored-by: Tingluo Huang <tingluohuang@github.com>
2022-10-03 17:50:20 +02:00
Tauhid Anjum
dca4f67143 Adding a new vars context for non-secret variables (#2096)
* Adding a new vars context for non-secret variables

* Fix test case

* Trigger checks

* Remove variables from env context and environment variable

* remove extra references

* Add prefix handling to configuration variables

* Fix test cases

* Consume variables using vars in context data

* removed action_yaml changes
2022-09-30 09:37:47 -04:00
Thomas Boop
01ff38f975 2.297.0 release notes (#2155)
* 2.297.0 release notes
2022-09-26 11:22:05 -04:00
Tatyana Kostromskaya
bc67f99bae Add link to blog post to node 12 warn (#2156) 2022-09-26 17:05:29 +02:00
Thomas Boop
ae2f4a6f27 POC: Windows arm64 runner build (#2022)
Prerelease for windows-arm64 runner build
2022-09-26 09:20:43 -04:00
Francesco Renzi
15cbadb4af Add file commands for save-state and set-output (#2118) 2022-09-26 10:17:46 +01:00
Thomas Boop
0678e8df09 Add Release branches to pull request spec (#2134) 2022-09-19 15:28:46 +00:00
JoannaaKL
3a1c89715c Remove unused imports (#2126) 2022-09-15 15:55:45 +02:00
JoannaaKL
6cdd27263b Remove unused imports (#2124) 2022-09-15 12:14:10 +02:00
Francesco Renzi
32845a5448 Bump @actions/core from 1.2.6 to 1.9.1 in /src/Misc/expressionFunc/hashFiles (#2123) 2022-09-15 09:43:28 +00:00
Stefan Ruvceski
6e6410d300 fix for issue #2009 - composite summary file (#2077) 2022-09-12 14:51:36 -04:00
Thomas Boop
ed191b78ae Port hotfix to main branch (#2108)
* fix issue with env's overwriting environment

* add release notes

* handle value escaping

* compile regex for runtime perf improvements
2022-09-09 14:32:07 -04:00
Nikola Jokic
75786756bb fix ACTIONS_RUNNER_CONTAINER_HOOKS name in ADR (#2098) 2022-09-06 10:45:00 -04:00
Ferenc Hammerl
5e0c2ef816 2.296.1 Release (#2092) (#2099)
* docker: escape key-value pair as -e KEY and VALUE being environment var

* removed code duplication, removed unused method and test

* add release notes

Co-authored-by: Nikola Jokic <nikola-jokic@github.com>

Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
2022-09-02 15:43:22 +00:00
Nikola Jokic
95459dea5f docker: escape key-value pair as -e KEY and VALUE being environment var (#2091)
* docker: escape key-value pair as -e KEY and VALUE being environment var

* removed code duplication, removed unused method and test
2022-08-31 13:39:58 -04:00
256 changed files with 14629 additions and 3737 deletions


@@ -0,0 +1,27 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
{
    "name": "Actions Runner Devcontainer",
    "image": "mcr.microsoft.com/devcontainers/base:focal",
    "features": {
        "ghcr.io/devcontainers/features/docker-in-docker:1": {},
        "ghcr.io/devcontainers/features/dotnet": {
            "version": "6.0.405"
        },
        "ghcr.io/devcontainers/features/node:1": {
            "version": "16"
        }
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-azuretools.vscode-docker",
                "ms-dotnettools.csharp",
                "eamodio.gitlens"
            ]
        }
    },
    // dotnet restore to install dependencies so OmniSharp works out of the box
    // src/Test restores all other projects it references, src/Runner.PluginHost is not one of them
    "postCreateCommand": "dotnet restore src/Test && dotnet restore src/Runner.PluginHost",
    "remoteUser": "vscode"
}


@@ -1,6 +1,7 @@
 # https://editorconfig.org/
 [*]
+charset = utf-8 # Set default charset to utf-8
 insert_final_newline = true # ensure all files end with a single newline
 trim_trailing_whitespace = true # attempt to remove trailing whitespace on save


@@ -1,5 +1,8 @@
 blank_issues_enabled: false
 contact_links:
+  - name: 🛑 Request a feature in the runner application
+    url: https://github.com/orgs/community/discussions/categories/actions-and-packages
+    about: If you have feature requests for GitHub Actions, please use the Actions and Packages section on the Github Product Feedback page.
   - name: ✅ Support for GitHub Actions
     url: https://github.community/c/code-to-cloud/52
     about: If you have questions about GitHub Actions or need support writing workflows, please ask in the GitHub Community Support forum.


@@ -1,32 +0,0 @@
---
name: 🛑 Request a feature in the runner application
about: If you have feature requests for GitHub Actions, please use the "feedback and suggestions for GitHub Actions" link below.
title: ''
labels: enhancement
assignees: ''
---
<!--
👋 You're opening a request for an enhancement in the GitHub Actions **runner application**.
🛑 Please stop if you're not certain that the feature you want is in the runner application - if you have a suggestion for improving GitHub Actions, please see the [GitHub Actions Feedback](https://github.com/github/feedback/discussions/categories/actions-and-packages-feedback) discussion forum which is actively monitored. Using the forum ensures that we route your problem to the correct team. 😃
Some additional useful links:
* If you have found a security issue [please submit it here](https://hackerone.com/github)
* If you have questions or issues with the service, writing workflows or actions, then please [visit the GitHub Community Forum's Actions Board](https://github.community/t5/GitHub-Actions/bd-p/actions)
* If you are having an issue or have a question about GitHub Actions then please [contact customer support](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-github-actions#contacting-support)
If you have a feature request that is relevant to this repository, the runner, then please include the information below:
-->
**Describe the enhancement**
A clear and concise description of what the features or enhancement you need.
**Code Snippet**
If applicable, add a code snippet.
**Additional information**
Add any other context about the feature here.
NOTE: if the feature request has been agreed upon then the assignee will create an ADR. See docs/adrs/README.md


@@ -10,7 +10,7 @@ on:
       - '**.md'
   pull_request:
     branches:
-      - '*'
+      - '**'
     paths-ignore:
       - '**.md'
@@ -18,7 +18,7 @@ jobs:
   build:
     strategy:
       matrix:
-        runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, osx-x64, osx-arm64 ]
+        runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, win-arm64, osx-x64, osx-arm64 ]
         include:
           - runtime: linux-x64
             os: ubuntu-latest
@@ -44,6 +44,10 @@ jobs:
             os: windows-2019
             devScript: ./dev
+          - runtime: win-arm64
+            os: windows-latest
+            devScript: ./dev
     runs-on: ${{ matrix.os }}
     steps:
       - uses: actions/checkout@v3
@@ -82,7 +86,7 @@ jobs:
         run: |
           ${{ matrix.devScript }} test
         working-directory: src
-        if: matrix.runtime != 'linux-arm64' && matrix.runtime != 'linux-arm' && matrix.runtime != 'osx-arm64'
+        if: matrix.runtime != 'linux-arm64' && matrix.runtime != 'linux-arm' && matrix.runtime != 'osx-arm64' && matrix.runtime != 'win-arm64'
       # Create runner package tar.gz/zip
       - name: Package Release
@@ -97,7 +101,7 @@ jobs:
         uses: actions/upload-artifact@v2
         with:
           name: runner-package-${{ matrix.runtime }}
           path: |
             _package
             _package_trims/trim_externals
             _package_trims/trim_runtime


@@ -27,7 +27,7 @@ jobs:
       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v1
+        uses: github/codeql-action/init@v2
       # Override language selection by uncommenting this and choosing your languages
       # with:
       #   languages: go, javascript, csharp, python, cpp, java
@@ -38,4 +38,4 @@ jobs:
         working-directory: src
       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v1
+        uses: github/codeql-action/analyze@v2

.github/workflows/lint.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
name: Lint

on:
  pull_request:
    branches: [ main ]

jobs:
  build:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          # Ensure full list of changed files within `super-linter`
          fetch-depth: 0
      - name: Run linters
        uses: github/super-linter@v4
        env:
          DEFAULT_BRANCH: ${{ github.base_ref }}
          EDITORCONFIG_FILE_NAME: .editorconfig
          LINTER_RULES_PATH: /src/
          VALIDATE_ALL_CODEBASE: false
          VALIDATE_CSHARP: true

.github/workflows/publish-image.yml (new file, 65 lines)

@@ -0,0 +1,65 @@
name: Publish Runner Image

on:
  workflow_dispatch:
    inputs:
      runnerVersion:
        type: string
        description: Version of the runner being installed

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository_owner }}/actions-runner

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Compute image version
        id: image
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const inputRunnerVersion = "${{ github.event.inputs.runnerVersion }}"
            if (inputRunnerVersion) {
              console.log(`Using input runner version ${inputRunnerVersion}`)
              core.setOutput('version', inputRunnerVersion);
              return
            }
            const runnerVersion = fs.readFileSync('${{ github.workspace }}/src/runnerversion', 'utf8').replace(/\n$/g, '')
            console.log(`Using runner version ${runnerVersion}`)
            core.setOutput('version', runnerVersion);
      - name: Setup Docker buildx
        uses: docker/setup-buildx-action@v2
      - name: Log into registry ${{ env.REGISTRY }}
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Docker image
        id: build-and-push
        uses: docker/build-push-action@v3
        with:
          context: ./images
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.image.outputs.version }}
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          build-args: |
            RUNNER_VERSION=${{ steps.image.outputs.version }}
          push: true
          labels: |
            org.opencontainers.image.source=${{github.server_url}}/${{github.repository}}
            org.opencontainers.image.description=https://github.com/actions/runner/releases/tag/v${{ steps.image.outputs.version }}
            org.opencontainers.image.licenses=MIT


@@ -50,29 +50,33 @@ jobs:
linux-arm64-sha: ${{ steps.sha.outputs.linux-arm64-sha256 }} linux-arm64-sha: ${{ steps.sha.outputs.linux-arm64-sha256 }}
linux-arm-sha: ${{ steps.sha.outputs.linux-arm-sha256 }} linux-arm-sha: ${{ steps.sha.outputs.linux-arm-sha256 }}
win-x64-sha: ${{ steps.sha.outputs.win-x64-sha256 }} win-x64-sha: ${{ steps.sha.outputs.win-x64-sha256 }}
win-arm64-sha: ${{ steps.sha.outputs.win-arm64-sha256 }}
osx-x64-sha: ${{ steps.sha.outputs.osx-x64-sha256 }} osx-x64-sha: ${{ steps.sha.outputs.osx-x64-sha256 }}
osx-arm64-sha: ${{ steps.sha.outputs.osx-arm64-sha256 }} osx-arm64-sha: ${{ steps.sha.outputs.osx-arm64-sha256 }}
linux-x64-sha-noexternals: ${{ steps.sha_noexternals.outputs.linux-x64-sha256 }} linux-x64-sha-noexternals: ${{ steps.sha_noexternals.outputs.linux-x64-sha256 }}
linux-arm64-sha-noexternals: ${{ steps.sha_noexternals.outputs.linux-arm64-sha256 }} linux-arm64-sha-noexternals: ${{ steps.sha_noexternals.outputs.linux-arm64-sha256 }}
linux-arm-sha-noexternals: ${{ steps.sha_noexternals.outputs.linux-arm-sha256 }} linux-arm-sha-noexternals: ${{ steps.sha_noexternals.outputs.linux-arm-sha256 }}
win-x64-sha-noexternals: ${{ steps.sha_noexternals.outputs.win-x64-sha256 }} win-x64-sha-noexternals: ${{ steps.sha_noexternals.outputs.win-x64-sha256 }}
win-arm64-sha-noexternals: ${{ steps.sha_noexternals.outputs.win-arm64-sha256 }}
osx-x64-sha-noexternals: ${{ steps.sha_noexternals.outputs.osx-x64-sha256 }} osx-x64-sha-noexternals: ${{ steps.sha_noexternals.outputs.osx-x64-sha256 }}
osx-arm64-sha-noexternals: ${{ steps.sha_noexternals.outputs.osx-arm64-sha256 }} osx-arm64-sha-noexternals: ${{ steps.sha_noexternals.outputs.osx-arm64-sha256 }}
linux-x64-sha-noruntime: ${{ steps.sha_noruntime.outputs.linux-x64-sha256 }} linux-x64-sha-noruntime: ${{ steps.sha_noruntime.outputs.linux-x64-sha256 }}
linux-arm64-sha-noruntime: ${{ steps.sha_noruntime.outputs.linux-arm64-sha256 }} linux-arm64-sha-noruntime: ${{ steps.sha_noruntime.outputs.linux-arm64-sha256 }}
linux-arm-sha-noruntime: ${{ steps.sha_noruntime.outputs.linux-arm-sha256 }} linux-arm-sha-noruntime: ${{ steps.sha_noruntime.outputs.linux-arm-sha256 }}
win-x64-sha-noruntime: ${{ steps.sha_noruntime.outputs.win-x64-sha256 }} win-x64-sha-noruntime: ${{ steps.sha_noruntime.outputs.win-x64-sha256 }}
win-arm64-sha-noruntime: ${{ steps.sha_noruntime.outputs.win-arm64-sha256 }}
osx-x64-sha-noruntime: ${{ steps.sha_noruntime.outputs.osx-x64-sha256 }} osx-x64-sha-noruntime: ${{ steps.sha_noruntime.outputs.osx-x64-sha256 }}
osx-arm64-sha-noruntime: ${{ steps.sha_noruntime.outputs.osx-arm64-sha256 }} osx-arm64-sha-noruntime: ${{ steps.sha_noruntime.outputs.osx-arm64-sha256 }}
linux-x64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.linux-x64-sha256 }} linux-x64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.linux-x64-sha256 }}
linux-arm64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.linux-arm64-sha256 }} linux-arm64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.linux-arm64-sha256 }}
linux-arm-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.linux-arm-sha256 }} linux-arm-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.linux-arm-sha256 }}
win-x64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.win-x64-sha256 }} win-x64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.win-x64-sha256 }}
win-arm64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.win-arm64-sha256 }}
osx-x64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.osx-x64-sha256 }}
osx-arm64-sha-noruntime-noexternals: ${{ steps.sha_noruntime_noexternals.outputs.osx-arm64-sha256 }}
strategy:
matrix:
-runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, osx-x64, osx-arm64 ]
+runtime: [ linux-x64, linux-arm64, linux-arm, win-x64, osx-x64, osx-arm64, win-arm64 ]
include:
- runtime: linux-x64
os: ubuntu-latest
@@ -89,7 +93,7 @@ jobs:
- runtime: osx-x64
os: macOS-latest
devScript: ./dev.sh
- runtime: osx-arm64
os: macOS-latest
devScript: ./dev.sh
@@ -98,6 +102,10 @@ jobs:
os: windows-2019
devScript: ./dev
- runtime: win-arm64
os: windows-latest
devScript: ./dev
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
@@ -123,7 +131,7 @@ jobs:
file=$(ls)
sha=$(sha256sum $file | awk '{ print $1 }')
echo "Computed sha256: $sha for $file"
-echo "::set-output name=${{matrix.runtime}}-sha256::$sha"
+echo "${{matrix.runtime}}-sha256=$sha" >> $GITHUB_OUTPUT
shell: bash
id: sha
name: Compute SHA256
@@ -132,8 +140,8 @@ jobs:
file=$(ls)
sha=$(sha256sum $file | awk '{ print $1 }')
echo "Computed sha256: $sha for $file"
-echo "::set-output name=${{matrix.runtime}}-sha256::$sha"
+echo "${{matrix.runtime}}-sha256=$sha" >> $GITHUB_OUTPUT
-echo "::set-output name=sha256::$sha"
+echo "sha256=$sha" >> $GITHUB_OUTPUT
shell: bash
id: sha_noexternals
name: Compute SHA256
@@ -142,8 +150,8 @@ jobs:
file=$(ls)
sha=$(sha256sum $file | awk '{ print $1 }')
echo "Computed sha256: $sha for $file"
-echo "::set-output name=${{matrix.runtime}}-sha256::$sha"
+echo "${{matrix.runtime}}-sha256=$sha" >> $GITHUB_OUTPUT
-echo "::set-output name=sha256::$sha"
+echo "sha256=$sha" >> $GITHUB_OUTPUT
shell: bash
id: sha_noruntime
name: Compute SHA256
@@ -152,15 +160,15 @@ jobs:
file=$(ls)
sha=$(sha256sum $file | awk '{ print $1 }')
echo "Computed sha256: $sha for $file"
-echo "::set-output name=${{matrix.runtime}}-sha256::$sha"
+echo "${{matrix.runtime}}-sha256=$sha" >> $GITHUB_OUTPUT
-echo "::set-output name=sha256::$sha"
+echo "sha256=$sha" >> $GITHUB_OUTPUT
shell: bash
id: sha_noruntime_noexternals
name: Compute SHA256
working-directory: _package_trims/trim_runtime_externals
- name: Create trimmedpackages.json for ${{ matrix.runtime }}
-if: matrix.runtime == 'win-x64'
+if: matrix.runtime == 'win-x64' || matrix.runtime == 'win-arm64'
uses: actions/github-script@0.3.0
with:
github-token: ${{secrets.GITHUB_TOKEN}}
@@ -180,7 +188,7 @@ jobs:
fs.writeFileSync('${{ matrix.runtime }}-trimmedpackages.json', trimmedPackages)
- name: Create trimmedpackages.json for ${{ matrix.runtime }}
-if: matrix.runtime != 'win-x64'
+if: matrix.runtime != 'win-x64' && matrix.runtime != 'win-arm64'
uses: actions/github-script@0.3.0
with:
github-token: ${{secrets.GITHUB_TOKEN}}
@@ -239,24 +247,28 @@ jobs:
const runnerVersion = fs.readFileSync('${{ github.workspace }}/src/runnerversion', 'utf8').replace(/\n$/g, '')
var releaseNote = fs.readFileSync('${{ github.workspace }}/releaseNote.md', 'utf8').replace(/<RUNNER_VERSION>/g, runnerVersion)
releaseNote = releaseNote.replace(/<WIN_X64_SHA>/g, '${{needs.build.outputs.win-x64-sha}}')
releaseNote = releaseNote.replace(/<WIN_ARM64_SHA>/g, '${{needs.build.outputs.win-arm64-sha}}')
releaseNote = releaseNote.replace(/<OSX_X64_SHA>/g, '${{needs.build.outputs.osx-x64-sha}}')
releaseNote = releaseNote.replace(/<OSX_ARM64_SHA>/g, '${{needs.build.outputs.osx-arm64-sha}}')
releaseNote = releaseNote.replace(/<LINUX_X64_SHA>/g, '${{needs.build.outputs.linux-x64-sha}}')
releaseNote = releaseNote.replace(/<LINUX_ARM_SHA>/g, '${{needs.build.outputs.linux-arm-sha}}')
releaseNote = releaseNote.replace(/<LINUX_ARM64_SHA>/g, '${{needs.build.outputs.linux-arm64-sha}}')
releaseNote = releaseNote.replace(/<WIN_X64_SHA_NOEXTERNALS>/g, '${{needs.build.outputs.win-x64-sha-noexternals}}')
releaseNote = releaseNote.replace(/<WIN_ARM64_SHA_NOEXTERNALS>/g, '${{needs.build.outputs.win-arm64-sha-noexternals}}')
releaseNote = releaseNote.replace(/<OSX_X64_SHA_NOEXTERNALS>/g, '${{needs.build.outputs.osx-x64-sha-noexternals}}')
releaseNote = releaseNote.replace(/<OSX_ARM64_SHA_NOEXTERNALS>/g, '${{needs.build.outputs.osx-arm64-sha-noexternals}}')
releaseNote = releaseNote.replace(/<LINUX_X64_SHA_NOEXTERNALS>/g, '${{needs.build.outputs.linux-x64-sha-noexternals}}')
releaseNote = releaseNote.replace(/<LINUX_ARM_SHA_NOEXTERNALS>/g, '${{needs.build.outputs.linux-arm-sha-noexternals}}')
releaseNote = releaseNote.replace(/<LINUX_ARM64_SHA_NOEXTERNALS>/g, '${{needs.build.outputs.linux-arm64-sha-noexternals}}')
releaseNote = releaseNote.replace(/<WIN_X64_SHA_NORUNTIME>/g, '${{needs.build.outputs.win-x64-sha-noruntime}}')
releaseNote = releaseNote.replace(/<WIN_ARM64_SHA_NORUNTIME>/g, '${{needs.build.outputs.win-arm64-sha-noruntime}}')
releaseNote = releaseNote.replace(/<OSX_X64_SHA_NORUNTIME>/g, '${{needs.build.outputs.osx-x64-sha-noruntime}}')
releaseNote = releaseNote.replace(/<OSX_ARM64_SHA_NORUNTIME>/g, '${{needs.build.outputs.osx-arm64-sha-noruntime}}')
releaseNote = releaseNote.replace(/<LINUX_X64_SHA_NORUNTIME>/g, '${{needs.build.outputs.linux-x64-sha-noruntime}}')
releaseNote = releaseNote.replace(/<LINUX_ARM_SHA_NORUNTIME>/g, '${{needs.build.outputs.linux-arm-sha-noruntime}}')
releaseNote = releaseNote.replace(/<LINUX_ARM64_SHA_NORUNTIME>/g, '${{needs.build.outputs.linux-arm64-sha-noruntime}}')
releaseNote = releaseNote.replace(/<WIN_X64_SHA_NORUNTIME_NOEXTERNALS>/g, '${{needs.build.outputs.win-x64-sha-noruntime-noexternals}}')
releaseNote = releaseNote.replace(/<WIN_ARM64_SHA_NORUNTIME_NOEXTERNALS>/g, '${{needs.build.outputs.win-arm64-sha-noruntime-noexternals}}')
releaseNote = releaseNote.replace(/<OSX_X64_SHA_NORUNTIME_NOEXTERNALS>/g, '${{needs.build.outputs.osx-x64-sha-noruntime-noexternals}}')
releaseNote = releaseNote.replace(/<OSX_ARM64_SHA_NORUNTIME_NOEXTERNALS>/g, '${{needs.build.outputs.osx-arm64-sha-noruntime-noexternals}}')
releaseNote = releaseNote.replace(/<LINUX_X64_SHA_NORUNTIME_NOEXTERNALS>/g, '${{needs.build.outputs.linux-x64-sha-noruntime-noexternals}}')
@@ -271,6 +283,7 @@ jobs:
run: |
ls -l
echo "${{needs.build.outputs.win-x64-sha}} actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}.zip" | shasum -a 256 -c
echo "${{needs.build.outputs.win-arm64-sha}} actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}.zip" | shasum -a 256 -c
echo "${{needs.build.outputs.osx-x64-sha}} actions-runner-osx-x64-${{ steps.releaseNote.outputs.version }}.tar.gz" | shasum -a 256 -c
echo "${{needs.build.outputs.osx-arm64-sha}} actions-runner-osx-arm64-${{ steps.releaseNote.outputs.version }}.tar.gz" | shasum -a 256 -c
echo "${{needs.build.outputs.linux-x64-sha}} actions-runner-linux-x64-${{ steps.releaseNote.outputs.version }}.tar.gz" | shasum -a 256 -c
@@ -300,6 +313,16 @@ jobs:
asset_name: actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}.zip
asset_content_type: application/octet-stream
- name: Upload Release Asset (win-arm64)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/_package/actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}.zip
asset_name: actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}.zip
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-x64)
uses: actions/upload-release-asset@v1.0.1
env:
@@ -361,6 +384,17 @@ jobs:
asset_name: actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}-noexternals.zip
asset_content_type: application/octet-stream
# Upload release assets (trim externals)
- name: Upload Release Asset (win-arm64-noexternals)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/_package_trims/trim_externals/actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}-noexternals.zip
asset_name: actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}-noexternals.zip
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-x64-noexternals)
uses: actions/upload-release-asset@v1.0.1
env:
@@ -422,6 +456,17 @@ jobs:
asset_name: actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}-noruntime.zip
asset_content_type: application/octet-stream
# Upload release assets (trim runtime)
- name: Upload Release Asset (win-arm64-noruntime)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/_package_trims/trim_runtime/actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}-noruntime.zip
asset_name: actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}-noruntime.zip
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-x64-noruntime)
uses: actions/upload-release-asset@v1.0.1
env:
@@ -483,6 +528,17 @@ jobs:
asset_name: actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}-noruntime-noexternals.zip
asset_content_type: application/octet-stream
# Upload release assets (trim runtime and externals)
- name: Upload Release Asset (win-arm64-noruntime-noexternals)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/_package_trims/trim_runtime_externals/actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}-noruntime-noexternals.zip
asset_name: actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}-noruntime-noexternals.zip
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-x64-noruntime-noexternals)
uses: actions/upload-release-asset@v1.0.1
env:
@@ -544,6 +600,17 @@ jobs:
asset_name: actions-runner-win-x64-${{ steps.releaseNote.outputs.version }}-trimmedpackages.json
asset_content_type: application/octet-stream
# Upload release assets (trimmedpackages.json)
- name: Upload Release Asset (win-arm64-trimmedpackages.json)
uses: actions/upload-release-asset@v1.0.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.createRelease.outputs.upload_url }}
asset_path: ${{ github.workspace }}/win-arm64-trimmedpackages.json
asset_name: actions-runner-win-arm64-${{ steps.releaseNote.outputs.version }}-trimmedpackages.json
asset_content_type: application/octet-stream
- name: Upload Release Asset (linux-x64-trimmedpackages.json)
uses: actions/upload-release-asset@v1.0.1
env:
@@ -593,3 +660,52 @@ jobs:
asset_path: ${{ github.workspace }}/linux-arm64-trimmedpackages.json
asset_name: actions-runner-linux-arm64-${{ steps.releaseNote.outputs.version }}-trimmedpackages.json
asset_content_type: application/octet-stream
publish-image:
needs: release
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository_owner }}/actions-runner
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Compute image version
id: image
uses: actions/github-script@v6
with:
script: |
const fs = require('fs');
const runnerVersion = fs.readFileSync('${{ github.workspace }}/releaseVersion', 'utf8').replace(/\n$/g, '')
console.log(`Using runner version ${runnerVersion}`)
core.setOutput('version', runnerVersion);
- name: Setup Docker buildx
uses: docker/setup-buildx-action@v2
- name: Log into registry ${{ env.REGISTRY }}
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image
id: build-and-push
uses: docker/build-push-action@v3
with:
context: ./images
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.image.outputs.version }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
build-args: |
RUNNER_VERSION=${{ steps.image.outputs.version }}
push: true
labels: |
org.opencontainers.image.source=${{github.server_url}}/${{github.repository}}
org.opencontainers.image.description=https://github.com/actions/runner/releases/tag/v${{ steps.image.outputs.version }}
org.opencontainers.image.licenses=MIT
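The compute-SHA hunks above also migrate the workflow off the deprecated `::set-output` command and onto the `$GITHUB_OUTPUT` file. A minimal sketch of the new pattern, with an illustrative output name rather than the exact step from this workflow:

``` bash
# Inside a `run:` step with `id: sha`: append key=value pairs to the file that
# $GITHUB_OUTPUT points at; later steps read them as steps.sha.outputs.<key>.
file=$(ls)
sha=$(sha256sum "$file" | awk '{ print $1 }')
echo "package-sha256=$sha" >> "$GITHUB_OUTPUT"
```

Unlike `::set-output`, which was parsed out of the step's stdout, the file-based form is unaffected by anything else the step prints.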

View File

@@ -22,4 +22,4 @@ Runner releases:
## Contribute
-We accept contributions in the form of issues and pull requests. [Read more here](docs/contribute.md) before contributing.
+We accept contributions in the form of issues and pull requests. The runner typically requires changes across the entire system and we aim for issues in the runner to be entirely self contained and fixable here. Therefore, we will primarily handle bug issues opened in this repo and we kindly request you to create all feature and enhancement requests on the [GitHub Feedback](https://github.com/community/community/discussions/categories/actions-and-packages) page. [Read more about our guidelines here](docs/contribute.md) before contributing.

View File

@@ -16,7 +16,7 @@ We should give them that option, and publish examples how how they can create th
- For example, the current runner overrides `HOME`, we can do that in the hook, but we shouldn't pass that hook as an ENV with the other env's the user has set, as that is not user input, it is how the runner invokes containers
## Interface
-- You will set the variable `ACTIONS_RUNNER_CONTAINER_HOOK=/Users/foo/runner/hooks.js` which is the entrypoint to your hook handler.
+- You will set the variable `ACTIONS_RUNNER_CONTAINER_HOOKS=/Users/foo/runner/hooks.js` which is the entrypoint to your hook handler.
- There is no partial opt in, you must handle every hook
- We will pass a command and some args via `stdin`
- An exit code of 0 is a success, every other exit code is a failure
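A rough sketch of that contract, for illustration only (the hooks shipped with the runner are node scripts; the payload field names and command names below are assumptions):

``` bash
#!/usr/bin/env bash
# Hypothetical hook handler: read one JSON request from stdin, dispatch on the
# command it names, and report success or failure through the exit code.
set -euo pipefail
payload=$(cat)                                   # the runner writes the request to stdin
cmd=$(printf '%s' "$payload" | jq -r '.command')
case "$cmd" in
  prepare_job|run_script_step|cleanup_job)
    echo "handling $cmd" >&2 ;;                  # a real handler would create/run/remove containers here
  *)
    echo "unhandled hook command: $cmd" >&2
    exit 1 ;;                                    # any non-zero exit code marks the hook as failed
esac
exit 0
```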

View File

@@ -0,0 +1,65 @@
# ADR 2494: Runner Image Tags
**Date**: 2023-03-17
**Status**: Accepted<!-- |Accepted|Rejected|Superseded|Deprecated -->
## Context
Following the [adoption of actions-runner-controller by GitHub](https://github.com/actions/actions-runner-controller/discussions/2072) and the introduction of the new runner scale set autoscaling mode, we needed to provide a basic runner image that could be used off the shelf without much friction.
The [current runner image](https://github.com/actions/runner/pkgs/container/actions-runner) is published to GHCR. Each release of this image is tagged with the runner version and the most recent release is also tagged with `latest`.
While the use of `latest` is common practice, we recommend that users pin a specific version of the runner image for a predictable runtime and improved security posture. However, we still notice that a large number of end users are relying on the `latest` tag & raising issues when they encounter problems.
On top of that, the community actions-runner-controller maintainers have issued a [deprecation notice](https://github.com/actions/actions-runner-controller/issues/2056) for the `latest` tag of the existing runner images (https://github.com/orgs/actions-runner-controller/packages).
## Decision
Proceed with Option 2, keeping the `latest` tag and adding the `NOTES.txt` file to our helm charts with the notice.
### Option 1: Remove the `latest` tag
By removing the `latest` tag, we have to proceed with either of these options:
1. Remove the runner image reference in the `values.yaml` provided with the `gha-runner-scale-set` helm chart and mark these fields as required so that users have to explicitly specify a runner image and a specific tag. This will obviously introduce more friction for users who want to start using actions-runner-controller for the first time.
```yaml
spec:
containers:
- name: runner
image: ""
tag: ""
command: ["/home/runner/run.sh"]
```
1. Pin a specific runner image tag in the `values.yaml` provided with the `gha-runner-scale-set` helm chart. This will reduce friction for users who want to start using actions-runner-controller for the first time but will require us to update the `values.yaml` with every new runner release.
```yaml
spec:
containers:
- name: runner
image: "ghcr.io/actions/actions-runner"
tag: "v2.300.0"
command: ["/home/runner/run.sh"]
```
### Option 2: Keep the `latest` tag
Keeping the `latest` tag is also a reasonable option especially if we don't expect to make any breaking changes to the runner image. We could enhance this by adding a [NOTES.txt](https://helm.sh/docs/chart_template_guide/notes_files/) to the helm chart which will be displayed to the user after a successful helm install/upgrade. This will help users understand the implications of using the `latest` tag and how to pin a specific version of the runner image.
The runner image release workflow will need to be updated so that the image is pushed to GHCR and tagged only when the runner rollout has reached all scale units.
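One hedged sketch of what that gating could look like, expressed with plain Docker CLI commands rather than the actual workflow definition (registry path, version value, and the rollout check are all placeholders):

``` bash
#!/usr/bin/env bash
set -euo pipefail
VERSION="2.304.0"                               # placeholder; the workflow reads this from releaseVersion
IMAGE="ghcr.io/actions/actions-runner"

# Always publish the immutable, version-pinned tag.
docker build ./images -t "$IMAGE:$VERSION" --build-arg RUNNER_VERSION="$VERSION"
docker push "$IMAGE:$VERSION"

# Only move the floating `latest` tag once the runner rollout has reached all scale units.
if [ "${ROLLOUT_COMPLETE:-false}" = "true" ]; then
  docker buildx imagetools create -t "$IMAGE:latest" "$IMAGE:$VERSION"
fi
```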
## Consequences
Proceeding with **option 1** means:
1. We will enhance the runtime predictability and security posture of our end users
1. We will have to update the `values.yaml` with every new runner release (that can be automated)
1. We will introduce friction for users who want to start using actions-runner-controller for the first time
Proceeding with **option 2** means:
1. We will have to continue to maintain the `latest` tag
1. We will assume that end users will be able to handle the implications of using the `latest` tag
1. Runner image release workflow needs to be updated

View File

@@ -64,4 +64,4 @@ Make sure the runner has access to actions service for GitHub.com or GitHub Ente
## Still not working?
-Contact [GitHub Support](https://support.github.com] if you have further questuons, or log an issue at https://github.com/actions/runner if you think it's a runner issue.
+Contact [GitHub Support](https://support.github.com) if you have further questions, or log an issue at https://github.com/actions/runner if you think it's a runner issue.

View File

@@ -1,6 +1,6 @@
# Contributions
-We welcome contributions in the form of issues and pull requests. We view the contributions and the process as the same for github and external contributors.
+We welcome contributions in the form of issues and pull requests. We view the contributions and the process as the same for github and external contributors. Please note the runner typically requires changes across the entire system and we aim for issues in the runner to be entirely self contained and fixable here. Therefore, we will primarily handle bug issues opened in this repo and we kindly request you to create all feature and enhancement requests on the [GitHub Feedback](https://github.com/community/community/discussions/categories/actions-and-packages) page.
> IMPORTANT: Building your own runner is critical for the dev inner loop process when contributing changes. However, only runners built and distributed by GitHub (releases) are supported in production. Be aware that workflows and orchestrations run service side with the runner being a remote process to run steps. For that reason, the service can pull the runner forward so customizations can be lost.
@@ -27,6 +27,8 @@ An ADR is an Architectural Decision Record. This allows consensus on the direct
![Win](res/win_sm.png) Visual Studio 2017 or newer [Install here](https://visualstudio.microsoft.com) (needed for dev sh script)
![Win-arm](res/win_sm.png) Visual Studio 2022 17.3 Preview or later. [Install here](https://docs.microsoft.com/en-us/visualstudio/releases/2022/release-notes-preview)
## Quickstart: Run a job from a real repository
If you just want to get from building the sourcecode to using it to execute an action, you will need:
@@ -155,4 +157,12 @@ cat (Runner/Worker)_TIMESTAMP.log # view your log file
## Styling
We use the .NET Foundation and CoreCLR style guidelines [located here](
-https://github.com/dotnet/corefx/blob/master/Documentation/coding-guidelines/coding-style.md)
+https://github.com/dotnet/runtime/blob/main/docs/coding-guidelines/coding-style.md)
### Format C# Code
To format both staged and unstaged .cs files
```
cd ./src
./dev.(cmd|sh) format
```

View File

@@ -35,7 +35,7 @@ All the configs below can be found in `.vscode/launch.json`.
If you launch `Run` or `Run [build]`, it starts a process called `Runner.Listener`.
This process will receive any job queued on this repository if the job runs on matching labels (e.g `runs-on: self-hosted`).
Once a job is received, a `Runner.Listener` starts a new process of `Runner.Worker`.
-Since this is a diferent process, you can't use the same debugger session debug it.
+Since this is a different process, you can't use the same debugger session to debug it.
Instead, a parallel debugging session has to be started, using a different launch config.
Luckily, VS Code supports multiple parallel debugging sessions.

images/Dockerfile (new file, 49 lines)
View File

@@ -0,0 +1,49 @@
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0 as build
ARG RUNNER_VERSION
ARG RUNNER_ARCH="x64"
ARG RUNNER_CONTAINER_HOOKS_VERSION=0.3.2
ARG DOCKER_VERSION=20.10.23
RUN apt update -y && apt install curl unzip -y
WORKDIR /actions-runner
RUN curl -f -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${RUNNER_ARCH}-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./runner.tar.gz \
&& rm runner.tar.gz
RUN curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER_CONTAINER_HOOKS_VERSION}/actions-runner-hooks-k8s-${RUNNER_CONTAINER_HOOKS_VERSION}.zip \
&& unzip ./runner-container-hooks.zip -d ./k8s \
&& rm runner-container-hooks.zip
RUN export DOCKER_ARCH=x86_64 \
&& if [ "$RUNNER_ARCH" = "arm64" ]; then export DOCKER_ARCH=aarch64 ; fi \
&& curl -fLo docker.tgz https://download.docker.com/linux/static/stable/${DOCKER_ARCH}/docker-${DOCKER_VERSION}.tgz \
&& tar zxvf docker.tgz \
&& rm -rf docker.tgz
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0
ENV DEBIAN_FRONTEND=noninteractive
ENV RUNNER_MANUALLY_TRAP_SIG=1
ENV ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT=1
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends \
sudo \
&& rm -rf /var/lib/apt/lists/*
RUN adduser --disabled-password --gecos "" --uid 1001 runner \
&& groupadd docker --gid 123 \
&& usermod -aG sudo runner \
&& usermod -aG docker runner \
&& echo "%sudo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers \
&& echo "Defaults env_keep += \"DEBIAN_FRONTEND\"" >> /etc/sudoers
WORKDIR /home/runner
COPY --chown=runner:docker --from=build /actions-runner .
RUN install -o root -g root -m 755 docker/* /usr/bin/ && rm -rf docker
USER runner
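For local experimentation, the image above can be built and started roughly like this (the version is just an example; the release workflow passes `RUNNER_VERSION` as a build arg, and `RUNNER_ARCH` defaults to `x64`):

``` bash
# Build from the repository root and open a shell in a throwaway container.
docker build ./images -t actions-runner:local \
  --build-arg RUNNER_VERSION=2.304.0 \
  --build-arg RUNNER_ARCH=x64          # use arm64 on an ARM host
docker run --rm -it actions-runner:local bash
```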

View File

@@ -1,9 +1,22 @@
## Features
- Runner changes for communication with Results service (#2510, #2531, #2535, #2516)
- Add `*.ghe.localhost` domains to hosted server check (#2536)
- Add `OrchestrationId` to user-agent for better telemetry correlation. (#2568)
## Bugs
-- Avoid key based command injection via Docker command arguments (#2062)
+- Fix JIT configurations on Windows (#2497)
- Guard against NullReference while creating HostContext (#2343)
- Handles broken symlink in `Which` (#2150, #2196)
- Adding curl retry for external tool downloads (#2552, #2557)
- Limit the time we wait for waiting websocket to connect. (#2554)
## Misc
-- Added step context name and start/finish time in step telemetry (#2069)
+- Bump container hooks version to 0.3.1 in runner image (#2496)
-- Improved error logs when there is a missing 'using' token configuration in the metadata file (#2052)
+- Runner changes to communicate with vNext services (#2487, #2500, #2505, #2541, #2547)
-- Added full job name and nested workflow details in log (#2049)
_Note: Actions Runner follows a progressive release policy, so the latest release might not be available to your enterprise, organization, or repository yet.
To confirm which version of the Actions Runner you should expect, please view the download instructions for your enterprise, organization, or repository.
See https://docs.github.com/en/enterprise-cloud@latest/actions/hosting-your-own-runners/adding-self-hosted-runners_
## Windows x64
We recommend configuring the runner in a root folder of the Windows drive (e.g. "C:\actions-runner"). This will help avoid issues related to service identity folder permissions and long file path restrictions on Windows.
@@ -19,6 +32,22 @@ Add-Type -AssemblyName System.IO.Compression.FileSystem ;
[System.IO.Compression.ZipFile]::ExtractToDirectory("$PWD\actions-runner-win-x64-<RUNNER_VERSION>.zip", "$PWD")
```
## [Pre-release] Windows arm64
**Warning:** Windows arm64 runners are currently in preview status and use [unofficial versions of nodejs](https://unofficial-builds.nodejs.org/). They are not intended for production workflows.
We recommend configuring the runner in a root folder of the Windows drive (e.g. "C:\actions-runner"). This will help avoid issues related to service identity folder permissions and long file path restrictions on Windows.
The following snippet needs to be run on `powershell`:
``` powershell
# Create a folder under the drive root
mkdir \actions-runner ; cd \actions-runner
# Download the latest runner package
Invoke-WebRequest -Uri https://github.com/actions/runner/releases/download/v<RUNNER_VERSION>/actions-runner-win-arm64-<RUNNER_VERSION>.zip -OutFile actions-runner-win-arm64-<RUNNER_VERSION>.zip
# Extract the installer
Add-Type -AssemblyName System.IO.Compression.FileSystem ;
[System.IO.Compression.ZipFile]::ExtractToDirectory("$PWD\actions-runner-win-arm64-<RUNNER_VERSION>.zip", "$PWD")
```
## OSX x64
``` bash
@@ -82,6 +111,7 @@ For additional details about configuring, running, or shutting down the runner p
The SHA-256 checksums for the packages included in this build are shown below:
- actions-runner-win-x64-<RUNNER_VERSION>.zip <!-- BEGIN SHA win-x64 --><WIN_X64_SHA><!-- END SHA win-x64 -->
- actions-runner-win-arm64-<RUNNER_VERSION>.zip <!-- BEGIN SHA win-arm64 --><WIN_ARM64_SHA><!-- END SHA win-arm64 -->
- actions-runner-osx-x64-<RUNNER_VERSION>.tar.gz <!-- BEGIN SHA osx-x64 --><OSX_X64_SHA><!-- END SHA osx-x64 -->
- actions-runner-osx-arm64-<RUNNER_VERSION>.tar.gz <!-- BEGIN SHA osx-arm64 --><OSX_ARM64_SHA><!-- END SHA osx-arm64 -->
- actions-runner-linux-x64-<RUNNER_VERSION>.tar.gz <!-- BEGIN SHA linux-x64 --><LINUX_X64_SHA><!-- END SHA linux-x64 -->
@@ -89,6 +119,7 @@ The SHA-256 checksums for the packages included in this build are shown below:
- actions-runner-linux-arm-<RUNNER_VERSION>.tar.gz <!-- BEGIN SHA linux-arm --><LINUX_ARM_SHA><!-- END SHA linux-arm -->
- actions-runner-win-x64-<RUNNER_VERSION>-noexternals.zip <!-- BEGIN SHA win-x64_noexternals --><WIN_X64_SHA_NOEXTERNALS><!-- END SHA win-x64_noexternals -->
- actions-runner-win-arm64-<RUNNER_VERSION>-noexternals.zip <!-- BEGIN SHA win-arm64_noexternals --><WIN_ARM64_SHA_NOEXTERNALS><!-- END SHA win-arm64_noexternals -->
- actions-runner-osx-x64-<RUNNER_VERSION>-noexternals.tar.gz <!-- BEGIN SHA osx-x64_noexternals --><OSX_X64_SHA_NOEXTERNALS><!-- END SHA osx-x64_noexternals -->
- actions-runner-osx-arm64-<RUNNER_VERSION>-noexternals.tar.gz <!-- BEGIN SHA osx-arm64_noexternals --><OSX_ARM64_SHA_NOEXTERNALS><!-- END SHA osx-arm64_noexternals -->
- actions-runner-linux-x64-<RUNNER_VERSION>-noexternals.tar.gz <!-- BEGIN SHA linux-x64_noexternals --><LINUX_X64_SHA_NOEXTERNALS><!-- END SHA linux-x64_noexternals -->
@@ -96,6 +127,7 @@ The SHA-256 checksums for the packages included in this build are shown below:
- actions-runner-linux-arm-<RUNNER_VERSION>-noexternals.tar.gz <!-- BEGIN SHA linux-arm_noexternals --><LINUX_ARM_SHA_NOEXTERNALS><!-- END SHA linux-arm_noexternals -->
- actions-runner-win-x64-<RUNNER_VERSION>-noruntime.zip <!-- BEGIN SHA win-x64_noruntime --><WIN_X64_SHA_NORUNTIME><!-- END SHA win-x64_noruntime -->
- actions-runner-win-arm64-<RUNNER_VERSION>-noruntime.zip <!-- BEGIN SHA win-arm64_noruntime --><WIN_ARM64_SHA_NORUNTIME><!-- END SHA win-arm64_noruntime -->
- actions-runner-osx-x64-<RUNNER_VERSION>-noruntime.tar.gz <!-- BEGIN SHA osx-x64_noruntime --><OSX_X64_SHA_NORUNTIME><!-- END SHA osx-x64_noruntime -->
- actions-runner-osx-arm64-<RUNNER_VERSION>-noruntime.tar.gz <!-- BEGIN SHA osx-arm64_noruntime --><OSX_ARM64_SHA_NORUNTIME><!-- END SHA osx-arm64_noruntime -->
- actions-runner-linux-x64-<RUNNER_VERSION>-noruntime.tar.gz <!-- BEGIN SHA linux-x64_noruntime --><LINUX_X64_SHA_NORUNTIME><!-- END SHA linux-x64_noruntime -->
@@ -103,6 +135,7 @@ The SHA-256 checksums for the packages included in this build are shown below:
- actions-runner-linux-arm-<RUNNER_VERSION>-noruntime.tar.gz <!-- BEGIN SHA linux-arm_noruntime --><LINUX_ARM_SHA_NORUNTIME><!-- END SHA linux-arm_noruntime -->
- actions-runner-win-x64-<RUNNER_VERSION>-noruntime-noexternals.zip <!-- BEGIN SHA win-x64_noruntime_noexternals --><WIN_X64_SHA_NORUNTIME_NOEXTERNALS><!-- END SHA win-x64_noruntime_noexternals -->
- actions-runner-win-arm64-<RUNNER_VERSION>-noruntime-noexternals.zip <!-- BEGIN SHA win-arm64_noruntime_noexternals --><WIN_ARM64_SHA_NORUNTIME_NOEXTERNALS><!-- END SHA win-arm64_noruntime_noexternals -->
- actions-runner-osx-x64-<RUNNER_VERSION>-noruntime-noexternals.tar.gz <!-- BEGIN SHA osx-x64_noruntime_noexternals --><OSX_X64_SHA_NORUNTIME_NOEXTERNALS><!-- END SHA osx-x64_noruntime_noexternals -->
- actions-runner-osx-arm64-<RUNNER_VERSION>-noruntime-noexternals.tar.gz <!-- BEGIN SHA osx-arm64_noruntime_noexternals --><OSX_ARM64_SHA_NORUNTIME_NOEXTERNALS><!-- END SHA osx-arm64_noruntime_noexternals -->
- actions-runner-linux-x64-<RUNNER_VERSION>-noruntime-noexternals.tar.gz <!-- BEGIN SHA linux-x64_noruntime_noexternals --><LINUX_X64_SHA_NORUNTIME_NOEXTERNALS><!-- END SHA linux-x64_noruntime_noexternals -->

View File

@@ -13,7 +13,7 @@ set -e
flags_found=false
-while getopts 's:g:n:r:u:l:' opt; do
+while getopts 's:g:n:r:u:l:df' opt; do
flags_found=true
case $opt in
@@ -35,6 +35,12 @@ while getopts 's:g:n:r:u:l:' opt; do
l)
labels=$OPTARG
;;
f)
replace='true'
;;
d)
disableupdate='true'
;;
*)
echo "
Runner Service Installer
@@ -49,7 +55,9 @@ Usage:
-n optional name of the runner, defaults to hostname
-r optional name of the runner group to add the runner to, defaults to the Default group
-u optional user svc will run as, defaults to current
-l optional list of labels (split by comma) applied on the runner
-d optional allow runner to remain on the current version for one month after the release of a newer version
-f optional replace any existing runner with the same name"
exit 0
;;
esac
@@ -169,8 +177,8 @@ fi
echo
echo "Configuring ${runner_name} @ $runner_url"
-echo "./config.sh --unattended --url $runner_url --token *** --name $runner_name ${labels:+--labels $labels} ${runner_group:+--runnergroup \"$runner_group\"}"
+echo "./config.sh --unattended --url $runner_url --token *** --name $runner_name ${labels:+--labels $labels} ${runner_group:+--runnergroup \"$runner_group\"} ${disableupdate:+--disableupdate}"
-sudo -E -u ${svc_user} ./config.sh --unattended --url $runner_url --token $RUNNER_TOKEN --name $runner_name ${labels:+--labels $labels} ${runner_group:+--runnergroup "$runner_group"}
+sudo -E -u ${svc_user} ./config.sh --unattended --url $runner_url --token $RUNNER_TOKEN ${replace:+--replace} --name $runner_name ${labels:+--labels $labels} ${runner_group:+--runnergroup "$runner_group"} ${disableupdate:+--disableupdate}
#---------------------------------------
# Configuring as a service
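A hedged usage sketch for the two new flags (the installer script name is a placeholder and the required scope/URL arguments are omitted; the flag meanings come from the usage text above):

``` bash
export RUNNER_CFG_PAT=ghp_xxxxxxxxxxxxxxxx        # placeholder PAT
# -f replaces any existing runner with the same name; -d keeps the runner on its
# current version for one month after a newer version is released.
./svc-installer.sh -n build-box-01 -l self-hosted,linux -d -f
```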

View File

@@ -1,4 +1,4 @@
-#/bin/bash
+#!/bin/bash
set -e
@@ -12,7 +12,7 @@ set -e
#
# Usage:
# export RUNNER_CFG_PAT=<yourPAT>
-# ./delete.sh scope name
+# ./delete.sh <scope> [<name>]
#
# scope required repo (:owner/:repo) or org (:organization)
# name optional defaults to hostname. name to delete
@@ -26,17 +26,17 @@ set -e
runner_scope=${1}
runner_name=${2}
-echo "Deleting runner ${runner_name} @ ${runner_scope}"
function fatal()
{
echo "error: $1" >&2
exit 1
}
if [ -z "${runner_scope}" ]; then fatal "supply scope as argument 1"; fi
-if [ -z "${runner_name}" ]; then fatal "supply name as argument 2"; fi
if [ -z "${RUNNER_CFG_PAT}" ]; then fatal "RUNNER_CFG_PAT must be set before calling"; fi
+if [ -z "${runner_name}" ]; then runner_name=`hostname`; fi
+echo "Deleting runner ${runner_name} @ ${runner_scope}"
which curl || fatal "curl required. Please install in PATH with apt-get, brew, etc"
which jq || fatal "jq required. Please install in PATH with apt-get, brew, etc"
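With the new default, the name argument can be omitted and the script falls back to the local hostname; for example (scope and runner names below are placeholders):

``` bash
export RUNNER_CFG_PAT=ghp_xxxxxxxxxxxxxxxx   # placeholder PAT with access to the scope
./delete.sh myorg/myrepo                     # deletes the runner registered under this machine's hostname
./delete.sh myorg runner-name-01             # or name the runner explicitly
```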

View File

@@ -24,6 +24,9 @@
<PropertyGroup Condition="'$(BUILD_OS)' == 'Windows' AND '$(PackageRuntime)' == 'win-x86'">
<DefineConstants>$(DefineConstants);X86</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition="'$(BUILD_OS)' == 'Windows' AND '$(PackageRuntime)' == 'win-arm64'">
<DefineConstants>$(DefineConstants);ARM64</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition="'$(BUILD_OS)' == 'OSX' AND '$(PackageRuntime)' == 'osx-x64'">
<DefineConstants>$(DefineConstants);X64</DefineConstants>

View File

@@ -1 +1 @@
1d709d93e5d3c6c6c656a61aa6c1781050224788a05b0e6ecc4c3c0408bdf89c 39f2a931565d6a10e695ac8ed14bb9dcbb568151410349b32dbf9c27bae29602

View File

@@ -1 +1 @@
b92a47cfeaad02255b1f7a377060651b73ae5e5db22a188dbbcb4183ab03a03d 29ffb303537d8ba674fbebc7729292c21c4ebd17b3198f91ed593ef4cbbb67b5

View File

@@ -1 +1 @@
68a9a8ef0843a8bb74241894f6f63fd76241a82295c5337d3cc7a940a314c78e de6868a836fa3cb9e5ddddbc079da1c25e819aa2d2fc193cc9931c353687c57c

View File

@@ -1 +1 @@
02c7126ff4d63ee2a0ae390c81434c125630522aadf35903bbeebb1a99d8af99 339d3e1a5fd28450c0fe6cb820cc7aae291f0f9e2d153ac34e1f7b080e35d30e

View File

@@ -1 +1 @@
c9d5a542f8d765168855a89e83ae0a8970d00869041c4f9a766651c04c72b212 dcb7f606c1d7d290381e5020ee73e7f16dcbd2f20ac9b431362ccbb5120d449c

View File

@@ -0,0 +1 @@
1bbcb0e9a2cf4be4b1fce77458de139b70ac58efcbb415a6db028b9373ae1673

View File

@@ -1 +1 @@
d94f2fbaf210297162bc9f3add819d73682c3aa6899e321c3872412b924d5504 44cd25f3c104d0abb44d262397a80e0b2c4f206465c5d899a22eec043dac0fb3

View File

@@ -1 +1 @@
6ed30a2c1ee403a610d63e82bb230b9ba846a9c25cec9e4ea8672fb6ed4e1a51 3807dcbf947e840c33535fb466b096d76bf09e5c0254af8fc8cbbb24c6388222

View File

@@ -1 +1 @@
711c30c51ec52c9b7a9a2eb399d6ab2ab5ee1dc72de11879f2f36f919f163d78 ee01eee80cd8a460a4b9780ee13fdd20f25c59e754b4ccd99df55fbba2a85634

View File

@@ -1 +1 @@
a49479ca4b4988a06c097e8d22c51fd08a11c13f40807366236213d0e008cf6a a9fb9c14e24e79aec97d4da197dd7bfc6364297d6fce573afb2df48cc9a931f8

View File

@@ -1 +1 @@
cc4708962a80325de0baa5ae8484e0cb9ae976ac6a4178c1c0d448b8c52bd7f7 a4e0e8fc62eba0967a39c7d693dcd0aeb8b2bed0765f9c38df80d42884f65341

View File

@@ -1 +1 @@
8e97df75230b843462a9b4c578ccec604ee4b4a1066120c85b04374317fa372b 17ac17fbe785b3d6fa2868d8d17185ebfe0c90b4b0ddf6b67eac70e42bcd989b

View File

@@ -0,0 +1 @@
89f24657a550f1e818b0e9975e5b80edcf4dd22b7d4bccbb9e48e37f45d30fb1

View File

@@ -1 +1 @@
f75a671e5a188c76680739689aa75331a2c09d483dce9c80023518c48fd67a18 24fd131b5dce33ef16038b771407bc0507da8682a72fb3b7780607235f76db0b

View File

@@ -1 +1,4 @@
-To update hashFiles under `Misc/layoutbin` run `npm install && npm run all`
+To compile this package (output will be stored in `Misc/layoutbin`) run `npm install && npm run all`.
> Note: this package also needs to be recompiled for dependabot PRs updating one of
> its dependencies.

View File

@@ -14,7 +14,7 @@
"devDependencies": { "devDependencies": {
"@types/node": "^12.7.12", "@types/node": "^12.7.12",
"@typescript-eslint/parser": "^5.15.0", "@typescript-eslint/parser": "^5.15.0",
"@zeit/ncc": "^0.20.5", "@vercel/ncc": "^0.36.0",
"eslint": "^8.11.0", "eslint": "^8.11.0",
"eslint-plugin-github": "^4.3.5", "eslint-plugin-github": "^4.3.5",
"prettier": "^1.19.1", "prettier": "^1.19.1",
@@ -22,9 +22,13 @@
} }
}, },
"node_modules/@actions/core": { "node_modules/@actions/core": {
"version": "1.2.6", "version": "1.9.1",
"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.2.6.tgz", "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz",
"integrity": "sha512-ZQYitnqiyBc3D+k7LsgSBmMDVkOVidaagDG7j3fOym77jNunWRuYx7VSHa9GNfFZh+zh61xsCjRj4JxMZlDqTA==" "integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==",
"dependencies": {
"@actions/http-client": "^2.0.1",
"uuid": "^8.3.2"
}
}, },
"node_modules/@actions/glob": { "node_modules/@actions/glob": {
"version": "0.1.0", "version": "0.1.0",
@@ -35,6 +39,14 @@
"minimatch": "^3.0.4" "minimatch": "^3.0.4"
} }
}, },
"node_modules/@actions/http-client": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz",
"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==",
"dependencies": {
"tunnel": "^0.0.6"
}
},
"node_modules/@eslint/eslintrc": { "node_modules/@eslint/eslintrc": {
"version": "1.2.1", "version": "1.2.1",
"resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-1.2.1.tgz", "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-1.2.1.tgz",
@@ -334,11 +346,10 @@
"url": "https://opencollective.com/typescript-eslint" "url": "https://opencollective.com/typescript-eslint"
} }
}, },
"node_modules/@zeit/ncc": { "node_modules/@vercel/ncc": {
"version": "0.20.5", "version": "0.36.0",
"resolved": "https://registry.npmjs.org/@zeit/ncc/-/ncc-0.20.5.tgz", "resolved": "https://registry.npmjs.org/@vercel/ncc/-/ncc-0.36.0.tgz",
"integrity": "sha512-XU6uzwvv95DqxciQx+aOLhbyBx/13ky+RK1y88Age9Du3BlA4mMPCy13BGjayOrrumOzlq1XV3SD/BWiZENXlw==", "integrity": "sha512-/ZTUJ/ZkRt694k7KJNimgmHjtQcRuVwsST2Z6XfYveQIuBbHR+EqkTc1jfgPkQmMyk/vtpxo3nVxe8CNuau86A==",
"deprecated": "@zeit/ncc is no longer maintained. Please use @vercel/ncc instead.",
"dev": true, "dev": true,
"bin": { "bin": {
"ncc": "dist/ncc/cli.js" "ncc": "dist/ncc/cli.js"
@@ -1710,9 +1721,9 @@
"dev": true "dev": true
}, },
"node_modules/json5": { "node_modules/json5": {
"version": "1.0.1", "version": "1.0.2",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
"integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==", "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"minimist": "^1.2.0" "minimist": "^1.2.0"
@@ -1812,9 +1823,9 @@
} }
}, },
"node_modules/minimatch": { "node_modules/minimatch": {
"version": "3.0.4", "version": "3.1.2",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz", "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz",
"integrity": "sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==", "integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==",
"dependencies": { "dependencies": {
"brace-expansion": "^1.1.7" "brace-expansion": "^1.1.7"
}, },
@@ -2381,6 +2392,14 @@
"typescript": ">=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta" "typescript": ">=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta"
} }
}, },
"node_modules/tunnel": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz",
"integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg==",
"engines": {
"node": ">=0.6.11 <=0.7.0 || >=0.7.3"
}
},
"node_modules/type-check": { "node_modules/type-check": {
"version": "0.4.0", "version": "0.4.0",
"resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz",
@@ -2442,6 +2461,14 @@
"punycode": "^2.1.0" "punycode": "^2.1.0"
} }
}, },
"node_modules/uuid": {
"version": "8.3.2",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==",
"bin": {
"uuid": "dist/bin/uuid"
}
},
"node_modules/v8-compile-cache": { "node_modules/v8-compile-cache": {
"version": "2.1.0", "version": "2.1.0",
"resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.1.0.tgz", "resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.1.0.tgz",
@@ -2503,9 +2530,13 @@
}, },
"dependencies": { "dependencies": {
"@actions/core": { "@actions/core": {
"version": "1.2.6", "version": "1.9.1",
"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.2.6.tgz", "resolved": "https://registry.npmjs.org/@actions/core/-/core-1.9.1.tgz",
"integrity": "sha512-ZQYitnqiyBc3D+k7LsgSBmMDVkOVidaagDG7j3fOym77jNunWRuYx7VSHa9GNfFZh+zh61xsCjRj4JxMZlDqTA==" "integrity": "sha512-5ad+U2YGrmmiw6du20AQW5XuWo7UKN2052FjSV7MX+Wfjf8sCqcsZe62NfgHys4QI4/Y+vQvLKYL8jWtA1ZBTA==",
"requires": {
"@actions/http-client": "^2.0.1",
"uuid": "^8.3.2"
}
}, },
"@actions/glob": { "@actions/glob": {
"version": "0.1.0", "version": "0.1.0",
@@ -2516,6 +2547,14 @@
"minimatch": "^3.0.4" "minimatch": "^3.0.4"
} }
}, },
"@actions/http-client": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.0.1.tgz",
"integrity": "sha512-PIXiMVtz6VvyaRsGY268qvj57hXQEpsYogYOu2nrQhlf+XCGmZstmuZBbAybUl1nQGnvS1k1eEsQ69ZoD7xlSw==",
"requires": {
"tunnel": "^0.0.6"
}
},
"@eslint/eslintrc": { "@eslint/eslintrc": {
"version": "1.2.1", "version": "1.2.1",
"resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-1.2.1.tgz", "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-1.2.1.tgz",
@@ -2707,10 +2746,10 @@
"eslint-visitor-keys": "^3.0.0" "eslint-visitor-keys": "^3.0.0"
} }
}, },
"@zeit/ncc": { "@vercel/ncc": {
"version": "0.20.5", "version": "0.36.0",
"resolved": "https://registry.npmjs.org/@zeit/ncc/-/ncc-0.20.5.tgz", "resolved": "https://registry.npmjs.org/@vercel/ncc/-/ncc-0.36.0.tgz",
"integrity": "sha512-XU6uzwvv95DqxciQx+aOLhbyBx/13ky+RK1y88Age9Du3BlA4mMPCy13BGjayOrrumOzlq1XV3SD/BWiZENXlw==", "integrity": "sha512-/ZTUJ/ZkRt694k7KJNimgmHjtQcRuVwsST2Z6XfYveQIuBbHR+EqkTc1jfgPkQmMyk/vtpxo3nVxe8CNuau86A==",
"dev": true "dev": true
}, },
"acorn": { "acorn": {
@@ -3716,9 +3755,9 @@
"dev": true "dev": true
}, },
"json5": { "json5": {
"version": "1.0.1", "version": "1.0.2",
"resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz", "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz",
"integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==", "integrity": "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==",
"dev": true, "dev": true,
"requires": { "requires": {
"minimist": "^1.2.0" "minimist": "^1.2.0"
@@ -3800,9 +3839,9 @@
} }
}, },
"minimatch": { "minimatch": {
"version": "3.0.4", "version": "3.1.2",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz", "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz",
"integrity": "sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==", "integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==",
"requires": { "requires": {
"brace-expansion": "^1.1.7" "brace-expansion": "^1.1.7"
} }
@@ -4189,6 +4228,11 @@
"tslib": "^1.8.1" "tslib": "^1.8.1"
} }
}, },
"tunnel": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz",
"integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg=="
},
"type-check": { "type-check": {
"version": "0.4.0", "version": "0.4.0",
"resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz",
@@ -4231,6 +4275,11 @@
"punycode": "^2.1.0" "punycode": "^2.1.0"
} }
}, },
"uuid": {
"version": "8.3.2",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg=="
},
"v8-compile-cache": { "v8-compile-cache": {
"version": "2.1.0", "version": "2.1.0",
"resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.1.0.tgz", "resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.1.0.tgz",

View File

@@ -26,7 +26,7 @@
"devDependencies": { "devDependencies": {
"@types/node": "^12.7.12", "@types/node": "^12.7.12",
"@typescript-eslint/parser": "^5.15.0", "@typescript-eslint/parser": "^5.15.0",
"@zeit/ncc": "^0.20.5", "@vercel/ncc": "^0.36.0",
"eslint": "^8.11.0", "eslint": "^8.11.0",
"eslint-plugin-github": "^4.3.5", "eslint-plugin-github": "^4.3.5",
"prettier": "^1.19.1", "prettier": "^1.19.1",

View File

@@ -3,8 +3,9 @@ PACKAGERUNTIME=$1
PRECACHE=$2
NODE_URL=https://nodejs.org/dist
UNOFFICIAL_NODE_URL=https://unofficial-builds.nodejs.org/download/release
NODE12_VERSION="12.22.7"
-NODE16_VERSION="16.13.0"
+NODE16_VERSION="16.16.0"
get_abs_path() {
# exploits the fact that pwd will print abs path when no args
@@ -54,12 +55,23 @@ function acquireExternalTool() {
# Download from source to the partial file. # Download from source to the partial file.
echo "Downloading $download_source" echo "Downloading $download_source"
mkdir -p "$(dirname "$download_target")" || checkRC 'mkdir' mkdir -p "$(dirname "$download_target")" || checkRC 'mkdir'
CURL_VERSION=$(curl --version | awk 'NR==1{print $2}')
echo "Curl version: $CURL_VERSION"
# curl -f Fail silently (no output at all) on HTTP errors (H) # curl -f Fail silently (no output at all) on HTTP errors (H)
# -k Allow connections to SSL sites without certs (H) # -k Allow connections to SSL sites without certs (H)
# -S Show error. With -s, make curl show errors when they occur # -S Show error. With -s, make curl show errors when they occur
# -L Follow redirects (H) # -L Follow redirects (H)
# -o FILE Write to FILE instead of stdout # -o FILE Write to FILE instead of stdout
curl -fkSL -o "$partial_target" "$download_source" 2>"${download_target}_download.log" || checkRC 'curl' # --retry 3 Retries transient errors 3 times (timeouts, 5xx)
if [[ "$(printf '%s\n' "7.71.0" "$CURL_VERSION" | sort -V | head -n1)" != "7.71.0" ]]; then
# Curl version is less than or equal to 7.71.0, skipping retry-all-errors flag
curl -fkSL --retry 3 -o "$partial_target" "$download_source" 2>"${download_target}_download.log" || checkRC 'curl'
else
# Curl version is greater than 7.71.0, running curl with --retry-all-errors flag
curl -fkSL --retry 3 --retry-all-errors -o "$partial_target" "$download_source" 2>"${download_target}_download.log" || checkRC 'curl'
fi
# Move the partial file to the download target. # Move the partial file to the download target.
mv "$partial_target" "$download_target" || checkRC 'mv' mv "$partial_target" "$download_target" || checkRC 'mv'
@@ -134,6 +146,16 @@ if [[ "$PACKAGERUNTIME" == "win-x64" || "$PACKAGERUNTIME" == "win-x86" ]]; then
fi fi
fi fi
# Download the external tools only for Windows.
if [[ "$PACKAGERUNTIME" == "win-arm64" ]]; then
# todo: replace these with official release when available
acquireExternalTool "$UNOFFICIAL_NODE_URL/v${NODE16_VERSION}/$PACKAGERUNTIME/node.exe" node16/bin
acquireExternalTool "$UNOFFICIAL_NODE_URL/v${NODE16_VERSION}/$PACKAGERUNTIME/node.lib" node16/bin
if [[ "$PRECACHE" != "" ]]; then
acquireExternalTool "https://github.com/microsoft/vswhere/releases/download/2.6.7/vswhere.exe" vswhere
fi
fi
# Download the external tools only for OSX. # Download the external tools only for OSX.
if [[ "$PACKAGERUNTIME" == "osx-x64" ]]; then if [[ "$PACKAGERUNTIME" == "osx-x64" ]]; then
acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-darwin-x64.tar.gz" node12 fix_nested_dir acquireExternalTool "$NODE_URL/v${NODE12_VERSION}/node-v${NODE12_VERSION}-darwin-x64.tar.gz" node12 fix_nested_dir

View File

@@ -24,7 +24,8 @@ if (exitServiceAfterNFailures <= 0) {
exitServiceAfterNFailures = NaN; exitServiceAfterNFailures = NaN;
} }
var consecutiveFailureCount = 0; var unknownFailureRetryCount = 0;
var retriableFailureRetryCount = 0;
var gracefulShutdown = function () { var gracefulShutdown = function () {
console.log("Shutting down runner listener"); console.log("Shutting down runner listener");
@@ -62,7 +63,8 @@ var runService = function () {
listener.stdout.on("data", (data) => { listener.stdout.on("data", (data) => {
if (data.toString("utf8").includes("Listening for Jobs")) { if (data.toString("utf8").includes("Listening for Jobs")) {
consecutiveFailureCount = 0; unknownFailureRetryCount = 0;
retriableFailureRetryCount = 0;
} }
process.stdout.write(data.toString("utf8")); process.stdout.write(data.toString("utf8"));
}); });
@@ -92,24 +94,38 @@ var runService = function () {
console.log( console.log(
"Runner listener exit with retryable error, re-launch runner in 5 seconds." "Runner listener exit with retryable error, re-launch runner in 5 seconds."
); );
consecutiveFailureCount = 0; unknownFailureRetryCount = 0;
retriableFailureRetryCount++;
if (retriableFailureRetryCount >= 10) {
console.error(
"Stopping the runner after 10 consecutive re-tryable failures"
);
stopping = true;
}
} else if (code === 3 || code === 4) { } else if (code === 3 || code === 4) {
console.log( console.log(
"Runner listener exit because of updating, re-launch runner in 5 seconds." "Runner listener exit because of updating, re-launch runner in 5 seconds."
); );
consecutiveFailureCount = 0; unknownFailureRetryCount = 0;
retriableFailureRetryCount++;
if (retriableFailureRetryCount >= 10) {
console.error(
"Stopping the runner after 10 consecutive re-tryable failures"
);
stopping = true;
}
} else { } else {
var messagePrefix = "Runner listener exit with undefined return code"; var messagePrefix = "Runner listener exit with undefined return code";
consecutiveFailureCount++; unknownFailureRetryCount++;
retriableFailureRetryCount = 0;
if ( if (
!isNaN(exitServiceAfterNFailures) && !isNaN(exitServiceAfterNFailures) &&
consecutiveFailureCount >= exitServiceAfterNFailures unknownFailureRetryCount >= exitServiceAfterNFailures
) { ) {
console.error( console.error(
`${messagePrefix}, exiting service after ${consecutiveFailureCount} consecutive failures` `${messagePrefix}, exiting service after ${unknownFailureRetryCount} consecutive failures`
); );
gracefulShutdown(); stopping = true
return;
} else { } else {
console.log(`${messagePrefix}, re-launch runner in 5 seconds.`); console.log(`${messagePrefix}, re-launch runner in 5 seconds.`);
} }

File diff suppressed because it is too large

View File

@@ -18,6 +18,20 @@ while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symli
done done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )" DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
# Wait for docker to start
if [ ! -z "$RUNNER_WAIT_FOR_DOCKER_IN_SECONDS" ]; then
if [ "$RUNNER_WAIT_FOR_DOCKER_IN_SECONDS" -gt 0 ]; then
echo "Waiting for docker to be ready."
for i in $(seq "$RUNNER_WAIT_FOR_DOCKER_IN_SECONDS"); do
if docker ps > /dev/null 2>&1; then
echo "Docker is ready."
break
fi
"$DIR"/safe_sleep.sh 1
done
fi
fi
updateFile="update.finished" updateFile="update.finished"
"$DIR"/bin/Runner.Listener run $* "$DIR"/bin/Runner.Listener run $*

View File

@@ -9,16 +9,79 @@ while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symli
[[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )" DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
run() {
# run the helper process which keep the listener alive
while :;
do
cp -f "$DIR"/run-helper.sh.template "$DIR"/run-helper.sh
"$DIR"/run-helper.sh $*
returnCode=$?
if [[ $returnCode -eq 2 ]]; then
echo "Restarting runner..."
else
echo "Exiting runner..."
exit 0
fi
done
}
runWithManualTrap() {
# Set job control
set -m
trap 'kill -INT -$PID' INT TERM
# run the helper process which keep the listener alive
while :;
do
cp -f "$DIR"/run-helper.sh.template "$DIR"/run-helper.sh
"$DIR"/run-helper.sh $* &
PID=$!
wait -f $PID
returnCode=$?
if [[ $returnCode -eq 2 ]]; then
echo "Restarting runner..."
else
echo "Exiting runner..."
# Unregister signal handling before exit
trap - INT TERM
# wait for last parts to be logged
wait $PID
exit $returnCode
fi
done
}
function updateCerts() {
local sudo_prefix=""
local user_id=`id -u`
if [ $user_id -ne 0 ]; then
if [[ ! -x "$(command -v sudo)" ]]; then
echo "Warning: failed to update certificate store: sudo is required but not found"
return 1
else
sudo_prefix="sudo"
fi
fi
if [[ -x "$(command -v update-ca-certificates)" ]]; then
eval $sudo_prefix "update-ca-certificates"
elif [[ -x "$(command -v update-ca-trust)" ]]; then
eval $sudo_prefix "update-ca-trust"
else
echo "Warning: failed to update certificate store: update-ca-certificates or update-ca-trust not found. This can happen if you're using a different runner base image."
return 1
fi
}
if [[ ! -z "$RUNNER_UPDATE_CA_CERTS" ]]; then
updateCerts
fi
if [[ -z "$RUNNER_MANUALLY_TRAP_SIG" ]]; then
run $*
else
runWithManualTrap $*
fi
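Taken together, run.sh now has two opt-in switches: a non-empty `RUNNER_UPDATE_CA_CERTS` runs `updateCerts` (update-ca-certificates or update-ca-trust, with sudo when not running as root), and a non-empty `RUNNER_MANUALLY_TRAP_SIG` selects `runWithManualTrap`, which runs the helper in the background and forwards INT/TERM to its process group. A hypothetical container entrypoint enabling both:

```bash
#!/usr/bin/env bash
# Hypothetical entrypoint: refresh the CA store from certs baked into the
# image, then run the listener with explicit signal forwarding. Both flags
# are opt-in; any non-empty value enables them.
export RUNNER_UPDATE_CA_CERTS=1
export RUNNER_MANUALLY_TRAP_SIG=1
exec ./run.sh
```

The manual-trap path is mainly useful when run.sh is the container's PID 1, where INT/TERM would otherwise not reach the listener before the container is stopped.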

View File

@@ -66,12 +66,15 @@ libmscordbi.dylib
libmscordbi.so libmscordbi.so
Microsoft.CSharp.dll Microsoft.CSharp.dll
Microsoft.DiaSymReader.Native.amd64.dll Microsoft.DiaSymReader.Native.amd64.dll
Microsoft.DiaSymReader.Native.arm64.dll
Microsoft.VisualBasic.Core.dll Microsoft.VisualBasic.Core.dll
Microsoft.VisualBasic.dll Microsoft.VisualBasic.dll
Microsoft.Win32.Primitives.dll Microsoft.Win32.Primitives.dll
Microsoft.Win32.Registry.dll Microsoft.Win32.Registry.dll
mscordaccore.dll mscordaccore.dll
mscordaccore_amd64_amd64_6.0.522.21309.dll mscordaccore_amd64_amd64_6.0.522.21309.dll
mscordaccore_arm64_arm64_6.0.522.21309.dll
mscordaccore_amd64_amd64_6.0.1322.58009.dll
mscordbi.dll mscordbi.dll
mscorlib.dll mscorlib.dll
mscorrc.debug.dll mscorrc.debug.dll
@@ -261,4 +264,4 @@ System.Xml.XmlSerializer.dll
System.Xml.XPath.dll System.Xml.XPath.dll
System.Xml.XPath.XDocument.dll System.Xml.XPath.XDocument.dll
ucrtbase.dll ucrtbase.dll
WindowsBase.dll WindowsBase.dll

View File

@@ -1,4 +1,3 @@
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk; using GitHub.Runner.Sdk;
using System; using System;
using System.Collections.Generic; using System.Collections.Generic;
@@ -32,7 +31,7 @@ namespace GitHub.Runner.Common
new EscapeMapping(token: "%", replacement: "%25"), new EscapeMapping(token: "%", replacement: "%25"),
}; };
private readonly Dictionary<string, string> _properties = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase); private readonly Dictionary<string, string> _properties = new(StringComparer.OrdinalIgnoreCase);
public const string Prefix = "##["; public const string Prefix = "##[";
public const string _commandKey = "::"; public const string _commandKey = "::";

View File

@@ -1,5 +1,3 @@
using System;
namespace GitHub.Runner.Common namespace GitHub.Runner.Common
{ {
public enum ActionResult public enum ActionResult

View File

@@ -0,0 +1,51 @@
using System;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.Pipelines;
using GitHub.DistributedTask.WebApi;
using GitHub.Services.Common;
using GitHub.Services.WebApi;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(ActionsRunServer))]
public interface IActionsRunServer : IRunnerService
{
Task ConnectAsync(Uri serverUrl, VssCredentials credentials);
Task<AgentJobRequestMessage> GetJobMessageAsync(string id, CancellationToken token);
}
public sealed class ActionsRunServer : RunnerService, IActionsRunServer
{
private bool _hasConnection;
private VssConnection _connection;
private TaskAgentHttpClient _taskAgentClient;
public async Task ConnectAsync(Uri serverUrl, VssCredentials credentials)
{
_connection = await EstablishVssConnection(serverUrl, credentials, TimeSpan.FromSeconds(100));
_taskAgentClient = _connection.GetClient<TaskAgentHttpClient>();
_hasConnection = true;
}
private void CheckConnection()
{
if (!_hasConnection)
{
throw new InvalidOperationException($"SetConnection");
}
}
public Task<AgentJobRequestMessage> GetJobMessageAsync(string id, CancellationToken cancellationToken)
{
CheckConnection();
var jobMessage = RetryRequest<AgentJobRequestMessage>(async () =>
{
return await _taskAgentClient.GetJobMessageAsync(id, cancellationToken);
}, cancellationToken);
return jobMessage;
}
}
}

View File

@@ -0,0 +1,56 @@
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Actions.RunService.WebApi;
using GitHub.DistributedTask.Pipelines;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using Sdk.RSWebApi.Contracts;
using Sdk.WebApi.WebApi.RawClient;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(BrokerServer))]
public interface IBrokerServer : IRunnerService
{
Task ConnectAsync(Uri serverUrl, VssCredentials credentials);
Task<TaskAgentMessage> GetRunnerMessageAsync(CancellationToken token, TaskAgentStatus status, string version);
}
public sealed class BrokerServer : RunnerService, IBrokerServer
{
private bool _hasConnection;
private Uri _brokerUri;
private RawConnection _connection;
private BrokerHttpClient _brokerHttpClient;
public async Task ConnectAsync(Uri serverUri, VssCredentials credentials)
{
_brokerUri = serverUri;
_connection = VssUtil.CreateRawConnection(serverUri, credentials);
_brokerHttpClient = await _connection.GetClientAsync<BrokerHttpClient>();
_hasConnection = true;
}
private void CheckConnection()
{
if (!_hasConnection)
{
throw new InvalidOperationException($"SetConnection");
}
}
public Task<TaskAgentMessage> GetRunnerMessageAsync(CancellationToken cancellationToken, TaskAgentStatus status, string version)
{
CheckConnection();
var jobMessage = RetryRequest<TaskAgentMessage>(
async () => await _brokerHttpClient.GetRunnerMessageAsync(version, status, cancellationToken), cancellationToken);
return jobMessage;
}
}
}

View File

@@ -1,4 +1,3 @@
using GitHub.Runner.Common.Util;
using System; using System;
using System.Collections.Generic; using System.Collections.Generic;
using GitHub.DistributedTask.Logging; using GitHub.DistributedTask.Logging;

View File

@@ -1,4 +1,3 @@
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk; using GitHub.Runner.Sdk;
using System; using System;
using System.IO; using System.IO;
@@ -51,6 +50,12 @@ namespace GitHub.Runner.Common
[DataMember(EmitDefaultValue = false)] [DataMember(EmitDefaultValue = false)]
public string MonitorSocketAddress { get; set; } public string MonitorSocketAddress { get; set; }
[DataMember(EmitDefaultValue = false)]
public bool UseV2Flow { get; set; }
[DataMember(EmitDefaultValue = false)]
public string ServerUrlV2 { get; set; }
[IgnoreDataMember] [IgnoreDataMember]
public bool IsHostedServer public bool IsHostedServer
{ {
@@ -75,17 +80,18 @@ namespace GitHub.Runner.Common
{ {
get get
{ {
Uri accountUri = new Uri(this.ServerUrl); Uri accountUri = new(this.ServerUrl);
string repoOrOrgName = string.Empty; string repoOrOrgName = string.Empty;
if (accountUri.Host.EndsWith(".githubusercontent.com", StringComparison.OrdinalIgnoreCase)) if (accountUri.Host.EndsWith(".githubusercontent.com", StringComparison.OrdinalIgnoreCase) && !string.IsNullOrEmpty(this.GitHubUrl))
{ {
Uri gitHubUrl = new Uri(this.GitHubUrl); Uri gitHubUrl = new(this.GitHubUrl);
// Use the "NWO part" from the GitHub URL path // Use the "NWO part" from the GitHub URL path
repoOrOrgName = gitHubUrl.AbsolutePath.Trim('/'); repoOrOrgName = gitHubUrl.AbsolutePath.Trim('/');
} }
else
if (string.IsNullOrEmpty(repoOrOrgName))
{ {
repoOrOrgName = accountUri.AbsolutePath.Split('/', StringSplitOptions.RemoveEmptyEntries).FirstOrDefault(); repoOrOrgName = accountUri.AbsolutePath.Split('/', StringSplitOptions.RemoveEmptyEntries).FirstOrDefault();
} }

View File

@@ -1,4 +1,4 @@
using System; using System;
namespace GitHub.Runner.Common namespace GitHub.Runner.Common
{ {
@@ -77,7 +77,7 @@ namespace GitHub.Runner.Common
public static readonly Architecture PlatformArchitecture = Architecture.X64; public static readonly Architecture PlatformArchitecture = Architecture.X64;
#elif ARM #elif ARM
public static readonly Architecture PlatformArchitecture = Architecture.Arm; public static readonly Architecture PlatformArchitecture = Architecture.Arm;
#elif ARM64 #elif ARM64
public static readonly Architecture PlatformArchitecture = Architecture.Arm64; public static readonly Architecture PlatformArchitecture = Architecture.Arm64;
#endif #endif
@@ -90,7 +90,6 @@ namespace GitHub.Runner.Common
public static class Args public static class Args
{ {
public static readonly string Auth = "auth"; public static readonly string Auth = "auth";
public static readonly string JitConfig = "jitconfig";
public static readonly string Labels = "labels"; public static readonly string Labels = "labels";
public static readonly string MonitorSocketAddress = "monitorsocketaddress"; public static readonly string MonitorSocketAddress = "monitorsocketaddress";
public static readonly string Name = "name"; public static readonly string Name = "name";
@@ -105,11 +104,13 @@ namespace GitHub.Runner.Common
public static readonly string Token = "token"; public static readonly string Token = "token";
public static readonly string PAT = "pat"; public static readonly string PAT = "pat";
public static readonly string WindowsLogonPassword = "windowslogonpassword"; public static readonly string WindowsLogonPassword = "windowslogonpassword";
public static readonly string JitConfig = "jitconfig";
public static string[] Secrets => new[] public static string[] Secrets => new[]
{ {
PAT, PAT,
Token, Token,
WindowsLogonPassword, WindowsLogonPassword,
JitConfig,
}; };
} }
@@ -128,7 +129,10 @@ namespace GitHub.Runner.Common
public static readonly string Check = "check"; public static readonly string Check = "check";
public static readonly string Commit = "commit"; public static readonly string Commit = "commit";
public static readonly string Ephemeral = "ephemeral"; public static readonly string Ephemeral = "ephemeral";
public static readonly string GenerateServiceConfig = "generateServiceConfig";
public static readonly string Help = "help"; public static readonly string Help = "help";
public static readonly string Local = "local";
public static readonly string NoDefaultLabels = "no-default-labels";
public static readonly string Replace = "replace"; public static readonly string Replace = "replace";
public static readonly string DisableUpdate = "disableupdate"; public static readonly string DisableUpdate = "disableupdate";
public static readonly string Once = "once"; // Keep this around since customers still relies on it public static readonly string Once = "once"; // Keep this around since customers still relies on it
@@ -151,18 +155,23 @@ namespace GitHub.Runner.Common
{ {
public static readonly string DiskSpaceWarning = "runner.diskspace.warning"; public static readonly string DiskSpaceWarning = "runner.diskspace.warning";
public static readonly string Node12Warning = "DistributedTask.AddWarningToNode12Action"; public static readonly string Node12Warning = "DistributedTask.AddWarningToNode12Action";
public static readonly string LogTemplateErrorsToTraceWriter = "DistributedTask.LogTemplateErrorsToTraceWriter";
public static readonly string UseContainerPathForTemplate = "DistributedTask.UseContainerPathForTemplate"; public static readonly string UseContainerPathForTemplate = "DistributedTask.UseContainerPathForTemplate";
public static readonly string AllowRunnerContainerHooks = "DistributedTask.AllowRunnerContainerHooks"; public static readonly string AllowRunnerContainerHooks = "DistributedTask.AllowRunnerContainerHooks";
} }
public static readonly string InternalTelemetryIssueDataKey = "_internal_telemetry"; public static readonly string InternalTelemetryIssueDataKey = "_internal_telemetry";
public static readonly Guid TelemetryRecordId = new Guid("11111111-1111-1111-1111-111111111111");
public static readonly string WorkerCrash = "WORKER_CRASH"; public static readonly string WorkerCrash = "WORKER_CRASH";
public static readonly string LowDiskSpace = "LOW_DISK_SPACE"; public static readonly string LowDiskSpace = "LOW_DISK_SPACE";
public static readonly string UnsupportedCommand = "UNSUPPORTED_COMMAND"; public static readonly string UnsupportedCommand = "UNSUPPORTED_COMMAND";
public static readonly string ResultsUploadFailure = "RESULTS_UPLOAD_FAILURE";
public static readonly string UnsupportedCommandMessage = "The `{0}` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/";
public static readonly string UnsupportedCommandMessageDisabled = "The `{0}` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/"; public static readonly string UnsupportedCommandMessageDisabled = "The `{0}` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/";
public static readonly string UnsupportedStopCommandTokenDisabled = "You cannot use a endToken that is an empty string, the string 'pause-logging', or another workflow command. For more information see: https://docs.github.com/actions/learn-github-actions/workflow-commands-for-github-actions#example-stopping-and-starting-workflow-commands or opt into insecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_STOPCOMMAND_TOKENS` environment variable to `true`."; public static readonly string UnsupportedStopCommandTokenDisabled = "You cannot use a endToken that is an empty string, the string 'pause-logging', or another workflow command. For more information see: https://docs.github.com/actions/learn-github-actions/workflow-commands-for-github-actions#example-stopping-and-starting-workflow-commands or opt into insecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_STOPCOMMAND_TOKENS` environment variable to `true`.";
public static readonly string UnsupportedSummarySize = "$GITHUB_STEP_SUMMARY upload aborted, supports content up to a size of {0}k, got {1}k. For more information see: https://docs.github.com/actions/using-workflows/workflow-commands-for-github-actions#adding-a-markdown-summary"; public static readonly string UnsupportedSummarySize = "$GITHUB_STEP_SUMMARY upload aborted, supports content up to a size of {0}k, got {1}k. For more information see: https://docs.github.com/actions/using-workflows/workflow-commands-for-github-actions#adding-a-markdown-summary";
public static readonly string Node12DetectedAfterEndOfLife = "Node.js 12 actions are deprecated. Please update the following actions to use Node.js 16: {0}"; public static readonly string SummaryUploadError = "$GITHUB_STEP_SUMMARY upload aborted, an error occurred when uploading the summary. For more information see: https://docs.github.com/actions/using-workflows/workflow-commands-for-github-actions#adding-a-markdown-summary";
public static readonly string Node12DetectedAfterEndOfLife = "Node.js 12 actions are deprecated. Please update the following actions to use Node.js 16: {0}. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/.";
} }
public static class RunnerEvent public static class RunnerEvent
@@ -241,8 +250,9 @@ namespace GitHub.Runner.Common
public static readonly string ToolsDirectory = "agent.ToolsDirectory"; public static readonly string ToolsDirectory = "agent.ToolsDirectory";
// Set this env var to "node12" to downgrade the node version for internal functions (e.g hashfiles). This does NOT affect the version of node actions. // Set this env var to "node12" to downgrade the node version for internal functions (e.g hashfiles). This does NOT affect the version of node actions.
public static readonly string ForcedInternalNodeVersion = "ACTIONS_RUNNER_FORCED_INTERNAL_NODE_VERSION"; public static readonly string ForcedInternalNodeVersion = "ACTIONS_RUNNER_FORCED_INTERNAL_NODE_VERSION";
public static readonly string ForcedActionsNodeVersion = "ACTIONS_RUNNER_FORCE_ACTIONS_NODE_VERSION"; public static readonly string ForcedActionsNodeVersion = "ACTIONS_RUNNER_FORCE_ACTIONS_NODE_VERSION";
public static readonly string PrintLogToStdout = "ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT";
} }
public static class System public static class System
@@ -253,7 +263,16 @@ namespace GitHub.Runner.Common
public static readonly string AccessToken = "system.accessToken"; public static readonly string AccessToken = "system.accessToken";
public static readonly string Culture = "system.culture"; public static readonly string Culture = "system.culture";
public static readonly string PhaseDisplayName = "system.phaseDisplayName"; public static readonly string PhaseDisplayName = "system.phaseDisplayName";
public static readonly string JobRequestType = "system.jobRequestType";
public static readonly string OrchestrationId = "system.orchestrationId";
} }
} }
public static class OperatingSystem
{
public static readonly int Windows11BuildVersion = 22000;
// Both windows 10 and windows 11 share the same Major Version 10, need to use the build version to differentiate
public static readonly int Windows11MajorVersion = 10;
}
} }
} }
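The new flag constants (`generateServiceConfig`, `local`, `no-default-labels`) register additional command-line switches, and moving `jitconfig` into `Secrets` keeps its value out of the trace logs. As one example, `--no-default-labels` skips the implicit self-hosted/OS/architecture labels at registration time; a hypothetical invocation (URL, token, and label below are placeholders):

```bash
# Hypothetical registration that applies only an explicit custom label
# instead of the default self-hosted/OS/arch labels.
./config.sh --url https://github.com/ORG/REPO \
  --token <REGISTRATION_TOKEN> \
  --no-default-labels --labels gpu-build
```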

View File

@@ -1,4 +1,3 @@
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk; using GitHub.Runner.Sdk;
using System; using System;
using System.Collections.Concurrent; using System.Collections.Concurrent;
@@ -15,7 +14,7 @@ namespace GitHub.Runner.Common
public sealed class ExtensionManager : RunnerService, IExtensionManager public sealed class ExtensionManager : RunnerService, IExtensionManager
{ {
private readonly ConcurrentDictionary<Type, List<IExtension>> _cache = new ConcurrentDictionary<Type, List<IExtension>>(); private readonly ConcurrentDictionary<Type, List<IExtension>> _cache = new();
public List<T> GetExtensions<T>() where T : class, IExtension public List<T> GetExtensions<T>() where T : class, IExtension
{ {
@@ -61,6 +60,8 @@ namespace GitHub.Runner.Common
Add<T>(extensions, "GitHub.Runner.Worker.AddPathFileCommand, Runner.Worker"); Add<T>(extensions, "GitHub.Runner.Worker.AddPathFileCommand, Runner.Worker");
Add<T>(extensions, "GitHub.Runner.Worker.SetEnvFileCommand, Runner.Worker"); Add<T>(extensions, "GitHub.Runner.Worker.SetEnvFileCommand, Runner.Worker");
Add<T>(extensions, "GitHub.Runner.Worker.CreateStepSummaryCommand, Runner.Worker"); Add<T>(extensions, "GitHub.Runner.Worker.CreateStepSummaryCommand, Runner.Worker");
Add<T>(extensions, "GitHub.Runner.Worker.SaveStateFileCommand, Runner.Worker");
Add<T>(extensions, "GitHub.Runner.Worker.SetOutputFileCommand, Runner.Worker");
break; break;
case "GitHub.Runner.Listener.Check.ICheckExtension": case "GitHub.Runner.Listener.Check.ICheckExtension":
Add<T>(extensions, "GitHub.Runner.Listener.Check.InternetCheck, Runner.Listener"); Add<T>(extensions, "GitHub.Runner.Listener.Check.InternetCheck, Runner.Listener");

View File

@@ -1,4 +1,4 @@
using System; using System;
using System.Collections.Concurrent; using System.Collections.Concurrent;
using System.Collections.Generic; using System.Collections.Generic;
using System.Diagnostics; using System.Diagnostics;
@@ -51,12 +51,12 @@ namespace GitHub.Runner.Common
private static int _defaultLogRetentionDays = 30; private static int _defaultLogRetentionDays = 30;
private static int[] _vssHttpMethodEventIds = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 24 }; private static int[] _vssHttpMethodEventIds = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 24 };
private static int[] _vssHttpCredentialEventIds = new int[] { 11, 13, 14, 15, 16, 17, 18, 20, 21, 22, 27, 29 }; private static int[] _vssHttpCredentialEventIds = new int[] { 11, 13, 14, 15, 16, 17, 18, 20, 21, 22, 27, 29 };
private readonly ConcurrentDictionary<Type, object> _serviceInstances = new ConcurrentDictionary<Type, object>(); private readonly ConcurrentDictionary<Type, object> _serviceInstances = new();
private readonly ConcurrentDictionary<Type, Type> _serviceTypes = new ConcurrentDictionary<Type, Type>(); private readonly ConcurrentDictionary<Type, Type> _serviceTypes = new();
private readonly ISecretMasker _secretMasker = new SecretMasker(); private readonly ISecretMasker _secretMasker = new SecretMasker();
private readonly List<ProductInfoHeaderValue> _userAgents = new List<ProductInfoHeaderValue>() { new ProductInfoHeaderValue($"GitHubActionsRunner-{BuildConstants.RunnerPackage.PackageName}", BuildConstants.RunnerPackage.Version) }; private readonly List<ProductInfoHeaderValue> _userAgents = new() { new ProductInfoHeaderValue($"GitHubActionsRunner-{BuildConstants.RunnerPackage.PackageName}", BuildConstants.RunnerPackage.Version) };
private CancellationTokenSource _runnerShutdownTokenSource = new CancellationTokenSource(); private CancellationTokenSource _runnerShutdownTokenSource = new();
private object _perfLock = new object(); private object _perfLock = new();
private Tracing _trace; private Tracing _trace;
private Tracing _actionsHttpTrace; private Tracing _actionsHttpTrace;
private Tracing _netcoreHttpTrace; private Tracing _netcoreHttpTrace;
@@ -66,7 +66,7 @@ namespace GitHub.Runner.Common
private IDisposable _diagListenerSubscription; private IDisposable _diagListenerSubscription;
private StartupType _startupType; private StartupType _startupType;
private string _perfFile; private string _perfFile;
private RunnerWebProxy _webProxy = new RunnerWebProxy(); private RunnerWebProxy _webProxy = new();
public event EventHandler Unloading; public event EventHandler Unloading;
public CancellationToken RunnerShutdownToken => _runnerShutdownTokenSource.Token; public CancellationToken RunnerShutdownToken => _runnerShutdownTokenSource.Token;
@@ -94,6 +94,13 @@ namespace GitHub.Runner.Common
this.SecretMasker.AddValueEncoder(ValueEncoders.PowerShellPreAmpersandEscape); this.SecretMasker.AddValueEncoder(ValueEncoders.PowerShellPreAmpersandEscape);
this.SecretMasker.AddValueEncoder(ValueEncoders.PowerShellPostAmpersandEscape); this.SecretMasker.AddValueEncoder(ValueEncoders.PowerShellPostAmpersandEscape);
// Create StdoutTraceListener if ENV is set
StdoutTraceListener stdoutTraceListener = null;
if (StringUtil.ConvertToBoolean(Environment.GetEnvironmentVariable(Constants.Variables.Agent.PrintLogToStdout)))
{
stdoutTraceListener = new StdoutTraceListener(hostType);
}
// Create the trace manager. // Create the trace manager.
if (string.IsNullOrEmpty(logFile)) if (string.IsNullOrEmpty(logFile))
{ {
@@ -113,11 +120,11 @@ namespace GitHub.Runner.Common
// this should give us _diag folder under runner root directory // this should give us _diag folder under runner root directory
string diagLogDirectory = Path.Combine(new DirectoryInfo(Path.GetDirectoryName(Assembly.GetEntryAssembly().Location)).Parent.FullName, Constants.Path.DiagDirectory); string diagLogDirectory = Path.Combine(new DirectoryInfo(Path.GetDirectoryName(Assembly.GetEntryAssembly().Location)).Parent.FullName, Constants.Path.DiagDirectory);
_traceManager = new TraceManager(new HostTraceListener(diagLogDirectory, hostType, logPageSize, logRetentionDays), this.SecretMasker); _traceManager = new TraceManager(new HostTraceListener(diagLogDirectory, hostType, logPageSize, logRetentionDays), stdoutTraceListener, this.SecretMasker);
} }
else else
{ {
_traceManager = new TraceManager(new HostTraceListener(logFile), this.SecretMasker); _traceManager = new TraceManager(new HostTraceListener(logFile), stdoutTraceListener, this.SecretMasker);
} }
_trace = GetTrace(nameof(HostContext)); _trace = GetTrace(nameof(HostContext));
@@ -213,12 +220,26 @@ namespace GitHub.Runner.Common
var runnerFile = GetConfigFile(WellKnownConfigFile.Runner); var runnerFile = GetConfigFile(WellKnownConfigFile.Runner);
if (File.Exists(runnerFile)) if (File.Exists(runnerFile))
{ {
var runnerSettings = IOUtil.LoadObject<RunnerSettings>(runnerFile); var runnerSettings = IOUtil.LoadObject<RunnerSettings>(runnerFile, true);
_userAgents.Add(new ProductInfoHeaderValue("RunnerId", runnerSettings.AgentId.ToString(CultureInfo.InvariantCulture))); _userAgents.Add(new ProductInfoHeaderValue("RunnerId", runnerSettings.AgentId.ToString(CultureInfo.InvariantCulture)));
_userAgents.Add(new ProductInfoHeaderValue("GroupId", runnerSettings.PoolId.ToString(CultureInfo.InvariantCulture))); _userAgents.Add(new ProductInfoHeaderValue("GroupId", runnerSettings.PoolId.ToString(CultureInfo.InvariantCulture)));
} }
_userAgents.Add(new ProductInfoHeaderValue("CommitSHA", BuildConstants.Source.CommitHash)); _userAgents.Add(new ProductInfoHeaderValue("CommitSHA", BuildConstants.Source.CommitHash));
var extraUserAgent = Environment.GetEnvironmentVariable("GITHUB_ACTIONS_RUNNER_EXTRA_USER_AGENT");
if (!string.IsNullOrEmpty(extraUserAgent))
{
var extraUserAgentSplit = extraUserAgent.Split('/', StringSplitOptions.RemoveEmptyEntries);
if (extraUserAgentSplit.Length != 2)
{
_trace.Error($"GITHUB_ACTIONS_RUNNER_EXTRA_USER_AGENT is not in the format of 'name/version'.");
}
var extraUserAgentHeader = new ProductInfoHeaderValue(extraUserAgentSplit[0], extraUserAgentSplit[1]);
_trace.Info($"Adding extra user agent '{extraUserAgentHeader}' to all HTTP requests.");
_userAgents.Add(extraUserAgentHeader);
}
} }
public string GetDirectory(WellKnownDirectory directory) public string GetDirectory(WellKnownDirectory directory)
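Both HostContext additions are driven purely by environment variables: `ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT` attaches the new StdoutTraceListener alongside the _diag file listener, and `GITHUB_ACTIONS_RUNNER_EXTRA_USER_AGENT`, expected in `name/version` form, appends one more product token to every HTTP request. A hypothetical launch exercising both (the user-agent value is a placeholder):

```bash
# Mirror the diagnostic trace to stdout (handy for containerized runners)
# and tag outgoing requests with an extra user-agent product token; the
# "my-autoscaler/0.1.0" value below is purely illustrative.
export ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT=true
export GITHUB_ACTIONS_RUNNER_EXTRA_USER_AGENT="my-autoscaler/0.1.0"
./run.sh
```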

View File

@@ -1,4 +1,3 @@
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk; using GitHub.Runner.Sdk;
using System; using System;
using System.Diagnostics; using System.Diagnostics;
@@ -165,7 +164,7 @@ namespace GitHub.Runner.Common
{ {
if (_enableLogRetention) if (_enableLogRetention)
{ {
DirectoryInfo diags = new DirectoryInfo(_logFileDirectory); DirectoryInfo diags = new(_logFileDirectory);
var logs = diags.GetFiles($"{_logFilePrefix}*.log"); var logs = diags.GetFiles($"{_logFilePrefix}*.log");
foreach (var log in logs) foreach (var log in logs)
{ {

View File

@@ -1,10 +1,7 @@
using System; using System;
using System.IO;
using System.IO.Pipes;
using System.Net; using System.Net;
using System.Net.Sockets; using System.Net.Sockets;
using System.Text; using System.Text;
using System.Threading;
using System.Threading.Tasks; using System.Threading.Tasks;
namespace GitHub.Runner.Common namespace GitHub.Runner.Common

View File

@@ -1,4 +1,4 @@
using System; using System;
using System.Collections.Generic; using System.Collections.Generic;
using System.IO; using System.IO;
using System.Linq; using System.Linq;
@@ -11,9 +11,10 @@ using System.Threading.Tasks;
using GitHub.DistributedTask.WebApi; using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Sdk; using GitHub.Runner.Sdk;
using GitHub.Services.Common; using GitHub.Services.Common;
using GitHub.Services.OAuth;
using GitHub.Services.Results.Client;
using GitHub.Services.WebApi; using GitHub.Services.WebApi;
using GitHub.Services.WebApi.Utilities.Internal; using GitHub.Services.WebApi.Utilities.Internal;
using Newtonsoft.Json;
namespace GitHub.Runner.Common namespace GitHub.Runner.Common
{ {
@@ -198,13 +199,15 @@ namespace GitHub.Runner.Common
{ {
Trace.Info($"Attempting to start websocket client with delay {delay}."); Trace.Info($"Attempting to start websocket client with delay {delay}.");
await Task.Delay(delay); await Task.Delay(delay);
await this._websocketClient.ConnectAsync(new Uri(feedStreamUrl), default(CancellationToken)); using var connectTimeoutTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(30));
await this._websocketClient.ConnectAsync(new Uri(feedStreamUrl), connectTimeoutTokenSource.Token);
Trace.Info($"Successfully started websocket client."); Trace.Info($"Successfully started websocket client.");
} }
catch (Exception ex) catch (Exception ex)
{ {
Trace.Info("Exception caught during websocket client connect, fallback of HTTP would be used now instead of websocket."); Trace.Info("Exception caught during websocket client connect, fallback of HTTP would be used now instead of websocket.");
Trace.Error(ex); Trace.Error(ex);
this._websocketClient = null;
} }
} }
@@ -251,7 +254,7 @@ namespace GitHub.Runner.Common
{ {
failedAttemptsToPostBatchedLinesByWebsocket++; failedAttemptsToPostBatchedLinesByWebsocket++;
Trace.Info($"Caught exception during append web console line to websocket, let's fallback to sending via non-websocket call (total calls: {totalBatchedLinesAttemptedByWebsocket}, failed calls: {failedAttemptsToPostBatchedLinesByWebsocket}, websocket state: {this._websocketClient?.State})."); Trace.Info($"Caught exception during append web console line to websocket, let's fallback to sending via non-websocket call (total calls: {totalBatchedLinesAttemptedByWebsocket}, failed calls: {failedAttemptsToPostBatchedLinesByWebsocket}, websocket state: {this._websocketClient?.State}).");
Trace.Error(ex); Trace.Verbose(ex.ToString());
if (totalBatchedLinesAttemptedByWebsocket > _minWebsocketBatchedLinesCountToConsider) if (totalBatchedLinesAttemptedByWebsocket > _minWebsocketBatchedLinesCountToConsider)
{ {
// let's consider failure percentage // let's consider failure percentage
@@ -306,6 +309,7 @@ namespace GitHub.Runner.Common
return _taskClient.CreateAttachmentAsync(scopeIdentifier, hubName, planId, timelineId, timelineRecordId, type, name, uploadStream, cancellationToken: cancellationToken); return _taskClient.CreateAttachmentAsync(scopeIdentifier, hubName, planId, timelineId, timelineRecordId, type, name, uploadStream, cancellationToken: cancellationToken);
} }
public Task<TaskLog> CreateLogAsync(Guid scopeIdentifier, string hubName, Guid planId, TaskLog log, CancellationToken cancellationToken) public Task<TaskLog> CreateLogAsync(Guid scopeIdentifier, string hubName, Guid planId, TaskLog log, CancellationToken cancellationToken)
{ {
CheckConnection(); CheckConnection();

View File

@@ -17,9 +17,10 @@ namespace GitHub.Runner.Common
TaskCompletionSource<int> JobRecordUpdated { get; } TaskCompletionSource<int> JobRecordUpdated { get; }
event EventHandler<ThrottlingEventArgs> JobServerQueueThrottling; event EventHandler<ThrottlingEventArgs> JobServerQueueThrottling;
Task ShutdownAsync(); Task ShutdownAsync();
void Start(Pipelines.AgentJobRequestMessage jobRequest); void Start(Pipelines.AgentJobRequestMessage jobRequest, bool resultServiceOnly = false);
void QueueWebConsoleLine(Guid stepRecordId, string line, long? lineNumber = null); void QueueWebConsoleLine(Guid stepRecordId, string line, long? lineNumber = null);
void QueueFileUpload(Guid timelineId, Guid timelineRecordId, string type, string name, string path, bool deleteSource); void QueueFileUpload(Guid timelineId, Guid timelineRecordId, string type, string name, string path, bool deleteSource);
void QueueResultsUpload(Guid timelineRecordId, string name, string path, string type, bool deleteSource, bool finalize, bool firstBlock, long totalLines);
void QueueTimelineRecordUpdate(Guid timelineId, TimelineRecord timelineRecord); void QueueTimelineRecordUpdate(Guid timelineId, TimelineRecord timelineRecord);
} }
@@ -30,6 +31,7 @@ namespace GitHub.Runner.Common
private static readonly TimeSpan _delayForWebConsoleLineDequeue = TimeSpan.FromMilliseconds(500); private static readonly TimeSpan _delayForWebConsoleLineDequeue = TimeSpan.FromMilliseconds(500);
private static readonly TimeSpan _delayForTimelineUpdateDequeue = TimeSpan.FromMilliseconds(500); private static readonly TimeSpan _delayForTimelineUpdateDequeue = TimeSpan.FromMilliseconds(500);
private static readonly TimeSpan _delayForFileUploadDequeue = TimeSpan.FromMilliseconds(1000); private static readonly TimeSpan _delayForFileUploadDequeue = TimeSpan.FromMilliseconds(1000);
private static readonly TimeSpan _delayForResultsUploadDequeue = TimeSpan.FromMilliseconds(1000);
// Job message information // Job message information
private Guid _scopeIdentifier; private Guid _scopeIdentifier;
@@ -39,31 +41,36 @@ namespace GitHub.Runner.Common
private Guid _jobTimelineRecordId; private Guid _jobTimelineRecordId;
// queue for web console line // queue for web console line
private readonly ConcurrentQueue<ConsoleLineInfo> _webConsoleLineQueue = new ConcurrentQueue<ConsoleLineInfo>(); private readonly ConcurrentQueue<ConsoleLineInfo> _webConsoleLineQueue = new();
// queue for file upload (log file or attachment) // queue for file upload (log file or attachment)
private readonly ConcurrentQueue<UploadFileInfo> _fileUploadQueue = new ConcurrentQueue<UploadFileInfo>(); private readonly ConcurrentQueue<UploadFileInfo> _fileUploadQueue = new();
private readonly ConcurrentQueue<ResultsUploadFileInfo> _resultsFileUploadQueue = new();
// queue for timeline or timeline record update (one queue per timeline) // queue for timeline or timeline record update (one queue per timeline)
private readonly ConcurrentDictionary<Guid, ConcurrentQueue<TimelineRecord>> _timelineUpdateQueue = new ConcurrentDictionary<Guid, ConcurrentQueue<TimelineRecord>>(); private readonly ConcurrentDictionary<Guid, ConcurrentQueue<TimelineRecord>> _timelineUpdateQueue = new();
// indicate how many timelines we have, we will process _timelineUpdateQueue base on the order of timeline in this list // indicate how many timelines we have, we will process _timelineUpdateQueue base on the order of timeline in this list
private readonly List<Guid> _allTimelines = new List<Guid>(); private readonly List<Guid> _allTimelines = new();
// bufferd timeline records that fail to update // bufferd timeline records that fail to update
private readonly Dictionary<Guid, List<TimelineRecord>> _bufferedRetryRecords = new Dictionary<Guid, List<TimelineRecord>>(); private readonly Dictionary<Guid, List<TimelineRecord>> _bufferedRetryRecords = new();
// Task for each queue's dequeue process // Task for each queue's dequeue process
private Task _webConsoleLineDequeueTask; private Task _webConsoleLineDequeueTask;
private Task _fileUploadDequeueTask; private Task _fileUploadDequeueTask;
private Task _resultsUploadDequeueTask;
private Task _timelineUpdateDequeueTask; private Task _timelineUpdateDequeueTask;
// common // common
private IJobServer _jobServer; private IJobServer _jobServer;
private IResultsServer _resultsServer;
private Task[] _allDequeueTasks; private Task[] _allDequeueTasks;
private readonly TaskCompletionSource<int> _jobCompletionSource = new TaskCompletionSource<int>(); private readonly TaskCompletionSource<int> _jobCompletionSource = new();
private readonly TaskCompletionSource<int> _jobRecordUpdated = new TaskCompletionSource<int>(); private readonly TaskCompletionSource<int> _jobRecordUpdated = new();
private bool _queueInProcess = false; private bool _queueInProcess = false;
private bool _resultsServiceOnly = false;
public TaskCompletionSource<int> JobRecordUpdated => _jobRecordUpdated; public TaskCompletionSource<int> JobRecordUpdated => _jobRecordUpdated;
@@ -79,19 +86,49 @@ namespace GitHub.Runner.Common
private bool _webConsoleLineAggressiveDequeue = true; private bool _webConsoleLineAggressiveDequeue = true;
private bool _firstConsoleOutputs = true; private bool _firstConsoleOutputs = true;
private bool _resultsClientInitiated = false;
private delegate Task ResultsFileUploadHandler(ResultsUploadFileInfo file);
public override void Initialize(IHostContext hostContext) public override void Initialize(IHostContext hostContext)
{ {
base.Initialize(hostContext); base.Initialize(hostContext);
_jobServer = hostContext.GetService<IJobServer>(); _jobServer = hostContext.GetService<IJobServer>();
_resultsServer = hostContext.GetService<IResultsServer>();
} }
public void Start(Pipelines.AgentJobRequestMessage jobRequest) public void Start(Pipelines.AgentJobRequestMessage jobRequest, bool resultServiceOnly = false)
{ {
Trace.Entering(); Trace.Entering();
_resultsServiceOnly = resultServiceOnly;
var serviceEndPoint = jobRequest.Resources.Endpoints.Single(x => string.Equals(x.Name, WellKnownServiceEndpointNames.SystemVssConnection, StringComparison.OrdinalIgnoreCase)); var serviceEndPoint = jobRequest.Resources.Endpoints.Single(x => string.Equals(x.Name, WellKnownServiceEndpointNames.SystemVssConnection, StringComparison.OrdinalIgnoreCase));
_jobServer.InitializeWebsocketClient(serviceEndPoint); if (!resultServiceOnly)
{
_jobServer.InitializeWebsocketClient(serviceEndPoint);
}
// This code is usually wrapped by an instance of IExecutionContext which isn't available here.
jobRequest.Variables.TryGetValue("system.github.results_endpoint", out VariableValue resultsEndpointVariable);
var resultsReceiverEndpoint = resultsEndpointVariable?.Value;
if (serviceEndPoint?.Authorization != null &&
serviceEndPoint.Authorization.Parameters.TryGetValue("AccessToken", out var accessToken) &&
!string.IsNullOrEmpty(accessToken) &&
!string.IsNullOrEmpty(resultsReceiverEndpoint))
{
string liveConsoleFeedUrl = null;
Trace.Info("Initializing results client");
if (resultServiceOnly
&& serviceEndPoint.Data.TryGetValue("FeedStreamUrl", out var feedStreamUrl)
&& !string.IsNullOrEmpty(feedStreamUrl))
{
liveConsoleFeedUrl = feedStreamUrl;
}
_resultsServer.InitializeResultsClient(new Uri(resultsReceiverEndpoint), liveConsoleFeedUrl, accessToken);
_resultsClientInitiated = true;
}
if (_queueInProcess) if (_queueInProcess)
{ {
@@ -120,10 +157,13 @@ namespace GitHub.Runner.Common
Trace.Info("Start process file upload queue."); Trace.Info("Start process file upload queue.");
_fileUploadDequeueTask = ProcessFilesUploadQueueAsync(); _fileUploadDequeueTask = ProcessFilesUploadQueueAsync();
Trace.Info("Start results file upload queue.");
_resultsUploadDequeueTask = ProcessResultsUploadQueueAsync();
Trace.Info("Start process timeline update queue."); Trace.Info("Start process timeline update queue.");
_timelineUpdateDequeueTask = ProcessTimelinesUpdateQueueAsync(); _timelineUpdateDequeueTask = ProcessTimelinesUpdateQueueAsync();
_allDequeueTasks = new Task[] { _webConsoleLineDequeueTask, _fileUploadDequeueTask, _timelineUpdateDequeueTask }; _allDequeueTasks = new Task[] { _webConsoleLineDequeueTask, _fileUploadDequeueTask, _timelineUpdateDequeueTask, _resultsUploadDequeueTask };
_queueInProcess = true; _queueInProcess = true;
} }
@@ -154,6 +194,10 @@ namespace GitHub.Runner.Common
await ProcessFilesUploadQueueAsync(runOnce: true); await ProcessFilesUploadQueueAsync(runOnce: true);
Trace.Info("File upload queue drained."); Trace.Info("File upload queue drained.");
Trace.Verbose("Draining results upload queue.");
await ProcessResultsUploadQueueAsync(runOnce: true);
Trace.Info("Results upload queue drained.");
// ProcessTimelinesUpdateQueueAsync() will throw exception during shutdown // ProcessTimelinesUpdateQueueAsync() will throw exception during shutdown
// if there is any timeline records that failed to update contains output variabls. // if there is any timeline records that failed to update contains output variabls.
Trace.Verbose("Draining timeline update queue."); Trace.Verbose("Draining timeline update queue.");
@@ -163,6 +207,9 @@ namespace GitHub.Runner.Common
Trace.Info($"Disposing job server ..."); Trace.Info($"Disposing job server ...");
await _jobServer.DisposeAsync(); await _jobServer.DisposeAsync();
Trace.Info($"Disposing results server ...");
await _resultsServer.DisposeAsync();
Trace.Info("All queue process tasks have been stopped, and all queues are drained."); Trace.Info("All queue process tasks have been stopped, and all queues are drained.");
} }
@@ -204,6 +251,45 @@ namespace GitHub.Runner.Common
_fileUploadQueue.Enqueue(newFile); _fileUploadQueue.Enqueue(newFile);
} }
public void QueueResultsUpload(Guid timelineRecordId, string name, string path, string type, bool deleteSource, bool finalize, bool firstBlock, long totalLines)
{
if (!_resultsClientInitiated)
{
Trace.Verbose("Skipping results upload");
try
{
if (deleteSource)
{
File.Delete(path);
}
}
catch (Exception ex)
{
Trace.Info("Catch exception during delete skipped results upload file.");
Trace.Error(ex);
}
return;
}
// all parameter not null, file path exist.
var newFile = new ResultsUploadFileInfo()
{
Name = name,
Path = path,
Type = type,
PlanId = _planId.ToString(),
JobId = _jobTimelineRecordId.ToString(),
RecordId = timelineRecordId,
DeleteSource = deleteSource,
Finalize = finalize,
FirstBlock = firstBlock,
TotalLines = totalLines,
};
Trace.Verbose("Enqueue results file upload queue: file '{0}' attach to job {1} step {2}", newFile.Path, _jobTimelineRecordId, timelineRecordId);
_resultsFileUploadQueue.Enqueue(newFile);
}
public void QueueTimelineRecordUpdate(Guid timelineId, TimelineRecord timelineRecord) public void QueueTimelineRecordUpdate(Guid timelineId, TimelineRecord timelineRecord)
{ {
ArgUtil.NotEmpty(timelineId, nameof(timelineId)); ArgUtil.NotEmpty(timelineId, nameof(timelineId));
@@ -237,8 +323,8 @@ namespace GitHub.Runner.Common
} }
// Group consolelines by timeline record of each step // Group consolelines by timeline record of each step
Dictionary<Guid, List<TimelineRecordLogLine>> stepsConsoleLines = new Dictionary<Guid, List<TimelineRecordLogLine>>(); Dictionary<Guid, List<TimelineRecordLogLine>> stepsConsoleLines = new();
List<Guid> stepRecordIds = new List<Guid>(); // We need to keep lines in order List<Guid> stepRecordIds = new(); // We need to keep lines in order
int linesCounter = 0; int linesCounter = 0;
ConsoleLineInfo lineInfo; ConsoleLineInfo lineInfo;
while (_webConsoleLineQueue.TryDequeue(out lineInfo)) while (_webConsoleLineQueue.TryDequeue(out lineInfo))
@@ -264,7 +350,7 @@ namespace GitHub.Runner.Common
{ {
// Split consolelines into batch, each batch will container at most 100 lines. // Split consolelines into batch, each batch will container at most 100 lines.
int batchCounter = 0; int batchCounter = 0;
List<List<TimelineRecordLogLine>> batchedLines = new List<List<TimelineRecordLogLine>>(); List<List<TimelineRecordLogLine>> batchedLines = new();
foreach (var line in stepsConsoleLines[stepRecordId]) foreach (var line in stepsConsoleLines[stepRecordId])
{ {
var currentBatch = batchedLines.ElementAtOrDefault(batchCounter); var currentBatch = batchedLines.ElementAtOrDefault(batchCounter);
@@ -299,10 +385,17 @@ namespace GitHub.Runner.Common
{ {
try try
{ {
// Give at most 60s for each request. // Give at most 60s for each request.
using (var timeoutTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(60))) using (var timeoutTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(60)))
{ {
await _jobServer.AppendTimelineRecordFeedAsync(_scopeIdentifier, _hubName, _planId, _jobTimelineId, _jobTimelineRecordId, stepRecordId, batch.Select(logLine => logLine.Line).ToList(), batch[0].LineNumber, timeoutTokenSource.Token); if (_resultsServiceOnly)
{
await _resultsServer.AppendLiveConsoleFeedAsync(_scopeIdentifier, _hubName, _planId, _jobTimelineId, _jobTimelineRecordId, stepRecordId, batch.Select(logLine => logLine.Line).ToList(), batch[0].LineNumber, timeoutTokenSource.Token);
}
else
{
await _jobServer.AppendTimelineRecordFeedAsync(_scopeIdentifier, _hubName, _planId, _jobTimelineId, _jobTimelineRecordId, stepRecordId, batch.Select(logLine => logLine.Line).ToList(), batch[0].LineNumber, timeoutTokenSource.Token);
}
} }
if (_firstConsoleOutputs) if (_firstConsoleOutputs)
@@ -338,7 +431,7 @@ namespace GitHub.Runner.Common
{ {
while (!_jobCompletionSource.Task.IsCompleted || runOnce) while (!_jobCompletionSource.Task.IsCompleted || runOnce)
{ {
List<UploadFileInfo> filesToUpload = new List<UploadFileInfo>(); List<UploadFileInfo> filesToUpload = new();
UploadFileInfo dequeueFile; UploadFileInfo dequeueFile;
while (_fileUploadQueue.TryDequeue(out dequeueFile)) while (_fileUploadQueue.TryDequeue(out dequeueFile))
{ {
@@ -394,17 +487,105 @@ namespace GitHub.Runner.Common
} }
} }
private async Task ProcessResultsUploadQueueAsync(bool runOnce = false)
{
Trace.Info("Starting results-based upload queue...");
while (!_jobCompletionSource.Task.IsCompleted || runOnce)
{
List<ResultsUploadFileInfo> filesToUpload = new();
ResultsUploadFileInfo dequeueFile;
while (_resultsFileUploadQueue.TryDequeue(out dequeueFile))
{
filesToUpload.Add(dequeueFile);
// process at most 10 file uploads.
if (!runOnce && filesToUpload.Count > 10)
{
break;
}
}
if (filesToUpload.Count > 0)
{
if (runOnce)
{
Trace.Info($"Uploading {filesToUpload.Count} file(s) in one shot through results service.");
}
int errorCount = 0;
foreach (var file in filesToUpload)
{
try
{
if (String.Equals(file.Type, ChecksAttachmentType.StepSummary, StringComparison.OrdinalIgnoreCase))
{
await UploadSummaryFile(file);
}
else if (String.Equals(file.Type, CoreAttachmentType.ResultsLog, StringComparison.OrdinalIgnoreCase))
{
if (file.RecordId != _jobTimelineRecordId)
{
Trace.Info($"Got a step log file to send to results service.");
await UploadResultsStepLogFile(file);
}
else if (file.RecordId == _jobTimelineRecordId)
{
Trace.Info($"Got a job log file to send to results service.");
await UploadResultsJobLogFile(file);
}
}
}
catch (Exception ex)
{
Trace.Info("Catch exception during file upload to results, keep going since the process is best effort.");
Trace.Error(ex);
errorCount++;
// If we hit any exceptions uploading to Results, let's skip any additional uploads to Results
_resultsClientInitiated = false;
SendResultsTelemetry(ex);
}
}
Trace.Info("Tried to upload {0} file(s) to results, success rate: {1}/{0}.", filesToUpload.Count, filesToUpload.Count - errorCount);
}
if (runOnce)
{
break;
}
else
{
await Task.Delay(_delayForResultsUploadDequeue);
}
}
}
private void SendResultsTelemetry(Exception ex)
{
var issue = new Issue() { Type = IssueType.Warning, Message = $"Caught exception with results. {ex.Message}" };
issue.Data[Constants.Runner.InternalTelemetryIssueDataKey] = Constants.Runner.ResultsUploadFailure;
var telemetryRecord = new TimelineRecord()
{
Id = Constants.Runner.TelemetryRecordId,
};
telemetryRecord.Issues.Add(issue);
QueueTimelineRecordUpdate(_jobTimelineId, telemetryRecord);
}
private async Task ProcessTimelinesUpdateQueueAsync(bool runOnce = false) private async Task ProcessTimelinesUpdateQueueAsync(bool runOnce = false)
{ {
while (!_jobCompletionSource.Task.IsCompleted || runOnce) while (!_jobCompletionSource.Task.IsCompleted || runOnce)
{ {
List<PendingTimelineRecord> pendingUpdates = new List<PendingTimelineRecord>(); List<PendingTimelineRecord> pendingUpdates = new();
foreach (var timeline in _allTimelines) foreach (var timeline in _allTimelines)
{ {
ConcurrentQueue<TimelineRecord> recordQueue; ConcurrentQueue<TimelineRecord> recordQueue;
if (_timelineUpdateQueue.TryGetValue(timeline, out recordQueue)) if (_timelineUpdateQueue.TryGetValue(timeline, out recordQueue))
{ {
List<TimelineRecord> records = new List<TimelineRecord>(); List<TimelineRecord> records = new();
TimelineRecord record; TimelineRecord record;
while (recordQueue.TryDequeue(out record)) while (recordQueue.TryDequeue(out record))
{ {
@@ -426,7 +607,7 @@ namespace GitHub.Runner.Common
// we need track whether we have new sub-timeline been created on the last run. // we need track whether we have new sub-timeline been created on the last run.
// if so, we need continue update timeline record even we on the last run. // if so, we need continue update timeline record even we on the last run.
bool pendingSubtimelineUpdate = false; bool pendingSubtimelineUpdate = false;
List<Exception> mainTimelineRecordsUpdateErrors = new List<Exception>(); List<Exception> mainTimelineRecordsUpdateErrors = new();
if (pendingUpdates.Count > 0) if (pendingUpdates.Count > 0)
{ {
foreach (var update in pendingUpdates) foreach (var update in pendingUpdates)
@@ -441,7 +622,7 @@ namespace GitHub.Runner.Common
foreach (var detailTimeline in update.PendingRecords.Where(r => r.Details != null)) foreach (var detailTimeline in update.PendingRecords.Where(r => r.Details != null))
{ {
if (!_allTimelines.Contains(detailTimeline.Details.Id)) if (!_resultsServiceOnly && !_allTimelines.Contains(detailTimeline.Details.Id))
{ {
try try
{ {
@@ -463,7 +644,27 @@ namespace GitHub.Runner.Common
try try
{ {
await _jobServer.UpdateTimelineRecordsAsync(_scopeIdentifier, _hubName, _planId, update.TimelineId, update.PendingRecords, default(CancellationToken)); if (!_resultsServiceOnly)
{
await _jobServer.UpdateTimelineRecordsAsync(_scopeIdentifier, _hubName, _planId, update.TimelineId, update.PendingRecords, default(CancellationToken));
}
try
{
if (_resultsClientInitiated)
{
await _resultsServer.UpdateResultsWorkflowStepsAsync(_scopeIdentifier, _hubName, _planId, update.TimelineId, update.PendingRecords, default(CancellationToken));
}
}
catch (Exception e)
{
Trace.Info("Catch exception during update steps, skip update Results.");
Trace.Error(e);
_resultsClientInitiated = false;
SendResultsTelemetry(e);
}
if (_bufferedRetryRecords.Remove(update.TimelineId)) if (_bufferedRetryRecords.Remove(update.TimelineId))
{ {
Trace.Verbose("Cleanup buffered timeline record for timeline: {0}.", update.TimelineId); Trace.Verbose("Cleanup buffered timeline record for timeline: {0}.", update.TimelineId);
@@ -529,7 +730,7 @@ namespace GitHub.Runner.Common
return timelineRecords; return timelineRecords;
} }
Dictionary<Guid, TimelineRecord> dict = new Dictionary<Guid, TimelineRecord>(); Dictionary<Guid, TimelineRecord> dict = new();
foreach (TimelineRecord rec in timelineRecords) foreach (TimelineRecord rec in timelineRecords)
{ {
if (rec == null) if (rec == null)
@@ -555,17 +756,17 @@ namespace GitHub.Runner.Common
timelineRecord.State = rec.State ?? timelineRecord.State; timelineRecord.State = rec.State ?? timelineRecord.State;
timelineRecord.WorkerName = rec.WorkerName ?? timelineRecord.WorkerName; timelineRecord.WorkerName = rec.WorkerName ?? timelineRecord.WorkerName;
if (rec.ErrorCount != null && rec.ErrorCount > 0) if (rec.ErrorCount > 0)
{ {
timelineRecord.ErrorCount = rec.ErrorCount; timelineRecord.ErrorCount = rec.ErrorCount;
} }
if (rec.WarningCount != null && rec.WarningCount > 0) if (rec.WarningCount > 0)
{ {
timelineRecord.WarningCount = rec.WarningCount; timelineRecord.WarningCount = rec.WarningCount;
} }
if (rec.NoticeCount != null && rec.NoticeCount > 0) if (rec.NoticeCount > 0)
{ {
timelineRecord.NoticeCount = rec.NoticeCount; timelineRecord.NoticeCount = rec.NoticeCount;
} }
@@ -596,7 +797,7 @@ namespace GitHub.Runner.Common
foreach (var record in mergedRecords) foreach (var record in mergedRecords)
{ {
Trace.Verbose($" Record: t={record.RecordType}, n={record.Name}, s={record.State}, st={record.StartTime}, {record.PercentComplete}%, ft={record.FinishTime}, r={record.Result}: {record.CurrentOperation}"); Trace.Verbose($" Record: t={record.RecordType}, n={record.Name}, s={record.State}, st={record.StartTime}, {record.PercentComplete}%, ft={record.FinishTime}, r={record.Result}: {record.CurrentOperation}");
if (record.Issues != null && record.Issues.Count > 0) if (record.Issues != null)
{ {
foreach (var issue in record.Issues) foreach (var issue in record.Issues)
{ {
@@ -606,7 +807,7 @@ namespace GitHub.Runner.Common
} }
} }
if (record.Variables != null && record.Variables.Count > 0) if (record.Variables != null)
{ {
foreach (var variable in record.Variables) foreach (var variable in record.Variables)
{ {
@@ -623,27 +824,30 @@ namespace GitHub.Runner.Common
bool uploadSucceed = false;
try
{
if (!_resultsServiceOnly)
{
if (String.Equals(file.Type, CoreAttachmentType.Log, StringComparison.OrdinalIgnoreCase))
{
// Create the log
var taskLog = await _jobServer.CreateLogAsync(_scopeIdentifier, _hubName, _planId, new TaskLog(String.Format(@"logs\{0:D}", file.TimelineRecordId)), default(CancellationToken));
// Upload the contents
using (FileStream fs = File.Open(file.Path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
var logUploaded = await _jobServer.AppendLogContentAsync(_scopeIdentifier, _hubName, _planId, taskLog.Id, fs, default(CancellationToken));
}
// Create a new record and only set the Log field
var attachmentUpdataRecord = new TimelineRecord() { Id = file.TimelineRecordId, Log = taskLog };
QueueTimelineRecordUpdate(file.TimelineId, attachmentUpdataRecord);
}
else
{
// Create attachment
using (FileStream fs = File.Open(file.Path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
var result = await _jobServer.CreateAttachmentAsync(_scopeIdentifier, _hubName, _planId, file.TimelineId, file.TimelineRecordId, file.Type, file.Name, fs, default(CancellationToken));
}
}
}
@@ -665,6 +869,69 @@ namespace GitHub.Runner.Common
}
}
}
private async Task UploadSummaryFile(ResultsUploadFileInfo file)
{
Trace.Info($"Starting to upload summary file to results service {file.Name}, {file.Path}");
ResultsFileUploadHandler summaryHandler = async (file) =>
{
await _resultsServer.CreateResultsStepSummaryAsync(file.PlanId, file.JobId, file.RecordId, file.Path, CancellationToken.None);
};
await UploadResultsFile(file, summaryHandler);
}
private async Task UploadResultsStepLogFile(ResultsUploadFileInfo file)
{
Trace.Info($"Starting upload of step log file to results service {file.Name}, {file.Path}");
ResultsFileUploadHandler stepLogHandler = async (file) =>
{
await _resultsServer.CreateResultsStepLogAsync(file.PlanId, file.JobId, file.RecordId, file.Path, file.Finalize, file.FirstBlock, file.TotalLines, CancellationToken.None);
};
await UploadResultsFile(file, stepLogHandler);
}
private async Task UploadResultsJobLogFile(ResultsUploadFileInfo file)
{
Trace.Info($"Starting upload of job log file to results service {file.Name}, {file.Path}");
ResultsFileUploadHandler jobLogHandler = async (file) =>
{
await _resultsServer.CreateResultsJobLogAsync(file.PlanId, file.JobId, file.Path, file.Finalize, file.FirstBlock, file.TotalLines, CancellationToken.None);
};
await UploadResultsFile(file, jobLogHandler);
}
private async Task UploadResultsFile(ResultsUploadFileInfo file, ResultsFileUploadHandler uploadHandler)
{
if (!_resultsClientInitiated)
{
return;
}
bool uploadSucceed = false;
try
{
await uploadHandler(file);
uploadSucceed = true;
}
finally
{
if (uploadSucceed && file.DeleteSource)
{
try
{
File.Delete(file.Path);
}
catch (Exception ex)
{
Trace.Info("Exception encountered during deletion of a temporary file that was already successfully uploaded to results.");
Trace.Error(ex);
}
}
}
}
}
internal class PendingTimelineRecord
@@ -683,6 +950,19 @@ namespace GitHub.Runner.Common
public bool DeleteSource { get; set; }
}
internal class ResultsUploadFileInfo
{
public string Name { get; set; }
public string Type { get; set; }
public string Path { get; set; }
public string PlanId { get; set; }
public string JobId { get; set; }
public Guid RecordId { get; set; }
public bool DeleteSource { get; set; }
public bool Finalize { get; set; }
public bool FirstBlock { get; set; }
public long TotalLines { get; set; }
}
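For orientation, this is roughly the metadata one queued results upload carries; the concrete values below are invented for illustration and the Type string is a placeholder, not taken from the change.
// Hypothetical example only: a step-summary upload as the queue would see it.
var file = new ResultsUploadFileInfo
{
    Name = "step-summary",
    Type = "Results.Step.Summary",   // placeholder type string, not from the diff
    Path = "/tmp/step_summary.md",   // invented path
    PlanId = planId.ToString(),      // planId, jobId, stepRecordId are assumed to be in scope
    JobId = jobId.ToString(),
    RecordId = stepRecordId,
    DeleteSource = true,             // temporary file is deleted after a successful upload
    Finalize = false,
    FirstBlock = true,
    TotalLines = 42
};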
internal class ConsoleLineInfo
{

View File

@@ -0,0 +1,42 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.WebApi;
using GitHub.Services.Launch.Client;
using GitHub.Services.WebApi;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(LaunchServer))]
public interface ILaunchServer : IRunnerService
{
void InitializeLaunchClient(Uri uri, string token);
Task<ActionDownloadInfoCollection> ResolveActionsDownloadInfoAsync(Guid planId, Guid jobId, ActionReferenceList actionReferenceList, CancellationToken cancellationToken);
}
public sealed class LaunchServer : RunnerService, ILaunchServer
{
private LaunchHttpClient _launchClient;
public void InitializeLaunchClient(Uri uri, string token)
{
var httpMessageHandler = HostContext.CreateHttpClientHandler();
this._launchClient = new LaunchHttpClient(uri, httpMessageHandler, token, disposeHandler: true);
}
public Task<ActionDownloadInfoCollection> ResolveActionsDownloadInfoAsync(Guid planId, Guid jobId, ActionReferenceList actionReferenceList,
CancellationToken cancellationToken)
{
if (_launchClient != null)
{
return _launchClient.GetResolveActionsDownloadInfoAsync(planId, jobId, actionReferenceList,
cancellationToken: cancellationToken);
}
throw new InvalidOperationException("Launch client is not initialized.");
}
}
}
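A minimal usage sketch of the new launch client, assuming the endpoint, token, IDs and action reference list are already available to the caller:
// Hypothetical call site; launchEndpoint, accessToken, planId, jobId, actionReferenceList and cancellationToken are assumed inputs.
var launchServer = HostContext.GetService<ILaunchServer>();
launchServer.InitializeLaunchClient(new Uri(launchEndpoint), accessToken);
var downloadInfos = await launchServer.ResolveActionsDownloadInfoAsync(planId, jobId, actionReferenceList, cancellationToken);
// Calling ResolveActionsDownloadInfoAsync before InitializeLaunchClient throws InvalidOperationException.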

View File

@@ -1,4 +1,3 @@
using GitHub.Runner.Common.Util;
using System;
using System.IO;
@@ -22,6 +21,12 @@ namespace GitHub.Runner.Common
// 8 MB
public const int PageSize = 8 * 1024 * 1024;
// For Results
public static string BlocksFolder = "blocks";
// 2 MB
public const int BlockSize = 2 * 1024 * 1024;
private Guid _timelineId;
private Guid _timelineRecordId;
private FileStream _pageData;
@@ -33,6 +38,13 @@ namespace GitHub.Runner.Common
private string _pagesFolder;
private IJobServerQueue _jobServerQueue;
private string _resultsDataFileName;
private FileStream _resultsBlockData;
private StreamWriter _resultsBlockWriter;
private string _resultsBlockFolder;
private int _blockByteCount;
private int _blockCount;
public long TotalLines => _totalLines;
public override void Initialize(IHostContext hostContext)
@@ -40,8 +52,10 @@ namespace GitHub.Runner.Common
base.Initialize(hostContext);
_totalLines = 0;
_pagesFolder = Path.Combine(hostContext.GetDirectory(WellKnownDirectory.Diag), PagingFolder);
Directory.CreateDirectory(_pagesFolder);
_resultsBlockFolder = Path.Combine(hostContext.GetDirectory(WellKnownDirectory.Diag), BlocksFolder);
Directory.CreateDirectory(_resultsBlockFolder);
_jobServerQueue = HostContext.GetService<IJobServerQueue>();
}
public void Setup(Guid timelineId, Guid timelineRecordId)
@@ -61,11 +75,17 @@ namespace GitHub.Runner.Common
// lazy creation on write
if (_pageWriter == null)
{
NewPage();
}
if (_resultsBlockWriter == null)
{
NewBlock();
} }
string line = $"{DateTime.UtcNow.ToString("O")} {message}";
_pageWriter.WriteLine(line);
_resultsBlockWriter.WriteLine(line);
_totalLines++;
if (line.IndexOf('\n') != -1)
@@ -79,21 +99,25 @@ namespace GitHub.Runner.Common
}
}
var bytes = System.Text.Encoding.UTF8.GetByteCount(line);
_byteCount += bytes;
_blockByteCount += bytes;
if (_byteCount >= PageSize)
{
NewPage();
}
if (_blockByteCount >= BlockSize)
{
NewBlock();
}
}
public void End()
{
EndPage();
EndBlock(true);
private void Create()
{
NewPage();
} }
private void NewPage()
@@ -118,5 +142,27 @@ namespace GitHub.Runner.Common
_jobServerQueue.QueueFileUpload(_timelineId, _timelineRecordId, "DistributedTask.Core.Log", "CustomToolLog", _dataFileName, true);
}
}
private void NewBlock()
{
EndBlock(false);
_blockByteCount = 0;
_resultsDataFileName = Path.Combine(_resultsBlockFolder, $"{_timelineId}_{_timelineRecordId}.{++_blockCount}");
_resultsBlockData = new FileStream(_resultsDataFileName, FileMode.CreateNew, FileAccess.ReadWrite, FileShare.ReadWrite);
_resultsBlockWriter = new StreamWriter(_resultsBlockData, System.Text.Encoding.UTF8);
}
private void EndBlock(bool finalize)
{
if (_resultsBlockWriter != null)
{
_resultsBlockWriter.Flush();
_resultsBlockData.Flush();
_resultsBlockWriter.Dispose();
_resultsBlockWriter = null;
_resultsBlockData = null;
_jobServerQueue.QueueResultsUpload(_timelineRecordId, "ResultsLog", _resultsDataFileName, "Results.Core.Log", deleteSource: true, finalize, firstBlock: _resultsDataFileName.EndsWith(".1"), totalLines: _totalLines);
}
}
}
}
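A standalone sketch of the arithmetic behind the new 2 MB blocks: a log's byte count maps to a series of block files named <timelineId>_<timelineRecordId>.<n>, the ".1" file is flagged as the first block, and the block written by End() is flagged as final.
// Standalone illustration; only mirrors the 2 MB rollover, not the runner's actual class.
const int BlockSize = 2 * 1024 * 1024;                                           // matches PagingLogger.BlockSize
long totalBytes = 5 * 1024 * 1024;                                               // e.g. a 5 MB step log
long blocks = (totalBytes / BlockSize) + (totalBytes % BlockSize > 0 ? 1 : 0);   // 2 full blocks + 1 partial = 3
Console.WriteLine($"{blocks} block files: <timelineId>_<recordId>.1 ... .{blocks}");
// The ".1" file is uploaded with firstBlock: true; the last block is uploaded with finalize: true.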

View File

@@ -76,7 +76,7 @@ namespace GitHub.Runner.Common
public async Task<WorkerMessage> ReceiveAsync(CancellationToken cancellationToken)
{
WorkerMessage result = new(MessageType.NotInitialized, string.Empty);
result.MessageType = (MessageType)await _readStream.ReadInt32Async(cancellationToken);
result.Body = await _readStream.ReadStringAsync(cancellationToken);
Trace.Info($"Receiving message of length {result.Body.Length}, with hash '{IOUtil.GetSha256Hash(result.Body)}'");

View File

@@ -291,7 +291,7 @@ namespace GitHub.Runner.Common
public static string GetEnvironmentVariable(this Process process, IHostContext hostContext, string variable)
{
var trace = hostContext.GetTrace(nameof(LinuxProcessExtensions));
Dictionary<string, string> env = new();
if (Directory.Exists("/proc"))
{
@@ -322,8 +322,8 @@ namespace GitHub.Runner.Common
// It doesn't escape '=' or ' ', so we can't parse the output into a dictionary of all envs.
// So we only look for the env you request, in the format of variable=value. (it won't work if your variable contains = or space)
trace.Info($"Read env from output of `ps e -p {process.Id} -o command`");
List<string> psOut = new();
object outputLock = new();
using (var p = hostContext.CreateService<IProcessInvoker>())
{
p.OutputDataReceived += delegate (object sender, ProcessDataReceivedEventArgs stdout)

View File

@@ -1,4 +1,4 @@
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using System;
using System.Collections.Generic;

View File

@@ -0,0 +1,262 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http.Headers;
using System.Net.WebSockets;
using System.Security;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Services.Results.Client;
using GitHub.Services.WebApi.Utilities.Internal;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(ResultServer))]
public interface IResultsServer : IRunnerService, IAsyncDisposable
{
void InitializeResultsClient(Uri uri, string liveConsoleFeedUrl, string token);
Task<bool> AppendLiveConsoleFeedAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, Guid stepId, IList<string> lines, long? startLine, CancellationToken cancellationToken);
// logging and console
Task CreateResultsStepSummaryAsync(string planId, string jobId, Guid stepId, string file,
CancellationToken cancellationToken);
Task CreateResultsStepLogAsync(string planId, string jobId, Guid stepId, string file, bool finalize,
bool firstBlock, long lineCount, CancellationToken cancellationToken);
Task CreateResultsJobLogAsync(string planId, string jobId, string file, bool finalize, bool firstBlock,
long lineCount, CancellationToken cancellationToken);
Task UpdateResultsWorkflowStepsAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId,
IEnumerable<TimelineRecord> records, CancellationToken cancellationToken);
}
public sealed class ResultServer : RunnerService, IResultsServer
{
private ResultsHttpClient _resultsClient;
private ClientWebSocket _websocketClient;
private DateTime? _lastConnectionFailure;
private static readonly TimeSpan MinDelayForWebsocketReconnect = TimeSpan.FromMilliseconds(100);
private static readonly TimeSpan MaxDelayForWebsocketReconnect = TimeSpan.FromMilliseconds(500);
private Task _websocketConnectTask;
private String _liveConsoleFeedUrl;
private string _token;
public void InitializeResultsClient(Uri uri, string liveConsoleFeedUrl, string token)
{
var httpMessageHandler = HostContext.CreateHttpClientHandler();
this._resultsClient = new ResultsHttpClient(uri, httpMessageHandler, token, disposeHandler: true);
_token = token;
if (!string.IsNullOrEmpty(liveConsoleFeedUrl))
{
_liveConsoleFeedUrl = liveConsoleFeedUrl;
InitializeWebsocketClient(liveConsoleFeedUrl, token, TimeSpan.Zero, retryConnection: true);
}
}
public Task CreateResultsStepSummaryAsync(string planId, string jobId, Guid stepId, string file,
CancellationToken cancellationToken)
{
if (_resultsClient != null)
{
return _resultsClient.UploadStepSummaryAsync(planId, jobId, stepId, file,
cancellationToken: cancellationToken);
}
throw new InvalidOperationException("Results client is not initialized.");
}
public Task CreateResultsStepLogAsync(string planId, string jobId, Guid stepId, string file, bool finalize,
bool firstBlock, long lineCount, CancellationToken cancellationToken)
{
if (_resultsClient != null)
{
return _resultsClient.UploadResultsStepLogAsync(planId, jobId, stepId, file, finalize, firstBlock,
lineCount, cancellationToken: cancellationToken);
}
throw new InvalidOperationException("Results client is not initialized.");
}
public Task CreateResultsJobLogAsync(string planId, string jobId, string file, bool finalize, bool firstBlock,
long lineCount, CancellationToken cancellationToken)
{
if (_resultsClient != null)
{
return _resultsClient.UploadResultsJobLogAsync(planId, jobId, file, finalize, firstBlock, lineCount,
cancellationToken: cancellationToken);
}
throw new InvalidOperationException("Results client is not initialized.");
}
public Task UpdateResultsWorkflowStepsAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId,
IEnumerable<TimelineRecord> records, CancellationToken cancellationToken)
{
if (_resultsClient != null)
{
try
{
var timelineRecords = records.ToList();
return _resultsClient.UpdateWorkflowStepsAsync(planId, new List<TimelineRecord>(timelineRecords),
cancellationToken: cancellationToken);
}
catch (Exception ex)
{
// Log error, but continue as this call is best-effort
Trace.Info($"Failed to update steps status due to {ex.GetType().Name}");
Trace.Error(ex);
}
}
throw new InvalidOperationException("Results client is not initialized.");
}
public ValueTask DisposeAsync()
{
CloseWebSocket(WebSocketCloseStatus.NormalClosure, CancellationToken.None);
GC.SuppressFinalize(this);
return ValueTask.CompletedTask;
}
private void InitializeWebsocketClient(string liveConsoleFeedUrl, string accessToken, TimeSpan delay, bool retryConnection = false)
{
if (string.IsNullOrEmpty(accessToken))
{
Trace.Info($"No access token from server");
return;
}
if (string.IsNullOrEmpty(liveConsoleFeedUrl))
{
Trace.Info($"No live console feed url from server");
return;
}
Trace.Info($"Creating websocket client ..." + liveConsoleFeedUrl);
this._websocketClient = new ClientWebSocket();
this._websocketClient.Options.SetRequestHeader("Authorization", $"Bearer {accessToken}");
var userAgentValues = new List<ProductInfoHeaderValue>();
userAgentValues.AddRange(UserAgentUtility.GetDefaultRestUserAgent());
userAgentValues.AddRange(HostContext.UserAgents);
this._websocketClient.Options.SetRequestHeader("User-Agent", string.Join(" ", userAgentValues.Select(x => x.ToString())));
// during initialization, retry up to 3 times to set up the connection
this._websocketConnectTask = ConnectWebSocketClient(liveConsoleFeedUrl, delay, retryConnection);
}
private async Task ConnectWebSocketClient(string feedStreamUrl, TimeSpan delay, bool retryConnection = false)
{
bool connected = false;
int retries = 0;
do
{
try
{
Trace.Info($"Attempting to start websocket client with delay {delay}.");
await Task.Delay(delay);
using var connectTimeoutTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(30));
await this._websocketClient.ConnectAsync(new Uri(feedStreamUrl), connectTimeoutTokenSource.Token);
Trace.Info($"Successfully started websocket client.");
connected = true;
}
catch (Exception ex)
{
Trace.Info("Exception caught during websocket client connect, retry connection.");
Trace.Error(ex);
retries++;
this._websocketClient = null;
_lastConnectionFailure = DateTime.Now;
}
} while (retryConnection && !connected && retries < 3);
}
public async Task<bool> AppendLiveConsoleFeedAsync(Guid scopeIdentifier, string hubName, Guid planId, Guid timelineId, Guid timelineRecordId, Guid stepId, IList<string> lines, long? startLine, CancellationToken cancellationToken)
{
if (_websocketConnectTask != null)
{
await _websocketConnectTask;
}
bool delivered = false;
int retries = 0;
// "_websocketClient != null" implies either: We have a successful connection OR we have to attempt sending again and then reconnect
// ...in other words, if websocket client is null, we will skip sending to websocket
if (_websocketClient != null)
{
var linesWrapper = startLine.HasValue
? new TimelineRecordFeedLinesWrapper(stepId, lines, startLine.Value)
: new TimelineRecordFeedLinesWrapper(stepId, lines);
var jsonData = StringUtil.ConvertToJson(linesWrapper);
var jsonDataBytes = Encoding.UTF8.GetBytes(jsonData);
// break the message into chunks of 1024 bytes
for (var i = 0; i < jsonDataBytes.Length; i += 1 * 1024)
{
var lastChunk = i + (1 * 1024) >= jsonDataBytes.Length;
var chunk = new ArraySegment<byte>(jsonDataBytes, i, Math.Min(1 * 1024, jsonDataBytes.Length - i));
delivered = false;
while (!delivered && retries < 3)
{
try
{
if (_websocketClient != null)
{
await _websocketClient.SendAsync(chunk, WebSocketMessageType.Text, endOfMessage: lastChunk, cancellationToken);
delivered = true;
}
}
catch (Exception ex)
{
var delay = BackoffTimerHelper.GetRandomBackoff(MinDelayForWebsocketReconnect, MaxDelayForWebsocketReconnect);
Trace.Info($"Websocket is not open, let's attempt to connect back again with random backoff {delay} ms.");
Trace.Verbose(ex.ToString());
retries++;
InitializeWebsocketClient(_liveConsoleFeedUrl, _token, delay);
}
}
}
}
if (!delivered)
{
// Giving up for now, so next invocation of this method won't attempt to reconnect
_websocketClient = null;
// however if 10 minutes have already passed, let's try reestablish connection again
if (_lastConnectionFailure.HasValue && DateTime.Now > _lastConnectionFailure.Value.AddMinutes(10))
{
// Some minutes passed since we retried last time, try connection again
InitializeWebsocketClient(_liveConsoleFeedUrl, _token, TimeSpan.Zero);
}
}
return delivered;
}
private void CloseWebSocket(WebSocketCloseStatus closeStatus, CancellationToken cancellationToken)
{
try
{
_websocketClient?.CloseOutputAsync(closeStatus, "Closing websocket", cancellationToken);
}
catch (Exception websocketEx)
{
// In some cases this might be okay since the websocket might not be open yet, so just close and don't trace exceptions
Trace.Info($"Failed to close websocket gracefully {websocketEx.GetType().Name}");
}
}
}
}
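The live console feed splits each JSON payload into 1 KB websocket frames and marks only the last frame as end-of-message; a standalone sketch of that chunking with a made-up payload size:
// Standalone sketch of the 1 KB chunking used above; the 2,500-byte payload is invented.
var jsonDataBytes = System.Text.Encoding.UTF8.GetBytes(new string('x', 2500));
const int chunkSize = 1024;
for (var i = 0; i < jsonDataBytes.Length; i += chunkSize)
{
    var lastChunk = i + chunkSize >= jsonDataBytes.Length;
    var chunk = new ArraySegment<byte>(jsonDataBytes, i, Math.Min(chunkSize, jsonDataBytes.Length - i));
    Console.WriteLine($"send {chunk.Count} bytes, endOfMessage: {lastChunk}");
}
// Prints three frames of 1024, 1024 and 452 bytes; only the last one has endOfMessage set.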

View File

@@ -1,14 +1,14 @@
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Actions.RunService.WebApi;
using GitHub.DistributedTask.Pipelines;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using Sdk.RSWebApi.Contracts;
using Sdk.WebApi.WebApi.RawClient;
namespace GitHub.Runner.Common
{
@@ -18,47 +18,35 @@ namespace GitHub.Runner.Common
Task ConnectAsync(Uri serverUrl, VssCredentials credentials);
Task<AgentJobRequestMessage> GetJobMessageAsync(string id, CancellationToken token);
Task CompleteJobAsync(
Guid planId,
Guid jobId,
TaskResult result,
Dictionary<String, VariableValue> outputs,
IList<StepResult> stepResults,
IList<Annotation> jobAnnotations,
CancellationToken token);
Task<RenewJobResponse> RenewJobAsync(Guid planId, Guid jobId, CancellationToken token);
}
public sealed class RunServer : RunnerService, IRunServer
{
private bool _hasConnection;
private Uri requestUri;
private RawConnection _connection;
private RunServiceHttpClient _runServiceHttpClient;
public async Task ConnectAsync(Uri serverUri, VssCredentials credentials)
{
requestUri = serverUri;
_taskAgentClient = _connection.GetClient<TaskAgentHttpClient>();
_connection = VssUtil.CreateRawConnection(serverUri, credentials);
_runServiceHttpClient = await _connection.GetClientAsync<RunServiceHttpClient>();
_hasConnection = true;
}
private async Task<VssConnection> EstablishVssConnection(Uri serverUrl, VssCredentials credentials, TimeSpan timeout)
{
Trace.Info($"EstablishVssConnection");
Trace.Info($"Establish connection with {timeout.TotalSeconds} seconds timeout.");
int attemptCount = 5;
while (attemptCount-- > 0)
{
var connection = VssUtil.CreateConnection(serverUrl, credentials, timeout: timeout);
try
{
await connection.ConnectAsync();
return connection;
}
catch (Exception ex) when (attemptCount > 0)
{
Trace.Info($"Catch exception during connect. {attemptCount} attempt left.");
Trace.Error(ex);
await HostContext.Delay(TimeSpan.FromMilliseconds(100), CancellationToken.None);
}
}
// should never reach here.
throw new InvalidOperationException(nameof(EstablishVssConnection));
}
private void CheckConnection()
{
if (!_hasConnection)
@@ -70,37 +58,30 @@ namespace GitHub.Runner.Common
public Task<AgentJobRequestMessage> GetJobMessageAsync(string id, CancellationToken cancellationToken)
{
CheckConnection();
return RetryRequest<AgentJobRequestMessage>(
async () => await _runServiceHttpClient.GetJobMessageAsync(requestUri, id, cancellationToken), cancellationToken,
shouldRetry: ex => ex is not TaskOrchestrationJobAlreadyAcquiredException);
}
public Task CompleteJobAsync(
Guid planId,
Guid jobId,
TaskResult result,
Dictionary<String, VariableValue> outputs,
IList<StepResult> stepResults,
IList<Annotation> jobAnnotations,
CancellationToken cancellationToken)
{
CheckConnection();
return RetryRequest(
async () => await _runServiceHttpClient.CompleteJobAsync(requestUri, planId, jobId, result, outputs, stepResults, jobAnnotations, cancellationToken), cancellationToken);
}
public Task<RenewJobResponse> RenewJobAsync(Guid planId, Guid jobId, CancellationToken cancellationToken)
{
CheckConnection();
return RetryRequest<RenewJobResponse>(
async () => await _runServiceHttpClient.RenewJobAsync(requestUri, planId, jobId, cancellationToken), cancellationToken);
}
}
}
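For context, the shared RunnerService helper retries up to five times with a random 5-15 second backoff; the sketch below mirrors the call above and shows how the predicate stops retries once another runner has taken the job. The variables are those of the class, so this is illustrative rather than standalone.
// Retry GetJobMessageAsync, but bail out immediately if the job was already acquired elsewhere.
var message = await RetryRequest<AgentJobRequestMessage>(
    async () => await _runServiceHttpClient.GetJobMessageAsync(requestUri, id, cancellationToken),
    cancellationToken,
    shouldRetry: ex => ex is not TaskOrchestrationJobAlreadyAcquiredException);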

View File

@@ -3,7 +3,7 @@
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<OutputType>Library</OutputType>
<RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64;osx-arm64;win-arm64</RuntimeIdentifiers>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<NoWarn>NU1701;NU1603</NoWarn>
<Version>$(Version)</Version>

View File

@@ -0,0 +1,237 @@
using GitHub.DistributedTask.WebApi;
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Services.WebApi;
using GitHub.Services.Common;
using GitHub.Runner.Sdk;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Linq;
namespace GitHub.Runner.Common
{
[ServiceLocator(Default = typeof(RunnerDotcomServer))]
public interface IRunnerDotcomServer : IRunnerService
{
Task<List<TaskAgent>> GetRunnersAsync(int runnerGroupId, string githubUrl, string githubToken, string agentName);
Task<DistributedTask.WebApi.Runner> AddRunnerAsync(int runnerGroupId, TaskAgent agent, string githubUrl, string githubToken, string publicKey);
Task<List<TaskAgentPool>> GetRunnerGroupsAsync(string githubUrl, string githubToken);
string GetGitHubRequestId(HttpResponseHeaders headers);
}
public enum RequestType
{
Get,
Post,
Patch,
Delete
}
public class RunnerDotcomServer : RunnerService, IRunnerDotcomServer
{
private ITerminal _term;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
_term = hostContext.GetService<ITerminal>();
}
public async Task<List<TaskAgent>> GetRunnersAsync(int runnerGroupId, string githubUrl, string githubToken, string agentName = null)
{
var githubApiUrl = "";
var gitHubUrlBuilder = new UriBuilder(githubUrl);
var path = gitHubUrlBuilder.Path.Split('/', '\\', StringSplitOptions.RemoveEmptyEntries);
if (path.Length == 1)
{
// org runner
if (UrlUtil.IsHostedServer(gitHubUrlBuilder))
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://api.{gitHubUrlBuilder.Host}/orgs/{path[0]}/actions/runner-groups/{runnerGroupId}/runners";
}
else
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://{gitHubUrlBuilder.Host}/api/v3/orgs/{path[0]}/actions/runner-groups/{runnerGroupId}/runners";
}
}
else if (path.Length == 2)
{
// repo or enterprise runner.
if (!string.Equals(path[0], "enterprises", StringComparison.OrdinalIgnoreCase))
{
return null;
}
if (UrlUtil.IsHostedServer(gitHubUrlBuilder))
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://api.{gitHubUrlBuilder.Host}/{path[0]}/{path[1]}/actions/runner-groups/{runnerGroupId}/runners";
}
else
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://{gitHubUrlBuilder.Host}/api/v3/{path[0]}/{path[1]}/actions/runner-groups/{runnerGroupId}/runners";
}
}
else
{
throw new ArgumentException($"'{githubUrl}' should point to an org or enterprise.");
}
var runnersList = await RetryRequest<ListRunnersResponse>(githubApiUrl, githubToken, RequestType.Get, 3, "Failed to get agents pools");
var agents = runnersList.ToTaskAgents();
if (string.IsNullOrEmpty(agentName))
{
return agents;
}
return agents.Where(x => string.Equals(x.Name, agentName, StringComparison.OrdinalIgnoreCase)).ToList();
}
public async Task<List<TaskAgentPool>> GetRunnerGroupsAsync(string githubUrl, string githubToken)
{
var githubApiUrl = "";
var gitHubUrlBuilder = new UriBuilder(githubUrl);
var path = gitHubUrlBuilder.Path.Split('/', '\\', StringSplitOptions.RemoveEmptyEntries);
if (path.Length == 1)
{
// org runner
if (UrlUtil.IsHostedServer(gitHubUrlBuilder))
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://api.{gitHubUrlBuilder.Host}/orgs/{path[0]}/actions/runner-groups";
}
else
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://{gitHubUrlBuilder.Host}/api/v3/orgs/{path[0]}/actions/runner-groups";
}
}
else if (path.Length == 2)
{
// repo or enterprise runner.
if (!string.Equals(path[0], "enterprises", StringComparison.OrdinalIgnoreCase))
{
return null;
}
if (UrlUtil.IsHostedServer(gitHubUrlBuilder))
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://api.{gitHubUrlBuilder.Host}/{path[0]}/{path[1]}/actions/runner-groups";
}
else
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://{gitHubUrlBuilder.Host}/api/v3/{path[0]}/{path[1]}/actions/runner-groups";
}
}
else
{
throw new ArgumentException($"'{githubUrl}' should point to an org or enterprise.");
}
var agentPools = await RetryRequest<RunnerGroupList>(githubApiUrl, githubToken, RequestType.Get, 3, "Failed to get agents pools");
return agentPools?.ToAgentPoolList();
}
public async Task<DistributedTask.WebApi.Runner> AddRunnerAsync(int runnerGroupId, TaskAgent agent, string githubUrl, string githubToken, string publicKey)
{
var gitHubUrlBuilder = new UriBuilder(githubUrl);
var path = gitHubUrlBuilder.Path.Split('/', '\\', StringSplitOptions.RemoveEmptyEntries);
string githubApiUrl;
if (UrlUtil.IsHostedServer(gitHubUrlBuilder))
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://api.{gitHubUrlBuilder.Host}/actions/runners/register";
}
else
{
githubApiUrl = $"{gitHubUrlBuilder.Scheme}://{gitHubUrlBuilder.Host}/api/v3/actions/runners/register";
}
var bodyObject = new Dictionary<string, Object>()
{
{"url", githubUrl},
{"group_id", runnerGroupId},
{"name", agent.Name},
{"version", agent.Version},
{"updates_disabled", agent.DisableUpdate},
{"ephemeral", agent.Ephemeral},
{"labels", agent.Labels},
{"public_key", publicKey}
};
var body = new StringContent(StringUtil.ConvertToJson(bodyObject), null, "application/json");
return await RetryRequest<DistributedTask.WebApi.Runner>(githubApiUrl, githubToken, RequestType.Post, 3, "Failed to add agent", body);
}
private async Task<T> RetryRequest<T>(string githubApiUrl, string githubToken, RequestType requestType, int maxRetryAttemptsCount = 5, string errorMessage = null, StringContent body = null)
{
int retry = 0;
while (true)
{
retry++;
using (var httpClientHandler = HostContext.CreateHttpClientHandler())
using (var httpClient = new HttpClient(httpClientHandler))
{
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("RemoteAuth", githubToken);
httpClient.DefaultRequestHeaders.UserAgent.AddRange(HostContext.UserAgents);
var responseStatus = System.Net.HttpStatusCode.OK;
try
{
HttpResponseMessage response = null;
if (requestType == RequestType.Get)
{
response = await httpClient.GetAsync(githubApiUrl);
}
else
{
response = await httpClient.PostAsync(githubApiUrl, body);
}
if (response != null)
{
responseStatus = response.StatusCode;
var githubRequestId = GetGitHubRequestId(response.Headers);
if (response.IsSuccessStatusCode)
{
Trace.Info($"Http response code: {response.StatusCode} from '{requestType.ToString()} {githubApiUrl}' ({githubRequestId})");
var jsonResponse = await response.Content.ReadAsStringAsync();
return StringUtil.ConvertFromJson<T>(jsonResponse);
}
else
{
_term.WriteError($"Http response code: {response.StatusCode} from '{requestType.ToString()} {githubApiUrl}' (Request Id: {githubRequestId})");
var errorResponse = await response.Content.ReadAsStringAsync();
_term.WriteError(errorResponse);
response.EnsureSuccessStatusCode();
}
}
}
catch (Exception ex) when (retry < maxRetryAttemptsCount && responseStatus != System.Net.HttpStatusCode.NotFound)
{
Trace.Error($"{errorMessage} -- Atempt: {retry}");
Trace.Error(ex);
}
}
var backOff = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(5));
Trace.Info($"Retrying in {backOff.Seconds} seconds");
await Task.Delay(backOff);
}
}
public string GetGitHubRequestId(HttpResponseHeaders headers)
{
if (headers.TryGetValues("x-github-request-id", out var headerValues))
{
return headerValues.FirstOrDefault();
}
return string.Empty;
}
}
}
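The client builds hosted URLs on api.<host> and GHES URLs under <host>/api/v3; a standalone sketch of that branching for the org runner-groups endpoint (IsHostedServer below stands in for UrlUtil.IsHostedServer):
// Standalone sketch of the two URL shapes used by RunnerDotcomServer.
static string RunnerGroupsUrl(string githubUrl, bool isHostedServer)
{
    var b = new UriBuilder(githubUrl);
    var org = b.Path.Trim('/').Split('/')[0];
    return isHostedServer
        ? $"{b.Scheme}://api.{b.Host}/orgs/{org}/actions/runner-groups"
        : $"{b.Scheme}://{b.Host}/api/v3/orgs/{org}/actions/runner-groups";
}
Console.WriteLine(RunnerGroupsUrl("https://github.com/octo-org", isHostedServer: true));
// -> https://api.github.com/orgs/octo-org/actions/runner-groups
Console.WriteLine(RunnerGroupsUrl("https://ghes.example.com/octo-org", isHostedServer: false));
// -> https://ghes.example.com/api/v3/orgs/octo-org/actions/runner-groups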

View File

@@ -1,9 +1,8 @@
using GitHub.DistributedTask.WebApi;
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Runner.Common.Util;
using GitHub.Services.WebApi;
using GitHub.Services.Common;
using GitHub.Runner.Sdk;
@@ -39,7 +38,7 @@ namespace GitHub.Runner.Common
Task<TaskAgentSession> CreateAgentSessionAsync(Int32 poolId, TaskAgentSession session, CancellationToken cancellationToken);
Task DeleteAgentMessageAsync(Int32 poolId, Int64 messageId, Guid sessionId, CancellationToken cancellationToken);
Task DeleteAgentSessionAsync(Int32 poolId, Guid sessionId, CancellationToken cancellationToken);
Task<TaskAgentMessage> GetAgentMessageAsync(Int32 poolId, Guid sessionId, Int64? lastMessageId, TaskAgentStatus status, string runnerVersion, CancellationToken cancellationToken);
// job request
Task<TaskAgentJobRequest> GetAgentRequestAsync(int poolId, long requestId, CancellationToken cancellationToken);
@@ -180,31 +179,6 @@ namespace GitHub.Runner.Common
}
}
private async Task<VssConnection> EstablishVssConnection(Uri serverUrl, VssCredentials credentials, TimeSpan timeout)
{
Trace.Info($"Establish connection with {timeout.TotalSeconds} seconds timeout.");
int attemptCount = 5;
while (attemptCount-- > 0)
{
var connection = VssUtil.CreateConnection(serverUrl, credentials, timeout: timeout);
try
{
await connection.ConnectAsync();
return connection;
}
catch (Exception ex) when (attemptCount > 0)
{
Trace.Info($"Catch exception during connect. {attemptCount} attempt left.");
Trace.Error(ex);
await HostContext.Delay(TimeSpan.FromMilliseconds(100), CancellationToken.None);
}
}
// should never reach here.
throw new InvalidOperationException(nameof(EstablishVssConnection));
}
private void CheckConnection(RunnerConnectionType connectionType)
{
switch (connectionType)
@@ -298,10 +272,10 @@ namespace GitHub.Runner.Common
return _messageTaskAgentClient.DeleteAgentSessionAsync(poolId, sessionId, cancellationToken: cancellationToken);
}
public Task<TaskAgentMessage> GetAgentMessageAsync(Int32 poolId, Guid sessionId, Int64? lastMessageId, TaskAgentStatus status, string runnerVersion, CancellationToken cancellationToken)
{
CheckConnection(RunnerConnectionType.MessageQueue);
return _messageTaskAgentClient.GetMessageAsync(poolId, sessionId, lastMessageId, status, runnerVersion, cancellationToken: cancellationToken);
}
//-----------------------------------------------------------------

View File

@@ -1,4 +1,10 @@
using System;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Services.WebApi;
using Sdk.WebApi.WebApi.RawClient;
namespace GitHub.Runner.Common
{
@@ -21,9 +27,9 @@ namespace GitHub.Runner.Common
protected IHostContext HostContext { get; private set; }
protected Tracing Trace { get; private set; }
public string TraceName
{
get
{
return GetType().Name;
}
@@ -35,5 +41,71 @@ namespace GitHub.Runner.Common
Trace = HostContext.GetTrace(TraceName);
Trace.Entering();
}
protected async Task<VssConnection> EstablishVssConnection(Uri serverUrl, VssCredentials credentials, TimeSpan timeout)
{
Trace.Info($"EstablishVssConnection");
Trace.Info($"Establish connection with {timeout.TotalSeconds} seconds timeout.");
int attemptCount = 5;
while (attemptCount-- > 0)
{
var connection = VssUtil.CreateConnection(serverUrl, credentials, timeout: timeout);
try
{
await connection.ConnectAsync();
return connection;
}
catch (Exception ex) when (attemptCount > 0)
{
Trace.Info($"Catch exception during connect. {attemptCount} attempt left.");
Trace.Error(ex);
await HostContext.Delay(TimeSpan.FromMilliseconds(100), CancellationToken.None);
}
}
// should never reach here.
throw new InvalidOperationException(nameof(EstablishVssConnection));
}
protected async Task RetryRequest(Func<Task> func,
CancellationToken cancellationToken,
int maxRetryAttemptsCount = 5
)
{
async Task<Unit> wrappedFunc()
{
await func();
return Unit.Value;
}
await RetryRequest<Unit>(wrappedFunc, cancellationToken, maxRetryAttemptsCount);
}
protected async Task<T> RetryRequest<T>(Func<Task<T>> func,
CancellationToken cancellationToken,
int maxRetryAttemptsCount = 5,
Func<Exception, bool> shouldRetry = null
)
{
var retryCount = 0;
while (true)
{
retryCount++;
cancellationToken.ThrowIfCancellationRequested();
try
{
return await func();
}
// TODO: Add handling of non-retriable exceptions: https://github.com/github/actions-broker/issues/122
catch (Exception ex) when (retryCount < maxRetryAttemptsCount && (shouldRetry == null || shouldRetry(ex)))
{
Trace.Error("Catch exception during request");
Trace.Error(ex);
var backOff = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(15));
Trace.Warning($"Back off {backOff.TotalSeconds} seconds before next retry. {maxRetryAttemptsCount - retryCount} attempt left.");
await Task.Delay(backOff, cancellationToken);
}
}
}
}
}
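A small usage sketch of the new helpers: the void overload wraps the callback to return Unit so both flavours share one retry loop; DoUpdateAsync here is a placeholder for any Task-returning call on a derived service.
// Hypothetical call from a derived service: up to 3 attempts, 5-15 s random backoff between them.
await RetryRequest(
    async () => await DoUpdateAsync(cancellationToken),   // internally wrapped as Func<Task<Unit>>
    cancellationToken,
    maxRetryAttemptsCount: 3);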

View File

@@ -0,0 +1,96 @@
using System;
using System.Diagnostics;
using System.Globalization;
using System.IO;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common
{
public sealed class StdoutTraceListener : ConsoleTraceListener
{
private readonly string _hostType;
public StdoutTraceListener(string hostType)
{
this._hostType = hostType;
}
// Copied and modified slightly from .Net Core source code. Modification was required to make it compile.
// There must be some TraceFilter extension class that is missing in this source code.
public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
{
if (Filter != null && !Filter.ShouldTrace(eventCache, source, eventType, id, message, null, null, null))
{
return;
}
if (!string.IsNullOrEmpty(message))
{
var messageLines = message.Split(Environment.NewLine);
foreach (var messageLine in messageLines)
{
WriteHeader(source, eventType, id);
WriteLine(messageLine);
WriteFooter(eventCache);
}
}
}
internal bool IsEnabled(TraceOptions opts)
{
return (opts & TraceOutputOptions) != 0;
}
// Altered from the original .Net Core implementation.
private void WriteHeader(string source, TraceEventType eventType, int id)
{
string type = null;
switch (eventType)
{
case TraceEventType.Critical:
type = "CRIT";
break;
case TraceEventType.Error:
type = "ERR ";
break;
case TraceEventType.Warning:
type = "WARN";
break;
case TraceEventType.Information:
type = "INFO";
break;
case TraceEventType.Verbose:
type = "VERB";
break;
default:
type = eventType.ToString();
break;
}
Write(StringUtil.Format("[{0} {1:u} {2} {3}] ", _hostType.ToUpperInvariant(), DateTime.UtcNow, type, source));
}
// Copied and modified slightly from .Net Core source code to make it compile. The original code
// accesses a private indentLevel field. In this code it has been modified to use the getter/setter.
private void WriteFooter(TraceEventCache eventCache)
{
if (eventCache == null)
return;
IndentLevel++;
if (IsEnabled(TraceOptions.ProcessId))
WriteLine("ProcessId=" + eventCache.ProcessId);
if (IsEnabled(TraceOptions.ThreadId))
WriteLine("ThreadId=" + eventCache.ThreadId);
if (IsEnabled(TraceOptions.DateTime))
WriteLine("DateTime=" + eventCache.DateTime.ToString("o", CultureInfo.InvariantCulture));
if (IsEnabled(TraceOptions.Timestamp))
WriteLine("Timestamp=" + eventCache.Timestamp);
IndentLevel--;
}
}
}

View File

@@ -18,7 +18,7 @@ namespace GitHub.Runner.Common
string ReadSecret();
void Write(string message, ConsoleColor? colorCode = null);
void WriteLine();
void WriteLine(string line, ConsoleColor? colorCode = null, bool skipTracing = false);
void WriteError(Exception ex);
void WriteError(string line);
void WriteSection(string message);
@@ -81,7 +81,7 @@ namespace GitHub.Runner.Common
}
// Trace whether a value was entered.
string val = new(chars.ToArray());
if (!string.IsNullOrEmpty(val))
{
HostContext.SecretMasker.AddValue(val);
@@ -116,9 +116,12 @@ namespace GitHub.Runner.Common
// Do not add a format string overload. Terminal messages are user facing and therefore
// should be localized. Use the Loc method in the StringUtil class.
public void WriteLine(string line, ConsoleColor? colorCode = null, bool skipTracing = false)
{
if (!skipTracing)
{
Trace.Info($"WRITE LINE: {line}");
}
if (!Silent)
{
if (colorCode != null)

View File

@@ -1,10 +1,9 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using GitHub.Services.Common.Internal;

View File

@@ -1,4 +1,3 @@
using GitHub.Runner.Common.Util;
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
@@ -15,23 +14,25 @@ namespace GitHub.Runner.Common
public sealed class TraceManager : ITraceManager
{
private readonly ConcurrentDictionary<string, Tracing> _sources = new(StringComparer.OrdinalIgnoreCase);
private readonly HostTraceListener _hostTraceListener;
private readonly StdoutTraceListener _stdoutTraceListener;
private TraceSetting _traceSetting;
private ISecretMasker _secretMasker;
public TraceManager(HostTraceListener traceListener, StdoutTraceListener stdoutTraceListener, ISecretMasker secretMasker)
: this(traceListener, stdoutTraceListener, new TraceSetting(), secretMasker)
{
}
public TraceManager(HostTraceListener traceListener, StdoutTraceListener stdoutTraceListener, TraceSetting traceSetting, ISecretMasker secretMasker)
{
// Validate and store params.
ArgUtil.NotNull(traceListener, nameof(traceListener));
ArgUtil.NotNull(traceSetting, nameof(traceSetting));
ArgUtil.NotNull(secretMasker, nameof(secretMasker));
_hostTraceListener = traceListener;
_stdoutTraceListener = stdoutTraceListener;
_traceSetting = traceSetting;
_secretMasker = secretMasker;
@@ -82,7 +83,7 @@ namespace GitHub.Runner.Common
Level = sourceTraceLevel.ToSourceLevels()
};
}
return new Tracing(name, _secretMasker, sourceSwitch, _hostTraceListener, _stdoutTraceListener);
}
}
}
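A hedged wiring sketch for the new constructor shape; the listener and masker instances are assumed to be created by the host context.
// Hypothetical wiring: when a StdoutTraceListener is supplied, every Tracing source the manager
// creates writes to both the host trace file and stdout.
var traceManager = new TraceManager(hostTraceListener, stdoutTraceListener, secretMasker);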

View File

@@ -1,5 +1,3 @@

using GitHub.Runner.Common.Util;
using Newtonsoft.Json;
using System;
using System.Diagnostics;
@@ -14,7 +12,7 @@ namespace GitHub.Runner.Common
private ISecretMasker _secretMasker;
private TraceSource _traceSource;
public Tracing(string name, ISecretMasker secretMasker, SourceSwitch sourceSwitch, HostTraceListener traceListener, StdoutTraceListener stdoutTraceListener = null)
{
ArgUtil.NotNull(secretMasker, nameof(secretMasker));
_secretMasker = secretMasker;
@@ -29,6 +27,10 @@ namespace GitHub.Runner.Common
}
_traceSource.Listeners.Add(traceListener);
if (stdoutTraceListener != null)
{
_traceSource.Listeners.Add(stdoutTraceListener);
}
}
public void Info(string message)

View File

@@ -0,0 +1,8 @@
// Represents absence of value.
namespace GitHub.Runner.Common
{
public readonly struct Unit
{
public static readonly Unit Value = default;
}
}

View File

@@ -0,0 +1,14 @@
namespace GitHub.Runner.Common.Util
{
using System;
using GitHub.DistributedTask.WebApi;
public static class MessageUtil
{
public static bool IsRunServiceJob(string messageType)
{
return string.Equals(messageType, JobRequestMessageTypes.RunnerJobRequest, StringComparison.OrdinalIgnoreCase);
}
}
}
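A one-line usage sketch; message stands for a TaskAgentMessage received by the listener.
// Hypothetical dispatch check: run-service jobs arrive with the RunnerJobRequest message type.
if (MessageUtil.IsRunServiceJob(message.MessageType))
{
    // hand the payload to the run-service job flow instead of the pipeline flow
}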

View File

@@ -7,7 +7,7 @@ namespace GitHub.Runner.Common.Util
{
private const string _defaultNodeVersion = "node16";
#if (OS_OSX || OS_WINDOWS) && ARM64
public static readonly ReadOnlyCollection<string> BuiltInNodeVersions = new(new[] { "node16" });
#else
public static readonly ReadOnlyCollection<string> BuiltInNodeVersions = new(new[] { "node12", "node16" });

View File

@@ -1,7 +1,4 @@
using System;
using System.Collections.Generic;
using System.Linq;
using GitHub.Runner.Sdk;
namespace GitHub.Runner.Common.Util
{

View File

@@ -0,0 +1,209 @@
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Cryptography;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Listener.Configuration;
using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Runner.Common.Util;
using GitHub.Services.OAuth;
namespace GitHub.Runner.Listener
{
public sealed class BrokerMessageListener : RunnerService, IMessageListener
{
private RunnerSettings _settings;
private ITerminal _term;
private TimeSpan _getNextMessageRetryInterval;
private TaskAgentStatus runnerStatus = TaskAgentStatus.Online;
private CancellationTokenSource _getMessagesTokenSource;
private IBrokerServer _brokerServer;
public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
_term = HostContext.GetService<ITerminal>();
_brokerServer = HostContext.GetService<IBrokerServer>();
}
public async Task<Boolean> CreateSessionAsync(CancellationToken token)
{
await RefreshBrokerConnection();
return await Task.FromResult(true);
}
public async Task DeleteSessionAsync()
{
await Task.CompletedTask;
}
public void OnJobStatus(object sender, JobStatusEventArgs e)
{
Trace.Info("Received job status event. JobState: {0}", e.Status);
runnerStatus = e.Status;
try
{
_getMessagesTokenSource?.Cancel();
}
catch (ObjectDisposedException)
{
Trace.Info("_getMessagesTokenSource is already disposed.");
}
}
public async Task<TaskAgentMessage> GetNextMessageAsync(CancellationToken token)
{
bool encounteringError = false;
int continuousError = 0;
Stopwatch heartbeat = new();
heartbeat.Restart();
var maxRetryCount = 10;
while (true)
{
TaskAgentMessage message = null;
_getMessagesTokenSource = CancellationTokenSource.CreateLinkedTokenSource(token);
try
{
message = await _brokerServer.GetRunnerMessageAsync(_getMessagesTokenSource.Token, runnerStatus, BuildConstants.RunnerPackage.Version);
if (message == null)
{
continue;
}
return message;
}
catch (OperationCanceledException) when (_getMessagesTokenSource.Token.IsCancellationRequested && !token.IsCancellationRequested)
{
Trace.Info("Get messages has been cancelled using local token source. Continue to get messages with new status.");
continue;
}
catch (OperationCanceledException) when (token.IsCancellationRequested)
{
Trace.Info("Get next message has been cancelled.");
throw;
}
catch (TaskAgentAccessTokenExpiredException)
{
Trace.Info("Runner OAuth token has been revoked. Unable to pull message.");
throw;
}
catch (AccessDeniedException e) when (e.InnerException is InvalidTaskAgentVersionException)
{
throw;
}
catch (Exception ex)
{
Trace.Error("Catch exception during get next message.");
Trace.Error(ex);
if (!IsGetNextMessageExceptionRetriable(ex))
{
throw;
}
else
{
continuousError++;
//retry after a random backoff to avoid service throttling
//in case of there is a service error happened and all agents get kicked off of the long poll and all agent try to reconnect back at the same time.
if (continuousError <= 5)
{
// random backoff [15, 30]
_getNextMessageRetryInterval = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(30), _getNextMessageRetryInterval);
}
else if (continuousError >= maxRetryCount)
{
throw;
}
else
{
// more aggressive backoff [30, 60]
_getNextMessageRetryInterval = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(60), _getNextMessageRetryInterval);
}
if (!encounteringError)
{
//print error only on the first consecutive error
_term.WriteError($"{DateTime.UtcNow:u}: Runner connect error: {ex.Message}. Retrying until reconnected.");
encounteringError = true;
}
// re-create VssConnection before next retry
await RefreshBrokerConnection();
Trace.Info("Sleeping for {0} seconds before retrying.", _getNextMessageRetryInterval.TotalSeconds);
await HostContext.Delay(_getNextMessageRetryInterval, token);
}
}
finally
{
_getMessagesTokenSource.Dispose();
}
if (message == null)
{
if (heartbeat.Elapsed > TimeSpan.FromMinutes(30))
{
Trace.Info($"No message retrieved within last 30 minutes.");
heartbeat.Restart();
}
else
{
Trace.Verbose($"No message retrieved.");
}
continue;
}
Trace.Info($"Message '{message.MessageId}' received.");
}
}
public async Task DeleteMessageAsync(TaskAgentMessage message)
{
await Task.CompletedTask;
}
private bool IsGetNextMessageExceptionRetriable(Exception ex)
{
if (ex is TaskAgentNotFoundException ||
ex is TaskAgentPoolNotFoundException ||
ex is TaskAgentSessionExpiredException ||
ex is AccessDeniedException ||
ex is VssUnauthorizedException)
{
Trace.Info($"Non-retriable exception: {ex.Message}");
return false;
}
else
{
Trace.Info($"Retriable exception: {ex.Message}");
return true;
}
}
private async Task RefreshBrokerConnection()
{
var configManager = HostContext.GetService<IConfigurationManager>();
_settings = configManager.LoadSettings();
if (_settings.ServerUrlV2 == null)
{
throw new InvalidOperationException("ServerUrlV2 is not set");
}
var credMgr = HostContext.GetService<ICredentialManager>();
VssCredentials creds = credMgr.LoadCredentials();
await _brokerServer.ConnectAsync(new Uri(_settings.ServerUrlV2), creds);
}
}
}
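The reconnect policy above backs off randomly in [15, 30] seconds for the first five consecutive errors, then in [30, 60] seconds, and rethrows after ten; a standalone sketch of picking the interval (Random stands in for BackoffTimerHelper):
// Standalone sketch of the backoff bands used by GetNextMessageAsync.
static TimeSpan NextRetryInterval(int continuousError, Random rng)
{
    return continuousError <= 5
        ? TimeSpan.FromSeconds(rng.Next(15, 31))   // first five errors: 15-30 s
        : TimeSpan.FromSeconds(rng.Next(30, 61));  // later errors: 30-60 s, until the 10th error rethrows
}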

View File

@@ -347,8 +347,8 @@ namespace GitHub.Runner.Listener.Check
public sealed class HttpEventSourceListener : EventListener
{
private readonly List<string> _logs;
private readonly object _lock = new();
private readonly Dictionary<string, HashSet<string>> _ignoredEvent = new()
{
{
"Microsoft-System-Net-Http",

View File

@@ -2,7 +2,6 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
-using System.Net;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Runner.Common;

@@ -168,4 +167,4 @@ namespace GitHub.Runner.Listener.Check
return result;
}
}
}

View File

@@ -2,7 +2,6 @@ using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
-using System.Net;
using System.Threading;
using System.Threading.Tasks;
using GitHub.Runner.Common;

@@ -87,7 +86,7 @@ namespace GitHub.Runner.Listener.Check
result.Logs.Add($"{DateTime.UtcNow.ToString("O")} ***************************************************************************************************************");
// Request to github.com or ghes server
-Uri requestUrl = new Uri(url);
+Uri requestUrl = new(url);
var env = new Dictionary<string, string>()
{
{ "HOSTNAME", requestUrl.Host },

@@ -179,4 +178,4 @@ namespace GitHub.Runner.Listener.Check
return result;
}
}
}

View File

@@ -4,7 +4,6 @@ using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
-using GitHub.DistributedTask.Logging;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;

@@ -12,7 +11,7 @@ namespace GitHub.Runner.Listener
{
public sealed class CommandSettings
{
-private readonly Dictionary<string, string> _envArgs = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
+private readonly Dictionary<string, string> _envArgs = new(StringComparer.OrdinalIgnoreCase);
private readonly CommandLineParser _parser;
private readonly IPromptManager _promptManager;
private readonly Tracing _trace;

@@ -27,17 +26,19 @@ namespace GitHub.Runner.Listener
};

// Valid flags and args for specific command - key: command, value: array of valid flags and args
-private readonly Dictionary<string, string[]> validOptions = new Dictionary<string, string[]>
+private readonly Dictionary<string, string[]> validOptions = new()
{
// Valid configure flags and args
[Constants.Runner.CommandLine.Commands.Configure] =
new string[]
{
Constants.Runner.CommandLine.Flags.DisableUpdate,
Constants.Runner.CommandLine.Flags.Ephemeral,
+Constants.Runner.CommandLine.Flags.GenerateServiceConfig,
Constants.Runner.CommandLine.Flags.Replace,
Constants.Runner.CommandLine.Flags.RunAsService,
Constants.Runner.CommandLine.Flags.Unattended,
+Constants.Runner.CommandLine.Flags.NoDefaultLabels,
Constants.Runner.CommandLine.Args.Auth,
Constants.Runner.CommandLine.Args.Labels,
Constants.Runner.CommandLine.Args.MonitorSocketAddress,

@@ -56,7 +57,8 @@ namespace GitHub.Runner.Listener
new string[]
{
Constants.Runner.CommandLine.Args.Token,
-Constants.Runner.CommandLine.Args.PAT
+Constants.Runner.CommandLine.Args.PAT,
+Constants.Runner.CommandLine.Flags.Local
},
// Valid run flags and args
[Constants.Runner.CommandLine.Commands.Run] =

@@ -80,11 +82,14 @@ namespace GitHub.Runner.Listener
// Flags.
public bool Check => TestFlag(Constants.Runner.CommandLine.Flags.Check);
public bool Commit => TestFlag(Constants.Runner.CommandLine.Flags.Commit);
+public bool DisableUpdate => TestFlag(Constants.Runner.CommandLine.Flags.DisableUpdate);
+public bool Ephemeral => TestFlag(Constants.Runner.CommandLine.Flags.Ephemeral);
+public bool GenerateServiceConfig => TestFlag(Constants.Runner.CommandLine.Flags.GenerateServiceConfig);
public bool Help => TestFlag(Constants.Runner.CommandLine.Flags.Help);
+public bool NoDefaultLabels => TestFlag(Constants.Runner.CommandLine.Flags.NoDefaultLabels);
public bool Unattended => TestFlag(Constants.Runner.CommandLine.Flags.Unattended);
public bool Version => TestFlag(Constants.Runner.CommandLine.Flags.Version);
-public bool Ephemeral => TestFlag(Constants.Runner.CommandLine.Flags.Ephemeral);
-public bool DisableUpdate => TestFlag(Constants.Runner.CommandLine.Flags.DisableUpdate);
+public bool RemoveLocalConfig => TestFlag(Constants.Runner.CommandLine.Flags.Local);

// Keep this around since customers still relies on it
public bool RunOnce => TestFlag(Constants.Runner.CommandLine.Flags.Once);

@@ -138,7 +143,7 @@ namespace GitHub.Runner.Listener
// Validate commandline parser result
public List<string> Validate()
{
-List<string> unknowns = new List<string>();
+List<string> unknowns = new();

// detect unknown commands
unknowns.AddRange(_parser.Commands.Where(x => !validOptions.Keys.Contains(x, StringComparer.OrdinalIgnoreCase)));

@@ -179,7 +184,7 @@ namespace GitHub.Runner.Listener
{
command = Constants.Runner.CommandLine.Commands.Warmup;
}

return command;
}
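CommandSettings checks every parsed command, flag, and argument against the per-command allow-lists shown in this diff and reports anything else as unknown. A small illustrative sketch of that allow-list lookup (hypothetical helper, not the runner's actual parser types):

using System;
using System.Collections.Generic;
using System.Linq;

static class OptionValidation
{
    // Return the options that are not allowed for the given command.
    public static List<string> FindUnknowns(
        string command,
        IEnumerable<string> parsedOptions,
        Dictionary<string, string[]> validOptions)
    {
        if (!validOptions.TryGetValue(command, out string[] allowed))
        {
            return parsedOptions.ToList(); // unknown command: every option is unknown
        }

        return parsedOptions
            .Where(o => !allowed.Contains(o, StringComparer.OrdinalIgnoreCase))
            .ToList();
    }
}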

View File

@@ -1,10 +1,3 @@
-using GitHub.DistributedTask.WebApi;
-using GitHub.Runner.Common;
-using GitHub.Runner.Common.Util;
-using GitHub.Runner.Sdk;
-using GitHub.Services.Common;
-using GitHub.Services.Common.Internal;
-using GitHub.Services.OAuth;
using System;
using System.Collections.Generic;
using System.Linq;
@@ -14,6 +7,13 @@ using System.Runtime.InteropServices;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
+using GitHub.DistributedTask.WebApi;
+using GitHub.Runner.Common;
+using GitHub.Runner.Common.Util;
+using GitHub.Runner.Sdk;
+using GitHub.Services.Common;
+using GitHub.Services.Common.Internal;
+using GitHub.Services.OAuth;

namespace GitHub.Runner.Listener.Configuration
{

@@ -31,12 +31,14 @@ namespace GitHub.Runner.Listener.Configuration
{
private IConfigurationStore _store;
private IRunnerServer _runnerServer;
+private IRunnerDotcomServer _dotcomServer;
private ITerminal _term;

public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
_runnerServer = HostContext.GetService<IRunnerServer>();
+_dotcomServer = HostContext.GetService<IRunnerDotcomServer>();
Trace.Verbose("Creating _store");
_store = hostContext.GetService<IConfigurationStore>();
Trace.Verbose("store created");
@@ -81,17 +83,39 @@ namespace GitHub.Runner.Listener.Configuration
_term.WriteLine("--------------------------------------------------------------------------------"); _term.WriteLine("--------------------------------------------------------------------------------");
Trace.Info(nameof(ConfigureAsync)); Trace.Info(nameof(ConfigureAsync));
if (command.GenerateServiceConfig)
{
#if OS_LINUX
if (!IsConfigured())
{
throw new InvalidOperationException("--generateServiceConfig requires that the runner is already configured. For configuring a new runner as a service, run './config.sh'.");
}
RunnerSettings settings = _store.GetSettings();
Trace.Info($"generate service config for runner: {settings.AgentId}");
var controlManager = HostContext.GetService<ILinuxServiceControlManager>();
controlManager.GenerateScripts(settings);
return;
#else
throw new NotSupportedException("--generateServiceConfig is only supported on Linux.");
#endif
}
if (IsConfigured()) if (IsConfigured())
{ {
throw new InvalidOperationException("Cannot configure the runner because it is already configured. To reconfigure the runner, run 'config.cmd remove' or './config.sh remove' first."); throw new InvalidOperationException("Cannot configure the runner because it is already configured. To reconfigure the runner, run 'config.cmd remove' or './config.sh remove' first.");
} }
RunnerSettings runnerSettings = new RunnerSettings(); RunnerSettings runnerSettings = new();
// Loop getting url and creds until you can connect // Loop getting url and creds until you can connect
ICredentialProvider credProvider = null; ICredentialProvider credProvider = null;
VssCredentials creds = null; VssCredentials creds = null;
_term.WriteSection("Authentication"); _term.WriteSection("Authentication");
string registerToken = string.Empty;
while (true) while (true)
{ {
// When testing against a dev deployment of Actions Service, set this environment variable // When testing against a dev deployment of Actions Service, set this environment variable
@@ -109,9 +133,11 @@ namespace GitHub.Runner.Listener.Configuration
else
{
runnerSettings.GitHubUrl = inputUrl;
-var registerToken = await GetRunnerTokenAsync(command, inputUrl, "registration");
+registerToken = await GetRunnerTokenAsync(command, inputUrl, "registration");
GitHubAuthResult authResult = await GetTenantCredential(inputUrl, registerToken, Constants.RunnerEvent.Register);
runnerSettings.ServerUrl = authResult.TenantUrl;
+runnerSettings.UseV2Flow = authResult.UseV2Flow;
+Trace.Info($"Using V2 flow: {runnerSettings.UseV2Flow}");
creds = authResult.ToVssCredentials();
Trace.Info("cred retrieved via GitHub auth");
}
@@ -155,9 +181,11 @@ namespace GitHub.Runner.Listener.Configuration
// We want to use the native CSP of the platform for storage, so we use the RSACSP directly
RSAParameters publicKey;
var keyManager = HostContext.GetService<IRSAKeyManager>();
+string publicKeyXML;
using (var rsa = keyManager.CreateKey())
{
publicKey = rsa.ExportParameters(false);
+publicKeyXML = rsa.ToXmlString(includePrivateParameters: false);
}

_term.WriteSection("Runner Registration");
@@ -165,9 +193,17 @@ namespace GitHub.Runner.Listener.Configuration
// If we have more than one runner group available, allow the user to specify which one to be added into
string poolName = null;
TaskAgentPool agentPool = null;
-List<TaskAgentPool> agentPools = await _runnerServer.GetAgentPoolsAsync();
-TaskAgentPool defaultPool = agentPools?.Where(x => x.IsInternal).FirstOrDefault();
+List<TaskAgentPool> agentPools;
+if (runnerSettings.UseV2Flow)
+{
+agentPools = await _dotcomServer.GetRunnerGroupsAsync(runnerSettings.GitHubUrl, registerToken);
+}
+else
+{
+agentPools = await _runnerServer.GetAgentPoolsAsync();
+}
+TaskAgentPool defaultPool = agentPools?.Where(x => x.IsInternal).FirstOrDefault();

if (agentPools?.Where(x => !x.IsHosted).Count() > 0)
{
poolName = command.GetRunnerGroupName(defaultPool?.Name);
@@ -205,8 +241,16 @@ namespace GitHub.Runner.Listener.Configuration
var userLabels = command.GetLabels();
_term.WriteLine();

-var agents = await _runnerServer.GetAgentsAsync(runnerSettings.PoolId, runnerSettings.AgentName);
+List<TaskAgent> agents;
+if (runnerSettings.UseV2Flow)
+{
+agents = await _dotcomServer.GetRunnersAsync(runnerSettings.PoolId, runnerSettings.GitHubUrl, registerToken, runnerSettings.AgentName);
+}
+else
+{
+agents = await _runnerServer.GetAgentsAsync(runnerSettings.PoolId, runnerSettings.AgentName);
+}
Trace.Verbose("Returns {0} agents", agents.Count);
agent = agents.FirstOrDefault();
if (agent != null)
@@ -215,7 +259,7 @@ namespace GitHub.Runner.Listener.Configuration
if (command.GetReplace())
{
// Update existing agent with new PublicKey, agent version.
-agent = UpdateExistingAgent(agent, publicKey, userLabels, runnerSettings.Ephemeral, command.DisableUpdate);
+agent = UpdateExistingAgent(agent, publicKey, userLabels, runnerSettings.Ephemeral, command.DisableUpdate, command.NoDefaultLabels);

try
{
@@ -249,11 +293,27 @@ namespace GitHub.Runner.Listener.Configuration
else
{
// Create a new agent.
-agent = CreateNewAgent(runnerSettings.AgentName, publicKey, userLabels, runnerSettings.Ephemeral, command.DisableUpdate);
+agent = CreateNewAgent(runnerSettings.AgentName, publicKey, userLabels, runnerSettings.Ephemeral, command.DisableUpdate, command.NoDefaultLabels);

try
{
-agent = await _runnerServer.AddAgentAsync(runnerSettings.PoolId, agent);
+if (runnerSettings.UseV2Flow)
+{
+var runner = await _dotcomServer.AddRunnerAsync(runnerSettings.PoolId, agent, runnerSettings.GitHubUrl, registerToken, publicKeyXML);
+runnerSettings.ServerUrlV2 = runner.RunnerAuthorization.ServerUrl;
+agent.Id = runner.Id;
+agent.Authorization = new TaskAgentAuthorization()
+{
+AuthorizationUrl = runner.RunnerAuthorization.AuthorizationUrl,
+ClientId = new Guid(runner.RunnerAuthorization.ClientId)
+};
+}
+else
+{
+agent = await _runnerServer.AddAgentAsync(runnerSettings.PoolId, agent);
+}

if (command.DisableUpdate &&
command.DisableUpdate != agent.DisableUpdate)
{
@@ -304,24 +364,28 @@ namespace GitHub.Runner.Listener.Configuration
}

// Testing agent connection, detect any potential connection issue, like local clock skew that cause OAuth token expired.
+if (!runnerSettings.UseV2Flow)
+{
var credMgr = HostContext.GetService<ICredentialManager>();
VssCredentials credential = credMgr.LoadCredentials();
try
{
await _runnerServer.ConnectAsync(new Uri(runnerSettings.ServerUrl), credential);
// ConnectAsync() hits _apis/connectionData which is an anonymous endpoint
// Need to hit an authenticate endpoint to trigger OAuth token exchange.
await _runnerServer.GetAgentPoolsAsync();
_term.WriteSuccessMessage("Runner connection is good");
}
catch (VssOAuthTokenRequestException ex) when (ex.Message.Contains("Current server time is"))
{
// there are two exception messages server send that indicate clock skew.
// 1. The bearer token expired on {jwt.ValidTo}. Current server time is {DateTime.UtcNow}.
// 2. The bearer token is not valid until {jwt.ValidFrom}. Current server time is {DateTime.UtcNow}.
Trace.Error("Catch exception during test agent connection.");
Trace.Error(ex);
throw new Exception("The local machine's clock may be out of sync with the server time by more than five minutes. Please sync your clock with your domain or internet time and try again.");
}
+}

_term.WriteSection("Runner settings");
@@ -490,7 +554,7 @@ namespace GitHub.Runner.Listener.Configuration
}

-private TaskAgent UpdateExistingAgent(TaskAgent agent, RSAParameters publicKey, ISet<string> userLabels, bool ephemeral, bool disableUpdate)
+private TaskAgent UpdateExistingAgent(TaskAgent agent, RSAParameters publicKey, ISet<string> userLabels, bool ephemeral, bool disableUpdate, bool noDefaultLabels)
{
ArgUtil.NotNull(agent, nameof(agent));
agent.Authorization = new TaskAgentAuthorization
@@ -507,9 +571,16 @@ namespace GitHub.Runner.Listener.Configuration
agent.Labels.Clear();

-agent.Labels.Add(new AgentLabel("self-hosted", LabelType.System));
-agent.Labels.Add(new AgentLabel(VarUtil.OS, LabelType.System));
-agent.Labels.Add(new AgentLabel(VarUtil.OSArchitecture, LabelType.System));
+if (!noDefaultLabels)
+{
+agent.Labels.Add(new AgentLabel("self-hosted", LabelType.System));
+agent.Labels.Add(new AgentLabel(VarUtil.OS, LabelType.System));
+agent.Labels.Add(new AgentLabel(VarUtil.OSArchitecture, LabelType.System));
+}
+else if (userLabels.Count == 0)
+{
+throw new NotSupportedException("Disabling default labels via --no-default-labels without specifying --labels is not supported");
+}

foreach (var userLabel in userLabels)
{
@@ -519,9 +590,9 @@ namespace GitHub.Runner.Listener.Configuration
return agent;
}

-private TaskAgent CreateNewAgent(string agentName, RSAParameters publicKey, ISet<string> userLabels, bool ephemeral, bool disableUpdate)
+private TaskAgent CreateNewAgent(string agentName, RSAParameters publicKey, ISet<string> userLabels, bool ephemeral, bool disableUpdate, bool noDefaultLabels)
{
-TaskAgent agent = new TaskAgent(agentName)
+TaskAgent agent = new(agentName)
{
Authorization = new TaskAgentAuthorization
{
@@ -534,9 +605,16 @@ namespace GitHub.Runner.Listener.Configuration
DisableUpdate = disableUpdate
};

-agent.Labels.Add(new AgentLabel("self-hosted", LabelType.System));
-agent.Labels.Add(new AgentLabel(VarUtil.OS, LabelType.System));
-agent.Labels.Add(new AgentLabel(VarUtil.OSArchitecture, LabelType.System));
+if (!noDefaultLabels)
+{
+agent.Labels.Add(new AgentLabel("self-hosted", LabelType.System));
+agent.Labels.Add(new AgentLabel(VarUtil.OS, LabelType.System));
+agent.Labels.Add(new AgentLabel(VarUtil.OSArchitecture, LabelType.System));
+}
+else if (userLabels.Count == 0)
+{
+throw new NotSupportedException("Disabling default labels via --no-default-labels without specifying --labels is not supported");
+}

foreach (var userLabel in userLabels)
{
@@ -615,7 +693,7 @@ namespace GitHub.Runner.Listener.Configuration
}

int retryCount = 0;
-while(retryCount < 3)
+while (retryCount < 3)
{
using (var httpClientHandler = HostContext.CreateHttpClientHandler())
using (var httpClient = new HttpClient(httpClientHandler))
@@ -625,28 +703,29 @@ namespace GitHub.Runner.Listener.Configuration
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("basic", base64EncodingToken);
httpClient.DefaultRequestHeaders.UserAgent.AddRange(HostContext.UserAgents);
httpClient.DefaultRequestHeaders.Accept.ParseAdd("application/vnd.github.v3+json");

var responseStatus = System.Net.HttpStatusCode.OK;
try
{
var response = await httpClient.PostAsync(githubApiUrl, new StringContent(string.Empty));
responseStatus = response.StatusCode;
+var githubRequestId = _dotcomServer.GetGitHubRequestId(response.Headers);

if (response.IsSuccessStatusCode)
{
-Trace.Info($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}'");
+Trace.Info($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}' ({githubRequestId})");
var jsonResponse = await response.Content.ReadAsStringAsync();
return StringUtil.ConvertFromJson<GitHubRunnerRegisterToken>(jsonResponse);
}
else
{
-_term.WriteError($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}'");
+_term.WriteError($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}' (Request Id: {githubRequestId})");
var errorResponse = await response.Content.ReadAsStringAsync();
_term.WriteError(errorResponse);
response.EnsureSuccessStatusCode();
}
}
-catch(Exception ex) when (retryCount < 2 && responseStatus != System.Net.HttpStatusCode.NotFound)
+catch (Exception ex) when (retryCount < 2 && responseStatus != System.Net.HttpStatusCode.NotFound)
{
retryCount++;
Trace.Error($"Failed to get JIT runner token -- Atempt: {retryCount}");
@@ -693,22 +772,23 @@ namespace GitHub.Runner.Listener.Configuration
{
var response = await httpClient.PostAsync(githubApiUrl, new StringContent(StringUtil.ConvertToJson(bodyObject), null, "application/json"));
responseStatus = response.StatusCode;
+var githubRequestId = _dotcomServer.GetGitHubRequestId(response.Headers);

-if(response.IsSuccessStatusCode)
+if (response.IsSuccessStatusCode)
{
-Trace.Info($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}'");
+Trace.Info($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}' ({githubRequestId})");
var jsonResponse = await response.Content.ReadAsStringAsync();
return StringUtil.ConvertFromJson<GitHubAuthResult>(jsonResponse);
}
else
{
-_term.WriteError($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}'");
+_term.WriteError($"Http response code: {response.StatusCode} from 'POST {githubApiUrl}' (Request Id: {githubRequestId})");
var errorResponse = await response.Content.ReadAsStringAsync();
_term.WriteError(errorResponse);
response.EnsureSuccessStatusCode();
}
}
-catch(Exception ex) when (retryCount < 2 && responseStatus != System.Net.HttpStatusCode.NotFound)
+catch (Exception ex) when (retryCount < 2 && responseStatus != System.Net.HttpStatusCode.NotFound)
{
retryCount++;
Trace.Error($"Failed to get tenant credentials -- Atempt: {retryCount}");

View File

@@ -1,4 +1,4 @@
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using GitHub.Runner.Common;
@@ -18,10 +18,10 @@ namespace GitHub.Runner.Listener.Configuration
public class CredentialManager : RunnerService, ICredentialManager
{
-public static readonly Dictionary<string, Type> CredentialTypes = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase)
+public static readonly Dictionary<string, Type> CredentialTypes = new(StringComparer.OrdinalIgnoreCase)
{
-{ Constants.Configuration.OAuth, typeof(OAuthCredential)},
-{ Constants.Configuration.OAuthAccessToken, typeof(OAuthAccessTokenCredential)},
+{ Constants.Configuration.OAuth, typeof(OAuthCredential) },
+{ Constants.Configuration.OAuthAccessToken, typeof(OAuthAccessTokenCredential) },
};

public ICredentialProvider GetCredentialProvider(string credType)
@@ -93,6 +93,9 @@ namespace GitHub.Runner.Listener.Configuration
[DataMember(Name = "token")] [DataMember(Name = "token")]
public string Token { get; set; } public string Token { get; set; }
[DataMember(Name = "use_v2_flow")]
public bool UseV2Flow { get; set; }
public VssCredentials ToVssCredentials() public VssCredentials ToVssCredentials()
{ {
ArgUtil.NotNullOrEmpty(TokenSchema, nameof(TokenSchema)); ArgUtil.NotNullOrEmpty(TokenSchema, nameof(TokenSchema));
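The new use_v2_flow member rides along in the same JSON payload as the token fields; the [DataMember(Name = ...)] attribute maps the snake_case field onto the C# property during deserialization. A minimal sketch of that mapping using DataContractJsonSerializer (the runner deserializes with its own StringUtil helper, and the surrounding field name here is illustrative):

using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

[DataContract]
public class AuthResultSketch
{
    // "url" is an assumed example field name for this sketch.
    [DataMember(Name = "url")]
    public string TenantUrl { get; set; }

    [DataMember(Name = "use_v2_flow")]
    public bool UseV2Flow { get; set; }
}

static class AuthResultDemo
{
    public static AuthResultSketch Parse(string json)
    {
        var serializer = new DataContractJsonSerializer(typeof(AuthResultSketch));
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            return (AuthResultSketch)serializer.ReadObject(stream);
        }
    }
}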

View File

@@ -48,7 +48,7 @@ namespace GitHub.Runner.Listener.Configuration
ArgUtil.NotNullOrEmpty(token, nameof(token));
trace.Info("token retrieved: {0} chars", token.Length);

-VssCredentials creds = new VssCredentials(new VssOAuthAccessTokenCredential(token), CredentialPromptType.DoNotPrompt);
+VssCredentials creds = new(new VssOAuthAccessTokenCredential(token), CredentialPromptType.DoNotPrompt);
trace.Info("cred created");
return creds;

View File

@@ -1,5 +1,4 @@
using GitHub.Runner.Common;
-using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using System;
@@ -72,7 +71,7 @@ namespace GitHub.Runner.Listener.Configuration
{
return defaultValue;
}
else if (isOptional)
{
return string.Empty;
}
@@ -87,11 +86,12 @@ namespace GitHub.Runner.Listener.Configuration
// Write the message prompt.
_terminal.Write($"{description} ");
-if(!string.IsNullOrEmpty(defaultValue))
+if (!string.IsNullOrEmpty(defaultValue))
{
_terminal.Write($"[press Enter for {defaultValue}] ");
}
-else if (isOptional){
+else if (isOptional)
+{
_terminal.Write($"[press Enter to skip] ");
}
@@ -112,7 +112,7 @@ namespace GitHub.Runner.Listener.Configuration
return string.Empty;
}
}

// Return the value if it is not empty and it is valid.
// Otherwise try the loop again.
if (!string.IsNullOrEmpty(value))

View File

@@ -3,7 +3,6 @@ using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading;
-using GitHub.Runner.Common.Util;
using GitHub.Runner.Common;
using GitHub.Runner.Sdk;

View File

@@ -44,7 +44,7 @@ namespace GitHub.Runner.Listener.Configuration
}

// For the service name, replace any characters outside of the alpha-numeric set and ".", "_", "-" with "-"
-Regex regex = new Regex(@"[^0-9a-zA-Z._\-]");
+Regex regex = new(@"[^0-9a-zA-Z._\-]");
string repoOrOrgName = regex.Replace(settings.RepoOrOrgName, "-");
serviceName = StringUtil.Format(serviceNamePattern, repoOrOrgName, settings.AgentName);

View File

@@ -1,4 +1,4 @@
#if OS_LINUX
using System;
using System.Collections.Generic;
using System.IO;
@@ -6,7 +6,6 @@ using System.Linq;
using System.Text;
using GitHub.Runner.Common.Util;
using GitHub.Runner.Common;
-using GitHub.Runner.Sdk;

namespace GitHub.Runner.Listener.Configuration
{

View File

@@ -0,0 +1,3 @@
using System.Runtime.CompilerServices;
[assembly: InternalsVisibleTo("Test")]

View File

@@ -7,6 +7,7 @@ using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
using System.Threading.Tasks;
+using GitHub.DistributedTask.Pipelines;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
using GitHub.Runner.Common.Util;
@@ -14,6 +15,7 @@ using GitHub.Runner.Sdk;
using GitHub.Services.Common;
using GitHub.Services.WebApi;
using GitHub.Services.WebApi.Jwt;
+using Sdk.RSWebApi.Contracts;
using Pipelines = GitHub.DistributedTask.Pipelines;

namespace GitHub.Runner.Listener
@@ -37,8 +39,8 @@ namespace GitHub.Runner.Listener
// and the server will not send another job while this one is still running.
public sealed class JobDispatcher : RunnerService, IJobDispatcher
{
-private static Regex _invalidJsonRegex = new Regex(@"invalid\ Json\ at\ position\ '(\d+)':", RegexOptions.Compiled | RegexOptions.IgnoreCase);
-private readonly Lazy<Dictionary<long, TaskResult>> _localRunJobResult = new Lazy<Dictionary<long, TaskResult>>();
+private static Regex _invalidJsonRegex = new(@"invalid\ Json\ at\ position\ '(\d+)':", RegexOptions.Compiled | RegexOptions.IgnoreCase);
+private readonly Lazy<Dictionary<long, TaskResult>> _localRunJobResult = new();
private int _poolId;
IConfigurationStore _configurationStore;
@@ -47,17 +49,19 @@ namespace GitHub.Runner.Listener
private static readonly string _workerProcessName = $"Runner.Worker{IOUtil.ExeExtension}";

// this is not thread-safe
-private readonly Queue<Guid> _jobDispatchedQueue = new Queue<Guid>();
-private readonly ConcurrentDictionary<Guid, WorkerDispatcher> _jobInfos = new ConcurrentDictionary<Guid, WorkerDispatcher>();
+private readonly Queue<Guid> _jobDispatchedQueue = new();
+private readonly ConcurrentDictionary<Guid, WorkerDispatcher> _jobInfos = new();

// allow up to 30sec for any data to be transmitted over the process channel
// timeout limit can be overwritten by environment GITHUB_ACTIONS_RUNNER_CHANNEL_TIMEOUT
private TimeSpan _channelTimeout;

-private TaskCompletionSource<bool> _runOnceJobCompleted = new TaskCompletionSource<bool>();
+private TaskCompletionSource<bool> _runOnceJobCompleted = new();

public event EventHandler<JobStatusEventArgs> JobStatus;

+private bool _isRunServiceJob;

public override void Initialize(IHostContext hostContext)
{
base.Initialize(hostContext);
@@ -86,6 +90,8 @@ namespace GitHub.Runner.Listener
{
Trace.Info($"Job request {jobRequestMessage.RequestId} for plan {jobRequestMessage.Plan.PlanId} job {jobRequestMessage.JobId} received.");
+_isRunServiceJob = MessageUtil.IsRunServiceJob(jobRequestMessage.MessageType);

WorkerDispatcher currentDispatch = null;
if (_jobDispatchedQueue.Count > 0)
{
@@ -111,7 +117,7 @@ namespace GitHub.Runner.Listener
}
}

-WorkerDispatcher newDispatch = new WorkerDispatcher(jobRequestMessage.JobId, jobRequestMessage.RequestId);
+WorkerDispatcher newDispatch = new(jobRequestMessage.JobId, jobRequestMessage.RequestId);
if (runOnce)
{
Trace.Info("Start dispatcher for one time used runner.");
@@ -239,6 +245,13 @@ namespace GitHub.Runner.Listener
return;
}

+if (this._isRunServiceJob)
+{
+Trace.Error($"We are not yet checking the state of jobrequest {jobDispatch.JobId} status. Cancel running worker right away.");
+jobDispatch.WorkerCancellationTokenSource.Cancel();
+return;
+}

// based on the current design, server will only send one job for a given runner at a time.
// if the runner received a new job request while a previous job request is still running, this typically indicates two situations
// 1. a runner bug caused a server and runner mismatch on the state of the job request, e.g. the runner didn't renew the jobrequest
@@ -357,9 +370,11 @@ namespace GitHub.Runner.Listener
term.WriteLine($"{DateTime.UtcNow:u}: Running job: {message.JobDisplayName}"); term.WriteLine($"{DateTime.UtcNow:u}: Running job: {message.JobDisplayName}");
// first job request renew succeed. // first job request renew succeed.
TaskCompletionSource<int> firstJobRequestRenewed = new TaskCompletionSource<int>(); TaskCompletionSource<int> firstJobRequestRenewed = new();
var notification = HostContext.GetService<IJobNotification>(); var notification = HostContext.GetService<IJobNotification>();
var systemConnection = message.Resources.Endpoints.SingleOrDefault(x => string.Equals(x.Name, WellKnownServiceEndpointNames.SystemVssConnection, StringComparison.OrdinalIgnoreCase));
// lock renew cancellation token. // lock renew cancellation token.
using (var lockRenewalTokenSource = new CancellationTokenSource()) using (var lockRenewalTokenSource = new CancellationTokenSource())
using (var workerProcessCancelTokenSource = new CancellationTokenSource()) using (var workerProcessCancelTokenSource = new CancellationTokenSource())
@@ -369,7 +384,7 @@ namespace GitHub.Runner.Listener
// start renew job request
Trace.Info($"Start renew job request {requestId} for job {message.JobId}.");
-Task renewJobRequest = RenewJobRequestAsync(_poolId, requestId, lockToken, orchestrationId, firstJobRequestRenewed, lockRenewalTokenSource.Token);
+Task renewJobRequest = RenewJobRequestAsync(message, systemConnection, _poolId, requestId, lockToken, orchestrationId, firstJobRequestRenewed, lockRenewalTokenSource.Token);

// wait till first renew succeed or job request is cancelled
// not even start worker if the first renew fail
@@ -391,15 +406,16 @@ namespace GitHub.Runner.Listener
await renewJobRequest;

// complete job request with result Cancelled
-await CompleteJobRequestAsync(_poolId, message, lockToken, TaskResult.Canceled);
+await CompleteJobRequestAsync(_poolId, message, systemConnection, lockToken, TaskResult.Canceled);
return;
}

HostContext.WritePerfCounter($"JobRequestRenewed_{requestId.ToString()}");

Task<int> workerProcessTask = null;
-object _outputLock = new object();
-List<string> workerOutput = new List<string>();
+object _outputLock = new();
+List<string> workerOutput = new();
+bool printToStdout = StringUtil.ConvertToBoolean(Environment.GetEnvironmentVariable(Constants.Variables.Agent.PrintLogToStdout));
using (var processChannel = HostContext.CreateService<IProcessChannel>())
using (var processInvoker = HostContext.CreateService<IProcessInvoker>())
{
@@ -421,7 +437,15 @@ namespace GitHub.Runner.Listener
{
lock (_outputLock)
{
-workerOutput.Add(stdout.Data);
+if (!stdout.Data.StartsWith("[WORKER"))
+{
+workerOutput.Add(stdout.Data);
+}
+
+if (printToStdout)
+{
+term.WriteLine(stdout.Data, skipTracing: true);
+}
}
}
};
@@ -499,7 +523,6 @@ namespace GitHub.Runner.Listener
// we get first jobrequest renew succeed and start the worker process with the job message.
// send notification to machine provisioner.
-var systemConnection = message.Resources.Endpoints.SingleOrDefault(x => string.Equals(x.Name, WellKnownServiceEndpointNames.SystemVssConnection, StringComparison.OrdinalIgnoreCase));
var accessToken = systemConnection?.Authorization?.Parameters["AccessToken"];
notification.JobStarted(message.JobId, accessToken, systemConnection.Url);
@@ -522,18 +545,14 @@ namespace GitHub.Runner.Listener
detailInfo = string.Join(Environment.NewLine, workerOutput);
Trace.Info($"Return code {returnCode} indicate worker encounter an unhandled exception or app crash, attach worker stdout/stderr to JobRequest result.");

-var jobServer = HostContext.GetService<IJobServer>();
-VssCredentials jobServerCredential = VssUtil.GetVssCredential(systemConnection);
-VssConnection jobConnection = VssUtil.CreateConnection(systemConnection.Url, jobServerCredential);
-await jobServer.ConnectAsync(jobConnection);
+var jobServer = await InitializeJobServerAsync(systemConnection);
await LogWorkerProcessUnhandledException(jobServer, message, detailInfo);

// Go ahead to finish the job with result 'Failed' if the STDERR from worker is System.IO.IOException, since it typically means we are running out of disk space.
if (detailInfo.Contains(typeof(System.IO.IOException).ToString(), StringComparison.OrdinalIgnoreCase))
{
Trace.Info($"Finish job with result 'Failed' due to IOException.");
-await ForceFailJob(jobServer, message);
+await ForceFailJob(jobServer, message, detailInfo);
}
}
@@ -548,7 +567,7 @@ namespace GitHub.Runner.Listener
await renewJobRequest;

// complete job request
-await CompleteJobRequestAsync(_poolId, message, lockToken, result, detailInfo);
+await CompleteJobRequestAsync(_poolId, message, systemConnection, lockToken, result, detailInfo);

// print out unhandled exception happened in worker after we complete job request.
// when we run out of disk space, report back to server has higher priority.
@@ -645,7 +664,7 @@ namespace GitHub.Runner.Listener
await renewJobRequest;

// complete job request
-await CompleteJobRequestAsync(_poolId, message, lockToken, resultOnAbandonOrCancel);
+await CompleteJobRequestAsync(_poolId, message, systemConnection, lockToken, resultOnAbandonOrCancel);
}
finally
{
@@ -658,7 +677,7 @@ namespace GitHub.Runner.Listener
finally
{
Busy = false;

if (JobStatus != null)
{
JobStatus(this, new JobStatusEventArgs(TaskAgentStatus.Online));
@@ -666,9 +685,128 @@ namespace GitHub.Runner.Listener
}
}

-public async Task RenewJobRequestAsync(int poolId, long requestId, Guid lockToken, string orchestrationId, TaskCompletionSource<int> firstJobRequestRenewed, CancellationToken token)
+internal async Task RenewJobRequestAsync(Pipelines.AgentJobRequestMessage message, ServiceEndpoint systemConnection, int poolId, long requestId, Guid lockToken, string orchestrationId, TaskCompletionSource<int> firstJobRequestRenewed, CancellationToken token)
{
if (this._isRunServiceJob)
{
var runServer = await GetRunServerAsync(systemConnection);
await RenewJobRequestAsync(runServer, message.Plan.PlanId, message.JobId, firstJobRequestRenewed, token);
}
else
{
var runnerServer = HostContext.GetService<IRunnerServer>();
await RenewJobRequestAsync(runnerServer, poolId, requestId, lockToken, orchestrationId, firstJobRequestRenewed, token);
}
}
private async Task RenewJobRequestAsync(IRunServer runServer, Guid planId, Guid jobId, TaskCompletionSource<int> firstJobRequestRenewed, CancellationToken token)
{
TaskAgentJobRequest request = null;
int firstRenewRetryLimit = 5;
int encounteringError = 0;
// renew lock during job running.
// stop renew only if cancellation token for lock renew task been signal or exception still happen after retry.
while (!token.IsCancellationRequested)
{
try
{
var renewResponse = await runServer.RenewJobAsync(planId, jobId, token);
Trace.Info($"Successfully renew job {jobId}, job is valid till {renewResponse.LockedUntil}");
if (!firstJobRequestRenewed.Task.IsCompleted)
{
// fire first renew succeed event.
firstJobRequestRenewed.TrySetResult(0);
}
if (encounteringError > 0)
{
encounteringError = 0;
HostContext.WritePerfCounter("JobRenewRecovered");
}
// renew again after 60 sec delay
await HostContext.Delay(TimeSpan.FromSeconds(60), token);
}
catch (TaskOrchestrationJobNotFoundException)
{
// no need for retry. the job is not valid anymore.
Trace.Info($"TaskAgentJobNotFoundException received when renew job {jobId}, job is no longer valid, stop renew job request.");
return;
}
catch (OperationCanceledException) when (token.IsCancellationRequested)
{
// OperationCanceledException may caused by http timeout or _lockRenewalTokenSource.Cance();
// Stop renew only on cancellation token fired.
Trace.Info($"job renew has been cancelled, stop renew job {jobId}.");
return;
}
catch (Exception ex)
{
Trace.Error($"Catch exception during renew runner job {jobId}.");
Trace.Error(ex);
encounteringError++;
// retry
TimeSpan remainingTime = TimeSpan.Zero;
if (!firstJobRequestRenewed.Task.IsCompleted)
{
// retry 5 times every 10 sec for the first renew
if (firstRenewRetryLimit-- > 0)
{
remainingTime = TimeSpan.FromSeconds(10);
}
}
else
{
// retry till reach lockeduntil + 5 mins extra buffer.
remainingTime = request.LockedUntil.Value + TimeSpan.FromMinutes(5) - DateTime.UtcNow;
}
if (remainingTime > TimeSpan.Zero)
{
TimeSpan delayTime;
if (!firstJobRequestRenewed.Task.IsCompleted)
{
Trace.Info($"Retrying lock renewal for job {jobId}. The first job renew request has failed.");
delayTime = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(10));
}
else
{
Trace.Info($"Retrying lock renewal for job {jobId}. Job is valid until {request.LockedUntil.Value}.");
if (encounteringError > 5)
{
delayTime = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(30));
}
else
{
delayTime = BackoffTimerHelper.GetRandomBackoff(TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(15));
}
}
try
{
// back-off before next retry.
await HostContext.Delay(delayTime, token);
}
catch (OperationCanceledException) when (token.IsCancellationRequested)
{
Trace.Info($"job renew has been cancelled, stop renew job {jobId}.");
}
}
else
{
Trace.Info($"Lock renewal has run out of retry, stop renew lock for job {jobId}.");
HostContext.WritePerfCounter("JobRenewReachLimit");
return;
}
}
}
}
private async Task RenewJobRequestAsync(IRunnerServer runnerServer, int poolId, long requestId, Guid lockToken, string orchestrationId, TaskCompletionSource<int> firstJobRequestRenewed, CancellationToken token)
{
-var runnerServer = HostContext.GetService<IRunnerServer>();
TaskAgentJobRequest request = null;
int firstRenewRetryLimit = 5;
int encounteringError = 0;
@@ -831,90 +969,93 @@ namespace GitHub.Runner.Listener
var systemConnection = message.Resources.Endpoints.SingleOrDefault(x => string.Equals(x.Name, WellKnownServiceEndpointNames.SystemVssConnection));
ArgUtil.NotNull(systemConnection, nameof(systemConnection));

-var jobServer = HostContext.GetService<IJobServer>();
-VssCredentials jobServerCredential = VssUtil.GetVssCredential(systemConnection);
-VssConnection jobConnection = VssUtil.CreateConnection(systemConnection.Url, jobServerCredential);
-await jobServer.ConnectAsync(jobConnection);
+var server = await InitializeJobServerAsync(systemConnection);

+if (server is IJobServer jobServer)
+{
var timeline = await jobServer.GetTimelineAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, message.Timeline.Id, CancellationToken.None);

var updatedRecords = new List<TimelineRecord>();
var logPages = new Dictionary<Guid, Dictionary<int, string>>();
var logRecords = new Dictionary<Guid, TimelineRecord>();
foreach (var log in logs)
{
var logName = Path.GetFileNameWithoutExtension(log);
var logNameParts = logName.Split('_', StringSplitOptions.RemoveEmptyEntries);
if (logNameParts.Length != 3)
{
Trace.Warning($"log file '{log}' doesn't follow naming convension 'GUID_GUID_INT'.");
continue;
}

var logPageSeperator = logName.IndexOf('_');
var logRecordId = Guid.Empty;
var pageNumber = 0;
if (!Guid.TryParse(logNameParts[0], out Guid timelineId) || timelineId != timeline.Id)
{
Trace.Warning($"log file '{log}' is not belongs to current job");
continue;
}

if (!Guid.TryParse(logNameParts[1], out logRecordId))
{
Trace.Warning($"log file '{log}' doesn't follow naming convension 'GUID_GUID_INT'.");
continue;
}

if (!int.TryParse(logNameParts[2], out pageNumber))
{
Trace.Warning($"log file '{log}' doesn't follow naming convension 'GUID_GUID_INT'.");
continue;
}

var record = timeline.Records.FirstOrDefault(x => x.Id == logRecordId);
if (record != null)
{
if (!logPages.ContainsKey(record.Id))
{
logPages[record.Id] = new Dictionary<int, string>();
logRecords[record.Id] = record;
}

logPages[record.Id][pageNumber] = log;
}
}

foreach (var pages in logPages)
{
var record = logRecords[pages.Key];
if (record.Log == null)
{
// Create the log
record.Log = await jobServer.CreateLogAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, new TaskLog(String.Format(@"logs\{0:D}", record.Id)), default(CancellationToken));

// Need to post timeline record updates to reflect the log creation
updatedRecords.Add(record.Clone());
}

for (var i = 1; i <= pages.Value.Count; i++)
{
var logFile = pages.Value[i];

// Upload the contents
using (FileStream fs = File.Open(logFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
var logUploaded = await jobServer.AppendLogContentAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, record.Log.Id, fs, default(CancellationToken));
}

Trace.Info($"Uploaded unfinished log '{logFile}' for current job.");
IOUtil.DeleteFile(logFile);
}
}

if (updatedRecords.Count > 0)
{
await jobServer.UpdateTimelineRecordsAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, message.Timeline.Id, updatedRecords, CancellationToken.None);
}
+}
+else
+{
+Trace.Info("Job server does not support log upload yet.");
+}
}
catch (Exception ex)
@@ -924,7 +1065,7 @@ namespace GitHub.Runner.Listener
}
}

-private async Task CompleteJobRequestAsync(int poolId, Pipelines.AgentJobRequestMessage message, Guid lockToken, TaskResult result, string detailInfo = null)
+private async Task CompleteJobRequestAsync(int poolId, Pipelines.AgentJobRequestMessage message, ServiceEndpoint systemConnection, Guid lockToken, TaskResult result, string detailInfo = null)
{
Trace.Entering();
@@ -934,9 +1075,31 @@ namespace GitHub.Runner.Listener
return;
}
if (this._isRunServiceJob)
{
var runServer = await GetRunServerAsync(systemConnection);
var unhandledExceptionIssue = new Issue() { Type = IssueType.Error, Message = detailInfo };
var unhandledAnnotation = unhandledExceptionIssue.ToAnnotation();
var jobAnnotations = new List<Annotation>();
if (unhandledAnnotation.HasValue)
{
jobAnnotations.Add(unhandledAnnotation.Value);
}
try
{
await runServer.CompleteJobAsync(message.Plan.PlanId, message.JobId, result, outputs: null, stepResults: null, jobAnnotations: jobAnnotations, CancellationToken.None);
}
catch (Exception ex)
{
Trace.Error("Fail to raise job completion back to service.");
Trace.Error(ex);
}
return;
}
var runnerServer = HostContext.GetService<IRunnerServer>();
int completeJobRequestRetryLimit = 5;
-List<Exception> exceptions = new List<Exception>();
+List<Exception> exceptions = new();
while (completeJobRequestRetryLimit-- > 0)
{
try
@@ -970,66 +1133,102 @@ namespace GitHub.Runner.Listener
}

// log an error issue to job level timeline record
-private async Task LogWorkerProcessUnhandledException(IJobServer jobServer, Pipelines.AgentJobRequestMessage message, string errorMessage)
-{
-try
-{
-var timeline = await jobServer.GetTimelineAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, message.Timeline.Id, CancellationToken.None);
-ArgUtil.NotNull(timeline, nameof(timeline));
-TimelineRecord jobRecord = timeline.Records.FirstOrDefault(x => x.Id == message.JobId && x.RecordType == "Job");
-ArgUtil.NotNull(jobRecord, nameof(jobRecord));
-try
-{
-if (!string.IsNullOrEmpty(errorMessage) &&
-message.Variables.TryGetValue("DistributedTask.EnableRunnerIPCDebug", out var enableRunnerIPCDebug) &&
-StringUtil.ConvertToBoolean(enableRunnerIPCDebug.Value))
-{
-// the trace should be best effort and not affect any job result
-var match = _invalidJsonRegex.Match(errorMessage);
-if (match.Success &&
-match.Groups.Count == 2)
-{
-var jsonPosition = int.Parse(match.Groups[1].Value);
-var serializedJobMessage = JsonUtility.ToString(message);
-var originalJson = serializedJobMessage.Substring(jsonPosition - 10, 20);
-errorMessage = $"Runner sent Json at position '{jsonPosition}': {originalJson} ({Convert.ToBase64String(Encoding.UTF8.GetBytes(originalJson))})\n{errorMessage}";
-}
-}
-}
-catch (Exception ex)
-{
-Trace.Error(ex);
-errorMessage = $"Fail to check json IPC error: {ex.Message}\n{errorMessage}";
-}
-var unhandledExceptionIssue = new Issue() { Type = IssueType.Error, Message = errorMessage };
-unhandledExceptionIssue.Data[Constants.Runner.InternalTelemetryIssueDataKey] = Constants.Runner.WorkerCrash;
-jobRecord.ErrorCount++;
-jobRecord.Issues.Add(unhandledExceptionIssue);
-await jobServer.UpdateTimelineRecordsAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, message.Timeline.Id, new TimelineRecord[] { jobRecord }, CancellationToken.None);
-}
-catch (Exception ex)
-{
-Trace.Error("Fail to report unhandled exception from Runner.Worker process");
-Trace.Error(ex);
-}
-}
+private async Task LogWorkerProcessUnhandledException(IRunnerService server, Pipelines.AgentJobRequestMessage message, string detailInfo)
+{
+if (server is IJobServer jobServer)
+{
+try
+{
+var timeline = await jobServer.GetTimelineAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, message.Timeline.Id, CancellationToken.None);
+ArgUtil.NotNull(timeline, nameof(timeline));
+TimelineRecord jobRecord = timeline.Records.FirstOrDefault(x => x.Id == message.JobId && x.RecordType == "Job");
+ArgUtil.NotNull(jobRecord, nameof(jobRecord));
+var unhandledExceptionIssue = new Issue() { Type = IssueType.Error, Message = detailInfo };
+unhandledExceptionIssue.Data[Constants.Runner.InternalTelemetryIssueDataKey] = Constants.Runner.WorkerCrash;
+jobRecord.ErrorCount++;
+jobRecord.Issues.Add(unhandledExceptionIssue);
+await jobServer.UpdateTimelineRecordsAsync(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, message.Timeline.Id, new TimelineRecord[] { jobRecord }, CancellationToken.None);
+}
+catch (Exception ex)
+{
+Trace.Error("Fail to report unhandled exception from Runner.Worker process");
+Trace.Error(ex);
+}
+}
+else
+{
+Trace.Info("Job server does not support handling unhandled exception yet, error message: {0}", detailInfo);
+return;
+}
+}
} }
// raise job completed event to fail the job.
- private async Task ForceFailJob(IJobServer jobServer, Pipelines.AgentJobRequestMessage message)
+ private async Task ForceFailJob(IRunnerService server, Pipelines.AgentJobRequestMessage message, string detailInfo)
{
-     try
-     {
-         var jobCompletedEvent = new JobCompletedEvent(message.RequestId, message.JobId, TaskResult.Failed);
-         await jobServer.RaisePlanEventAsync<JobCompletedEvent>(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, jobCompletedEvent, CancellationToken.None);
-     }
-     catch (Exception ex)
-     {
-         Trace.Error("Fail to raise JobCompletedEvent back to service.");
-         Trace.Error(ex);
-     }
+     if (server is IJobServer jobServer)
+     {
+         try
+         {
+             var jobCompletedEvent = new JobCompletedEvent(message.RequestId, message.JobId, TaskResult.Failed);
+             await jobServer.RaisePlanEventAsync<JobCompletedEvent>(message.Plan.ScopeIdentifier, message.Plan.PlanType, message.Plan.PlanId, jobCompletedEvent, CancellationToken.None);
+         }
+         catch (Exception ex)
+         {
+             Trace.Error("Fail to raise JobCompletedEvent back to service.");
+             Trace.Error(ex);
+         }
+     }
+     else if (server is IRunServer runServer)
+     {
+         try
+         {
+             var unhandledExceptionIssue = new Issue() { Type = IssueType.Error, Message = detailInfo };
+             var unhandledAnnotation = unhandledExceptionIssue.ToAnnotation();
+             var jobAnnotations = new List<Annotation>();
+             if (unhandledAnnotation.HasValue)
+             {
+                 jobAnnotations.Add(unhandledAnnotation.Value);
+             }
+             await runServer.CompleteJobAsync(message.Plan.PlanId, message.JobId, TaskResult.Failed, outputs: null, stepResults: null, jobAnnotations: jobAnnotations, CancellationToken.None);
+         }
+         catch (Exception ex)
+         {
+             Trace.Error("Fail to raise job completion back to service.");
+             Trace.Error(ex);
+         }
+     }
+     else
+     {
+         throw new NotSupportedException($"Server type {server.GetType().FullName} is not supported.");
+     }
}
private async Task<IRunnerService> InitializeJobServerAsync(ServiceEndpoint systemConnection)
{
if (this._isRunServiceJob)
{
return await GetRunServerAsync(systemConnection);
}
else
{
var jobServer = HostContext.GetService<IJobServer>();
VssCredentials jobServerCredential = VssUtil.GetVssCredential(systemConnection);
VssConnection jobConnection = VssUtil.CreateConnection(systemConnection.Url, jobServerCredential);
await jobServer.ConnectAsync(jobConnection);
return jobServer;
}
}
private async Task<IRunServer> GetRunServerAsync(ServiceEndpoint systemConnection)
{
var runServer = HostContext.GetService<IRunServer>();
VssCredentials jobServerCredential = VssUtil.GetVssCredential(systemConnection);
await runServer.ConnectAsync(systemConnection.Url, jobServerCredential);
return runServer;
} }
private class WorkerDispatcher : IDisposable
@@ -1039,7 +1238,7 @@ namespace GitHub.Runner.Listener
public Task WorkerDispatch { get; set; }
public CancellationTokenSource WorkerCancellationTokenSource { get; private set; }
public CancellationTokenSource WorkerCancelTimeoutKillTokenSource { get; private set; }
- private readonly object _lock = new object();
+ private readonly object _lock = new();
public WorkerDispatcher(Guid jobId, long requestId)
{


@@ -38,7 +38,7 @@ namespace GitHub.Runner.Listener
private readonly TimeSpan _sessionCreationRetryInterval = TimeSpan.FromSeconds(30);
private readonly TimeSpan _sessionConflictRetryLimit = TimeSpan.FromMinutes(4);
private readonly TimeSpan _clockSkewRetryLimit = TimeSpan.FromMinutes(30);
- private readonly Dictionary<string, int> _sessionCreationExceptionTracker = new Dictionary<string, int>();
+ private readonly Dictionary<string, int> _sessionCreationExceptionTracker = new();
private TaskAgentStatus runnerStatus = TaskAgentStatus.Online;
private CancellationTokenSource _getMessagesTokenSource;
@@ -182,7 +182,7 @@ namespace GitHub.Runner.Listener
try
{
_getMessagesTokenSource?.Cancel();
}
catch (ObjectDisposedException)
{
Trace.Info("_getMessagesTokenSource is already disposed.");
@@ -198,7 +198,7 @@ namespace GitHub.Runner.Listener
bool encounteringError = false;
int continuousError = 0;
string errorMessage = string.Empty;
- Stopwatch heartbeat = new Stopwatch();
+ Stopwatch heartbeat = new();
heartbeat.Restart();
while (true)
{
@@ -211,6 +211,7 @@ namespace GitHub.Runner.Listener
_session.SessionId,
_lastMessageId,
runnerStatus,
BuildConstants.RunnerPackage.Version,
_getMessagesTokenSource.Token);
// Decrypt the message body if the session is using encryption
@@ -244,6 +245,10 @@ namespace GitHub.Runner.Listener
_accessTokenRevoked = true;
throw;
}
catch (AccessDeniedException e) when (e.InnerException is InvalidTaskAgentVersionException)
{
throw;
}
catch (Exception ex)
{
Trace.Error("Catch exception during get next message.");
@@ -288,7 +293,7 @@ namespace GitHub.Runner.Listener
await HostContext.Delay(_getNextMessageRetryInterval, token);
}
}
finally
{
_getMessagesTokenSource.Dispose();
}


@@ -1,13 +1,12 @@
using GitHub.Runner.Common;
- using GitHub.Runner.Common.Util;
using GitHub.Runner.Sdk;
using System;
using System.Globalization;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;
- using System.Threading;
using System.Threading.Tasks;
+ using GitHub.DistributedTask.WebApi;
namespace GitHub.Runner.Listener
{
@@ -18,7 +17,7 @@ namespace GitHub.Runner.Listener
// Add environment variables from .env file
LoadAndSetEnv();
- using (HostContext context = new HostContext("Runner"))
+ using (HostContext context = new("Runner"))
{
return MainAsync(context, args).GetAwaiter().GetResult();
}
@@ -60,6 +59,18 @@ namespace GitHub.Runner.Listener
terminal.WriteLine("This runner version is built for Windows. Please install a correct build for your OS."); terminal.WriteLine("This runner version is built for Windows. Please install a correct build for your OS.");
return Constants.Runner.ReturnCode.TerminatedError; return Constants.Runner.ReturnCode.TerminatedError;
} }
#if ARM64
// A little hacky, but windows gives no way to differentiate between windows 10 and 11.
// By default only 11 supports native x64 app emulation on arm, so we only want to support windows 11
// https://docs.microsoft.com/en-us/windows/arm/overview#build-windows-apps-that-run-on-arm
// Windows 10 and 11 share a MajorVersion, so we also check the build version. Minor for both is 0, so doing < 0 doesn't really make a lot of sense.
if (Environment.OSVersion.Version.Major < Constants.OperatingSystem.Windows11MajorVersion ||
Environment.OSVersion.Version.Build < Constants.OperatingSystem.Windows11BuildVersion)
{
terminal.WriteLine("Win-arm64 runners require windows 11 or later. Please upgrade your operating system.");
return Constants.Runner.ReturnCode.TerminatedError;
}
#endif
break;
default:
terminal.WriteLine($"Running the runner on this platform is not supported. The current platform is {RuntimeInformation.OSDescription} and it was built for {Constants.Runner.Platform.ToString()}.");
@@ -127,6 +138,12 @@ namespace GitHub.Runner.Listener
}
}
catch (AccessDeniedException e) when (e.InnerException is InvalidTaskAgentVersionException)
{
terminal.WriteError($"An error occured: {e.Message}");
trace.Error(e);
return Constants.Runner.ReturnCode.TerminatedError;
}
catch (Exception e)
{
terminal.WriteError($"An error occurred: {e.Message}");


@@ -3,7 +3,7 @@
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<OutputType>Exe</OutputType>
- <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64;osx-arm64</RuntimeIdentifiers>
+ <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64;osx-arm64;win-arm64</RuntimeIdentifiers>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<NoWarn>NU1701;NU1603</NoWarn>
<Version>$(Version)</Version>


@@ -1,15 +1,16 @@
using System;
using System.Collections.Generic;
using System.IO;
+ using System.IO.Compression;
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;
+ using System.Security.Cryptography;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using GitHub.DistributedTask.WebApi;
using GitHub.Runner.Common;
- using GitHub.Runner.Common.Util;
using GitHub.Runner.Listener.Check;
using GitHub.Runner.Listener.Configuration;
using GitHub.Runner.Sdk;
@@ -29,7 +30,7 @@ namespace GitHub.Runner.Listener
private IMessageListener _listener;
private ITerminal _term;
private bool _inConfigStage;
- private ManualResetEvent _completedCommand = new ManualResetEvent(false);
+ private ManualResetEvent _completedCommand = new(false);
public override void Initialize(IHostContext hostContext)
{
@@ -136,6 +137,12 @@ namespace GitHub.Runner.Listener
// remove config files, remove service, and exit
if (command.Remove)
{
// only remove local config files and exit
if (command.RemoveLocalConfig)
{
configManager.DeleteLocalRunnerConfig();
return Constants.Runner.ReturnCode.Success;
}
try
{
await configManager.UnconfigureAsync(command);
@@ -204,10 +211,16 @@ namespace GitHub.Runner.Listener
foreach (var config in jitConfig)
{
var configFile = Path.Combine(HostContext.GetDirectory(WellKnownDirectory.Root), config.Key);
- var configContent = Encoding.UTF8.GetString(Convert.FromBase64String(config.Value));
- File.WriteAllText(configFile, configContent, Encoding.UTF8);
+ var configContent = Convert.FromBase64String(config.Value);
+ #if OS_WINDOWS
+ if (configFile == HostContext.GetConfigFile(WellKnownConfigFile.RSACredentials))
+ {
+     configContent = ProtectedData.Protect(configContent, null, DataProtectionScope.LocalMachine);
+ }
+ #endif
+ File.WriteAllBytes(configFile, configContent);
File.SetAttributes(configFile, File.GetAttributes(configFile) | FileAttributes.Hidden);
- Trace.Info($"Save {configContent.Length} chars to '{configFile}'.");
+ Trace.Info($"Saved {configContent.Length} bytes to '{configFile}'.");
}
}
catch (Exception ex)
@@ -326,13 +339,26 @@ namespace GitHub.Runner.Listener
}
}
private IMessageListener GetMesageListener(RunnerSettings settings)
{
if (settings.UseV2Flow)
{
Trace.Info($"Using BrokerMessageListener");
var brokerListener = new BrokerMessageListener();
brokerListener.Initialize(HostContext);
return brokerListener;
}
return HostContext.GetService<IMessageListener>();
}
//create worker manager, create message listener and start listening to the queue
private async Task<int> RunAsync(RunnerSettings settings, bool runOnce = false)
{
try
{
Trace.Info(nameof(RunAsync));
- _listener = HostContext.GetService<IMessageListener>();
+ _listener = GetMesageListener(settings);
if (!await _listener.CreateSessionAsync(HostContext.RunnerShutdownToken))
{
return Constants.Runner.ReturnCode.TerminatedError;
@@ -431,12 +457,22 @@ namespace GitHub.Runner.Listener
message = await getNextMessage; //get next message
HostContext.WritePerfCounter($"MessageReceived_{message.MessageType}");
- if (string.Equals(message.MessageType, AgentRefreshMessage.MessageType, StringComparison.OrdinalIgnoreCase))
+ if (string.Equals(message.MessageType, AgentRefreshMessage.MessageType, StringComparison.OrdinalIgnoreCase) ||
+     string.Equals(message.MessageType, RunnerRefreshMessage.MessageType, StringComparison.OrdinalIgnoreCase))
{
if (autoUpdateInProgress == false)
{
autoUpdateInProgress = true;
- var runnerUpdateMessage = JsonUtility.FromString<AgentRefreshMessage>(message.Body);
+ AgentRefreshMessage runnerUpdateMessage = null;
if (string.Equals(message.MessageType, AgentRefreshMessage.MessageType, StringComparison.OrdinalIgnoreCase))
{
runnerUpdateMessage = JsonUtility.FromString<AgentRefreshMessage>(message.Body);
}
else
{
var brokerRunnerUpdateMessage = JsonUtility.FromString<RunnerRefreshMessage>(message.Body);
runnerUpdateMessage = new AgentRefreshMessage(brokerRunnerUpdateMessage.RunnerId, brokerRunnerUpdateMessage.TargetVersion, TimeSpan.FromSeconds(brokerRunnerUpdateMessage.TimeoutInSeconds));
}
#if DEBUG
// Can mock the update for testing
if (StringUtil.ConvertToBoolean(Environment.GetEnvironmentVariable("GITHUB_ACTIONS_RUNNER_IS_MOCK_UPDATE")))
@@ -487,7 +523,7 @@ namespace GitHub.Runner.Listener
}
}
// Broker flow
- else if (string.Equals(message.MessageType, JobRequestMessageTypes.RunnerJobRequest, StringComparison.OrdinalIgnoreCase))
+ else if (MessageUtil.IsRunServiceJob(message.MessageType))
{
if (autoUpdateInProgress || runOnceJobReceived)
{
@@ -497,16 +533,36 @@ namespace GitHub.Runner.Listener
else
{
var messageRef = StringUtil.ConvertFromJson<RunnerJobRequestRef>(message.Body);
+ Pipelines.AgentJobRequestMessage jobRequestMessage = null;
// Create connection
var credMgr = HostContext.GetService<ICredentialManager>();
var creds = credMgr.LoadCredentials();
- var runServer = HostContext.CreateService<IRunServer>();
- await runServer.ConnectAsync(new Uri(settings.ServerUrl), creds);
- var jobMessage = await runServer.GetJobMessageAsync(messageRef.RunnerRequestId, messageQueueLoopTokenSource.Token);
+ if (string.IsNullOrEmpty(messageRef.RunServiceUrl))
+ {
+     var actionsRunServer = HostContext.CreateService<IActionsRunServer>();
+     await actionsRunServer.ConnectAsync(new Uri(settings.ServerUrl), creds);
+     jobRequestMessage = await actionsRunServer.GetJobMessageAsync(messageRef.RunnerRequestId, messageQueueLoopTokenSource.Token);
+ }
+ else
+ {
+     var runServer = HostContext.CreateService<IRunServer>();
+     await runServer.ConnectAsync(new Uri(messageRef.RunServiceUrl), creds);
+     try
+     {
+         jobRequestMessage = await runServer.GetJobMessageAsync(messageRef.RunnerRequestId, messageQueueLoopTokenSource.Token);
+     }
+     catch (TaskOrchestrationJobAlreadyAcquiredException)
+     {
+         Trace.Info("Job is already acquired, skip this message.");
+         continue;
+     }
+ }
- jobDispatcher.Run(jobMessage, runOnce);
+ jobDispatcher.Run(jobRequestMessage, runOnce);
if (runOnce)
{
Trace.Info("One time used runner received job message.");
@@ -627,7 +683,9 @@ Config Options:
--token string       Registration token. Required if unattended
--name string        Name of the runner to configure (default {Environment.MachineName ?? "myrunner"})
--runnergroup string Name of the runner group to add this runner to (defaults to the default runner group)
- --labels string      Extra labels in addition to the default: 'self-hosted,{Constants.Runner.Platform},{Constants.Runner.PlatformArchitecture}'
+ --labels string      Custom labels that will be added to the runner. This option is mandatory if --no-default-labels is used.
+ --no-default-labels  Disables adding the default labels: 'self-hosted,{Constants.Runner.Platform},{Constants.Runner.PlatformArchitecture}'
+ --local              Removes the runner config files from your local machine. Used as an option to the remove command
--work string        Relative runner work directory (default {Constants.Path.WorkDirectory})
--replace            Replace any existing runner with the same name (default false)
--pat                GitHub personal access token with repo scope. Used for checking network connectivity when executing `.{separator}run.{ext} --check`
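Based on the updated options above, a configure-and-remove sequence using the new flags might look like the following; the URL, token, and label values are illustrative only, not taken from this diff:

    ./config.sh --url https://github.com/my-org/my-repo --token <REGISTRATION_TOKEN> --no-default-labels --labels gpu,ubuntu-22.04
    ./config.sh remove --local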


@@ -9,5 +9,7 @@ namespace GitHub.Runner.Listener
public string Id { get; set; }
[DataMember(Name = "runner_request_id")]
public string RunnerRequestId { get; set; }
[DataMember(Name = "run_service_url")]
public string RunServiceUrl { get; set; }
}
}
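For illustration, a broker job-request reference carrying the new field can be deserialized with the same helper Runner.cs already uses; only the property names below come from the DataMember attributes above, the values are made up:

    // Hypothetical message body; run_service_url decides which backend Runner.cs connects to.
    var body = "{ \"id\": \"42\", \"runner_request_id\": \"12345\", \"run_service_url\": \"https://run-service.example.com\" }";
    var messageRef = StringUtil.ConvertFromJson<RunnerJobRequestRef>(body);
    // An empty or missing run_service_url means IActionsRunServer against settings.ServerUrl;
    // otherwise IRunServer connects to messageRef.RunServiceUrl (see the Runner.cs hunk above).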


@@ -32,14 +32,14 @@ namespace GitHub.Runner.Listener
private static string _platform = BuildConstants.RunnerPackage.PackageName;
private static string _dotnetRuntime = "dotnetRuntime";
private static string _externals = "externals";
- private readonly Dictionary<string, string> _contentHashes = new Dictionary<string, string>();
+ private readonly Dictionary<string, string> _contentHashes = new();
private PackageMetadata _targetPackage;
private ITerminal _terminal;
private IRunnerServer _runnerServer;
private int _poolId;
private int _agentId;
- private readonly ConcurrentQueue<string> _updateTrace = new ConcurrentQueue<string>();
+ private readonly ConcurrentQueue<string> _updateTrace = new();
private Task _cloneAndCalculateContentHashTask;
private string _dotnetRuntimeCloneDirectory;
private string _externalsCloneDirectory;
@@ -134,7 +134,7 @@ namespace GitHub.Runner.Listener
string flagFile = "update.finished";
IOUtil.DeleteFile(flagFile);
// kick off update script
- Process invokeScript = new Process();
+ Process invokeScript = new();
#if OS_WINDOWS
invokeScript.StartInfo.FileName = WhichUtil.Which("cmd.exe", trace: Trace);
invokeScript.StartInfo.Arguments = $"/c \"{updateScript}\"";
@@ -191,9 +191,9 @@ namespace GitHub.Runner.Listener
}
Trace.Info($"Version '{_targetPackage.Version}' of '{_targetPackage.Type}' package available in server.");
- PackageVersion serverVersion = new PackageVersion(_targetPackage.Version);
+ PackageVersion serverVersion = new(_targetPackage.Version);
Trace.Info($"Current running runner version is {BuildConstants.RunnerPackage.Version}");
- PackageVersion runnerVersion = new PackageVersion(BuildConstants.RunnerPackage.Version);
+ PackageVersion runnerVersion = new(BuildConstants.RunnerPackage.Version);
return serverVersion.CompareTo(runnerVersion) > 0;
}
@@ -476,7 +476,7 @@ namespace GitHub.Runner.Listener
long downloadSize = 0;
//open zip stream in async mode
- using (HttpClient httpClient = new HttpClient(HostContext.CreateHttpClientHandler()))
+ using (HttpClient httpClient = new(HostContext.CreateHttpClientHandler()))
{
if (!string.IsNullOrEmpty(_targetPackage.Token))
{
@@ -486,7 +486,7 @@ namespace GitHub.Runner.Listener
Trace.Info($"Downloading {packageDownloadUrl}"); Trace.Info($"Downloading {packageDownloadUrl}");
using (FileStream fs = new FileStream(archiveFile, FileMode.Create, FileAccess.Write, FileShare.None, bufferSize: 4096, useAsync: true)) using (FileStream fs = new(archiveFile, FileMode.Create, FileAccess.Write, FileShare.None, bufferSize: 4096, useAsync: true))
using (Stream result = await httpClient.GetStreamAsync(packageDownloadUrl)) using (Stream result = await httpClient.GetStreamAsync(packageDownloadUrl))
{ {
//81920 is the default used by System.IO.Stream.CopyTo and is under the large object heap threshold (85k). //81920 is the default used by System.IO.Stream.CopyTo and is under the large object heap threshold (85k).
@@ -596,7 +596,7 @@ namespace GitHub.Runner.Listener
int exitCode = await processInvoker.ExecuteAsync(extractDirectory, tar, $"-xzf \"{archiveFile}\"", null, token);
if (exitCode != 0)
{
- throw new NotSupportedException($"Can't use 'tar -xzf' extract archive file: {archiveFile}. return code: {exitCode}.");
+ throw new NotSupportedException($"Can't use 'tar -xzf' to extract archive file: {archiveFile}. return code: {exitCode}.");
}
}
}


@@ -1,13 +1,10 @@
using System;
- using System.Collections.Generic;
using System.Globalization;
using System.IO;
- using System.Linq;
using System.Reflection;
using System.Runtime.Loader;
using System.Text;
using System.Threading;
- using System.Threading.Tasks;
using GitHub.Runner.Sdk;
using GitHub.DistributedTask.WebApi;
@@ -15,7 +12,7 @@ namespace GitHub.Runner.PluginHost
{
public static class Program
{
- private static CancellationTokenSource tokenSource = new CancellationTokenSource();
+ private static CancellationTokenSource tokenSource = new();
private static string executingAssemblyLocation = string.Empty;
public static int Main(string[] args)


@@ -3,7 +3,7 @@
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<OutputType>Exe</OutputType>
- <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64;osx-arm64</RuntimeIdentifiers>
+ <RuntimeIdentifiers>win-x64;win-x86;linux-x64;linux-arm64;linux-arm;osx-x64;osx-arm64;win-arm64</RuntimeIdentifiers>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<NoWarn>NU1701;NU1603</NoWarn>
<Version>$(Version)</Version>


@@ -63,7 +63,7 @@ namespace GitHub.Runner.Plugins.Artifact
string containerPath = actionsStorageArtifact.Name; // In actions storage artifacts, name equals the path
long containerId = actionsStorageArtifact.ContainerId;
- FileContainerServer fileContainerServer = new FileContainerServer(context.VssConnection, projectId: new Guid(), containerId, containerPath);
+ FileContainerServer fileContainerServer = new(context.VssConnection, projectId: new Guid(), containerId, containerPath);
await fileContainerServer.DownloadFromContainerAsync(context, targetPath, token);
context.Output("Artifact download finished.");


@@ -23,10 +23,10 @@ namespace GitHub.Runner.Plugins.Artifact
//81920 is the default used by System.IO.Stream.CopyTo and is under the large object heap threshold (85k).
private const int _defaultCopyBufferSize = 81920;
- private readonly ConcurrentQueue<string> _fileUploadQueue = new ConcurrentQueue<string>();
+ private readonly ConcurrentQueue<string> _fileUploadQueue = new();
- private readonly ConcurrentQueue<DownloadInfo> _fileDownloadQueue = new ConcurrentQueue<DownloadInfo>();
+ private readonly ConcurrentQueue<DownloadInfo> _fileDownloadQueue = new();
- private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _fileUploadTraceLog = new ConcurrentDictionary<string, ConcurrentQueue<string>>();
+ private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _fileUploadTraceLog = new();
- private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _fileUploadProgressLog = new ConcurrentDictionary<string, ConcurrentQueue<string>>();
+ private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _fileUploadProgressLog = new();
private readonly FileContainerHttpClient _fileContainerHttpClient;
private CancellationTokenSource _uploadCancellationTokenSource;
@@ -67,7 +67,7 @@ namespace GitHub.Runner.Plugins.Artifact
CancellationToken cancellationToken)
{
// Find out all container items need to be processed
- List<FileContainerItem> containerItems = new List<FileContainerItem>();
+ List<FileContainerItem> containerItems = new();
int retryCount = 0;
while (retryCount < 3)
{
@@ -106,7 +106,7 @@ namespace GitHub.Runner.Plugins.Artifact
// Create all required empty folders and empty files, gather a list of files that we need to download from server.
int foldersCreated = 0;
int emptryFilesCreated = 0;
- List<DownloadInfo> downloadFiles = new List<DownloadInfo>();
+ List<DownloadInfo> downloadFiles = new();
foreach (var item in containerItems.OrderBy(x => x.Path))
{
if (!item.Path.StartsWith(_containerPath, StringComparison.OrdinalIgnoreCase))
@@ -306,7 +306,7 @@ namespace GitHub.Runner.Plugins.Artifact
Task downloadMonitor = DownloadReportingAsync(context, files.Count(), token);
// Start parallel download tasks.
- List<Task<DownloadResult>> parallelDownloadingTasks = new List<Task<DownloadResult>>();
+ List<Task<DownloadResult>> parallelDownloadingTasks = new();
for (int downloader = 0; downloader < concurrentDownloads; downloader++)
{
parallelDownloadingTasks.Add(DownloadAsync(context, downloader, token));
@@ -358,7 +358,7 @@ namespace GitHub.Runner.Plugins.Artifact
Task uploadMonitor = UploadReportingAsync(context, files.Count(), _uploadCancellationTokenSource.Token);
// Start parallel upload tasks.
- List<Task<UploadResult>> parallelUploadingTasks = new List<Task<UploadResult>>();
+ List<Task<UploadResult>> parallelUploadingTasks = new();
for (int uploader = 0; uploader < concurrentUploads; uploader++)
{
parallelUploadingTasks.Add(UploadAsync(context, uploader, _uploadCancellationTokenSource.Token));
@@ -381,8 +381,8 @@ namespace GitHub.Runner.Plugins.Artifact
private async Task<DownloadResult> DownloadAsync(RunnerActionPluginExecutionContext context, int downloaderId, CancellationToken token)
{
- List<DownloadInfo> failedFiles = new List<DownloadInfo>();
+ List<DownloadInfo> failedFiles = new();
- Stopwatch downloadTimer = new Stopwatch();
+ Stopwatch downloadTimer = new();
while (_fileDownloadQueue.TryDequeue(out DownloadInfo fileToDownload))
{
token.ThrowIfCancellationRequested();
@@ -396,7 +396,7 @@ namespace GitHub.Runner.Plugins.Artifact
{
context.Debug($"Start downloading file: '{fileToDownload.ItemPath}' (Downloader {downloaderId})");
downloadTimer.Restart();
- using (FileStream fs = new FileStream(fileToDownload.LocalPath, FileMode.Create, FileAccess.Write, FileShare.None, bufferSize: _defaultFileStreamBufferSize, useAsync: true))
+ using (FileStream fs = new(fileToDownload.LocalPath, FileMode.Create, FileAccess.Write, FileShare.None, bufferSize: _defaultFileStreamBufferSize, useAsync: true))
using (var downloadStream = await _fileContainerHttpClient.DownloadFileAsync(_containerId, fileToDownload.ItemPath, token, _projectId))
{
await downloadStream.CopyToAsync(fs, _defaultCopyBufferSize, token);
@@ -453,10 +453,10 @@ namespace GitHub.Runner.Plugins.Artifact
private async Task<UploadResult> UploadAsync(RunnerActionPluginExecutionContext context, int uploaderId, CancellationToken token)
{
- List<string> failedFiles = new List<string>();
+ List<string> failedFiles = new();
long uploadedSize = 0;
string fileToUpload;
- Stopwatch uploadTimer = new Stopwatch();
+ Stopwatch uploadTimer = new();
while (_fileUploadQueue.TryDequeue(out fileToUpload))
{
token.ThrowIfCancellationRequested();

Some files were not shown because too many files have changed in this diff.