Support Actions `concurrency` syntax #32751
Conversation
Move the main logic of `generateTaskContext` and `findTaskNeeds` to the `services` layer. This is a part of #32751, since we need the git context and `needs` to parse the concurrency expressions.

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
If you'd like to work with others' PRs often, maybe you can ask @go-gitea/technical-oversight-committee or some core members like @lunny to grant you merger access; then you can edit PRs directly.
I would expect this to happen < 5 times per year. I actually only applied to become a maintainer in order to work with others' PRs. I will ask formally on Discord.
New defect found, needs fixing in gitea/act. Workflow:

```yaml
on: push
jobs:
  test:
    concurrency: test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
```

The short `concurrency` syntax is not parseable. Do we have this bug in act_runner as well? Do we have this bug in Gitea already?
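For context, GitHub Actions accepts `concurrency` at the job level either as a bare string (used directly as the group name) or as a map; the failing case above uses the string form. A minimal sketch of the two equivalent spellings:

```yaml
jobs:
  test:
    # String form: the value itself is the concurrency group name.
    concurrency: test
    runs-on: ubuntu-latest
    steps:
      - run: echo "string form"

  test-map:
    # Map form: the same group, with cancel-in-progress spelled out.
    concurrency:
      group: test
      cancel-in-progress: false
    runs-on: ubuntu-latest
    steps:
      - run: echo "map form"
```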
One thing that I've noticed while running it is that manually cancelling a job that is blocking another job will not start the blocked job. You have to manually cancel & re-run the blocked job for it to be picked up. So, if e.g. you have a job with:

```yaml
concurrency:
  group: main
  cancel-in-progress: false
```

and you push twice to `main`, manually cancelling the first (blocking) run will not start the second (blocked) one.
Thanks for pointing this out. I'll convert this to a draft until I am certain this works correctly. I will add this scenario to the test cases and try to fix it.
Sounds good. Feel free to ping / DM me on Discord if you want me to test something, or need something else. I've been running this for a while on my instance already.
@ChristopherHX Sorry for the late reply, as I was busy with other work last month. Thank you for the improvements and bug fixes to this PR. Please feel free to update the code. I will also check if there are still issues with this PR.
Hi, no problem. Once we both agree this is finished and can be merged, I'll approve. Good that you already looked at the cancellation problem. I'm still thinking about whether my code removal is correct, in the sense of not merely letting the existing tests pass. I also found another behavior difference.

Would we want to add a custom boolean to implement both behaviors?

```yaml
concurrency:
  cancel-pending: true # default true for GitHub Actions compat, false for current implementation
```

Would we want to keep the behavior difference and document it?
```go
if err != nil {
	ctx.ServerError("GetRunByIndex", err)
}
if err := actions_service.EmitJobsIfReady(run.ID); err != nil {
```
Queue `EmitJobsIfReady` from the Actions notifier's job-completed and workflow-completed events (the event source used by the `workflow_run` and `workflow_job` webhooks / Actions trigger)? I mean, what if the job is cancelled by abandonment or for other reasons? That would be less explicit code, without the same weakness towards all the ways a job can be cancelled.
When a job completes (regardless of whether its status is success, failure, or cancelled), we need to check for other jobs that might have been blocked by it. I added the cancelled job to the queue so that the blocked jobs can be detected.

> I mean, what if the job is cancelled by abandonment or for other reasons?

Yes, this change cannot cover all the cases where jobs are cancelled. So I am thinking about whether we can do this check when a job reaches its final status (Success, Failed, Cancelled, or Skipped). Maybe we can move `UpdateRunJob` to the `services` layer to allow it to call `EmitJobsIfReady` when a job's status is updated.
I think these changes look good. Thanks for simplifying the code.
Yes. I noticed the behavior difference between GitHub Actions and my implementation. I think for now we can use GitHub's logic. And in the future, we can add the proposed `cancel-pending` option if we want to support both behaviors.
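A minimal sketch of that idea, assuming hypothetical wiring (the services-layer wrapper below is illustrative, not this PR's actual implementation): re-check concurrency whenever a job reaches a terminal status, regardless of why it ended.

```go
package actions

import (
	"context"

	actions_model "code.gitea.io/gitea/models/actions"
)

// UpdateRunJob is a hypothetical services-layer wrapper: after persisting a
// status change, it re-checks concurrency whenever the job reached a terminal
// status (success, failure, cancelled, skipped), covering abandonment and
// other cancellation paths as well.
func UpdateRunJob(ctx context.Context, job *actions_model.ActionRunJob, cols ...string) error {
	if _, err := actions_model.UpdateRunJob(ctx, job, nil, cols...); err != nil {
		return err
	}
	if job.Status.IsDone() {
		// Emit any jobs that were blocked on this job's concurrency group.
		return EmitJobsIfReady(job.RunID)
	}
	return nil
}
```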
In `models/actions/run_job.go`:
```go
RawConcurrencyGroup    string                // raw concurrency.group
RawConcurrencyCancel   string                // raw concurrency.cancel-in-progress
IsConcurrencyEvaluated bool                  // whether RawConcurrencyGroup has been evaluated; only valid when RawConcurrencyGroup is not empty
ConcurrencyGroup       string `xorm:"index"` // evaluated concurrency.group
ConcurrencyCancel      bool                  // evaluated concurrency.cancel-in-progress
```
- Are there some examples of how `RawConcurrencyGroup` vs `ConcurrencyGroup` works?
- Does the `xorm:"index"` tag have an effect? For example: are there SQL queries like `WHERE concurrency_group = ...` without another indexed column? Or should the index be optimized into a combined one like `(repo_id, concurrency_group)`?
About the first, while not an example...

- `RawConcurrencyGroup` stores something like `${{ github.run_id }}`; once the job's dependencies are ready / before the job is queued, this is evaluated.
  - This field is not used to cancel / block any jobs/workflows.
  - It avoids parsing the workflow again.
- `ConcurrencyGroup` stores the expression result and is used for concurrency processing.
  - It is used to calculate whether a job is concurrency-blocked, and to find jobs/workflows to cancel that are in the running/queued state.

In my opinion we might need this for the `ActionRun` table as well: once we rerun the workflow/job, variables in `${{ vars.MY_VAR }}` might have been updated.

I'll leave point 2 unanswered, because I need to understand it as well. `repo_id` in the index sounds reasonable, but could also be not specific enough.
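As an illustration (a hypothetical workflow, not taken from this PR): given the job below, `RawConcurrencyGroup` would store the unevaluated string `deploy-${{ github.ref }}`, and after evaluation `ConcurrencyGroup` would hold something like `deploy-refs/heads/main`, which is what the blocking/cancellation queries match against.

```yaml
jobs:
  deploy:
    # The group is stored raw first, then evaluated once the job is ready to queue.
    concurrency:
      group: deploy-${{ github.ref }}
      cancel-in-progress: true
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh
```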
For question 2, since the scope of `concurrency_group` is the repo, we always use `repo_id` when querying with `concurrency_group`. So we can use a combined index here.
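A minimal sketch of such a combined index using xorm's named-index tag (the index name is illustrative; the field names follow the diff above). Giving both fields the same index name makes xorm create one composite index instead of two single-column ones:

```go
type ActionRunJob struct {
	RepoID           int64  `xorm:"index(action_run_job_concurrency)"`
	ConcurrencyGroup string `xorm:"index(action_run_job_concurrency)"`
}
```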
Signed-off-by: Zettat123 <zettat123@gmail.com>
Fix #24769
Fix #32662
Fix #33260
Depends on https://gitea.com/gitea/act/pulls/124
This PR removes the auto-cancellation feature added by #25716. Users need to manually add `concurrency` to workflows to control concurrent workflows or jobs.
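For example, a workflow that wants the old auto-cancel behavior back would declare it explicitly; a minimal sketch using the standard GitHub Actions `concurrency` syntax:

```yaml
name: CI
on: push

# Cancel an in-progress run of this workflow when a newer push arrives
# on the same ref, replacing the removed implicit auto-cancellation.
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
```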