
Maintenance: Require integration tests before PR merge #1149


Closed
1 of 5 tasks
dreamorosi opened this issue Nov 8, 2022 · 6 comments
Labels
- automation: This item relates to automation
- internal: PRs that introduce changes in governance, tech debt and chores (linting setup, baseline, etc.)
- rejected: This is something we will not be working on. At least, not in the measurable future

Comments

@dreamorosi (Contributor)

Summary

At the moment, integration (e2e) tests are run manually by maintainers. Our standard operating procedure requires running these tests before merging a PR and before making a release; however, human errors happen and maintainers can forget to do so.

This issue tracks the work needed to make these tests a required check for merging a PR.

Maintainers should be able to bypass this check based on context (e.g. when changes affect only documentation).

Why is this needed?

To reduce the chance of human error: automating this check would enforce the requirement of running these tests.

Which area does this relate to?

Automation, Tests

Solution

  • Investigate branch status checks
  • Investigate how they can be enabled on PRs so that a specific workflow has to be successful before merge
  • If required, investigate how the run-e2e-tests workflow can report its status into a PR under these checks
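As a rough sketch of how the steps above could fit together (the job name and test command below are hypothetical, not the repo's actual `run-e2e-tests` workflow): a job that runs on every pull request reports a status, and that status can then be selected as a required check under the repository's branch protection rules.

```yaml
# Hypothetical sketch: the job id "e2e-tests" is what would be chosen
# as a required status check in the branch protection settings.
name: E2E

on:
  pull_request:
    branches: [main]

jobs:
  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run integration tests
        run: npm run test:e2e   # placeholder command
```

Note that marking the check as required is not done in the workflow file itself; it is configured in the repository's branch protection settings (or via the REST API).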

Acknowledgment

@dreamorosi added the `triage` and `internal` labels on Nov 8, 2022
@dreamorosi added the `automation`, `need-more-information`, and `discussing` labels and removed `triage` on Nov 13, 2022
@github-actions

This comment was marked as off-topic.

@github-actions bot added the `pending-close-response-required` label on Feb 28, 2023
@dreamorosi removed the `pending-close-response-required` and `need-more-information` labels on Feb 28, 2023
@am29d (Contributor) commented Nov 21, 2023

I have looked into this and see two options: (1) run the e2e test workflow when the PR is approved, or (2) run it when a specific label is added.

For option 1, it could look like this:

```yaml
on:
  pull_request_review:
    types: [submitted]

jobs:
  approved:
    if: github.event.review.state == 'APPROVED'
    runs-on: ubuntu-latest
    steps:
      - run: echo "This PR was approved"
```

For option 2:

```yaml
name: CI

on:
  pull_request:
    types: [labeled]

jobs:
  build:
    if: ${{ github.event.label.name == 'e2e-test' }}
    runs-on: ubuntu-latest

    steps:
      # actions/checkout@v2 is deprecated; v4 is the current major version
      - uses: actions/checkout@v4
      - name: Run a one-line script
        run: echo Hello, world!
```

Sadly, both approaches execute the workflow only once, so when we need to rerun the e2e tests we have to remove and re-add the label, or dismiss and re-submit the approval. Either way floods the PR's status history.

@dreamorosi (Contributor, Author)

Thanks for looking into this. There's a third option I'd like to explore next year: use webhooks to listen for events and trigger the tests on a specific comment from maintainers, similar to how Dependabot responds when you ask it to rebase or close a PR.
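A simpler, GitHub-Actions-only variant of the comment-triggered idea could use the `issue_comment` event (the trigger phrase and test command below are hypothetical). Note that `issue_comment` fires for comments on both issues and PRs, so the job has to filter:

```yaml
# Hypothetical sketch: run e2e tests when someone comments "/run-e2e" on a PR.
name: E2E on comment

on:
  issue_comment:
    types: [created]

jobs:
  e2e-tests:
    # Only react to PR comments that contain the trigger phrase.
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/run-e2e')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run test:e2e   # placeholder command
```

One caveat: `issue_comment` workflows run against the default branch, so checking out and testing the PR's head ref requires extra steps, and doing so for untrusted forks has security implications.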

If we need to enable integration tests more consistently in the short term, we can do what Python does and run them on merge to main instead.
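The short-term fallback mentioned here (running on merge to main, as the Python project does) would amount to swapping the workflow trigger for a push trigger:

```yaml
on:
  push:
    branches: [main]
```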

@am29d (Contributor) commented Nov 22, 2023

Sounds good. There is also the option of merge queues, which are essentially a staging branch used to run specific workflows before automatically merging to main (feat/my-feat -> staging -> main).
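GitHub's merge queue exposes its own workflow event, `merge_group`, which fires when a queued PR is being tested on the queue's temporary branch. A minimal sketch (the test command is a placeholder):

```yaml
# Hypothetical sketch: run e2e tests against the merge queue's temporary branch.
name: E2E on merge queue

on:
  merge_group:

jobs:
  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run test:e2e   # placeholder command
```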

@dreamorosi (Contributor, Author)

Closing this in favour of #2123

@dreamorosi closed this as not planned (won't fix, can't repro, duplicate, stale) on Feb 21, 2024
@github-project-automation github-project-automation bot moved this from Backlog to Coming soon in Powertools for AWS Lambda (TypeScript) Feb 21, 2024
@dreamorosi dreamorosi moved this from Coming soon to Closed in Powertools for AWS Lambda (TypeScript) Feb 21, 2024

⚠️ COMMENT VISIBILITY WARNING ⚠️

This issue is now closed. Please be mindful that future comments are hard for our team to see.

If you need more assistance, please either tag a team member or open a new issue that references this one.

If you wish to keep having a conversation with other community members under this issue feel free to do so.

@dreamorosi added the `rejected` label and removed `discussing` on Feb 21, 2024