Move CI to GitHub Actions #321
base: main
Conversation
Signed-off-by: Alex Bozarth <[email protected]>
Note that I have only opened this as a draft to allow for iterative feedback. It is far from complete, and the current workflow is just my first working example (an MVP, for the agile folks). I decided having an open draft would be OK.
also update openssl3 workflow to only use the available 4 cores Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
move the build against liboqs/oqsprovider matrix builds to a triggerable option not run automatically Signed-off-by: Alex Bozarth <[email protected]>
Final update before I go on vacation until Monday. I added workflows for curl and httpd and moved the build against liboqs main from a matrix build to a trigger option. I've updated the description and previous commits to clarify my latest changes.
@ajbozarth I would like to add a linter for Dockerfiles, such as Hadolint. Or should we rely on MegaLinter instead? What do you think?
I have noticed that some GitHub Actions are not using the latest versions, and I also observed the use of a fixed value with the `-j` flag. Wouldn't it be better to rely on `-j$(nproc)` instead? What do you think?
This is a great idea. I'm not sure which linter to use; I've never linted Dockerfiles before. Creating a linter GitHub Action would be a great task you could do in parallel to this PR. You would most likely need to make a lot of small updates to get the linter to pass initially, though.
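For reference, a minimal sketch of what such a lint job could look like, assuming the off-the-shelf `hadolint/hadolint-action` is used; the Dockerfile path is a placeholder, not a statement about this repo's layout:

```yaml
# Hypothetical lint workflow sketch (not part of this PR).
name: lint-dockerfiles

on: [push, pull_request]

jobs:
  hadolint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The action lints one Dockerfile per invocation; repeat the step (or
      # use a matrix over demo directories) to cover the whole repo.
      - uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: curl/Dockerfile   # placeholder path
```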
How so? The actions I added check out and run the latest code locally; there's currently no versioning at all.
This is something I'm aware of; on CircleCI it was being hard-coded to …
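The general idea being discussed, sketched as a workflow build step (illustrative only; in this repo the `-j` values mostly live inside the Dockerfiles and workflow files):

```yaml
# Illustrative step excerpt: size the parallel build to the runner's core
# count rather than hard-coding a job count.
- name: Build
  run: make -j"$(nproc)"
```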
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
A note here on my most recent update: I had previously enabled triggering each workflow with a flag to use the liboqs/oqsprovider main branches; I have moved this to a callable action (meaning it can be triggered by another workflow but not manually). Instead I have created …
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
note this workflow does not include tests Signed-off-by: Alex Bozarth <[email protected]>
note this workflow does not include tests Signed-off-by: Alex Bozarth <[email protected]>
I just pushed a few more workflows; included in them are locust and wireshark, both of which are build-only and have no test step.
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
I'm moving this out of draft since it is now in a "complete" state; that is, it is fully functional for building and testing all our current demos. I still have quite a bit of work left to do, as detailed in the following list, but this could be merged as-is and further work done in a follow-up PR if desired. To-dos:
My planned next step is to work on adding the push step to the workflows, but with US Thanksgiving this week I might not get to it until next week.
Signed-off-by: Alex Bozarth <[email protected]>
A couple of months back I wrote a workflow to push multiarch images for our CI containers using the ARM runner (so no emulation required). Maybe a similar approach would be helpful here? See this workflow.
I knew there was a way to do this with Docker, but I hadn't thought of a way to leverage it in GitHub workflows until looking at yours. Having separate jobs like that would be overkill, but I can add a dependent job (i.e., a job that waits for the matrix builds to finish) that just runs the `docker manifest` command to connect the separately tagged images. This would still require deciding how we want to tag the platform images separately, though.
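A sketch of such a dependent job, assuming the matrix jobs push per-platform tags like `latest-x86_64`/`latest-arm64` (the naming the PR description later settles on) and that a `docker login` step has already run; the image name is just one of the demos used as an example:

```yaml
# Hypothetical workflow excerpt: a dependent job that waits for the matrix
# builds and then stitches the per-platform tags into one manifest.
name: build-with-manifest

on: [push]

jobs:
  build:
    strategy:
      matrix:
        arch: [x86_64, arm64]
    runs-on: ubuntu-latest
    steps:
      - run: echo "matrix build and push for ${{ matrix.arch }} happens here"

  merge-manifest:
    needs: build          # runs only after all matrix legs finish
    runs-on: ubuntu-latest
    steps:
      - name: Create and push the multi-arch manifest
        run: |
          docker manifest create openquantumsafe/curl:latest \
            openquantumsafe/curl:latest-x86_64 \
            openquantumsafe/curl:latest-arm64
          docker manifest push openquantumsafe/curl:latest
```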
Much better: There's hardly any performance/runtime differential between x64 and arm64. Thanks for that pointer, @SWilson4!
That would be sensible.
That would be less sensible: If we (spend the cycles to) build it, we should also push it.
That also would not be sensible (nor necessarily always be the case if we were able to use fast arm64 runners): Given the hint by @SWilson4 above, I'm very much against continuing to use an emulator (see open-quantum-safe/tsc#5) to begin with. The current build-in-parallel-and-push-after-jobs-merge workflow in linux.yml already successfully does what we're discussing here, no? See for example https://github.com/open-quantum-safe/oqs-demos/actions/runs/12034148163 --> Could that approach be used in all jobs (after switching from emulation to real HW)?
PS: Works like a charm: see https://github.com/open-quantum-safe/oqs-demos/actions/runs/12212192366. But it does require CI code duplication (edit/add: different "runs-on" jobs). Question: Can one specify different runners for different parts of a matrix build? Edit/add: A simple reference to "oqs-arm64" doesn't seem to cut it: oqs-demos/.github/workflows/linux.yml, lines 19 to 24 in 8e434c1.
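To the question above: a matrix can carry its runner as a variable via `include`, so each leg of the matrix lands on different hardware. A sketch (the `oqs-arm64` label is the self-hosted runner mentioned above; everything else is illustrative):

```yaml
# Sketch: per-matrix-entry runners via include plus a runs-on expression.
name: matrix-runners

on: [push]

jobs:
  build:
    strategy:
      matrix:
        include:
          - arch: x86_64
            runner: ubuntu-latest
          - arch: arm64
            runner: oqs-arm64   # self-hosted arm64 runner label (assumed)
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: echo "building for ${{ matrix.arch }}"
```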
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
Signed-off-by: Alex Bozarth <[email protected]>
Based on @baentsch's comments I realized I missed the real takeaway from @SWilson4's example: the runners. After a bit of research I found why I had not previously considered that solution: GitHub does not currently provide linux/arm64 runners. After some more research and testing I was able to update the workflows to leverage runners instead of QEMU. This was only possible due to the existence of the …

I also updated the push to tag the images.

Implementation note: For the new push job I wanted the …

Potential security risks: While researching runners I ran into the issue that forks do not have access to self-hosted runners, meaning that when the workflows run on a fork's main branch they will never start the …

Remaining ToDo: …
Remaining Open Discussion:
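On the fork/self-hosted-runner concern above: one common guard (a sketch, not necessarily what this PR implements) is to skip runner-dependent jobs whenever the workflow is not running in the upstream repository, so forks never queue a job no runner will pick up:

```yaml
# Sketch: skip the self-hosted arm64 job on forks, where the job would
# otherwise wait forever for a runner.
name: arm64-build

on: [push]

jobs:
  build-arm64:
    # Guard: only run in the upstream repo, never on forks.
    if: github.repository == 'open-quantum-safe/oqs-demos'
    runs-on: oqs-arm64    # self-hosted runner label (assumed)
    steps:
      - uses: actions/checkout@v4
      - run: echo "native arm64 build"
```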
Small follow-up to the above comment:
Hi @ajbozarth, thank you for your patience, and I apologize for the delay. I've just finished the implementation, and you can find the …
I'd be OK with the latter if you'd be willing to add code run at/to the upstreams to facilitate that, either by way of adding snippets to the documentation in the upstream release Wikis or by creating CI code triggered by upstream releases. OK for you, @ajbozarth (maybe as part of the documentation ToDo)?
I'd think this would also be resolved if the corresponding trigger is done in the upstream CI at release time, no?
Signed-off-by: Alex Bozarth <[email protected]>
Once this PR is merged to main I can open PRs on liboqs and oqsprovider adding a small workflow to trigger the build.yml workflow. Based on this I will need to update …
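A sketch of what such an upstream workflow might look like, using the `repository_dispatch` REST endpoint. The secret name `DISPATCH_TOKEN` and the event type are hypothetical; a PAT with access to oqs-demos would be required, and on the receiving side build.yml would additionally need a `repository_dispatch` trigger:

```yaml
# Hypothetical workflow for liboqs/oqs-provider: ask oqs-demos to rebuild
# when a release is published. DISPATCH_TOKEN is an assumed secret holding
# a PAT; "upstream-release" is an assumed event type.
name: trigger-oqs-demos

on:
  release:
    types: [published]

jobs:
  dispatch:
    runs-on: ubuntu-latest
    steps:
      - run: |
          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.DISPATCH_TOKEN }}" \
            -H "Accept: application/vnd.github+json" \
            https://api.github.com/repos/open-quantum-safe/oqs-demos/dispatches \
            -d '{"event_type":"upstream-release"}'
```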
Signed-off-by: Alex Bozarth <[email protected]>
I opened open-quantum-safe/oqs-provider#587 as a draft, but for some reason the new workflow didn't run, so I'll have to wait to debug it until this is merged. Once I get that working I will add the same workflow to liboqs.
Thanks for agreeing to add release triggers, @ajbozarth. Also very good that this is now using the native arm64 runner. The only thing open from my side, then, is documentation: What's triggered when and how, and for which demos? What's a blueprint ("bare minimum CI") for adding new demos? Where and when are (which versions of) the images pushed to?
Where do you suggest I add this? Should it go in a new section of the README (if so, where in the README?), or should I create a new .md file?
@baentsch further follow-up on my question: I noticed that our contributor docs are in the GitHub wiki and not in a markdown file. Is there a motivation for this? The wiki can only be edited with write access and can't take PRs. I think that doc would be the best place to add the CI docs, but I can only contribute to it if it's in a markdown file instead of the wiki. Would it be appropriate to move that content into a new CONTRIBUTING.md file?
Signed-off-by: Alex Bozarth <[email protected]>
@baentsch So I decided to go through with my suggestion above about the CONTRIBUTING.md: I moved the wiki content into the new file and added a CI section at the bottom. This means we could get rid of the wiki page if you're OK with this change. I also updated the README in a few places and added workflow badges to the table for each demo (I left the openssl3 one at the top since it's not in the table). Lastly, I added a workflow template file for future use. Though I put it in the templates dir, I did not add the metadata file to propagate it to the GitHub Actions UI; it's just there as a starting point for new demos.
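For illustration, a bare-minimum starting point of the kind such a template might provide; the demo name "mydemo" and the paths are placeholders, not the PR's actual template:

```yaml
# Hypothetical starter workflow for a new demo; "mydemo" is a placeholder.
name: mydemo

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the demo image
        run: docker build -t oqs-mydemo mydemo/
      # Add a test step here once the demo has one; locust and wireshark,
      # for example, are currently build-only.
```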
Update on the liboqs/oqs-provider upstream CI integration

This is much more complicated than I expected. In short, there is no way to do what I wanted, that is, to add a workflow to liboqs/oqs-provider that runs the build.yml here and then shows success/failure in the CI for those repos. The method I intended to use tries to run the workflow against the repo that runs it, meaning it would run the workflows against the liboqs repo contents instead of the oqs-demos repo contents. The proper way of triggering a workflow from another repo is to make a …

In summary

Based on my research and understanding of what we want from this feature, it would actually be less work and hassle to either just run it manually after updates are made to liboqs/oqs-provider, or set up a cron job to run it once a week or something (this would still require the edits to build.yml). @baentsch ?
Signed-off-by: Alex Bozarth <[email protected]>
Per my above comment I have pushed an update adding a cron job that will run all the workflows against liboqs and oqs-provider.

As for displaying the results: IIUC the badges I added to the README only look at the latest run, so unless a new commit was pushed in the last week, they will show the results of the weekly build against liboqs main. I believe this is okay behavior (consistent with many other repos). This also means that we can add a badge for build.yml on liboqs and oqs-provider to display that week's results. If this is approved, I will open PRs to add those badges after this is merged.

Implementation Note: The explanation for the weird !contains() line can be found here: https://stackoverflow.com/a/73495922
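For context, the scheduling-plus-guard pattern looks roughly like this (a sketch; the condition shown is illustrative, not the exact one in build.yml). Per the linked answer, a leading `!` must sit inside `${{ }}`, because YAML would otherwise parse a bare `!` as a YAML tag:

```yaml
# Sketch: weekly cron trigger plus an expression guard.
on:
  schedule:
    - cron: '0 0 * * 0'   # weekly, Sundays 00:00 UTC
  workflow_dispatch:

jobs:
  build:
    # Bare "!contains(...)" would be read as a YAML tag, hence the ${{ }}.
    if: ${{ !contains(github.event.head_commit.message, '[skip ci]') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
```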
Signed-off-by: Alex Bozarth <[email protected]>
This is precisely the same issue/suggestion made in #329 (comment). The reason for not doing it earlier was the simple lack of interest from more people in contributing. This has now changed with you and @Hawazyn being active, so the switch to a file is goodness and follows the same switch we did in …
@ajbozarth This PR is quite extensive, and I really appreciate the time and effort you've put into it; it's clear how much thought and care has gone into your work. While I'm eager to contribute, I feel a bit overwhelmed. If it's feasible and doesn't disrupt progress, could you kindly assign me specific tasks along with brief guidance? This would help me support your effort more effectively.
PR for moving our CI from CircleCI to GitHub Actions. This work includes decoupling the build and test steps from the push step, as well as supporting multi-platform builds.
Implementation notes:

- A build against the liboqs/oqs-provider main branches can be triggered by setting the `build_main` flag to true.
- A `latest` image will be pushed to DockerHub and ghcr.io when a workflow is run on the main branch, when not on a fork, not a PR, and not building against the liboqs main branch.
- The `latest` image is actually a manifest that points to the new platform-based images `latest-x86_64` and `latest-arm64`.
- A `release_tag` can be set that will replace `latest` in the above naming convention.

Fixes #294 Fixes #213 Fixes #284
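A sketch of how the tagging described above could be wired together; the `release_tag` input name comes from the description, while the image name and the assumption that registry login has already happened are illustrative:

```yaml
# Hypothetical wiring of the release_tag/latest naming convention.
on:
  workflow_dispatch:
    inputs:
      release_tag:
        description: 'Tag to use in place of latest'
        required: false
        default: 'latest'

jobs:
  push-manifest:
    runs-on: ubuntu-latest
    env:
      TAG: ${{ inputs.release_tag || 'latest' }}
    steps:
      - name: Create and push the multi-arch manifest
        run: |
          docker manifest create openquantumsafe/curl:$TAG \
            openquantumsafe/curl:$TAG-x86_64 \
            openquantumsafe/curl:$TAG-arm64
          docker manifest push openquantumsafe/curl:$TAG
```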