Adding flake check and weekly flake.lock maintenance job in GitHub workflow #1224
base: master
Conversation
Pull Request Test Coverage Report for Build 20214601820 (Details)
💛 - Coveralls
Force-pushed from 3e88275 to 4c8ac22 (compare)
45 minutes! yikes 😵 I think we need to aggressively limit what is being checked in the PR flake check, as laid out in the comment in the original issue.
I think the full flake check should be limited to the weekly action. Perhaps just some very basic smoke-testing is sufficient, as the formatter checks would really be redundant alongside the work I am doing in #1221.
Yeah, I wanted to do that, but after checking #1221 I realized it would be redundant. Also, in the issue using `nix flake check -L` was suggested, which is why I went with that approach. A weekly maintenance job might be the best option for now, since running it takes 45 minutes, which is a lot.
Note that caching should improve this; a lot of the build time is the dependencies.
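For illustration, a minimal sketch of what that caching could look like as workflow steps, using the magic-nix-cache-action referenced elsewhere in this PR (the nix-installer-action step and the step name are assumptions, not the exact workflow here):

```yaml
# sketch only: assumes Nix is installed via DeterminateSystems/nix-installer-action
- uses: DeterminateSystems/nix-installer-action@main
# caches Nix store paths between workflow runs, so dependency builds are reused
- uses: DeterminateSystems/magic-nix-cache-action@main
- name: Run flake check (warm cache after the first run)
  run: nix flake check -L
```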
Note this is incorrect, it needs to be
For now yes, but in the long run we should strive to make it less redundant and more useful, otherwise it'll bitrot. Some notes on how to achieve this below.
formatter checks will be quick; the long build time is for the project dependencies under multiple toolchains (which should be mostly cached across builds), and release builds as part of the compilation of each package separately

for simple/quick things like shellcheck we can build those separately: `nix build .#checks.x86_64-linux.shellcheck -L`

we can also define checks that just have other checks as build inputs:

```diff
diff --git a/flake.nix b/flake.nix
index 85e16184..509da043 100644
--- a/flake.nix
+++ b/flake.nix
@@ -164,6 +164,16 @@
         };
       formatter = pkgs.nixfmt-tree;
       checks = packages // {
+        quick = simpleCheck {
+          name = "quick";
+          src = pkgs.runCommand "dummy src" { } "mkdir $out";
+          buildInputs = with self.outputs.checks.${system}; [
+            shfmt
+            shellcheck
+            nix-fmt-check
+          ];
+        };
+
         payjoin-workspace-nextest = craneLib.cargoNextest (
           commonArgs
           // {
```

so that we can do something like `nix build .#checks.x86_64-linux.quick -L` both in CI and when running locally, to have more useful fail-fast scenarios.

we should also split the msrv and nightly jobs to not run in the same github action; running one after the other on the same machine is wasteful, and there's no way for the build of one toolchain to help by caching something useful for the build of another.

finally, just to temper expectations in the long run, all the checks that have …
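To make the fail-fast idea concrete, here is a hedged sketch of how the quick aggregate check and split toolchain jobs might be wired up in CI. The `quick` check comes from the diff above and the `-stable` check name appears later in this thread; the `msrv`/`nightly` names and all job names are assumptions:

```yaml
jobs:
  quick:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/nix-installer-action@main
      - uses: DeterminateSystems/magic-nix-cache-action@main
      # cheap checks (shfmt, shellcheck, nix-fmt-check) fail fast
      - run: nix build .#checks.x86_64-linux.quick -L

  nextest:
    needs: quick  # expensive toolchain builds only start once quick checks pass
    strategy:
      matrix:
        toolchain: [msrv, stable, nightly]  # separate machines, per the comment above
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/nix-installer-action@main
      - uses: DeterminateSystems/magic-nix-cache-action@main
      - run: nix build .#checks.x86_64-linux.payjoin-workspace-nextest-${{ matrix.toolchain }} -L
```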
nothingmuch left a comment:
i would suggest rebasing on top of #1226, which implements the suggestions of my previous comment.
otherwise there is a redundancy, because when a new PR is opened it will run the flake checks, so updating will run them twice as implemented
```yaml
- name: Run full flake check (updated lock)
  run: nix flake check -L
```
is this because the flake check on the PR it opens is planned to be partial?
if so, maybe it's better to do it in the reverse order.
First run `nix flake check -L --recreate-lock-file --no-write-lock-file` to make a new lockfile,
then add --offline to the update-flake-lock action's nix arguments. if it passes this to `nix flake update --commit-lock-file`, then i think it will commit without re-fetching the upstream flake inputs, so the same flake inputs as were used when running the checks with the recreated lock file should be used... a cleaner approach would be to PR to the update-flake-lock action to allow it to run the check after the nix flake update command but before opening the PR
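A sketch of that reversed ordering as workflow steps; note that the `nix-options` input name below is a guess at how the comment's "nix arguments" would be passed to update-flake-lock, and is not confirmed here:

```yaml
# sketch only: check first against a recreated (but unwritten) lock file
- name: Check with recreated lock file
  run: nix flake check -L --recreate-lock-file --no-write-lock-file
# then open the update PR; --offline should reuse the inputs fetched above
- name: Open flake.lock update PR
  uses: DeterminateSystems/update-flake-lock@main
  with:
    nix-options: --offline  # assumed input name for extra nix arguments
```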
```yaml
uses: DeterminateSystems/magic-nix-cache-action@main
```
```yaml
- name: Run full flake check
  run: nix flake check -L
```
see other comment; we should only do this once per update. if this remains, then the other one is redundant
Edit: I get the comment now.
@nothingmuch
btw, for nice progress, you can use: …
Thank you very much! Your comments have been really helpful. I’ll make the required changes.
this results in much faster builds and therefore checks, improving developer experience. if we additionally want to build the packages in release mode, or run certain tests in release mode, we can build the workspace dependencies using both release and dev profiles as appropriate, but unless we intend to ship e.g. docker containers based on our built packages we don't need the release profile for the foreseeable future.
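A minimal Nix sketch of that profile split, assuming crane's buildDepsOnly and a commonArgs set like the one in the diff earlier in this thread (crane's CARGO_PROFILE defaults to release):

```nix
let
  # dependency-only artifacts built once with the dev profile and shared by checks
  devDeps = craneLib.buildDepsOnly (commonArgs // {
    CARGO_PROFILE = "dev"; # fast debug builds for checks and tests
  });
  # only needed if we ever ship built artifacts (e.g. docker containers)
  releaseDeps = craneLib.buildDepsOnly (commonArgs // {
    CARGO_PROFILE = "release";
  });
in
craneLib.cargoNextest (commonArgs // {
  cargoArtifacts = devDeps; # tests reuse the cached dev-profile dependencies
  CARGO_PROFILE = "dev";
})
```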
rebasing on top of #1228 should result in speedups
Force-pushed from 4620e50 to 648e5d4 (compare)
Add flake check to CI workflow and weekly flake.lock update, with a check after lock updates
Force-pushed from 648e5d4 to c8efd97 (compare)
After rebasing and running …
Also I seem to not be able to pass: ❌ checks.x86_64-linux.payjoin-workspace-nextest-stable integration test in …
i guess the debug symbols are too much, but in addition it seems crane's …
i pushed additional commits: …
This PR addresses some of the issues in #457, specifically (a sketch of the overall workflow shape follows after this list):
- Adding a Nix flake check to the GitHub workflow
- Adding a flake.lock maintenance job to automatically update flake.lock
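As a rough illustration of those two pieces (not the exact workflow added by this PR; the triggers, branch names, and action versions below are assumptions):

```yaml
name: nix
on:
  pull_request:
  push:
    branches: [master]
  schedule:
    - cron: "0 0 * * 0" # weekly flake.lock maintenance

jobs:
  flake-check:
    if: github.event_name != 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/nix-installer-action@main
      - uses: DeterminateSystems/magic-nix-cache-action@main
      - run: nix flake check -L

  update-flake-lock:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/nix-installer-action@main
      - uses: DeterminateSystems/update-flake-lock@main
```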
Pull Request Checklist
Please confirm the following before requesting review:
- Disclosed any use of AI in the body of this PR.