yayy i finally found a reasonable use case for having a kubernetes, the insanity can begin
-
@sophie uhhh i don't know but kubernetes does exactly that as far as i'm aware
else i'm gonna paw at it until it does
@alina this one hasn't found any kind of setting in hydra to create workers on demand yet (which is why it moved its nix ci to gitlab, because that can just do that), so it'd be really interesting to see how hydra and kubernetes could do that together
-
@sophie @alina@girldick.gay is there actually an upside to running hydra instead of somewhat optimized generic CI runners? if the runners do nix related caching the performance difference should be near zero, right?
-
@49016 @alina well the thing is, gitlab runners are currently scaled by just buying hetzner vms and those don't have a nix base image available, so this one currently just uses the debian base image, installs docker and calls it done. nix stuff is cached through a local file store (i.e.
file://$(pwd)/nix-cache?compression=zstd
as store-url) and then uses the gitlab caching feature to store that in s3. in the end, a lot of copying the same files back and forth over and over happens there, so it's really not ideal.
something nix-specific like hydra that can cache flake inputs directly and maybe even eval results would probably improve that a lot
-
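the setup described above, sketched as a `.gitlab-ci.yml` job — the job name, image tag and build target are guesses for illustration, only the `file://` store url is from the post, and it assumes nix is already available in the job:

```yaml
# rough sketch of the setup above; job name, image and build target are
# assumptions, only the file:// store url is taken from the post
build:
  image: debian:bookworm          # generic debian base, no nix preinstalled
  script:
    # build, then push the outputs into a local file store next to the checkout
    - nix build .#default
    - nix copy --to "file://$(pwd)/nix-cache?compression=zstd" ./result
  cache:
    key: nix-store
    paths:
      - nix-cache/                # gitlab's caching feature ships this dir to s3
```

this is where the back-and-forth copying comes from: the whole `nix-cache/` directory gets uploaded to and restored from s3 on every run.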
@49016 @alina also with hydra only that one machine needs to have the binary cache signing key and s3 credentials. the gitlab ci has a secret variable that needs to be configured per project and is exposed to every runner
hmm. what about one central builder project that's triggered from other projects to build them, would that work?
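the "one central builder project" idea maps onto gitlab's multi-project pipeline trigger; a hypothetical sketch (project path and job name are made up):

```yaml
# hypothetical sketch of the central-builder idea using gitlab's
# multi-project pipeline trigger; project path and job name are made up
build:
  trigger:
    project: infra/central-builder  # only this project would need the binary
                                    # cache signing key and s3 credentials
    branch: main
    strategy: depend                # this job mirrors the downstream result
```

with `strategy: depend` the upstream job waits for and reflects the downstream pipeline's status, so the other repos never see the secrets themselves.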
-
@sophie @alina@girldick.gay it is pretty sure that hydra can be outhydra'd by a slightly overengineered ci pipeline :neobot_giggle:
-
@sophie @alina@girldick.gay this ones autistic ass would LOVE perfectly reproducible food
-
@sophie @alina@girldick.gay if it ever finds someone responsible for changing the texture of existing food products it cannot guarantee that they will walk out of that encounter alive
-