What a `cargo install` Taught Me About CI/CD
It started as a curiosity. A quick experiment. Nothing fancy. I just wanted to see how long it really takes to install a crate like `clap` using `cargo`. Something that thousands of Rust projects do, thousands of times a day, across CI runners scattered all over the globe.
I ran it in my local prototype runner:
📉 Result: 4.5 seconds.
Then I tried the same install on GitHub Actions. Same crate, no cache, fresh setup.
📈 Result: 9.3 seconds.
That’s a 2.07× speedup, cold start vs cold start. No hidden pre-computation, no tricks. Just a stripped-down, parallelised, efficient environment doing what it’s supposed to do, fast.
Then I warmed the cache.
With my runner’s naive caching enabled, the same install dropped to 1.57 seconds.
That’s a 5.93× improvement over GitHub Actions' cold run.
And at that moment, I realized — this isn’t just an optimization. This is a symptom.
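If you want to reproduce the numbers on your own machine or runner, a stopwatch around the command is all it takes. Here’s a minimal Rust sketch (not part of KurajoCI, just a throwaway timer): it runs whatever command you pass it and reports the wall-clock time, assuming the program is on your PATH.

```rust
use std::env;
use std::process::Command;
use std::time::Instant;

fn main() {
    // Usage: timer <program> [args...]
    // e.g. `timer cargo install <some-tool>` with a cold cargo cache
    let mut args = env::args().skip(1);
    let program = args.next().expect("usage: timer <program> [args...]");
    let rest: Vec<String> = args.collect();

    let start = Instant::now();
    let status = Command::new(&program)
        .args(&rest)
        .status()
        .expect("failed to launch command");
    let elapsed = start.elapsed();

    println!("{program} exited with {status} after {:.2}s", elapsed.as_secs_f64());
}
```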
CI/CD Today Is Built for Safety. Not Speed.
If you're building modern software, your CI/CD pipeline is your heartbeat. It validates, tests, deploys, and holds your entire dev process together. But most pipelines today are bloated, inefficient, and unaware of context.
They’re reactive.
They don’t know what changed in your last commit.
They rebuild entire modules even if only one file was touched.
They reinstall tools that never changed.
They waste time, compute, and money — every single run.
And that’s not because devs don’t care. It’s because most CI systems were architected in a world where compute was cheap, and pipelines were dumb.
A Smarter, Hungrier Runner
So I’m building something different — something I’m calling KurajoCI. It's a CI runner that doesn’t just execute steps. It thinks. It’s built on three principles:
- Understand the intent behind every change.
  - What changed in this commit?
  - Which crates or modules are actually affected?
  - Can we skip rebuilding certain paths entirely?
- Exploit parallelism with zero waste.
  - If multiple jobs can run, don’t just throw them at random threads.
  - Use ANIO (All Nodes Idle at Once), a scheduling approach that keeps every node working while jobs remain, so the system stays maximally productive and the nodes only go idle together, at the very end of the run (sketched below).
- Make caching a first-class citizen.
  - Not just “did this file change?”
  - But: has this exact dependency ever been installed with the same inputs?
  - If yes, grab it. If no, install it once, fingerprint it, and reuse it forever (a sketch of the fingerprinting follows too).
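To make the parallelism point concrete: the property ANIO is chasing can be pictured as a shared work queue that workers drain greedily, so no node sits idle while jobs are still waiting. The toy sketch below only illustrates that scheduling shape, it is not KurajoCI’s actual scheduler, and the job names and costs are invented.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// A toy job; in a real runner this would be a build or test step.
struct Job {
    name: &'static str,
    cost_ms: u64,
}

fn main() {
    // Shared queue of pending jobs. Workers pull the next job as soon as they
    // finish the previous one, so no worker sits idle while work remains.
    let queue = Arc::new(Mutex::new(VecDeque::from(vec![
        Job { name: "build core", cost_ms: 300 },
        Job { name: "build cli", cost_ms: 150 },
        Job { name: "unit tests", cost_ms: 250 },
        Job { name: "lint", cost_ms: 100 },
        Job { name: "docs", cost_ms: 120 },
    ])));

    let workers: Vec<_> = (0..4)
        .map(|id| {
            let queue = Arc::clone(&queue);
            thread::spawn(move || loop {
                // Take the next pending job, or stop once the queue is drained.
                let job = match queue.lock().unwrap().pop_front() {
                    Some(job) => job,
                    None => break,
                };
                println!("worker {id}: running '{}'", job.name);
                thread::sleep(Duration::from_millis(job.cost_ms)); // simulate work
            })
        })
        .collect();

    for worker in workers {
        worker.join().unwrap();
    }
    // Ideally, every worker hits the empty queue at roughly the same moment,
    // which is exactly the "all nodes idle at once" picture.
}
```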
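And for caching, “the same inputs” has to mean something precise. One way to get there is to fingerprint every input that can change the result of an install and use that fingerprint as the cache key. The sketch below is only an illustration: the field names and values are made up, and a real implementation would use a cryptographic hash such as SHA-256 instead of the standard library’s DefaultHasher.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Everything that can change the outcome of an install.
/// These fields are illustrative, not KurajoCI's real schema.
#[derive(Hash)]
struct InstallInputs<'a> {
    crate_name: &'a str,
    version: &'a str,
    features: &'a [&'a str],
    target_triple: &'a str,
    rustc_version: &'a str,
}

/// Turn the inputs into a cache key. A production runner would use a
/// cryptographic hash (e.g. SHA-256) rather than DefaultHasher.
fn cache_key(inputs: &InstallInputs<'_>) -> String {
    let mut hasher = DefaultHasher::new();
    inputs.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    let inputs = InstallInputs {
        crate_name: "clap",
        version: "4.5.4",
        features: &["derive"],
        target_triple: "x86_64-unknown-linux-gnu",
        rustc_version: "1.79.0",
    };
    println!("cache key: {}", cache_key(&inputs));
    // Lookup flow: if an artifact already exists under this key, restore it;
    // otherwise install once, store the artifact under the key, and reuse it.
}
```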
In other words, a runner that remembers, learns, and adapts — not one that blindly follows a YAML checklist.
Why Start With `cargo install`?
In CI, every second matters. A 9-second cold install might not seem like much until you realize:
- Multiply it by 100 crates in a pipeline
- Multiply that by 5 contributors pushing per day
- Multiply that by 20 CI runs per branch
- ...and you’re suddenly burning hours per day on installs alone: with those numbers, 9 seconds × 100 crates × 5 contributors × 20 runs comes to roughly 25 machine-hours every single day.
Cut that in half, or down to a quarter with caching, and you’re not just saving time. You’re unlocking faster iteration, tighter feedback loops, and better developer experience. That’s what this is really about.
Looking Ahead
Right now, the prototype is simple. Fast installs. Smarter cache. ANIO-driven parallelism. But the vision is far bigger:
- Commit-diff-aware pipelines. Imagine a pipeline that only runs the tests affected by your code changes — automatically.
- Per-crate dependency graphs. Rust workspaces are perfect for this. Don’t rebuild the world. Just rebuild the affected subtree (see the sketch after this list).
- Pre-fetched install shards. If we know your team installs `clap`, `serde`, `tokio`, and `thiserror` in every job, why not pre-compile those and store the build artifacts in a shared cache shard?
- A pluggable, local AI brain (eventually): one that understands your project’s structure, diff patterns, and failure history, and can suggest or generate optimal workflows.
- Drop-in compatibility. You’ll be able to use KurajoCI alongside GitHub Actions, or even swap it in as a runner. Think `act`, but smarter and faster.
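The commit-diff-aware part is less magic than it might sound. Step one is simply mapping the files a commit touched onto workspace members. The toy sketch below assumes a hypothetical layout where every member crate lives under crates/<name>/; a real version would read the layout, and the dependency edges between crates, from `cargo metadata`.

```rust
use std::collections::BTreeSet;
use std::process::Command;

fn main() {
    // Hypothetical workspace: each member crate lives under crates/<name>/.
    let members = ["core", "cli", "server"];

    // Ask git which files changed in the last commit.
    let output = Command::new("git")
        .args(["diff", "--name-only", "HEAD~1", "HEAD"])
        .output()
        .expect("failed to run git");
    let changed = String::from_utf8_lossy(&output.stdout);

    // Map each changed path onto the workspace member that owns it.
    let mut affected = BTreeSet::new();
    for path in changed.lines() {
        for member in members {
            let prefix = format!("crates/{member}/");
            if path.starts_with(&prefix) {
                affected.insert(member);
            }
        }
    }

    // Only these crates (plus anything that depends on them) would need
    // to be rebuilt and retested.
    println!("affected crates: {affected:?}");
}
```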
Why We’re Building This
Because I’m tired of wasteful pipelines.
Because developers deserve faster feedback.
Because the gap between "write code" and "ship code" should be measured in seconds, not minutes.
And because someone needed to challenge the status quo of CI/CD.
Turns out that "someone" might be me.
If You're Curious…
If you’ve ever cursed at a slow pipeline, or watched your CI burn minutes on unnecessary rebuilds, I’d love to talk. We’re open-sourcing parts of this soon. You can try the prototype. Break it. Fork it. Improve it.
I’m not claiming to have solved CI. But maybe we’ve shaved off just enough seconds to make you think:
“Wait… why isn't my pipeline this fast?”
And once you ask that, you’re already on our side.