The alpha suffix on v0.1.0-alpha was a safety blanket — technically accurate, but mostly an excuse to keep the version number qualified with “not really done yet.” On February 22 I removed the suffix and tagged v0.1.0. (It’s the same code. The difference is entirely psychological.)
## What shipped
The CHANGELOG had already been written for the alpha, so I knew the answer: two compiler backends (Rust/MLIR frontend, C++/MLIR codegen), an M:N actor runtime with per-actor heaps and supervision trees, an LSP with completions and go-to-definition, a formatter, a test runner, a REPL, a package manager skeleton, wire types with MessagePack serialization. 993 Rust workspace tests, 225 C++ E2E tests, zero failures. Whether any of that constitutes a “release” is a matter of opinion, but the tag doesn’t care about your opinion.
## Five targets, one workflow
The release workflow triggers on a `v*` tag push or `workflow_dispatch` (for when you inevitably need to re-run it without pushing a new tag). It builds five targets in a matrix:
```yaml
strategy:
  matrix:
    include:
      - target: linux-x86_64
        os: ubuntu-24.04
      - target: linux-aarch64
        os: ubuntu-24.04
      - target: darwin-x86_64
        os: macos-13
      - target: darwin-aarch64
        os: macos-14
      - target: windows-x86_64
        os: windows-latest
```

Linux aarch64 cross-compiles from an x86_64 runner because GitHub doesn’t offer native ARM Linux runners on the free tier. The macOS builds use two different runner images — macos-13 for Intel, macos-14 for Apple Silicon — because cross-compiling LLVM for a different macOS architecture is a pain I chose not to experience.
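The trigger itself is small. A sketch of the `on:` block described above (the exact workflow file isn’t shown here, so treat this as illustrative):

```yaml
on:
  push:
    tags:
      - "v*"            # any version tag kicks off a release
  workflow_dispatch: {}  # manual re-runs without pushing a new tag
```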
The Windows build has `continue-on-error: true`. LLVM on Windows requires a specific Visual Studio version, a specific Windows SDK version, and a willingness to debug CMake errors that reference paths with backslashes in them. It works about 70% of the time. The other 30%, something in the LLVM build fails with an error about a missing `diaguids.lib` or an incompatible MSVC runtime. I didn’t want a flaky Windows build blocking the entire release, so the pipeline moves on without it.
```yaml
windows-build:
  runs-on: windows-latest
  continue-on-error: true
```

The non-Linux builds are all `continue-on-error` for similar reasons — macOS runners occasionally fail with Xcode version mismatches, and I’d rather ship Linux binaries on time than hold everything for a transient CI issue.
## Six package formats
The downstream jobs depend on the build matrix with a condition that took me three attempts to get right:
```yaml
needs: build
if: "!cancelled() && needs.build.result != 'failure'"
```

Not `== 'success'`. If you use `== 'success'` and the Windows build is skipped or fails with `continue-on-error`, the downstream jobs see the overall matrix result as not-success and refuse to run. The `!cancelled() && != 'failure'` formulation means: run unless something actually broke, not just because something was flaky.
`installers/build-packages.sh` handles all six formats — `.deb` for Debian/Ubuntu, `.rpm` for Fedora/RHEL, `.pkg.tar.zst` for Arch, `.apk` for Alpine, plus a Docker image pushed to `ghcr.io/hew-lang/hew` and a GitHub Release with tarballs for every target. Version injection happens via `sed` in CI, which is exactly as fragile as it sounds but works fine when you only have one place to patch.
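The `sed` version injection boils down to a one-line patch. A minimal sketch, assuming a single placeholder in a packaging file (the file name and `@VERSION@` token are illustrative, not the real `build-packages.sh`):

```shell
# Hypothetical stand-in for the version-injection step in CI.
set -euo pipefail

VERSION="0.1.0"   # in CI this would come from the tag, e.g. ${GITHUB_REF#refs/tags/v}

# Stand-in for the one packaging file that carries a version string.
cat > /tmp/hew.spec <<'EOF'
Name: hew
Version: @VERSION@
EOF

# The sed patch: fragile in general, fine with a single placeholder.
sed -i "s/@VERSION@/${VERSION}/" /tmp/hew.spec

grep "Version:" /tmp/hew.spec   # prints: Version: 0.1.0
```

This is exactly the “one place to patch” scenario: a second placeholder anywhere else in the tree, and the sed approach starts to bite.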
The Docker image is a multi-stage build: compile in a fat image with LLVM and all the build dependencies, copy the binary into a minimal runtime image. The final image is about 45MB — most of that is the LLVM libraries the compiler needs at runtime for JIT compilation in the REPL.
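The two-stage shape looks roughly like this — a hedged sketch, with illustrative base images, package names, and paths rather than the project’s actual Dockerfile:

```dockerfile
# Stage 1: fat build image with LLVM and the full toolchain.
FROM rust:1.79 AS build
RUN apt-get update && apt-get install -y llvm-dev libclang-dev cmake
COPY . /src
WORKDIR /src
RUN cargo build --release

# Stage 2: minimal runtime image; keep only the binary and the LLVM
# shared libraries the REPL's JIT needs at runtime.
FROM debian:bookworm-slim
COPY --from=build /src/target/release/hew /usr/local/bin/hew
COPY --from=build /usr/lib/llvm-*/lib/libLLVM-*.so* /usr/lib/
ENTRYPOINT ["hew"]
```

The build stage can be as bloated as it likes; only the `COPY --from=build` lines determine what ships.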
## macOS code signing
Gatekeeper flags unsigned binaries, so shipping macOS binaries that open without warnings means code signing. The obvious approach is to use your Apple ID — log in, download a certificate, sign with `codesign`. The CI approach is different: App Store Connect API keys, which are service credentials that don’t require interactive authentication.
The signing identity is auto-discovered from the macOS keychain. The CI workflow imports a `.p12` certificate into a temporary keychain, sets it as the default, and `codesign` picks it up automatically. No hardcoded identity names, no `--force` flags. When the job finishes, the temporary keychain gets deleted.
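A sketch of what that workflow step might look like, assuming the certificate and its password live in repository secrets (the secret names and keychain password here are illustrative):

```yaml
- name: Import signing certificate
  if: startsWith(matrix.os, 'macos')
  env:
    CERT_P12: ${{ secrets.MACOS_CERT_P12 }}           # assumed secret names
    CERT_PASSWORD: ${{ secrets.MACOS_CERT_PASSWORD }}
  run: |
    echo "$CERT_P12" | base64 --decode > cert.p12
    security create-keychain -p temp-pass build.keychain
    security default-keychain -s build.keychain
    security unlock-keychain -p temp-pass build.keychain
    security import cert.p12 -k build.keychain -P "$CERT_PASSWORD" -T /usr/bin/codesign
    # Allow codesign to use the key without an interactive prompt.
    security set-key-partition-list -S apple-tool:,apple: -s -k temp-pass build.keychain
```

With the certificate in the default keychain, `codesign --sign` can resolve the identity on its own.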
The notarization step submits the signed binary to Apple’s notarization service and polls until it completes. This adds about 3-5 minutes to the macOS build. I briefly considered skipping notarization for pre-release builds and decided against it — the whole point of having a pipeline is that it does the annoying things every time so you don’t forget.
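With `notarytool` and an App Store Connect API key, the submit-and-poll step collapses into a single blocking call. A sketch — the secret names, key path, and archive name are illustrative:

```yaml
- name: Notarize
  if: startsWith(matrix.os, 'macos')
  env:
    ASC_KEY_ID: ${{ secrets.ASC_KEY_ID }}        # assumed secret names
    ASC_ISSUER_ID: ${{ secrets.ASC_ISSUER_ID }}
  run: |
    # Submit the signed archive and block until Apple responds
    # (the 3-5 minutes mentioned above).
    xcrun notarytool submit hew-darwin.zip \
      --key asc_api_key.p8 \
      --key-id "$ASC_KEY_ID" \
      --issuer "$ASC_ISSUER_ID" \
      --wait
```

`--wait` does the polling for you, which is one less loop to maintain in the workflow.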
## The Homebrew tap
The `homebrew-hew` repo holds the formula. A separate `HOMEBREW_TAP_TOKEN` PAT is needed because the default `GITHUB_TOKEN` in GitHub Actions is scoped to the current repository — it can’t push to a different repo.

```bash
brew tap hew-lang/hew
brew install hew
```
The formula downloads the Darwin tarball from the GitHub Release, verifies the SHA256, and symlinks the binary into the Homebrew prefix. On version bump, the release pipeline updates the formula's URL and SHA256 automatically and pushes to the tap repo.
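The formula itself is a small Ruby class in Homebrew’s DSL. A hypothetical sketch of its shape — the URL, version, and sha256 below are stand-ins for the values the pipeline writes, not the published ones:

```ruby
class Hew < Formula
  desc "The Hew compiler and toolchain"
  homepage "https://github.com/hew-lang/hew"  # assumed repo URL
  url "https://github.com/hew-lang/hew/releases/download/v0.1.0/hew-darwin-aarch64.tar.gz"
  sha256 "0000000000000000000000000000000000000000000000000000000000000000"
  version "0.1.0"

  def install
    bin.install "hew"  # symlinked into the Homebrew prefix by brew itself
  end
end
```

The automated bump only ever touches the `url`, `sha256`, and `version` lines, which is what makes it safe to script.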
## What broke
The first `v0.1.0` pipeline run built Linux binaries, signed and notarized the macOS binaries, pushed Docker — and then the Homebrew formula update failed because the GitHub Release hadn't been created yet. The downstream jobs ran in parallel, and the Homebrew job tried to compute the SHA256 of a tarball URL that returned 404.
The fix was adding an explicit dependency so the tap update waits for the GitHub Release job. Obvious in retrospect. *(It always is.)*
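In workflow terms the fix is one line — a sketch, with illustrative job names rather than the actual workflow’s:

```yaml
homebrew-tap:
  # Wait for the release job so the tarball URL exists before the
  # formula update tries to fetch and hash it.
  needs: [build, github-release]
```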
The Windows build failed on that first run too — `diaguids.lib` not found, the LLVM classic. I re-ran it manually the next day with a pinned MSVC version and it worked. The binary is 4.2MB larger than the Linux equivalent, for reasons I haven't investigated and probably won't.
`v0.1.1` followed two days later with the fix for the Homebrew race condition and a handful of codegen patches that had landed on main since the tag. `v0.1.0` to `v0.1.1` is mostly pipeline fixes, not language changes.