Rolling my own meta-build Link to heading
When I started my recent C project I wasn’t doing much more than running `clang foo.c -o foo`, letting foo.c include all the sources of the project and `#define` everything needed for “shipping” or “development” versions. With a head full of fresh ideas it seemed like an interesting approach. There has been a lot of recent chatter about how unity builds in C are superior and free the developer from the shackles of greater project orchestration. Languages such as Odin and Jai really take this to heart (maybe Zig too, in some ways). It feels seductive to be free of those worries and concerns… but it doesn’t last…
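That whole “build system” fits in a few lines. Here is a sketch of what that one-command unity build amounts to, wrapped in Python just to show the flag selection — the `SHIPPING`/`DEVELOPMENT` defines and file names are made up for illustration:

```python
import shlex

def build_command(shipping: bool) -> list[str]:
    """Compose the single clang invocation for a unity build: foo.c
    is assumed to #include every other source file in the project,
    so one translation unit covers everything."""
    cmd = ["clang", "foo.c", "-o", "foo"]
    if shipping:
        cmd += ["-O3", "-DSHIPPING"]        # hypothetical shipping define
    else:
        cmd += ["-g", "-DDEVELOPMENT"]      # hypothetical development define
    return cmd

print(shlex.join(build_command(shipping=True)))
```

The appeal is obvious: no dependency graph, no object files, just one compiler invocation per configuration.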
Despite my dedication to the virtue of ultimate C austerity (which is emphatically not the point that the proponents of C unity builds are actually trying to get across), it wasn’t long before I decided to get ninja involved, writing a little “configure.py” script to simplify things and ensure that the various platform dependencies are handled transparently for the day-to-day `pixi run build` process. And this is only considering the C source compilation. There are a boatload of other concerns in a typical game or creative application.
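For the curious, such a generator boils down to emitting text like the following. This is a hand-rolled sketch of the kind of build.ninja a configure.py produces (my real script uses the ninja_syntax module and handles per-platform flags, hot-reload DLLs, and tests on top of this):

```python
from pathlib import Path

def emit_ninja(sources: list[str], cflags: str) -> str:
    """Emit a minimal build.ninja: one compile rule, one object per
    source, and a final link edge producing a 'foo' binary."""
    lines = [
        f"cflags = {cflags}",
        "rule cc",
        "  command = clang $cflags -c $in -o $out",
        "  description = CC $out",
        "rule link",
        "  command = clang $in -o $out",
    ]
    objs = []
    for src in sources:
        obj = str(Path(src).with_suffix(".o"))
        objs.append(obj)
        lines.append(f"build {obj}: cc {src}")
    lines.append(f"build foo: link {' '.join(objs)}")
    return "\n".join(lines) + "\n"

print(emit_ninja(["foo.c", "bar.c"], "-g -Wall"))
```

Harmless at this size — the trouble starts once platform conditionals and multiple configurations pile up.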
Of course, ninja can handle this exceptionally well, carrying out its mission with ruthless efficiency.
But I’m not writing ninja rules. I’m writing a python script that generates ninja rules. And it’s not that crazy right now, but I can already imagine a future where it is sorta crazy. A day will come when I haven’t looked at it in a while, and the activation energy for adding something pushes the goal beyond my grasp, at which point I turn off the lights and let that project collect dust in my SVN depot.
Ahoy SCons! Link to heading
Why SCons? Like everything else that inspired this entire line of personal research and investigation: Someone I respect said it’s a great and underrated tool. And then just today they also went further and shared what inspires them about getting it done with SCons.
It’s a bit embarrassing to admit that I have derided the SCons of yore as slow and difficult to integrate compared to CMake, the “de facto standard build tool for C++”. While there is truth in that statement, sometimes abundant experience leads to bias. Sometimes it isn’t important that a project uses any “de facto standard” tools. Especially for your personal creations.
Additionally, my impressions of SCons were formed 15 years ago, and the last time I even had cause to run it was probably almost 8 years ago.
Anyway, I’m a human and I’m allowed to be wrong and I’m allowed to re-evaluate my prejudices.
Is SCons simpler? Link to heading
```
[chipc@lith smvc]$ pixi run tokei SConstruct
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Language        Files    Lines     Code    Comments    Blanks
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 SCons               1      145      100          28        17
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Total               1      145      100          28        17
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

[chipc@lith smvc]$ pixi run tokei scripts/configure.py
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Language        Files    Lines     Code    Comments    Blanks
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Python              1      288      205           5        78
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Total               1      288      205           5        78
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
So after 20 minutes of reading the SCons docs I’ve hit feature parity with my ninja generator, and the SCons build is half the size! Naturally this is a side effect of SCons abstracting a lot of the details, so it’s safe to say that the single script generating a ninja build is packing more “work” per line of code. And while I could add up all the lines of code of SCons itself and compare against my generator plus the ninja_syntax module… I don’t think it’s a valuable exercise, because it just tells us what we already know.
I do think it’s realistic to say that the SCons script is fundamentally simpler to maintain, and it is already clear that my current ninja generator has tumbled over that cliff of diminishing returns. Because, as alluded to above, there is more to life than C code. I also have some python based code generation in this project, and I’ve absentmindedly set these up as pixi tasks because that presented the lowest-effort (and perfectly reliable) integration point.
Grab the stopwatch Link to heading
At this point in my exploration, my custom ninja generator and the SCons build do exactly the same thing:
- Build the shipping version (`-O3`)
- Build the development version (`-g`)
- Build the game as a dll that can be hot-reloaded
- Build tests
This is as good a place as any to make some simple measurements. These are just gathered using `time` on the shell. I find these kinds of measurements useful to make periodically, because if build and iteration times start to climb they sap away the meager time available to pursue my personal projects.
My Linux laptop is a Thinkpad X1 Carbon. If it isn’t plugged in, power throttling dramatically slows things down; the reported times can double on battery power alone. But even when plugged in, thermally triggered CPU throttling makes it so incredibly variable that it has been hard to get stable numbers.
I’ll never get an X1 Carbon again. It is otherwise a lovely machine, but it’s constantly throttling and working on it is just slow… at least I have a bottom-end performer to profile things on.
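One way to tame that variance is to repeat each measurement and keep the fastest run, rather than trusting a single `time` invocation. A minimal sketch (the command shown at the bottom is illustrative; for a real build you would clean first and time e.g. the scons or ninja invocation):

```python
import subprocess
import sys
import time

def best_of(cmd: list[str], repeats: int = 5) -> float:
    """Run cmd several times, returning the fastest wall-clock time.
    The minimum is less sensitive to thermal-throttling spikes than
    a single measurement or even the mean."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        best = min(best, time.perf_counter() - start)
    return best

# Illustrative: time a trivial interpreter startup.
print(f"{best_of([sys.executable, '-c', 'pass']):.3f}s")
```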
Unity builds Link to heading
| | SCons | Ninja |
|---|---|---|
| Thinkpad / Linux | 10.480 seconds | 14.960 seconds |
| Macbook Pro M4 | 6.37 seconds | 1.70 seconds |
Well damn, that’s kind of interesting and unexpected. It seems as though ninja causes a lot of throttling on the Thinkpad, while on my Macbook things look exactly as expected.
“Regular” builds Link to heading
Well… I’ll be honest: I’ve got things working with SCons now and I don’t feel like updating the ninja generator.
Here is the linecount after updating the SCons build:
```
[chipc@lith smvc]$ pixi run tokei SConstruct
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Language        Files    Lines     Code    Comments    Blanks
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 SCons               1      197      147          34        26
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 Total               1      197      147          34        26
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
At this point I’m convinced that there is a more manageable future switching to SCons. The cost in terms of build time seems fairly reasonable for my personal projects.
Here are the times for the “development” configuration:
| | SCons | Ninja |
|---|---|---|
| Thinkpad / Linux | 1.643 seconds | <not measured> |
| Macbook Pro M4 | 2.12 seconds | <not measured> |
And here are the times for the “shipping” configuration:
| | SCons | Ninja |
|---|---|---|
| Thinkpad / Linux | 7.573 seconds | <not measured> |
| Macbook Pro M4 | 6.06 seconds | <not measured> |
Why break up “shipping” and “development” now? Well, previously I guess I felt it was a pain to run the ‘configure’ script multiple times and wanted a single configuration to build everything. 😅 But now with SCons it’s just an option when running scons, and there isn’t any need to build it all at once. 🤷 And this simplifies the ambient editor and project configurations for hot-reload builds as well.
I was definitely not expecting the shipping build to take so long, but it’s building with -O3 so maybe this is just the overhead of optimization. I switched back to a unity build only for shipping and it jumped up to 10 seconds on the Thinkpad! Meanwhile on macOS it was slightly faster at 5.06 seconds.
Parting thoughts Link to heading
Well… I think not having measured on Windows yet is an important omission. I believe that most of the staunch advocates for unity builds (and their associated major wins in build time improvement) were making comparisons on that platform, with its myriad inefficiencies. So I can’t call the case closed until testing there. But this is beside the “SCons or Ninja” question that motivated all of this, and the SCons build can certainly run a unity build. So I think I’ll keep unity builds as an option, and importantly also get some measurements on Windows.
Another feature I’ve learned SCons has these days is a build cache, which I welcome with open arms. In addition, I’m happy to find that generating the compilation database for clang tools is also provided.
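That database follows Clang’s JSON compilation database format: a JSON array of entries, each with `directory` and `file` keys plus a `command` (or `arguments`) describing the exact compiler invocation. A quick sketch of a sanity check over such a file (the sample entry is made up):

```python
import json

def check_compdb(text: str) -> list[str]:
    """Parse a compilation database and return the source files it
    covers, verifying each entry has the keys clang tools expect."""
    entries = json.loads(text)
    files = []
    for entry in entries:
        assert "directory" in entry and "file" in entry
        assert "command" in entry or "arguments" in entry
        files.append(entry["file"])
    return files

# Hypothetical single-entry database for illustration.
sample = json.dumps([{
    "directory": "/home/chipc/smvc",
    "file": "source/app/main.c",
    "command": "clang -std=c23 -c source/app/main.c",
}])
print(check_compdb(sample))
```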
Is it all sunshine and rainbows? Link to heading
No, definitely not. It’s like everything else, just a choice to make given certain constraints or the intended workflows. And I’ve already hit a little snag for macOS builds that want to use RPATH… SCons ignores them on macOS for some reason. 🤦♂️
Next steps Link to heading
Well, my code generators will get wired up next. Then I’ll look at content processors.
And I want to run more tests on Linux using a different machine to try and get some stability in the measurements.
Code Listing for the curious Link to heading
Ok, so here is the SConstruct I ended up with by the end of this:
```python
#
# Defines the build for desktop platforms
#
import os
import platform
from pathlib import Path
from typing import List
from multiprocessing import cpu_count

AddOption(
    '--shipping',
    dest='shipping',
    action='store_true',
    help='Build shipping project profile',
)
AddOption(
    '--compdb',
    dest='compdb',
    action='store_true',
    help='Generate a compilation database for clangd',
)

CacheDir('.cache/scons')

# Automatically run N tasks for the number of cores
SetOption('num_jobs', cpu_count())

#
# This ignores "architecture" but :shrug:
#
SYSTEM = platform.system()
IS_WINDOWS = SYSTEM == "Windows"
IS_MACOS = SYSTEM == "Darwin"
IS_LINUX = SYSTEM == "Linux"
IS_HAIKU = SYSTEM == "Haiku"

#
# Our project root location
#
repo_root = os.getenv("PIXI_PROJECT_ROOT")
if repo_root is None:
    repo_root = Path(__file__).parent
else:
    repo_root = Path(repo_root)

#
# Since we're expecting to be run with pixi, the current
# activated environment holds any libraries we install.
#
conda_prefix = os.getenv("CONDA_PREFIX")
if conda_prefix is None:
    conda_prefix = repo_root / ".pixi" / "envs" / "default"
else:
    conda_prefix = Path(conda_prefix)
conda_libpath = conda_prefix / "lib"
conda_incpath = conda_prefix / "include"

#
# Steamaudio binaries are vendored via source control.
#
steamaudio_root = repo_root / "source" / "vendor" / "steamaudio"
steamaudio_incpath = steamaudio_root / "include"
if IS_LINUX:
    steamaudio_libpath = steamaudio_root / "lib" / "linux-x64"
elif IS_MACOS:
    steamaudio_libpath = steamaudio_root / "lib" / "osx"
else:
    steamaudio_libpath = steamaudio_root / "lib" / "windows-x64"

#
# Linux and macOS (and probably more) need rpath entries.
#
RPATH = []
if IS_LINUX:
    RPATH.extend([
        str(conda_libpath),
        str(steamaudio_libpath),
        Literal('$$ORIGIN'),
        Literal('$$ORIGIN/../lib'),
    ])

#
# Everything will use SDL and steamaudio, but there may also be
# platform specific libraries too.
#
LIBS = [
    'SDL3', 'phonon'
]
if IS_LINUX:
    LIBS.extend([
        "c-2.28",
        "m-2.28",
        "dl-2.28",
    ])
if IS_MACOS:
    LIBS.extend(["c"])

#
# Likewise for the linker flags
#
LINKFLAGS = []
if IS_LINUX:
    LINKFLAGS.extend([
        '-pthread',
        '-fuse-ld=mold',
    ])
elif IS_MACOS:
    LINKFLAGS.extend([
        "-nostdlib",
        # no idea why this doesn't work normally
        "-rpath", str(conda_libpath),
        "-rpath", str(steamaudio_libpath),
    ])
elif IS_WINDOWS:
    LINKFLAGS.extend([
        '-Wl,-export:do_handshake',
    ])

env = Environment(
    CC=str(conda_prefix / "bin" / "clang"),
    CXX=str(conda_prefix / "bin" / "clang++"),
    CFLAGS='-std=c23',
    CXXFLAGS='-std=c++23',
    CCFLAGS=[
        '-Wall',
        '-Wextra',
        '-D_DEFAULT_SOURCE',
        '-DMI_OVERRIDE',
        '-pthread'
    ],
    RPATH=RPATH,
    LIBS=LIBS,
    LINKFLAGS=LINKFLAGS,
    LIBPATH=[
        str(conda_libpath),
        str(steamaudio_libpath),
    ],
    CPPPATH=[
        'source/core',
        'source/vendor/miniaudio',
        'source/vendor/mimalloc/include',
        'source/vendor/stb',
        str(conda_incpath),
        str(steamaudio_incpath),
    ],
    COMPILATIONDB_USE_ABSPATH=True,
)

# This just loads the tool to make sure everything can be picked up
env.Tool('compilation_db')

# This is how you ensure that intermediate build outputs aren't
# mixed with the sources.
env.VariantDir('build', 'source', duplicate=0)

core_sources = env.Glob("build/core/*.c", exclude='build/core/sg.c')
game_sources = env.Glob("build/game/*.c")

if GetOption('shipping'):
    env.Append(CCFLAGS=['-O3'])
    env.Program(
        target='bin/smvc',
        source=[
            'build/app/main.c',
            'build/vendor/miniaudio/miniaudio.c',
            'build/vendor/mimalloc/src/static.c',
        ] + game_sources + core_sources,
        # source = 'build/shipping.c',
        CCFLAGS=['-Wno-tautological-pointer-compare', '-Wno-unused'] + env['CCFLAGS'],
    )
else:
    mimalloc = env.StaticLibrary(
        'build/mimalloc', 'build/vendor/mimalloc/src/static.c',
        CCFLAGS=['-Wno-tautological-pointer-compare'] + env['CCFLAGS'])
    miniaudio = env.StaticLibrary(
        'build/miniaudio', 'build/vendor/miniaudio/miniaudio.c')
    libcore = env.StaticLibrary('build/sg_core', source=core_sources)

    env.Append(CCFLAGS=['-g'])
    env.Append(LIBS=[libcore, mimalloc, miniaudio])
    env.Append(CPPDEFINES=[('_HOTRELOADABLE_',)])

    env.SharedLibrary(
        target='bin/game',
        source=game_sources,
        LIBPREFIX='',
    )
    env.Program(
        target='bin/smvc',
        source=['build/app/main.c'],
    )

    #
    # Tests
    #
    env.Program(
        target='bin/tests/test_sg_sparse_set',
        source=['build/tests/test_sg_sparse_set.c'],
    )

if GetOption('compdb') or not os.path.exists('compile_commands.json'):
    env.CompilationDatabase()
```