Hacker News

That's one of the reasons I loved Ansible from the moment I saw it. As the OP points out, machines traditionally accumulated ad-hoc changes over a long period of time. Describing the "known good" state and running that "checklist" against the machine both documents the desired state and verifies it.

Same reason we haven't typed "cc" on the command line to call the C compiler on individual files for about 30 years or more.



The last time I typed (well, pasted) "cc" on the command line to call the C compiler on an individual file was 26 hours ago. I wanted to recompile a single-file program I'd just written with debugging information (-g) and it seemed easier to copy, paste, and edit the command line rather than to manually delete the file and reinvoke make with different CFLAGS.

I mean, I've surely compiled orders of magnitude more C files without typing "cc" on the command line over the last week. But it's actually pretty common for me to tweak options like -mcpu, -m32, -pg, -Os, or -std=c11 -pedantic (not to mention diet cc) by running a "cc" command line directly.

Similarly, I often run Python or JS code in the REPL or in Jupyter rather than putting it in a file. The rapid feedback sometimes helps me learn things faster. (Other times it's an attractive nuisance.)

But I may be a bit of an odd duck. I've designed my own CPU, on paper. I write assembly code for fun. I've implemented several different programming languages for fun. I like to know what's underneath, behind the surface appearances of things. And that requires experimenting with it.


Of course I run cc on quickie one-file programs all the time. What I am talking about is a whole directory of source files, and just "knowing" which ones are out of date and building the object files manually.

I still remember years ago trying to convince one dev to use make on a package with 20-30 source files.


Running just cc instead of make is actually a much more reasonable thing to do nowadays than it was 10, 20, or 30 years ago.

https://gitlab.com/kragen/bubbleos/-/blob/master/yeso/admu-s... is the entry point to a terminal emulator I wrote, for example. `make -j 8` can build it with GCC from a `make clean` state in 380ms, but if I, for example, `touch admu-shell.c` after a build and run `make -j 8` to run an incremental build, it recompiles and relinks just that one file, which takes 200–250ms. So the incrementality of the build is saving me 130–180ms in that case.

Without -j, a nonincremental `make admu-shell` takes about 1100ms.

But if I instead run

    time cc -Wall -Wno-cpp -g -Os -I. -std=gnu99 \
        admu-shell.c admu.c yeso-xlib.c yeso-pic.c \
        png.c jpeg.c ppmp6-read.c readfont.c ypathsea.c \
        -lX11 -lXext -lpng -ljpeg -lm -lbsd -lz \
        -o admu-shell
it takes 900 milliseconds to compile those 1100 lines of C. This is a little bit faster than building from scratch without -j because I'm not compiling the .c files that go into libyeso-xlib.a that admu-shell doesn't use. So all the work of `make` figuring out which ones are out of date and building the object files automatically and in parallel across multiple cores has saved me a grand total of 600–700 milliseconds.

That's something, to be sure; it's a saving† that makes the compilation feel immediate. But it's really pretty minor. 900ms is small enough that it only affects my development experience slightly. If I were to run the build in the background as I was editing, I wouldn't be able to tell if it were incremental or from-scratch.

Unless it screwed up, that is, for example because I didn't bother to set up makedepends, so if I edit a header file or upgrade a system library I might have to do a scratch build anyway. The `make` incremental-build saving doesn't come without a cost, so we have to ask whether that cost is worth the benefit. (In this case I think it's worthwhile to use separate source files and `make` for other reasons: most of that source code is used in multiple Yeso programs, and `make -j` also makes a full build from scratch four or five times faster.)

If we extrapolate that 700ms saving backward to 25 years ago when our computers ran 500 million instructions per second instead of 30 billion, it's something like 45 seconds, which is enough of a wait to be distracting and maybe make me lose my train of thought. And 5 years further back, it would have taken several minutes. So `make` was an obvious win even for small projects like this at the time, and an absolute necessity for larger ones.
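That extrapolation is just linear scaling by the instruction-rate ratio:

```shell
# Scale today's ~700 ms saving by the ratio of instruction rates
# (30 billion instructions/second now vs. 500 million 25 years ago).
ratio=$((30000000000 / 500000000))        # = 60
echo "saving then: $((700 * ratio)) ms"   # 42000 ms, i.e. about 42 seconds
```

which lands in the same ballpark as the 45-second figure.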

At the time, I was the build engineer on a largish C++ project which in practice took me a week to build, because the build system was kind of broken, and I had to poke at it to fix the problems whenever something got miscompiled. The compiler and linker were writing their output files to an NFS server over shared 10-megabit Ethernet.

As another data point, I just rebuilt the tcl8.6-8.6.13+dfsg Debian package. It took 1m24.514s. Recompiling just generic/tclIO.c (5314 SLOC) takes 1.7 seconds. So not doing a full rebuild of the Tcl library can save you a minute and a half, but 25 years ago (when Tcl 8 already existed) that would have been an hour and a half. If it's the late afternoon, you might as well go home for the day, or swordfight somebody in the hallway or something.

So incremental builds at the time were totally essential. Now they're a dispensable optimization that isn't always worth it.

______

† 1200 lines of C per second is pretty slow, so probably almost all of that is repeatedly lexing the system header files. I'm guessing that if I took the time to do a "unity build" by concatenating all the C files and consolidating the #includes, I could get that time down to basically the same as the incremental build.



