Took me a while to figure out whether this is about interpreters for C programs, or some particular class of interpreters called "C". It turns out it's about interpreters implemented in C, and that they use a modified LLVM to do the retrofitting. But couldn't it also apply to other languages that compile to LLVM IR, or to other switch-in-a-loop patterns in C?
You're quite right that since we're working with LLVM IR, adapting to other languages is probably not _that_ difficult, though these things always end up taking more time than I expect! Since the majority of real-world problems in this area involve C interpreters, we put our limited resources into that problem. You're also right that "interpreters" is a pretty vague category, and there are other parts of C programs (and programs in other languages) that could be yk-ified, though I suspect that would be a fairly specialised subset of programs.
I've been a low level C and C++ programmer for 30 years. Even with your explanation and having read the webpage twice I have no idea what this technology does or how it works. So it takes normal interpreted code and jits it somehow? But you have to modify the source code of your program in some way?
I think the website does an amazing job explaining it, but it basically takes an interpreter written in C and turns it into a JIT with minimal changes to the code of the interpreter (i.e. not to the code of the program you're running in the interpreter). For example they took the Lua interpreter and with minimal changes were able to turn it into a JIT, which runs Lua programs about 2x faster.
Tracing JITs are slightly harder to grasp than usual ones. The technique comes from real CPUs, so the mindset of the people behind the original idea is very different from the software world.
Meta-tracing ones are an interesting twist on the original idea.
> So it takes normal interpreted code and jits it somehow?
Anyway, they use a patched LLVM to JIT-compile not just interpreted code but the main loop of the bytecode interpreter. Like, the C implementation itself.
> But you have to modify the source code of your program in some way?
Generally speaking, this is not normally the goal. All JITs try to support as much of the target language as possible, though some do limit the subset of features supported.
I don't fully grasp it either. The most appropriate analogy I can think of is how OpenMP turns #pragma-annotated loops into multi-threaded ones; this work turns a bytecode-interpreting loop into a JIT VM.
It's a promising technology, but it's still in the research domain. It's not an automated procedure. You need to use the yk fork of LLVM to compile and link your code, and you have to manually annotate and alter a fair amount of your interpreter loop with yk macros in non-trivial ways:
while (true) {
    __yk_tracebasicblock(0);
    Instruction i = code[pc];
    switch (GET_OPCODE(i)) {
    case OP_LOOKUP:
        __yk_tracebasicblock(1);
        push(lookup(GET_OPVAL(i)));
        pc++; break;
    ...
    case OP_INT: push(yk_promote(constant_pool[GET_OPVAL(i)])); pc++; break;
    }
}
Knowledge of tracing compilers, LLVM and SSA is needed on the user's part.
> added about 400LoC to PUC Lua, and changed under 50LoC
Lua 5.5.0 has 32,106 lines of code including comments and empty lines, so the changes amount to about 1.4% of the code base. And then there are the code changes in the yk LLVM fork that you'd have to maintain, which I'm guessing would be a few orders of magnitude larger.
If this project would be able to detect the interpreter hotspots itself and completely automate the procedure, it would be great.
> If this project would be able to detect the interpreter hotspots itself and completely automate the procedure, it would be great.
I don't think that's realistic; or, at least, not if you want good performance. You need to use quite a bit of knowledge about your context to know when best to add optimisation hints. That said, it's not impossible to imagine an LLM working this out, if not today, then perhaps in the not-too-distant future! But that's above my pay grade.
There have been a couple of C interpreters since the 1990s, some with REPL support, but they apparently never took off. It's most likely a community culture issue: people don't seem to see much value in using them beyond a debug session.
I used to work on LabWindows/CVI, an integrated C development environment. It included an "Interactive Execution Window" where you could build programs piecemeal. You added pieces of code, ran them, then appended more code, ran the new pieces, etc. It was a text window, so you had more freedom than in a simple REPL.
It integrated with "Function panels". Function panels were our attempt at documenting our library functions. See the second link below. You could enter values, declare variables, etc., and then run the function panel. Behind the scenes, the code was inserted into the interactive window and run. Results were added back to the function panel.
These also worked while suspended on a breakpoint in your project, so they were available while debugging.
My understanding was that these features were quite popular with customers. They also came in handy internally when we wrote examples and did manual testing.
Yeah, I find this valuable regardless of the programming language. Ideally the toolchain should offer a mix of interpreter/JIT/AOT, to cherry-pick from depending on the deployment use case.
Naturally, for dynamic languages pure AOT is not really worth it, although a JIT cache is a helpful alternative.