Just-in-time (JIT) compilation compiles code into fast machine code at runtime, right when it's needed.
ChatGPT summary:
- Program starts quickly using bytecode or an interpreter.
- It watches what code you execute most (“hot” functions/loops).
- When something becomes hot, it compiles that piece into optimized machine code.
- Next time you hit that code path, it runs much faster.
- If the program’s behavior changes, the JIT can re-optimize (or sometimes “de-optimize” if assumptions stop being true).
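The hot-path detection described above can be sketched as a toy dispatcher (a hypothetical illustration, not a real JIT: `HOT_THRESHOLD`, `interp_square`, and `compiled_square` are made-up names, and the "compiled" version is just a hand-written faster Python function standing in for machine code):

```python
HOT_THRESHOLD = 3  # assumed threshold; real runtimes tune this dynamically

def toy_jit(slow_impl, fast_impl):
    """Run slow_impl until the call count hits HOT_THRESHOLD,
    then swap in fast_impl (standing in for optimized machine code)."""
    state = {"calls": 0, "active": slow_impl}

    def dispatch(x):
        state["calls"] += 1
        if state["calls"] == HOT_THRESHOLD:
            state["active"] = fast_impl  # the path is "hot": compile it
        return state["active"](x)

    return dispatch

def interp_square(x):       # stand-in for slow interpreted bytecode
    total = 0
    for _ in range(x):
        total += x
    return total

def compiled_square(x):     # stand-in for the optimized version
    return x * x

square = toy_jit(interp_square, compiled_square)
results = [square(4) for _ in range(5)]  # early calls interpret, later ones run the fast path
```

Both paths compute the same answer; only the speed differs, which is exactly the contract a JIT has to preserve when it swaps implementations.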
Trade-offs:
- Warm-up time: performance ramps up after the JIT compiles hot code.
- More memory/CPU overhead: compiling while running costs resources.
- Less predictable latency: that first time hitting a hot function might trigger compilation pauses (runtimes try to minimize this).
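The de-optimization mentioned in the summary can be sketched with a guard: the "optimized" version speculates that its argument is an int, and falls back to the generic path when that assumption breaks (again a hypothetical toy; `generic_add_one` and `optimized_add_one` are invented names):

```python
def generic_add_one(x):
    """Generic path: works for any type supporting +."""
    return x + 1

state = {"active": None}

def optimized_add_one(x):
    # Guard: this fast path was specialized under the assumption
    # that x is an int. If that stops being true, de-optimize.
    if not isinstance(x, int):
        state["active"] = generic_add_one  # fall back permanently
        return generic_add_one(x)
    return x + 1

state["active"] = optimized_add_one

def add_one(x):
    return state["active"](x)

a = add_one(41)   # specialized fast path
b = add_one(1.5)  # guard fails: de-optimize, but still correct
c = add_one(41)   # now served by the generic path
```

Real JITs recompile rather than fall back forever, but the key property is the same: guards keep speculative optimizations from ever changing program behavior.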