Note that the CompCert paper uses the term "simulation" in a way that is inconsistent with its usual meaning in formal verification. What CompCert calls "simulation" (a forward simulation) is a theorem that considers every possible execution of the top-level code (the C source) and asserts that there exists a corresponding execution of the low-level code (the assembly produced by the CompCert compiler). On its own, this does not convince us that the compiler is correct: what we ultimately care about is how the low-level code executes, and the theorem says nothing about low-level executions that have no high-level counterpart.

What everyone else calls "simulation" (a backward simulation) goes the other way around, bottom-up instead of top-down: it considers every possible execution of the low-level code and requires that there exists a corresponding execution of the high-level spec. One way to see why this is the right theorem is that, for every actual execution of the code, it produces a justification in the form of a spec execution.

CompCert's theorem is nonetheless sufficient for them, largely because their semantics are deterministic. Determinism lets them invert the direction of what they call "simulation" and prove a helper theorem translating their "simulation" into what everyone else calls "simulation". Although the paper does not mention it, the CompCert source code actually defines and proves this helper theorem, formally establishing that CompCert's forward simulation does translate into the backward simulation that actually guarantees correct execution.
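The flipping argument can be sketched formally. Below is a minimal Lean sketch, not CompCert's actual Coq development: the names `TS`, `ForwardSim`, and `BackwardSim` are invented here, and the `Progress` hypothesis (the source can always take a step) is a simplification of the side conditions CompCert actually uses. The point is only to show where determinism enters: the target step we must justify is forced to coincide with the step the forward simulation produces.

```lean
-- A transition system: states of type S with a step relation.
structure TS (S : Type) where
  step : S → S → Prop

-- "Forward simulation" (what the CompCert paper calls "simulation"):
-- every source step is matched by some target step, preserving R.
def ForwardSim {S T : Type} (src : TS S) (tgt : TS T)
    (R : S → T → Prop) : Prop :=
  ∀ s s' t, src.step s s' → R s t → ∃ t', tgt.step t t' ∧ R s' t'

-- "Backward simulation" (the standard notion): every target step is
-- justified by some source step.
def BackwardSim {S T : Type} (src : TS S) (tgt : TS T)
    (R : S → T → Prop) : Prop :=
  ∀ s t t', tgt.step t t' → R s t → ∃ s', src.step s s' ∧ R s' t'

-- The target takes at most one step from any state.
def Deterministic {T : Type} (tgt : TS T) : Prop :=
  ∀ t t₁ t₂, tgt.step t t₁ → tgt.step t t₂ → t₁ = t₂

-- Simplifying assumption: the source is never stuck.
def Progress {S : Type} (src : TS S) : Prop :=
  ∀ s, ∃ s', src.step s s'

-- The helper theorem, schematically: a forward simulation into a
-- deterministic target flips into a backward simulation.
theorem forward_to_backward {S T : Type} (src : TS S) (tgt : TS T)
    (R : S → T → Prop)
    (hfwd : ForwardSim src tgt R)
    (hdet : Deterministic tgt)
    (hprog : Progress src) :
    BackwardSim src tgt R := by
  intro s t t' hstep hR
  -- The source can take some step...
  obtain ⟨s', hs⟩ := hprog s
  -- ...which the forward simulation matches with a target step...
  obtain ⟨t'', ht, hR'⟩ := hfwd s s' t hs hR
  -- ...and determinism forces that target step to be the given one.
  have heq : t'' = t' := hdet t t'' t' ht hstep
  exact ⟨s', hs, heq ▸ hR'⟩
```

In the real development the relation `R` also tracks observable events, and the single-step matching is replaced by matching up to "stuttering", but the role of determinism is the same as in this sketch.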