benob 10 hours ago | next |

> If you’ve ever talked to me in person, you’d know that I’m a disbeliever of AI replacing decompilers any time soon

Decompilation, seen as a translation problem, is in every way a job that suits AI methods. Give time to researchers to gather enough mappings between source code and machine code, get used to training large predictive models, and you shall see top notch decompilers that beat all engineered methods.

thesz 3 hours ago | root | parent | next |

> Give time to researchers to gather enough mappings between source code and machine code, get used to training large predictive models, and you shall see top notch decompilers that beat all engineered methods.

Decompilation is about dependencies, which makes it a graph problem.

One such graph problem is Boolean satisfiability, and this particular kind of problem is extremely important. It is also very easy to gather mappings between CNF formulas and their solutions. Actually, randomization of the standard benchmarks is now part of SAT competitions, AFAIK.

Have you seen any advances there using large predictive models?

Proper decompilation is even harder; it is more like the halting problem than SAT. Imagine a function that gets inlined and therefore specialized at each call site. One definitely wants the source of the original function and the calls to it, not a listing of all the specializations.

This moves us into the space of "inverse guaranteed optimization", and as such it requires approximating a solution to the halting problem.

jcranmer 6 hours ago | root | parent | prev | next |

Yes and no.

My first priority for a decompiler is that the output is (mostly) correct. (I say mostly because there's lots of little niggling behavior you probably want to ignore, like representing a shift instruction as `a << b` over `a << (b & 0x1f)`). When the decompiler's output is incorrect, I can't trust it anymore, and I'm going to go straight back to the disassembly because I need to work with the correct output. And AI--especially LLMs--are notoriously bad at the "correct" part of translation.
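To make the shift example concrete, here's a made-up C sketch of that trade-off (mine, not part of the original comment): on x86 a 32-bit shift instruction masks its count register to 5 bits, so the masked form is the bit-exact translation, even though the unmasked one is what you actually want to read.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t a = 1, b = 33;
        /* Readable decompilation -- but shifting a 32-bit value by 33 is
           undefined behavior in C, while the CPU just masks the count: */
        /* uint32_t readable = a << b; */
        /* Bit-exact decompilation of what `shl eax, cl` actually does: */
        uint32_t exact = a << (b & 0x1f);   /* 33 & 31 == 1, so this is a << 1 */
        printf("%u\n", exact);              /* prints 2 */
        return 0;
    }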

If you look at decompilation as a multistep problem, the main steps are a) identify the function/data symbol boundaries, b) lift the functions to IR, c) recover type information (including calling convention for functions), d) recover high-level control flow, and e) recover variable names.

For step b, correctness is so critical that I'm wary of even trusting hand-generated tables for disassembly, since it's way too easy for someone to miscopy something by hand. On the other hand, this is something that can be machine-generated in a way that is provably correct (see, e.g., https://cs.stanford.edu/people/eschkufz/docs/pldi_16.pdf). Sure, there's also a further step of recognizing higher-level patterns like a manually-implemented bswap, but that's basically "implement a peephole optimizer," and the state of the art for compilers these days is to use formally verifiable techniques for doing that.
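For illustration (my own toy example, not from the paper above), this is the kind of manually-implemented bswap idiom such a pattern-recognition pass would want to collapse back into a single byte swap:

    #include <stdint.h>

    /* Shift-and-mask soup as it might appear in raw decompiler output... */
    uint32_t bswap32_by_hand(uint32_t x) {
        return  (x >> 24)
             | ((x >>  8) & 0x0000ff00u)
             | ((x <<  8) & 0x00ff0000u)
             |  (x << 24);
    }
    /* ...which the decompiler should ideally report as a single byte swap,
       e.g. __builtin_bswap32(x), or a bswap instruction in the listing. */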

For a lot of the other problems, AI can be a valuable tool if you instead categorize them as things where the AI being wrong doesn't make the output incorrect. For example, control flow structuring can be envisioned as identifying which branches are gotos (including breaks/continues/early returns), since a CFG that has no gotos is pretty trivial to structure. So if your actual AI portion is a heuristic engine for working that out, it's never going to generate wrong code, just unnecessarily complicated code.
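As a made-up example of that trade-off: both versions below are correct; the second is the "unnecessarily complicated" output you get when the structuring heuristic gives up and leaves the branches as gotos.

    #include <stddef.h>

    /* What a good structuring pass produces. */
    int find_nonzero(const int *a, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (a[i] != 0)
                return (int)i;   /* an early return is a "goto" in CFG terms */
        }
        return -1;
    }

    /* Unnecessarily complicated, but never wrong, fallback output. */
    int find_nonzero_gotos(const int *a, size_t n) {
        size_t i = 0;
    loop:
        if (i >= n) goto not_found;
        if (a[i] != 0) goto found;
        i++;
        goto loop;
    found:
        return (int)i;
    not_found:
        return -1;
    }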

svilen_dobrev 5 hours ago | root | parent | next |

> identifying which branches are gotos

Mmh. Yesterday I tried some LLM-augmented "analysis": given a 50-line piece of C source, a function with a few gotos in it... somehow all the "explanations" were ~correct, except that it completely ignored the gotos. Using a deepseek-r1-...7b, ollama's default, probably too weak; but I don't believe other models would be 100% correct either.

sitkack 5 hours ago | root | parent | prev | next |

You are right on a lot of things, but LLMs are the best bijective lens that humanity has ever discovered. They can invert functions we didn't think were invertible.

If given a mostly correct transform from binary back to code, how would we fix that?

Exactly!

Heuristics are dead.

Vt71fcAqt7 3 hours ago | root | parent | prev |

>And AI--especially LLMs--are notoriously bad at the "correct" part of translation.

Can't you just compare the compiled binaries to see if they are the same? Is the issue that you don't have the full toolchain, so the two compilers produce different output? Thinking about it, though, you could probably figure out which compiler was used from those same differences.

CFLAddLoader 2 hours ago | root | parent |

The expected outcome of using an LLM to decompile is a binary that is so wildly different from the original that they cannot even be compared.

If you only make mistakes very rarely and in places that don't cause cascading analysis mistakes, you can recover. But if you keep making mistakes all over the place and vastly misjudge the structure of the program over and over, the entire output is garbage.

Vt71fcAqt7 2 hours ago | root | parent |

That makes sense. So it can work for small functions but not an entire codebase, which is the goal. Does that sound correct? If so, is it useful for small functions (like, let's say I identify some sections of code I think are important because they modify some memory location), or is this not useful?

CFLAddLoader 2 hours ago | root | parent |

There are lots of parts of the analysis that really matter for readability but aren't used as inputs to other analysis phases, so mistakes there are okay.

Things like function and variable names. Letting an LLM pick them would be perfectly fine, as long as you make sure the names are valid and not duplicates before outputting the final code.
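As a hypothetical before/after (names invented for the example): both of these compile to the same code, so a bad name choice can only hurt readability, never correctness, assuming a later pass rejects invalid or duplicate identifiers.

    /* Raw decompiler output: */
    unsigned int sub_401230(const unsigned char *a1, unsigned int a2) {
        unsigned int v1 = 0;
        for (unsigned int i = 0; i < a2; i++)
            v1 += a1[i];
        return v1;
    }

    /* The same function after LLM-chosen names: */
    unsigned int byte_checksum(const unsigned char *buf, unsigned int len) {
        unsigned int sum = 0;
        for (unsigned int i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }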

Or if there are several ways to display some really weird control flow structures, letting an LLM pick which to do would be fine.

Same for deciding what code goes in which files and what the filenames should be.

Letting the LLM comment the code as it comes out would work too, since if the comments are misleading you can just ignore or remove them.

mahaloz 5 hours ago | root | parent | prev | next |

I agree with many other sentiments here: if it can replace decompilers, then surely it can replace compilers... which feels unlikely any time soon. So far, I've seen four end-to-end binary-to-code AI approaches, and none have had convincing results. Even those that crawled all of GitHub continue to have issues with fabricating code, not understanding math, omitting portions of code, and (a personal irritant of mine) being unable to map which address a line of decompilation came from.

However, I also acknowledge that AI can solve many pattern-based problems well. I think considerable value can be extracted from AI by focusing on micro-decisions in the decompilation process, like variable types, as recent work has done.

jcranmer 4 hours ago | root | parent |

I'd feel a lot more comfortable about the prospects of AI if its big boosters weren't so gung-ho about it replacing absolutely everything. Compilers (and by extension decompilers) are one of the areas where we have the ability to have formal proofs of correctness [1]--and the fact that AI people seem willing to throw all of that away in favor of their maybe-correct-but-does-it-really-matter-if-it's-not tools is extremely distressing to me.

[1] And one of the big advances in compilers in the past decade or so is the fact that compilers are actually using these in practice!

wzdd 7 hours ago | root | parent | prev | next |

> Decompilation, seen as a translation problem, is by any means a job that suits AI methods.

Compilation is also a translation problem, but I think many people would be leery of an LLM-based rustc or clang -- perhaps simply because they're more familiar with the complexities involved in compilation than with those involved in decompilation.

(Not to say it won't eventually happen in some form.)

chrisco255 6 hours ago | root | parent |

LLMs are not deterministic, and I want deterministic builds from source code down to assembly. I also do not want the LLM to arbitrarily change the functionality, and I have no guarantee that it won't.

donatj 9 hours ago | root | parent | prev | next |

It's pattern matching, plain and simple, an area where AI excels. AI-driven decomp is absolutely on its way.

thesz 39 minutes ago | root | parent | next |

Let me parrot you; it's fun.

"It's pattern matching, plain and simple, an area where pattern matching algorithms excel. Pattern matching driven decomp absolutely leads"

Decompilation is a dependence graph problem; one can formulate decompilation as a graph transformation/rewrite. Neural networks are notoriously bad at graphs.

ChrisKnott 8 hours ago | root | parent | prev | next |

It's also perfect for RL because it can compile its output and check it against the input. It's a translation exercise where there's already a perfect machine translator in one direction.
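A rough sketch of that check loop (file names and flags are invented for the example; a real harness would have to match the original toolchain and normalize addresses and padding):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* 1. Does the candidate decompilation even compile? */
        if (system("cc -O2 -c candidate.c -o candidate.o") != 0) {
            puts("reward: 0 (does not compile)");
            return 1;
        }
        /* 2. Does it produce the same machine code as the original?
              (drop the header line so only the instruction listing is compared) */
        system("objdump -d --no-show-raw-insn candidate.o | grep -v 'file format' > candidate.dis");
        system("objdump -d --no-show-raw-insn original.o  | grep -v 'file format' > original.dis");
        int same = (system("cmp -s candidate.dis original.dis") == 0);
        printf("reward: %d\n", same);
        return 0;
    }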

It probably just hasn't happened because decompilation is not a particularly useful thing for the vast majority of people.

dartos 8 hours ago | root | parent | prev |

Maybe in conjunction with a deterministic decompiler.

Precision wrt translation, especially when the translation is not 1-to-1, is not excellent with LLMs.

In fact, their lack of precision is what makes them so good at translating natural languages!

__alexander 7 hours ago | root | parent | prev | next |

> Give time to researchers to gather enough mappings between source code and machine code, get used to training large predictive models, and you shall see top notch decompilers that beat all engineered methods.

Not anytime soon. There is more to a decompiler than assembly being converted to language X. File parsers, disassemblers, type reconstruction, etc. are all functionality that has to run before “machine code” can be converted to even the most basic decompiler output.

mips_avatar 7 hours ago | prev | next |

Decompilers aren’t just for security research; they’re a key part of compressing software updates. Delta compressors compute deltas between decompiled code, so an improvement in the mapping of decompiled files could yield as much as a 20x reduction in software update size.

mahaloz 5 hours ago | root | parent |

I love this use case! Do you have any public links acknowledging/mentioning/showing this use case? Including it in the Applications portion of the Dec Wiki would be great.

loloquwowndueo 10 hours ago | prev |

“Resurgence” not “resurgance”. I wanted to leave a comment in the article itself but it wants me to sign in with GitHub, which: yuk, so I’m commenting here instead.