Reviewing AI-generated code
AI coding tools are shifting the engineer's role from writing code to reviewing it. These tools can already generate decent code, but taking it to production still requires someone to read it carefully and catch the edge cases.
Is reviewing AI code any different from reviewing human code? For one, unlike a human author, the AI won't learn from your review (it will adjust, but only in the short term), though it can explain its own code well.
The pace is also faster: you don't need to wait for the author to respond, so the process is more interactive and happens in smaller steps. The code tends to be high quality (no junior-level mistakes or inconsistencies), but at times hard to understand.
It also means that errors tend to be subtle, so it's even more important to put on your best reviewer hat (which is hard, because you don't feel the accountability and social pressure that come with reviewing a human's code).
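As a hypothetical sketch of the kind of subtle error worth hunting for (the function and types below are invented for illustration, not taken from any real review): the code reads cleanly and passes a casual skim, but the docstring promises an inclusive range while the implementation silently drops the last day.

```python
from dataclasses import dataclass
from datetime import date, datetime, timedelta


@dataclass
class Event:
    timestamp: datetime


def daily_buckets(events: list[Event], start: date, end: date) -> dict[date, list[Event]]:
    """Group events by day in the inclusive range [start, end]."""
    # Subtle bug: range((end - start).days) omits the final day,
    # so events that fall on `end` are silently discarded.
    buckets: dict[date, list[Event]] = {
        start + timedelta(days=i): [] for i in range((end - start).days)
    }
    for event in events:
        day = event.timestamp.date()
        if day in buckets:
            buckets[day].append(event)
    return buckets
```

Nothing here fails loudly; only a reviewer checking the boundary (or a test covering the last day) catches it.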
Lastly, AI-generated tests are often too rigid or too narrow (usually because the prompt underspecified the problem). It's worth spending time adding realistic edge cases yourself, as in the sketch below.
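For instance (hypothetical function and pytest-style tests, purely illustrative), an AI-generated suite often pins down only the example given in the prompt; the cases the prompt never mentioned are the ones worth writing by hand.

```python
# Hypothetical function under test.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())


# What an AI-generated test often looks like: the single case from the prompt.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


# Edge cases worth adding yourself: empty input, messy whitespace, non-ASCII.
def test_slugify_empty():
    assert slugify("") == ""


def test_slugify_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"


def test_slugify_non_ascii():
    # Decide explicitly what the behavior should be, rather than letting
    # the test merely mirror whatever the implementation happens to do.
    assert slugify("Café Menu") == "café-menu"
```

The point is not the specific cases but the habit: treat the generated tests as a starting point, then ask what realistic inputs the prompt never mentioned.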