Code Reviews
I’m a big believer in code reviews, possibly more than is warranted. It’s also true that they are hard work, and it is often hard to get started. So here is a list of the things I try to do; maybe they will help someone else.
Say nice things about nice code. We all have our ups and downs. Code reviews are often about preventing things from going wrong in the future- and it can be hard on the people in the present. I value people doing the work, so I want to communicate that. If a function looks clean, say so. If there is a workaround for an ugly wart in the language or framework, commiserate. Think of it like NBA players high-fiving their teammates after free throw attempts- everyone is a professional, no one technically needs it, it’s a little rote- but even perfunctory social gestures help. Even if everyone knows they’re a little forced.
File tickets for tech debt. As a reviewer, it’s more polite to create real tickets for focused followup work than to unload a dump truck of scope creep. I’m sort of in “treat the new hires well” mode in my personal life, but the same applies to long-time developers. The exception is when someone is doing a ton of work in the same section of code and just kind of creating a mess. If your org is too dysfunctional to allow you to work on tech debt tickets occasionally… I don’t know how to help, to be honest. That sounds like its own problem.
Double check that people aren’t re-implementing existing code. New developers on a project are particularly susceptible to this- they don’t know the codebase yet. It’s fine, it’s just about communicating.
Double check the names for naming conventions. This is the right time to make sure that everything isn’t called FooBar except for the one new feature where everything is BarFoo.
Run through the security checklist. Don’t build strings and send them to the shell, especially if they have user input in them. Use your language’s execve(2) equivalent. Don’t build strings and send them to a SQL database, use parameterized queries. User-supplied data has got to get escaped in the template. Etc.
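To make the first two points concrete, here’s a minimal Python sketch; the command, function names, and table are invented for illustration.

```python
import sqlite3
import subprocess

def archive_home_dir(username: str) -> None:
    # Bad: f"tar czf /tmp/{username}.tgz /home/{username}" with shell=True
    # lets a username like "; rm -rf /" become part of the command.
    # Better: pass an argument list, so user input is never parsed by a shell.
    subprocess.run(
        ["tar", "czf", f"/tmp/{username}.tgz", f"/home/{username}"],
        check=True,
    )

def find_user(conn: sqlite3.Connection, username: str):
    # Bad: f"SELECT * FROM users WHERE name = '{username}'" is an injection
    # waiting to happen.
    # Better: a parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchone()
```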
Run through the test checklist. Is it tested? Where is it tested? Do the tests exercise the usual boring edge cases- too much, too little, garbage input?
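Here’s a rough sketch of what those boring edge cases look like as tests, using pytest; parse_quantity and its rules are invented for the example.

```python
import pytest

def parse_quantity(raw: str) -> int:
    """Hypothetical helper: parse an item quantity between 1 and 999."""
    value = int(raw)  # raises ValueError on garbage
    if not 1 <= value <= 999:
        raise ValueError(f"quantity out of range: {value}")
    return value

def test_too_little():
    with pytest.raises(ValueError):
        parse_quantity("0")

def test_too_much():
    with pytest.raises(ValueError):
        parse_quantity("1000")

@pytest.mark.parametrize("garbage", ["", "banana", "1.5", "  "])
def test_garbage_input(garbage):
    with pytest.raises(ValueError):
        parse_quantity(garbage)
```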
Run through the integration checklist. Does this require new monitoring? Is there a companion change to the monitoring configuration somewhere?
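What the companion change looks like depends entirely on your stack. As one possible sketch, assuming a service that exposes Prometheus metrics through prometheus_client, a new failure mode gets its own counter; the metric name and surrounding function are invented.

```python
from prometheus_client import Counter

# New feature, new failure mode, new counter to alert on.
EXPORT_FAILURES = Counter(
    "report_export_failures_total",
    "Number of report exports that failed",
)

def export_report(report_id: str) -> None:
    try:
        ...  # the actual export work
    except Exception:
        EXPORT_FAILURES.inc()
        raise
```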
Anyway, once you’ve run through this list, you’ve read the code a couple times, you’ve thought about it a little, and you’re probably in a better place to think about it as a holistic thing. And all of that needs to get done anyway.