Code Slop
It all started in code review
At work this week, I was reviewing code when I came across this snippet (anonymized):
def can_be_foobard(self) -> bool:
    """
    Checks if the object is eligible for foobar.
    :return: True if the object can be foobar'd, False otherwise.
    """
    ...
I immediately suspected that this was copilot-generated: why on Earth would a human repeat themselves three times over? The snippet could be reduced to:
def can_be_foobard(self) -> bool:
    ...
The entire docstring is unnecessary! In terms of information content, the original version is needlessly verbose: the docstring merely restates the method name. As "slop" is the term for unwanted AI-generated content, this sample falls into the subcategory of code slop.
From a previous discussion
Last week, my team had a discussion about best practices for AI tools. We noted that copilot can be instructed through comments: write a comment describing a task, and copilot will suggest an implementation. This is a great experience, as it fits entirely within my editor workflow.
However, the point was raised that developers should remember to remove those instruction comments, as they typically don't provide any information beyond the code itself.
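To make that concrete, here's a sketch of the workflow (the task and the parse_timestamp function are hypothetical examples of mine, not code from our codebase). You write the first comment, and copilot suggests everything below it:

from datetime import datetime

# Parse an ISO 8601 timestamp string into a datetime; return None if it's invalid.
def parse_timestamp(raw: str) -> datetime | None:
    try:
        return datetime.fromisoformat(raw)
    except ValueError:
        return None

Once the suggestion is accepted, the instruction comment merely repeats what the signature and body already say, and deleting it loses nothing.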
While we're still in the early days of using large language models for software engineering, I foresee some patterns emerging. Engineers are lazy and won't take the time to remove copilot instructions. They may also be prone to mindlessly accepting copilot suggestions, as in the docstring example above.
This suggests to me a trend towards increasing code verbosity via code slop. Combined with software's existing tendencies towards verbosity (not-invented-here syndrome, do-repeat-yourself syndrome, technical rot), software engineers may be in for misery.
And optimism
I don't believe it's a guaranteed fate. I find it fascinating that LLMs perform better in discussions and long answers; perhaps that's due to low "information density" in the models. If a human can understand a question at face value, without multiple rounds of interaction, then I expect the models will eventually catch up.
And when they do, model output will be short, sweet, and to the point. In the meantime, I'll hold my nose when reviewing code slop.