There’s no doubt AI can be a tremendous tool for productivity, research and coding. It’s also making software development accessible to more people. This is generally a good thing, and something we should strive for, because it’s empowering to be able to do things yourself.
There is, however, a nuisance slowly surfacing as a side effect of this progress: AI-generated (or mostly AI-generated) pull requests. I think there’s a tendency to hide the fact that a tool was used to generate a certain piece of code, almost as if using AI amounted to “cheating”, much in the way that using a lens was once considered “cheating” for painters.
There are clear signs that give away the use of AI, such as a flawless or overly descriptive pull request message, weird, verbose or esoteric logic, and of course errors that would not survive attentive scrutiny or careful testing.
The problem here, which is not really a problem but rather a sign of improvement, is that the tools are getting better and it’s becoming harder to recognize AI-generated code. Most of it will look correct, and in many cases it may even work correctly. Accepting these contributions is hard, though, because reviewing them is hard: most human mistakes are easy to spot, while AI-generated ones are more insidious, because they look correct and plausible.
Contributions from AI tools can of course be valuable, but they are becoming a burden for maintainers. At the very minimum, I think people using AI should say so in their pull request description and specify exactly how they tested the changes and verified that the AI-generated code works as expected. Something as simple as “parts of this change were generated with an AI tool; I ran the existing test suite and manually exercised the affected code paths” would go a long way. I realize the latter might be difficult for newcomers to the field, so perhaps the former will suffice, at least to warn the maintainer of the additional scrutiny required.