

AI code is like alternative medicine: it's only called that when it's bad and doesn't work. When it does work, it's just called code. And the issue isn't using code written by an AI, it's when people who don't know how to code assume the AI does, and blindly submit its output without checking. That's very unlikely to happen with the Linux kernel, since the entire project is basically one constant code review, where it really doesn't matter whether bad code was written by a human or an AI.
Even Torvalds has used AI to help with his projects, because it would be kinda silly not to.



Ask ChatGPT to come up with a nice message explaining why direct copy-pastes of LLM output are bad, then copy-paste it to her directly.
Maybe she'll understand it better that way.