“Prompt Engineering”: AKA explaining to ChatGPT why it’s wrong a dozen times before it spits out a usable (but still not completely correct) answer.
Knowing when to tell the AI that it’s wrong is actually a valid skill.
A few months ago, I had to talk to my juniors about thinking critically about the shitty code AI was generating. I was getting sick of clearly copy-pasted code from ChatGPT and the juniors not knowing what the fuck they were submitting for code review.
I’m trying to convince a senior developer on my team to stop using Copilot. They’ve committed code they didn’t understand (only tested to verify it does what it’s expected to do). I doubt I’ll succeed…
You should start asking them things like: why did you do this? Why did you choose this method? To make them sweat :p
That approach used to make sense before LLMs were a thing, when evaluating assessments from students, half of whom had asked someone else and didn’t even bother to read the code.
If no one can make sense of the change, then you reject it. It makes no difference whether it was generated by an LLM or copy-pasted from Stack Overflow.
I just ask ChatGPT to review pull requests.