Hey friends,

Last Saturday, a researcher wrote to me, completely confused.

His paper had just been rejected by an academic journal. Reason given: plagiarism.

Except that it was not plagiarism.

He had done what many researchers do today:
he used AI to translate his work, improve the English, and smooth out the phrasing.

And yet, the verdict came:

“88% of the content appears to have been generated by AI.”

Zero tolerance. Immediate rejection.

His question was simple, and honestly painful:

“I put so much effort into this. What should I do now?”

This case perfectly captures where academia is today:
we no longer really know what is being punished.

Plagiarism is about stealing ideas.
Nothing was stolen here.
The AI didn’t invent results.
It rephrased, polished, and standardized his own text.

But for some journals today, a text that sounds too clean is a suspicious one.

The problem isn’t AI. The problem is letting AI take over the voice of the paper.

AI writes well. Too well.
And that’s exactly what gives it away.

Here are a few things you can do to avoid this situation:

  • Rewrite the text manually, especially the introduction, discussion, and conclusion (AI should assist, not author)

  • Reduce overly smooth or generic phrasing that is typical of AI-generated text

  • Keep your ideas and results

  • Check the journal’s policy on AI usage carefully before submitting

  • Disclose the use of AI in your manuscript. Also, explain clearly in a cover letter that AI was used only for linguistic translation, not for scientific content creation

This kind of rejection is frustrating.
But it doesn’t mean the paper is lost.

It’s a harsh reminder, but a useful one:
AI should remain a tool, never a voice.

We adapt. We learn. And we keep going.

Well, that’s all for this week.
Let me know how you work with AI.

See you next Sunday.
Jamal
