Hey friends,

Recently, I conducted a webinar on the use of AI in academia.

Lots of questions from the participants. And one pattern I noticed:

AI is used every day now, but researchers do not fully trust it:
its validity, plagiarism, and many other aspects.

I was talking about AI in the loop rather than Human in the loop.
The main question is:
how to use it without losing control of our research.

During the same webinar I explained that AI is already part of the research workflow.

We can see it in:

• literature reviews
• data analysis
• writing support
• coding
• experimental design
and more

We are not talking about something coming in the future.
And we agree that in many cases it is incredibly useful.

What AI Does Really Well

When used properly, AI can save a huge amount of time.

For example, it can process large datasets in minutes.
Tasks that used to take days or weeks can now be done in a fraction of the time.

It can also streamline literature reviews or get you started quickly.
Instead of reading hundreds of papers one by one, you can (a short sketch follows this list):

• summarise articles
• group themes
• extract key findings
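
To make that concrete, here is a minimal sketch of what “summarise articles” could look like in practice. It assumes a hypothetical abstracts.csv with title and abstract columns and an OpenAI API key; the model name and prompt are placeholders, and every summary still needs to be checked against the paper itself.

```python
# Minimal sketch: summarising paper abstracts with an LLM.
# Assumes a hypothetical abstracts.csv (columns: title, abstract)
# and an OPENAI_API_KEY in the environment; the model and prompt
# are illustrative, not a recommendation.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(abstract: str) -> str:
    """Ask the model for a two-sentence summary and the key finding."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You summarise academic abstracts."},
            {"role": "user",
             "content": f"Summarise in two sentences, then state the key finding:\n{abstract}"},
        ],
    )
    return response.choices[0].message.content

with open("abstracts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["title"], "->", summarise(row["abstract"]))
```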

Of course this doesn’t replace reading.
We still have to engage with the literature.

But it helps us navigate the field much faster.

Another big advantage is pattern detection.
AI can identify relationships in data that are not always obvious.
This is especially useful in complex datasets where patterns are difficult to spot manually.
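
As an illustration, here is a minimal sketch of that kind of pattern detection, assuming a hypothetical measurements.csv of numeric variables. It flags strongly correlated pairs and groups observations into clusters; deciding whether those patterns mean anything is still your job.

```python
# Minimal sketch: surfacing patterns in a tabular dataset.
# measurements.csv is a hypothetical file of numeric columns.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("measurements.csv").dropna()

# 1. Correlations that are easy to miss by eye (|r| > 0.7, excluding self-pairs)
corr = df.corr(numeric_only=True).stack()
strong = corr[(corr.abs() > 0.7) & (corr < 1.0)]
print("Strongly correlated variable pairs:\n", strong)

# 2. Grouping observations into clusters on standardised features
X = StandardScaler().fit_transform(df.select_dtypes("number"))
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(df.groupby("cluster").mean(numeric_only=True))
```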

One more area where I find AI very useful: accessibility.

AI tools can:

• translate research
• transcribe interviews
• support non-native speakers

This makes research more inclusive, keeping in mind that translations and transcriptions may not always be perfect.

And then there’s experimental work.

AI can help you:

• simulate scenarios
• test variables
• generate hypotheses

It’s like having a thinking partner that helps you explore ideas faster.
That’s how I described AI during my webinar: an assistant.
Not a replacement.

Finally, AI can improve accuracy in repetitive tasks.

For example:

• classification
• tagging or labelling items
• data cleaning

AI can help reduce human error on large volumes of data.
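
Here is a minimal sketch of that kind of repetitive pass, assuming a hypothetical survey_responses.csv; the column names and keyword rules are illustrative only. The value is consistency across thousands of rows, with a human spot-checking the output.

```python
# Minimal sketch: a repeatable cleaning and tagging pass.
# survey_responses.csv, its columns, and the keyword rules are hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Cleaning: trim whitespace, normalise capitalisation, drop exact duplicates
df["institution"] = df["institution"].str.strip().str.title()
df = df.drop_duplicates()

# Simple rule-based tagging (an LLM could replace this step for free text)
def tag(comment) -> str:
    text = str(comment).lower()
    if "funding" in text:
        return "funding"
    if "supervisor" in text or "supervision" in text:
        return "supervision"
    return "other"

df["topic"] = df["free_text_comment"].apply(tag)
print(df["topic"].value_counts())
```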

But There’s a Catch

AI also comes with real risks. One of the biggest concerns is academic integrity.

If you rely too much on AI, you may start to lose track of:

• what you wrote
• what the AI generated
• where ideas came from

This can blur authorship. Do not copy/paste what AI gives you.
Use it to structure your ideas, not to write papers on your behalf.

And in research, authorship matters. You are responsible for every claim you make. AI cannot take that responsibility.


Another problematic aspect is bias. AI systems learn from existing data.
And we know that data is not perfect. It contains biases.

So AI can simply:

• reproduce them
• amplify them
• present them confidently

Stay critically alert because the output often looks polished and convincing.

But that doesn’t mean it is always correct.

I also raised the question of reproducibility and transparency.
Many AI models are “black boxes.”
You see some convincing output but you don’t fully understand how it was generated.

In research, this is a problem. Because science relies on:

• explanation
• reproducibility
• clarity

If you cannot explain how a result was produced, it becomes harder to defend.

That brings me to the following point: AI is fast, it looks good, and it is easy to trust.
If you accept outputs without checking them, you risk:

• weak arguments
• incorrect conclusions
• poor-quality research

AI should support your thinking.

Not replace it.

In a previous newsletter, I mentioned that we will see more and more researchers and PhD candidates who will not be able to properly defend or explain what they “wrote”, because the AI did. Not them.

There are also practical limitations. Advanced AI tools can be expensive for institutions.

They require:

• computing power
• technical skills
• proper setup

And then there’s data privacy. If you upload sensitive data into the wrong system, you may expose:

• unpublished research
• personal data
• confidential information

So, how should we use AI?
The goal of this newsletter is not to convince you to avoid AI.
The goal is to use it well. Here’s a simple way to think about it:

Use AI for:

• speed
• structure
• support

But keep control of:

• ideas
• interpretation
• decisions

A simple rule:

AI can assist the process.
But you must own the thinking and the authorship.

When you use AI in your research, always ask yourself:

• Do I understand this output?
• Can I verify it?
• Can I defend it?

If the answer is no, don’t use it.

AI is a powerful tool. But it’s still just a tool. It won’t replace good research.
It will expose weak research faster.

So the question is not: Should I use AI?

But: Am I still doing the thinking?

Because in the end, that’s what makes the work truly yours.

Let me know what you think; I read every email.

See you next Sunday,
Jamal

ps: Reminders for the upcoming webinars:
[English] PhD Program 3/5: Managing References and Ethics on 29 April, 13:00 (GMT+4):
Registration link:
https://clarivatewebinars.webex.com/weblink/register/re22032c826e12fe121b0af0758f87564

[French] Using academic AI well: Clarivate's AI-powered tools for research. 30 April, 13:00 (GMT+4). Register for the webinar in French: https://clarivatewebinars.webex.com/weblink/register/r43e9285ee8a2cbf4463194e5b33706bc
