Why AI Is a Powerful Research Tool, but No Replacement for Human Expertise

As a developer, I’m always happy to find new ways to be more productive. In my field, AI is often whispered about as the boogeyman that will take our work away. But that’s not how I see it at all. I also want to stress that rapid advancement is not automatically a sign of exponential, unstoppable progress. Much like the efficiency of internal combustion engines or the energy density of lithium-ion batteries, there may be a hard limit on what we can expect from AI.

Artificial Intelligence (AI) is increasingly being touted as a revolution in how we access and process information. From writing assistance to research aggregation and real-time recommendations, tools like ChatGPT, Perplexity AI, and Claude are reshaping the way we approach search. These platforms are fast, impressively accurate in many areas, and offer answers in a conversational format that feels intuitive and approachable.

Yet, for all their capabilities, there’s a growing misconception: that AI can replace human reasoning, analysis, or decision-making. In reality, AI — especially language models — is still very much a tool, not a thinker. It is only as good as the information it is trained on and the way it is prompted. As such, professionals with experience remain crucial as the “middleman” who ensures the AI is properly guided, contextualized, and verified.


AI as a Supercharged Search Engine

AI models excel at gathering and summarizing information. Instead of combing through dozens of links like traditional search engines, users can ask AI a question and receive a synthesized response within seconds. This shift in interface saves time and reduces friction.

For instance, a marketing strategist might ask an AI tool to summarize the latest SEO trends. A researcher could get a breakdown of a complex concept like CRISPR or the implications of quantum computing. These answers are often drawn from a wide variety of sources and presented clearly — something traditional search engines don’t do without additional effort from the user.

According to a 2024 report by Gartner, over 70% of knowledge workers now use AI tools weekly for search, curation, and drafting (Gartner, 2024). The appeal is obvious: the speed and breadth of information are game-changing.


But AI Doesn’t Know — It Predicts

What’s important to understand is that AI does not “know” anything in the human sense. Tools like ChatGPT are trained on massive datasets and generate responses based on statistical patterns — essentially guessing the next word in a sequence based on everything it has seen.
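
To make that concrete, here is a toy sketch of my own (a deliberate oversimplification, not how any real product works) of next-word generation as weighted sampling over observed patterns:

```python
import random

# Toy "language model": a lookup table of how often each word followed
# a two-word context in some training text. Real models learn these
# statistics across billions of parameters, but the principle is the same.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
}

def predict_next(context):
    """Sample a likely next word for this context; no notion of truth involved."""
    probs = next_word_probs.get(context)
    if probs is None:
        return None  # context never seen: nothing to predict
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

print(predict_next(("the", "cat")))  # e.g. "sat": statistically likely, not "known"
```

Nothing in that lookup table knows whether a sentence is true; it only knows what tends to follow what.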

This becomes a problem when:

  • The input is vague or biased.
  • The source material is outdated, misleading, or incomplete.
  • A situation requires judgment, interpretation, or critical thinking.

AI can hallucinate — a polite term for inventing false information with confidence. It can cite nonexistent articles, misinterpret statistics, or provide legally or ethically questionable advice without realizing it.

A 2023 Stanford study found that over 40% of GPT-4’s citations in academic-style queries were inaccurate or unverifiable (Zhao et al., Stanford 2023). This doesn’t mean the AI is malicious — it simply highlights the limits of using pattern-matching in place of understanding.

It could also mean that the corporate entities funding AI research have goals that do not align exactly with truth-seeking: competing with one another (being flashy and sellable), generating lines of profit (product sales, advertising, market-prediction models, human-behavior models), and gathering data.


Garbage In, Garbage Out — Why Quality Input Still Matters

Because AI relies on what it has been trained on or fed, quality in equals quality out. If you feed it vague, incomplete, or poorly phrased questions, the results will often reflect that. This is especially true in technical fields like law, engineering, medicine, or financial analysis.

Take the example of market analysis: while AI can summarize past earnings or point to macroeconomic trends, it cannot anticipate regulatory shifts, read between the lines of a CEO’s tone, or integrate real-world nuance like investor sentiment. A seasoned financial analyst, however, can. They can guide the AI to extract the right data, then interpret it in a meaningful way.
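
As a small illustration of that guidance, here is what the difference between an unguided and a well-scoped prompt might look like. The prompts are invented for illustration and are not tied to any particular tool; only the phrasing differs.

```python
# Hypothetical prompts illustrating "quality in, quality out".

vague_prompt = "Tell me about the markets."

specific_prompt = (
    "Summarize Q2 2024 earnings trends for large-cap US semiconductor firms: "
    "revenue growth, margin changes, and guidance revisions, "
    "and state which filing or transcript each figure comes from."
)

# The vague prompt invites a generic, possibly hallucinated overview.
# The specific one constrains scope, timeframe, and sourcing, leaving the
# model far less room to guess.
```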

In this sense, the AI becomes a scalpel — powerful, but only effective in skilled hands.


The Human Middleman: Still Irreplaceable

Professionals are more important than ever in the AI era: AI needs context, direction, and verification. An experienced designer knows when AI-generated copy or layouts “feel off.” A software architect knows when AI-suggested code will break under real-world conditions. A lifelong researcher can tell when AI has misunderstood the premise of a study.

Think of the AI as an intern with access to every library in the world but no real-world experience. Would you trust them to write your investment strategy or diagnose your symptoms unsupervised? Probably not.

This is exactly how I perceive my AI assistant, by the way: always joyful and motivated, helping me brainstorm and research, knowledgeable about everything. I mean everything. Yet he very often gets the wrong take on what he knows. And he gets distracted and carried away…

I wish AI would tell me: “actually, Julien, I DON’T KNOW about this. There may be these avenues. So what do you think?” I don’t think it can. It’s not REALLY intelligent.

A well-trained professional knows what to ask, what to trust, and where the tool falls short. They act as the quality filter and bring the missing ingredient that AI can’t replicate: judgment.


Conclusion

Professionals who understand both their field and the capabilities of AI are in a unique position to bridge the gap, and to ride the capitalist wave we are living through.

But at what cost? Now that we are firmly in an AI-assisted work environment, will junior programmers still be able to cultivate the essential skills needed to produce quality work? We can look at how other technologies have changed skill levels and caused social problems.

On a daily basis, I witness AI trying to mislead me, with confidence! And I’m amazed at how forgetful and stubborn it is, even though there is no need for a computer to behave this way. There’s clearly a fundamental flaw in AI.

We need to stop being in awe of AI.

AI needs to be herded. But it aims to herd us. I’m not letting that happen…

We need to start introspecting, to be in awe of our inner nature and of the nature of the things and people around us: the true potential that made all this artifice possible.

