For years we fought a phrase that sounded like a verdict: "We've always done it this way." It was the perfect refuge for inertia, the elegant justification for not studying, not updating, not understanding. Today that phrase is changing form, but not substance. The new mantra is: "ChatGPT said so."
The problem is not AI, but how we use it
The problem is not artificial intelligence. The problem is the senseless use we are making of it.
We are witnessing a dangerous delegation of critical thinking. Where previously we gave up on change, now we give up on understanding.
AI is becoming an unquestionable authority, a shortcut that relieves us of the burden of studying, verifying, and taking responsibility for our own decisions. It doesn't matter if the answer is inaccurate, out of context, or even wrong: if "ChatGPT said so," then that's fine.
Probability is not truth
But AI doesn't think! It doesn't understand! It has no responsibility!
It returns what is statistically plausible based on past data. Using it as an oracle means confusing probability with truth, speed with competence, convenience with knowledge.
The atrophy of skills
The greatest risk is not the occasional mistake. It's the atrophy of skills.
When we stop asking ourselves why an answer is correct, we also stop being able to recognize when it's not.
Professionals who can no longer evaluate a solution, students who copy without understanding, managers who make strategic decisions based on generated text they never verify. AI becomes a permanent crutch for those who have decided to stop walking.
From expertise to automation
Then there is an even more disturbing cultural aspect: authority is being shifted from expertise to automation.
It no longer matters who studied, experimented, made mistakes, and learned. What matters is who "asked the AI," as if it were some omniscient entity, to the point of dismissing or calling into question the hard-earned experience of industry professionals.
It's a new form of conformism, more subtle than the previous one, because it presents itself as progress.
Using AI well requires more expertise, not less
It requires the ability to ask the right questions, to interpret the answers, to recognize their limitations. It requires context, experience, and a critical sense.
In other words: everything that the lazy use of AI is eroding.
Responsibility remains human
If we don't reverse this trend, we will move from a society that defended bad habits to one that excuses incompetence with technology. And when something goes wrong — because it will go wrong — it won't be enough to say "ChatGPT said so."
In the end, responsibility is always human!
A tool that amplifies what it finds
AI is a very powerful tool. But like any tool, it amplifies what it finds.
In the hands of those who think, it enhances intelligence.
In the hands of those who don't want to think, it amplifies emptiness.
Follow me #techelopment
Official site: www.techelopment.it
Facebook: Techelopment
Instagram: @techelopment
X: techelopment
Bluesky: @techelopment
Telegram: @techelopment_channel
WhatsApp: Techelopment
YouTube: @techelopment
