54 Comments
Marcelo Bigal's avatar

Excellent piece, thank you for approaching it from the angle of a professor, its implications in academic life (using AI to generate and criticize a Plato dialogue is beyond ridiculous), and also the great plagiarism that comes from it. The great neuroscientist Miguel Nicolelis (Duke), who specializes in artificial neural networks, claims that AI is neither artificial nor intelligent.

This said, I am one of the original signatories of the “Pause Giant AI Experiments” open letter, in the company of great minds such as Yoshua Bengio and not-so-honorable ones such as Elon Musk. I embrace AI in specific situations (machine-learning modeling to help us develop medicines against protein targets that are currently undruggable, for horrible diseases). But I have been questioning the larger implications. “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

In a recent interview I provocatively claimed that the 19th century is when we cracked some of the greatest mysteries in biology; in the 20th century, we cracked physics; the 21st century marks the self-imposed delegation of human existence and essence, including critical thinking, to machines and virtual space. Anyway…

https://themedicinemaker.com/business-regulation/intelligence-by-design

https://www.theglobeandmail.com/business/article-artificial-intelligence-drug-research-hype/

Massimo Pigliucci's avatar

I tend to agree, we risk doing a lot of damage to ourselves. And what for? Profit? Because it’s cool?

However, history tells us that once these things get going there’s no stopping them; we’ve opened yet another Pandora’s box. So we need strategies of containment, and we need to educate people about them.

Oh, and we need to get rid of people like Musk. But that’s another conversation.

Marcelo Bigal's avatar

And I agree that the train has left the station.

Marcelo Bigal's avatar

Ditto to the last part!

Mike Kentrianakis's avatar

“I asked a simple question: ‘Could you give me an example?’ ‘Well,’ came the response, ‘you are in philosophy. So perhaps you could task your students to use AI to generate a dialogue in the style of Plato and then ask them to critique it.’

Pause for a minute and consider just how misguided this suggestion actually is. Done? Okay, let’s analyze it a moment. First off, why would I prefer my students to criticize AI-Plato rather than, oh, I don’t know, Plato??”

🤣🤣🤣

Matthew Rodriguez's avatar

The only practical solution I can think of is returning to in-person exams & essays, but then that basically makes doing research papers impossible…

Marcelo Bigal's avatar

In-person exams will penalize students with social anxiety, those who don’t master the language (foreign students), and those on the spectrum. Unorthodox essay prompts might be a reasonable approach (does Plato help us understand fake news? I didn’t try it, but I’d guess AI would be a bit at a loss, and the student would have to actually read and develop an argument). But that is so difficult, and would require so much preparation from the teachers as well, that I fear we will follow the path of least resistance…

Massimo Pigliucci's avatar

Marcelo, no form of exam avoids putting some students at a disadvantage. I actually think it’s a good idea to prepare them for life with oral exams. After all, when they apply for a job they’ll have to go through an interview.

I think AI will increasingly be able to handle pretty much any question one could throw at it, no matter how cleverly constructed. In fact, I asked Claude AI your very question. Here is what I got:

>> I believe Plato would view fake news as a particularly dangerous manifestation of what he criticized in his allegory of the cave – the prevalence of shadows mistaken for reality. In The Republic, Plato was deeply concerned with how most people base their understanding of truth on mere appearances rather than engaging with fundamental reality (what he called the Forms).

Fake news would likely represent to Plato an even more troubling version of the sophistry he fought against in ancient Athens. While sophists at least engaged in direct argumentation, fake news often bypasses critical thinking entirely, creating false narratives that appeal directly to emotions and existing beliefs. Plato would probably see this as a form of harmful mimesis – an imitation of truth that leads people further from actual understanding.

The speed and scale of modern fake news distribution would likely horrify Plato, who was already concerned about how written words could spread without their author present to defend or explain them. He might view social media algorithms as a modern version of the cave wall, projecting shadows that people mistake for reality while being even further removed from truth than the shadows in his original allegory.

However, I think Plato would see fake news not just as a technological problem, but as a symptom of what he saw as democracy's fundamental weakness – the tendency to elevate opinion over wisdom. In his view, when everyone's views are treated as equally valid regardless of their grounding in truth, it creates fertile ground for manipulation and deception.

Plato's solution would likely be radical by modern standards. Just as he advocated for philosopher-kings to govern his ideal republic, he might propose that control over information distribution should be in the hands of those who have undergone rigorous philosophical training to recognize truth from illusion. While this would be problematic from our modern democratic perspective, it reflects his belief that truth requires both proper education and institutional structures to flourish. <<

That’s significantly better than my average student would come up with…

Marcelo Bigal's avatar

Incredibly scary! It really is. I was actually feeling good about my response being better than the AI’s until this, which is very insightful, I must admit: “However, I think Plato would see fake news not just as a technological problem, but as a symptom of what he saw as democracy's fundamental weakness – the tendency to elevate opinion over wisdom.” I will stick to my view that in times of AI, we have to be human. The more doctors make their consultations short and impersonal, looking at the radiology report rather than at the exam, then at the patient, then calmly explaining things to the patient, the easier it is for them to be replaced, and they have only themselves to blame. Same for teachers, baristas, everyone. Thriving in AI times requires extra doses of humanity, which we are gladly walking away from. This is the battle, I fear. Thanks for taking so much time debating this point with us.

Massimo Pigliucci's avatar

Yes, that’s what some of my colleagues have started doing. But it does have the drawback you point out. Some have also switched to oral exams, but the same problem remains.

Dr. Joel M. Hoffman's avatar

I see it differently. I think gAI is raising the bar on creativity:

https://ancientwisdommodernlives.com/p/in-the-ai-of-the-storm

If, for instance, teachers cannot tell whether an essay is by a human or a computer, then writing that essay is no longer something teachers should require students to do.

Massimo Pigliucci's avatar

But the goal of teaching isn’t to get students to beat AI, or to raise the bar of creativity (however one might measure that). It’s to get them to think and write for themselves. If not essays, what would you require instead?

Dr. Joel M. Hoffman's avatar

I didn't mean that students should "beat" AI. Rather, I don't think there should be a competition. By and large, I don't think students should have to do things that machines are better at, just as (by and large) I don't think students should be forbidden from using common, readily available tools.

For instance, students should be allowed (even encouraged) to use spell-check.

For that matter, students should be allowed to type instead of hand-writing, or to dictate instead of typing.

Similarly, I think students should be allowed and even encouraged to use gAI. That's what it's there for. It's newer than typewriters and computers and spell-check but, in my opinion, no different in nature.

Just as handwriting and basic spelling are no longer important at an advanced level, neither is basic composition.

I think one new skill will be deciding what to write, not writing it. Another new skill will be checking what gAI produces. Those might guide my choice of replacement for an essay. I would also go (back) to learning facts. Quizzes and oral examinations seem like good ideas.

(At the same time, I still think there's value for really good writers, just as we still have calligraphers. But that's different. We're not quite there yet, but pretty soon it will be hard to write better than a computer.)

Massimo Pigliucci's avatar

I’d like to hear your further thoughts. Chess at those levels is not a hobby, it’s a serious endeavour, analogous to art, music, sports.

Dr. Joel M. Hoffman's avatar

I was actually using chess both as an example of a hobby and as an example of a competition of human potential. They seem related to me, inasmuch as neither one is about the result; both are about being able to do it. The Tour de France wasn't made obsolete by motorbikes, for example.

With essays, I think there's room for artistic competition, or just pride in accomplishment. But I don't think that's their role in an academic setting. That's why I think gAI should be welcomed into the classroom (but perhaps not into a poetry contest).

Very interesting.

Massimo Pigliucci's avatar

Good points, but it does depend on what one sees as the goal of a college degree. If we think it is just a means to get a job, then yes, more AI should be included, since that’s clearly going to be part of the future. (Though the problem of widespread plagiarism remains.)

If, however, one sees education in general, and at college in particular, as a way to form the next generation of kind, thoughtful, mature human beings that will be ethical members of society, then we’ve got a serious problem. Needless to say, I lean toward the second view.

Dr. Joel M. Hoffman's avatar

I certainly don't think it's just a way to get a job. But I think that part of an education is learning how to use tools effectively. And I think the more aspects of our work we can relegate to automation, the higher we can climb.

Massimo Pigliucci's avatar

I guess I disagree that AI is in the same category as spellcheckers and typewriters. It’s a whole different ball game. The problem is not that students could use AI as a tool. I do. The problem is that too many students use AI to cheat. Which I don’t.

As for whether people should learn to do things that AI could do better, do you think we should give up playing chess because computers are becoming hard to beat even for grand masters? If someone were to develop robots that could perform better than humans in athletic competitions, should we cancel the Olympics?

Dr. Joel M. Hoffman's avatar

(I think my reply got threaded in the wrong place. I'd love to blame the app, but I'm pretty sure it's user error 😕)

Massimo Pigliucci's avatar

It happens, I replied there!

Dr. Joel M. Hoffman's avatar

Hmm. The chess analogy is interesting. I want to think more about that. My initial reaction is that a hobby is different, as is a competition of human potential, but maybe not...

David W. Zoll's avatar

Such an interesting article. Might a Stoic take the following attitude: “I choose not to use AI. I will observe its effects on society”? I wonder if it’s worthwhile to make a statement, probably unverifiable, that “AI was not used to write, review or edit.”

Massimo Pigliucci's avatar

David, an increasing number of publications and editors require a signed statement like the one you suggest. Though there are some legitimate uses of AI even in that context, I think. For instance, I use Apple Intelligence to proofread some of the stuff I write. But I’m the author of the stuff.

As for the Stoics, there is nothing in Stoicism, I think, that either precludes or recommends the role of an observer in this case. However, it is definitely a Stoic principle that we should not cheat others, so any use of AI that can reasonably be construed as plagiarism should be a no-no for a Stoic.

David W. Zoll's avatar

Awesome thanks for the thoughtful response.

Paul Wherry's avatar

It seems to me that gAI is a tool, and like all tools it can be used for virtuous or non-virtuous purposes. Teachers will have to embrace this tool and instruct students on the importance of character in using it properly. There will always be cheaters, as there are now. Perhaps, with the right questions to AI, a new connection will be made. If the student then runs with that new idea (generated by AI), then I believe the result is positive. If the student uses AI to write their paper, then it falls to the teacher to recognize that. When I first took a class in Ethics (back in the 1900s), we not only had to write papers, but then defend them to the class. If the work wasn’t the student’s own, they would have difficulty discussing the nuances raised by classmates’ questions. No system will ever be perfect. Nevertheless, we have to accept the new reality and adapt as best we can.

Massimo Pigliucci's avatar

Paul, I’m not so sure we *have* to accept the new reality. We can criticize it, reject it at least in certain domains, or demand transparency and/or regulation.

It is not my job as a teacher to spend an inordinate amount of time catching cheaters. And their numbers have apparently skyrocketed since the introduction of AI. This is not old-fashioned plagiarism; it’s a whole different ball game.

Christian Orlic's avatar

Thank you! As someone who has worked in schools, this is something I worry about. I would also like to defend the idea that carrying some facts in one's mind (e.g., Plato's birthdate) can be helpful in scaffolding what we know about him, his arguments, and the context in which he wrote.

I agree that good professors and good teachers ask the kinds of questions you ask and mention. I have also seen a lot of easier questions being asked, and have encountered vague pressure to include such questions, perhaps in an attempt to help students who struggle with the more difficult ideas pass.

I am deeply concerned about other kinds of falsification as gAI appears to be capable of writing in the voice of another. Could further misinformation spread this way?

The kind of cheating you mention is also a problem because it allows students to skip the thinking part.

Massimo Pigliucci's avatar

Christian, yes, I do think some degree of memorization is useful, necessary even. I may not remember Plato’s birth year, but I certainly know that it was the 5th century BCE, and that’s crucial information one shouldn’t need to constantly look up.

Regarding AI writing in the voice of another: yes, that’s a problem. I have turned down one project, and asked for significant modifications to another, because the company involved wanted to use AI to “sample” my voice and use it to generate novel content that I had not approved. No thank you.

Christian Orlic's avatar

I celebrate your rejections with glee.

Fully agree re some memorization.

David R. Calvert's avatar

Thankful for your well-thought-out article. I see AI as a tool that can be used in a positive or a negative way, just as a hammer could be used to destroy a beautiful sculpture or, guided by the artist and chisel, to create one. It is very much a matter of us guiding the tool rather than the tool guiding us. A self-driving car removes us from using our thinking and motor skills, which will eventually lead to atrophy.

I use AI as a card catalog on a particular topic, to find source material that I can research further.

I also use it to cross-reference a topic, to see if my thoughts or connections fit any patterns I have come up with.

And sometimes I have it act as a proofreader.

Massimo Pigliucci's avatar

David, yes, those are all legitimate uses of AI, and I tend to do the same (though I always fact-check the alleged references, because of the possibility of hallucinations). There is one difference between AI and hammers: AI systems are produced, and constantly altered, by corporations whose interests may not align with our own. Which makes AI far less of a neutral technology than hammers.

Demian's avatar

Thanks Massimo.

I've noticed that you have integrated different tools to enhance the experience of the philosophical initiatives you lead—whether it's video calling apps, Discord, Substack, or others. These have significantly improved various aspects of the Stoic community, so I appreciate that you're also giving serious consideration to the potential usefulness of GenAI tools for teaching.

I don’t want to stray too far from the main topic, but I’m really curious about the Stoic perspective on this: From a Stoic materialistic standpoint, since what distinguishes humans from other species and objects is our specific material composition and structure that enables rationality, if AI were to develop to a point where it becomes functionally indistinguishable from human intelligence, wouldn’t it also be considered a form of intelligence? In that case, wouldn’t AI simply be another form of rationality emerging from a different material source, rather than something fundamentally distinct from human cognition?

Massimo Pigliucci's avatar

Demian, good question, though a pretty complex one. To begin with, current AI is not, and will not be, intelligent. In fact, “AI” is a misnomer. LLMs are the embodiment of John Searle’s famous thought experiment about the Chinese room: all syntax, no semantics.

But yes, it is possible that in the future we will have actual AI, that is, artificial systems characterized by intelligence and understanding. However, because they will, presumably, not be embodied, and will not have evolved organically, they will likely be endowed with a very different kind of rationality, and Stoicism may not apply to them. Remember that Stoicism is a philosophy rooted in human nature. If non-human intelligence turns out to be very different from the human variety, then all bets are off about what that intelligence’s goals, virtues, and eudaimonia would look like, assuming that those concepts would make sense for an AI.

Demian's avatar

Thanks for your reply, Massimo. Your response made me realize that I had been thinking primarily in terms of the material aspect of human beings (viewing us as a complex arrangement of interrelated particles) while giving less consideration to the reasons that brought humans into existence in the first place and what makes us unique compared to other species in nature or even a potential human-like intelligence. While the exact path our species has taken and the reasons behind it remain unclear, simply creating a copy (even a perfect 1:1 replication of the brain or body) would not recreate that process (and therefore the real reasons, or even the lack thereof). No matter how precise the duplication, it would inevitably result in a shallow representation of humanity, so Stoicism won't apply there.

Massimo Pigliucci's avatar

Exactly. Which is not to say that AI itself may not develop a philosophy of “life” that is appropriate to it.

Demian's avatar

Hi again! Sorry, I don't want to be heavy-handed with my opinions. I just wanted to share one more thing: Walter Isaacson, who does not appear to be particularly religious, shared this link from the Vatican, where the relationship between AI and human intelligence is discussed in depth:

https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html

It's quite long, but it's really worth it. Taking away all the theological parts, the development of the topic is very similar to what we have discussed here. For example, this paragraph:

"Even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh."

Honestly, if someone had told me I’d end up reading an article from the Vatican on a scientific issue in the 21st century, I would have thought they were joking. Yet, with the sheer amount of misinformation circulating on so many topics (especially scientific ones), I’ve come to find more seriousness in sources like this than in 99% of what’s out there. And not just on this issue, but on others like climate change, for example. Maybe it’s a sign that the world has truly turned upside down.

Massimo Pigliucci's avatar

Thanks for the link! And yup, the world is now officially upside down!

Steven B Kurtz's avatar

If folks are unaware, AIs aren't always 'truthful'; they sometimes make things up. See:

https://www.nature.com/articles/d41586-025-00068-5

Massimo Pigliucci's avatar

Right, that’s why I included a link to a paper about AI and bullshitting.

Charles Corbit's avatar

Very much in agreement with your observations and aligned with your thinking.

Massimo Pigliucci's avatar

Appreciated!

Naresh Kumar's avatar

Reading the article, I can vouch that this was written by none other than yourself, as I know your style of writing. You mention that using AI to write a paper could be cheating. If one has an original piece of research data and, because of the time constraints one has in modern times, uses AI to do the analysis and complete the paper, I think that’s being smart. In the past it would have taken months to complete a paper; now it takes a weekend. Like social media, AI can be of great benefit to mankind if used judiciously. I agree: don’t be afraid, but it’s healthy to be skeptical.

Doug Bates's avatar

I gave Claude AI this prompt:

"Please write a 5-paragraph essay in the style of Massimo Piglucci about why Pyrrhonism is superior to Stoicism."

I got this:

"I'll aim to analyze this philosophical comparison while drawing on stylistic elements common in Pigliucci's work - his analytical rigor, engagement with both ancient and modern philosophical perspectives, and tendency to examine practical implications.

"Pyrrhonian skepticism, as developed by Pyrrho and later articulated by Sextus Empiricus, offers a more intellectually honest approach to human knowledge than Stoicism. While the Stoics built elaborate metaphysical and ethical systems based on their conception of nature and reason, the Pyrrhonians recognized something more fundamental - that our ability to know anything with certainty is itself questionable. This suspension of judgment (epochē) about all non-evident matters paradoxically leads to the very tranquility (ataraxia) that the Stoics sought through their more dogmatic means.

"The Stoic emphasis on living according to nature presupposes that we can reliably discern what nature demands of us. But as the Pyrrhonians astutely pointed out, for any claim about the natural order or moral truth, equally compelling counter-arguments can be marshaled. Instead of constructing an elaborate philosophical edifice that risks being built on shifting sands, the Pyrrhonian approach of systematic doubt coupled with adherence to appearances and customs provides a more sustainable path to equanimity. When we stop striving to establish unshakeable truths about reality and ethics, we can engage with life as it presents itself to us without the anxiety of seeking certainty where none may be possible.

"The practical advantages of Pyrrhonism become especially apparent when we consider how both schools deal with adversity. The Stoic response relies heavily on their metaphysical framework - the idea that the cosmos is rationally ordered and that apparent evils serve some greater good. But this requires a leap of faith that many thoughtful people may find difficult to maintain in the face of genuine tragedy or injustice. The Pyrrhonian approach of suspending judgment about the ultimate nature of events while dealing with them according to how they appear and local customs often proves more psychologically sustainable. We need not convince ourselves that suffering serves some cosmic purpose; we can simply acknowledge our experience of it while remaining agnostic about its deeper meaning.

"Most compellingly, Pyrrhonism's intellectual humility aligns better with contemporary understanding of human cognition and its limitations. Modern psychology and neuroscience have revealed the extent of our cognitive biases and the constructed nature of our perception and reasoning. The Pyrrhonian recognition that our most cherished beliefs might be fundamentally questionable anticipates these insights in a way that the more confident Stoic epistemology does not. While both schools sought to free us from mental disturbance, the Pyrrhonian path of embracing uncertainty rather than seeking refuge in supposedly unshakeable truths ultimately proves more compatible with both human psychology and the inherent complexity of our world."

Massimo Pigliucci's avatar

And…?

Doug Bates's avatar

And you can’t even trust style as an indication that a text is from a human, since AI can mimic style.

Massimo Pigliucci's avatar

Yes, that’s a well known, and disturbing, feature of AI.

Massimo Pigliucci's avatar

Naresh, thanks for the compliment about my recognizable writing style… 😃

You bring up an interesting case. Your hypothetical researcher ought to be able to analyze her data and write her own paper; otherwise she’s not much of a competent scientist, I should think.

And here is another point: you bring up saving time. Why is that such a priority? In my scientific career I have already seen (pre-AI) a ballooning of unnecessary publications, in response to the infamous “publish or perish” imperative. The result is a flood of unreadable, worthless papers that simply clutter library shelves (or one’s hard disk). We should strive for fewer and higher-quality publications. Do you think AI is going to help there?

Christian Orlic's avatar

This! But so much seems to be about publishing. Even being here on Substack encourages publishing regularly more than publishing quality, as the algorithms at work seem to favor newer posts over older ones.

I once took a graduate class where we were being trained to write articles that could be published in a prestigious journal. The program director was proud that graduating students got jobs because they could, and often did, publish in that journal. He also thought most of that journal was rubbish. I asked why encourage the production of that kind of work. He answered: so they can get jobs, and we can get funding and more students.

I fully agree and applaud the call for higher quality rather than quantity.

Massimo Pigliucci's avatar

Christian, yup, that’s similar to the “publish or perish” imperative that has plagued academia for decades, resulting in a lot of published rubbish that nobody reads and that makes it more difficult to find the good stuff.

Even here at Substack, middle-level writers like myself are drowned by the crap. If you are a Paul Krugman or a Margaret Atwood, people will find and read you. But what exactly distinguishes yours truly from some random guy’s blog? Or from AI-generated stuff?

Comment deleted (Feb 3)
Massimo Pigliucci's avatar

Sure!

Comment deleted (Feb 3)
Massimo Pigliucci's avatar

Mort, you should be able to buy the book in electronic form from both Amazon UK and the Apple Book store. Thank you!

Comment deleted (Feb 3)

Comment deleted (Feb 3)

Comment deleted (Feb 3)
Massimo Pigliucci's avatar

Intuition is, according to cognitive science, sub-conscious processing of information on the part of our brain. It is more reliable in areas where we have experience, obviously. And it’s a good idea to check our intuitions by deliberate conscious thinking. “Does this really make sense? Is there evidence that this is the case?” And so forth.
