Looking for science in a world of misinformation
AI, mushrooms and the death of trust in the internet
It was Elan Trybuch of the New York Mycological Society who raised the alarm with a journalist at Vox Media early last year: the advent of widespread AI, or artificial intelligence, was starting to threaten access to accurate, scientific information, specifically information about mushrooms.
The covers of several mushroom foraging books had caught his eye, and for all the wrong reasons. “They had mushroom structures that don’t quite make sense”, he told Vox, going on to say that the content of the books seemed completely invented, describing mushrooms that did not even exist. It was obvious to Trybuch that the books had not been written by anyone with the expertise required to tell the general public which wild mushrooms are safe to eat and which are not. In fact, he believed they had been written by AI.
The rise of ChatGPT and other AI models has demonstrably given rise to a wave of content produced with the express intent of earning easy money, without any of the oversight required for the ethical, harm-reduction-focused production of health-related information such as which mushrooms are safe to eat. Traditional publishing has long had safeguards in place to ensure that the information it publishes is accurate and safe (though these, too, are weakening as the number of books published grows enormously); this is what fact-checkers are for. Self-publishing, however, coupled with the print-on-demand services offered by the likes of Amazon, which remove any upfront printing cost, has allowed anyone to produce a book without any of the editorial support that traditional publishers offer. While this has (in theory) democratised book publishing, it has also opened the door to those using AI to churn out false information. They might never make a bestselling book, but they can con a few people out of their hard-earned money by producing something that at least looks like something to be trusted. It stands to reason that mushrooms, a highly popular topic, would be targeted by those using these methods. When it comes to mushrooms, however, wrong information can be not just annoying but dangerous, even fatal.
In August last year, a Reddit user posted on the r/LegalAdviceUK subreddit asking for advice: their whole family, they claimed, had been hospitalised after consuming poisonous mushrooms that they had understood to be safe. The book they were using to forage had been purchased from a “major online retailer”, and the poster believed it to have been authored by AI. The text contained unfinished sentences, statements that addressed the reader (or the generator) directly, and a series of random questions that were completely out of place: all signs that it was not written by a person or overseen by an editor. The online retailer had warned them against posting images of the book, citing copyright issues. Clearly, the retailer was keen to avoid legal responsibility for the hospitalisation of the poster’s wife and son.
But it isn’t just scam books that pose this threat. At the end of last year, 404 Media reported that Meta had added an AI chatbot to a mushroom-foraging Facebook group, and that, when asked a direct question, it gave answers suggesting ways of cooking and consuming Sarcosphaera coronaria, a fungus that has caused at least one documented death. The chatbot not only said the mushroom was edible, but added that “cooking methods mentioned by some enthusiasts include sautéing in butter, adding to soups or stews, and pickling.”
This would be dangerous enough if it were posted in a public forum, where it would be seen by many, but there it would at least allow others to fact-check the information and assert their greater knowledge. What makes this chatbot even riskier, however, is that Facebook users interact with it directly, through the Messenger feature, meaning that others with greater knowledge have no capacity to intervene. The chatbot was a feature that no one involved with the group asked for, and its mere presence undermined the group’s approach to harm reduction and clear, accurate education. In layman’s terms, it fucked everything up.
Those of us who existed on the internet 10-15 years ago are now realising that we lived in an incredibly lucky, wildly unlikely time. We had been given direct access to almost all of the knowledge accumulated in human history, and we were kept well informed by a complex web of gatekeeping and fact-checking measures that we barely even noticed or knew existed. If you Googled something in the year 2010, the top results were true, ranked highly not because an author had paid money or a sponsor had insisted, but because they were popular, correct and helpful. This time, this golden time, is now long gone.
First it was the advertising that scrambled our trust in the algorithm; then it was the widespread understanding of how to game Google’s rankings, with highly paid SEO experts teaching company execs how to push their company’s content to the top of the results at ludicrous expense. In the last couple of years, it has been AI that has ruined Google: search for almost anything now and AI-generated versions of the images you’re looking for will be sprinkled throughout the results, sometimes indistinguishable from the real thing unless you have cause to inspect them closely. For some searches, AI-generated images make up the entire first page. Google added an AI summary to the top of each search page in 2024, and personal experience will have already shown you that, very often, the summary it gives is simply incorrect. Google, it seems, has long since abandoned its mission “to organise the world's information and make it universally accessible and useful.”
And then there are AI search engines themselves. A 2025 piece in Columbia Journalism Review, which compared eight AI search engines, highlighted the stunningly poor accuracy of the answers given by AI chatbots:
Collectively, [the chatbots] provided incorrect answers to more than 60 percent of queries. Across different platforms, the level of inaccuracy varied, with Perplexity answering 37 percent of the queries incorrectly, while Grok 3 had a much higher error rate, answering 94 percent of the queries incorrectly.
On top of this, there are the sites that we may not use to look for information, but through which we almost passively consume it: social media. Under Musk’s control, Twitter has removed almost all of its safeguards and fact-checking. Meta, which encompasses both Facebook and Instagram, abandoned its fact-checking in January this year, whilst years into a project of censoring any content to do with mushrooms or other substances. TikTok is basically the wild west. Into this desert of fact-checking step the influencers who want nothing but enormous followings and monetisation, meaning that accuracy makes way for bold, often completely unevidenced claims about whatever will make them the most money. Everything is clickbait and nothing is correct. The accuracy era is over: we are in a post-truth world.
It seems clear to us that the tools we have previously used to search for information, and the new tools being pushed on us despite widespread cynicism about them, are no longer fit for everyday use. But what does all of this mean for science? Specifically, what does it mean for science-backed information about mushrooms, and psilocybin?
In the decade since we wrote the first edition of The Psilocybin Mushroom Bible (a book born of our frustration with the lack of reputable, science-backed information on growing psilocybin mushrooms), the landscape of psilocybin information has changed a lot, and not necessarily for the better. However, the advice we gave in that book remains applicable: don’t trust just anyone. Look for the science. Read the whitepapers, read the studies. Experiment. Try things out. Build yourself a body of knowledge that doesn’t rely on others.
Here are some of the questions to ask yourself in your quest to do this:
Is it AI?
Not all AI-generated content is wrong (though it is all catastrophically bad for the environment), but figuring out if it IS AI is your first step to analysing it properly. If it’s a book listed on one of the world’s largest book-purchasing platforms: is it published by a reputable publisher? Do they have a website you can find? If not, is it self-published? Can you find any information at all on the purported author? Their social media, their website, their level of expertise? Has it been reviewed by real outlets, rather than just receiving lots of potentially paid-for reviews on the same site that’s selling it?
What is this person selling?
When it comes to social media, which is increasingly all about hype, it can be very difficult to tell when real experts are simply engaging with clickbait to increase their visibility (something such platforms all but demand now), and when their information is actually inaccurate. A good place to start is by looking at what it is they are trying to sell you.
A person selling lion’s mane mushroom capsules has a vested interest in making you believe that such capsules are basically miraculous. A person selling grow kits has a vested interest in making you believe that you cannot grow by traditional methods, nor make your own kits. A person selling highly expensive tripsitting services has a vested interest in making you believe that to do it without them is inherently dangerous. Do you see what we’re saying here? The fact that they’re selling a service or a product doesn’t necessarily mean that what they’re saying is inaccurate. But it gives them more reason to lean into hype and away from the science, so when you evaluate them, you have to bear that in mind.
Can I find proof of this in a research paper?
In this strange period, when people making health claims can get onto some of the most widely listened-to podcasts in the world regardless of whether what they’re saying is true, it is absolutely essential to look up the research yourself. Scientific papers are broadly accessible to all on the internet; even if you can’t read the whole paper, you can usually see its premise and its conclusions. Don’t believe any claim these people make on faith. Look it up for yourself, and not through headlines that just parrot the hype, but in the actual peer-reviewed, fact-checked, published research paper written by scientists.
Do others agree with this?
Consensus in the scientific community doesn’t always guarantee that something is correct—science changes with evidence, after all—but if one thing is being said by a whole bunch of experts in the field, you have good cause to believe it. If it’s being said by just one person who is claiming to be a “maverick”, then tread carefully. Treat the claim with as much suspicion as you can muster.
What happens if I doubt absolutely everything that hasn’t been scientifically proven?
The philosopher Descartes once performed a thought experiment in which he doubted absolutely everything except that which was beyond doubt, and it led him to one of the most fundamental philosophical statements of all time: cogito ergo sum, or “I think, therefore I am”.
We’re not suggesting quite this level of doubt—but we are suggesting that you strip back the things you’ve been led to believe about mushrooms, both consuming and growing, to those which you can prove or which have been proved through the scientific method. This then forms the foundation of your knowledge, on top of which you can build. Not every block you build has to be infallible; you can take something on board in the knowledge that it may or may not be correct, and hold it lightly. But your foundations, at least, will be strong.
This is, of course, a lot of work to do. It is work we’re incentivised away from, by hype and social media and traditional media and everything else that tells us to think less and consume more. But this is the real work. This is what we have to do, if we wish to be informed individuals; if we wish to follow the science when it comes to mushrooms.
Postscript: The title image for this article is a real image of Sarcosphaera coronaria, released under a Creative Commons licence by Björn S, via Wikimedia Commons.


