Since ChatGPT’s launch in 2022, a growing number of AI-driven tools designed to assist with literature reviews have been released into the scholarly market. Many of them make exaggerated claims about their ability to streamline and fast-track literature reviews. Only a few of these AI-driven tools rely on reputable literature sources and databases for their third-party data. Moreover, AI-driven tools for literature reviews typically depend on some LLM to support prompting or summarise literature. It is well known that hallucinations can occur in AI summaries. What is less well known is that LLMs increasingly train on their own output, leading to AI pollution and AI dilution. This problem is getting worse, with implications for non-mainstream or context-sensitive GenAI applications. In this webinar, Dr Kirstin Krauss will introduce these concerns and then show how to avoid citing polluted science when using AI-driven tools for literature reviews.