Wikimedia | Wikipedia in the Age of AI and Bots
The rapid growth of large language models has created new challenges for the Wikimedia Foundation in managing changing traffic patterns while upholding its free knowledge mission and fostering human engagement.
Wikipedia has been a source of text data for natural language processing and machine learning for almost 25 years. By giving some context on how these datasets are created, we'd like to show how data scientists and computer scientists can help sustain what is widely regarded as an essential public good in the age of AI. Historically, access to the site via scraping and bots has caused multiple issues for Wikimedia Foundation infrastructure, but recent changes in traffic behavior and volume driven by the growth of large language models (LLMs) have led to an increase in incidents. Managing this expansion poses unique challenges for the organization, given Wikimedia's free knowledge mission and the need to continue fostering human traffic growth. Keeping the platform sustainable while prioritizing human and mission-oriented access has required nuanced approaches to identifying and responding to observed trends.
This talk will cover the basic editorial processes within Wikipedia, driven by its large community of volunteers; the emergence of new AI-specific tooling and datasets from Wikimedia; and best practices for engaging with Wikimedia content to support the growth of open data. We will discuss examples of automated traffic observed on Wikimedia projects, highlighting traffic trends, bot behavior, and resource impacts, and showcase current risk-mitigation strategies aimed at reducing server load and curbing potential abuse without degrading general service availability. There will be time for Q&A at the end of the presentation to discuss policy development and how to get involved with the broader Wikimedia movement.
