An AI (artificial intelligence) editor has been introduced by Wikimedia to help fight article edits that contain spam or false information. The technology, called the Objective Revision Evaluation Service (ORES), helps Wikipedia editors check the quality of edits and articles on Wikipedia, and it is an open web service accessible to everyone.
The service empowers Wikipedia editors by helping them discover damaging edits, and it can be used to immediately “score” the quality of any Wikipedia article. Wikimedia has made this artificial intelligence available as an open web service that anyone can use.
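To make the “scoring” concrete, here is a minimal sketch of how a tool might read a damage score out of the kind of JSON the ORES web service returns. The payload below is a hand-written stand-in modeled on the public v3 scores endpoint (`/v3/scores/{wiki}?models=damaging&revids=...`); the exact revision ID and probabilities are illustrative assumptions, not real data.

```python
import json

# Hand-written sample payload shaped like an ORES "damaging" model
# response (illustrative stand-in, not fetched from the live service).
SAMPLE_RESPONSE = json.loads("""
{
  "enwiki": {
    "scores": {
      "123456": {
        "damaging": {
          "score": {
            "prediction": false,
            "probability": {"false": 0.92, "true": 0.08}
          }
        }
      }
    }
  }
}
""")

def damaging_probability(response, wiki, rev_id):
    """Extract the model's probability that a revision is damaging."""
    score = response[wiki]["scores"][str(rev_id)]["damaging"]["score"]
    return score["probability"]["true"]

# A quality-control tool could flag the revision for human review
# whenever this probability crosses some threshold.
print(damaging_probability(SAMPLE_RESPONSE, "enwiki", 123456))  # 0.08
```

In practice a tool would fetch this JSON over HTTP and apply a threshold to decide whether to route the edit to a human reviewer, which is how the triage described below works.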
According to Wikimedia, ORES will function like a “pair of X-ray specs,” helping Wikipedians detect suspicious edits by looking for certain patterns. If an article is flagged, it is sent to a human editor to double-check, who will then notify the contributor if the revision is removed. This should reduce spam, and contributors without malicious intent will be told why their revision wasn’t approved, unlike the current system, which simply deletes edits without explanation.
Wikipedia has used other automated tools such as Huggle, STiki, and ClueBot NG, but these had a negative impact on newcomers who are still learning how to contribute to Wikipedia. ORES, on the other hand, allows new quality-control tools to be designed that integrate with newcomer support and training spaces. It can also differentiate between an honest mistake and an edit intended to damage an article.
Wikimedia stated, “We’ve been testing the service for a few months and more than a dozen editing tools and services are already using it.” It added, “We’re beating the state of the art in the accuracy of our predictions. The service is online right now and it is ready for your experimentation.”
According to Wikipedia, about half a million edits happen per day, which is far too many for human editors alone. ORES will make this easier by handling some of the workload. Spammers and trolls, beware.