Intuitively, this makes a lot of sense to me. Wikipedia has a reputation (often blown a bit out of proportion) of being the Wild West of information sources: not to be trusted, the thinking goes, because any outlaw can taint its contents in the most biased or blatantly false manner.
Of course, Wikipedia needs to acknowledge valid criticism of its system and further mold its process and infrastructure, in an open way, to meet such concerns. And it has. An algorithmic supplement, however, takes that approach to an entirely different level. This is a perfectly suited job for an algorithm. True, an algorithm is always fundamentally stupid…no matter how complex and apparently clever it may be. But so long as it is simply keeping tabs on our constructed knowledge, and not replacing us as the actual constructors of knowledge, it could prove to be a socially useful tool. Furthermore, it must of course exist as free software if implemented on the client side, or as free software as a service (more likely) if integrated with the online version of Wikipedia itself. After all, if you're going to build and maximize a measured level of trust into code, the code itself must be trusted.
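To make the idea of "keeping tabs on our constructed knowledge" concrete, here is a minimal sketch of reputation-weighted text trust, loosely in the spirit of such a system. The function name, data model, and scoring rule are my own illustrative assumptions, not the researchers' actual algorithm: a word starts with its author's reputation and gains (or loses) trust as later editors revise the page while leaving the word intact, an implicit endorsement.

```python
# Hypothetical sketch of reputation-weighted text trust. The formula
# below is an illustrative assumption, not the authors' actual method.

def word_trust(author_rep: float, reviewer_reps: list[float]) -> float:
    """Trust of a word: the author's reputation, pulled toward the mean
    reputation of later editors who kept the word in place.

    Reputations are assumed to lie in [0, 1]."""
    if not reviewer_reps:
        # No one has revised the page yet; only the author vouches for it.
        return author_rep
    endorsement = sum(reviewer_reps) / len(reviewer_reps)
    # Weight the implicit endorsements more heavily as more editors
    # leave the word intact across revisions.
    w = len(reviewer_reps) / (len(reviewer_reps) + 1)
    return (1 - w) * author_rep + w * endorsement

# Text by a low-reputation author gains trust as reputable editors
# preserve it across revisions.
fresh = word_trust(0.2, [])            # no reviewers yet
vetted = word_trust(0.2, [0.9, 0.8])   # two reputable editors kept it
```

The design point the sketch illustrates: trust attaches to the text itself, computed from edit history, rather than to any editorial gatekeeping step.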
So Luca de Alfaro, B. Thomas Adler, Marco Faella, Ian Pye, and Caitlin Sadowski: you seem very open about the techniques behind your work. What are your plans for the source code?