C4I & Cyber Seminar: Will It Replicate? Crowdsourcing the Reliability of Social Science Claims

Presented by: Charles Twardy, PhD


It has become increasingly clear that many published social science results are false. Foretold of old, the modern realization began with Ioannidis’ 2005 blockbuster paper, “Why most published research findings are false” (2.7M views, 3.8K citations!). The strongest evidence comes from quite recent (2015+) organized replication trials by the Center for Open Science (and others). Using stronger techniques than the original studies, they find that only about half of the key claims replicate, and that median effect sizes are only about 1⁄4 of the original results. But perhaps most interesting for us is that a related team ran prediction markets (and surveys) for four of these big replication projects, and the markets were about 70% accurate in predicting, ahead of time, which results would replicate. I will summarize these results and discuss a new DARPA-funded effort to predict the outcomes of 3,000+ social science claims, 5-10% of which will be randomly selected for testing by replication. We use the SciCast combinatorial prediction market developed at George Mason University for earlier IARPA forecasting tournaments.


Dr. Twardy worked with the C4I and Cyber Center from 2008-2015, where he led the George Mason DAGGRE and SciCast teams for the IARPA ACE and ForeST forecasting challenges (2011-2015). Their novel combinatorial prediction market showed sustained 35% gains over baseline methods, with much greater expressivity. He then spent a year helping the Defense Suicide Prevention Office establish procedures for handling and analyzing sensitive personal data, and provided sound approaches to hot-spot detection. In late 2016 he joined KeyW (then Sotera), where he has worked on two DARPA big data programs and the IARPA CREATE crowdsourced argumentation project. He now leads the Replication Markets team for DARPA SCORE.

2:00 pm - 3:00 pm

C4I Center ENGR 4705