In 2014, Nominet Trust in the UK funded a training program to provide charitable organisations with a more effective, low-cost, community-led alternative to traditional evaluations. The method works across the NGO sector and provides all organisations with peer benchmarking. Our approach is analogous to the Framingham Heart Study: developing sector-wide longitudinal baseline data about emerging social problems. These tools will inform implementers about interventions, aid decision-making, and support grant proposals with evidence. We aim to help organisations save time while extending the useful lifetime of information. Furthermore, our real-time feedback loops should address a common challenge: project design can follow personal hunches rather than facts, because evaluations are not timely and may reflect a positive insider-perspective bias.
Since 2010, GlobalGiving has been experimenting with easier ways to provide NGOs with the analytical power of evaluations for just 10% of the usual staff time and 1% of the traditional costs. Our East Africa pilot generated over 57,000 community stories mentioning over 4,000 organizations, covering a broad range of social issues. Both the method and the analysis tools have been developed, but broader adoption will ultimately depend on exposing them to organizations and forging incentives for both funders and grantee organizations to use them.
a. Advertise the program to its network
b. Train and host webinars
c. Support NGOs in using the data collection system
d. Update organisations about the analysis tools
e. Deliver final feedback on the experiment to all participating and surveyed organisations; specifically, what types of grants the group wrote, how evidence was used in them, and whether the overall grant rating was higher or lower than that of a control group.
Truthful: There are few comparable examples of this strategy and these tools. Feedback to civil society organizations tends to come from the agencies that fund projects. As a result, it tends to be sparse, narrow, and positively biased, with little room for honest criticism or iterative learning.
Data: Our design creates an open-ended, semi-structured, aid-recipient-driven longitudinal baseline. What emerges is what matters most to citizens. We enable everyone to analyse patterns with an intuitive visual interface – the consumers of knowledge need not know statistics or learn a complicated tool. All data remains open.
Tools: As we refine these tools they will become more flexible and able to incorporate more data, unlike other tightly structured analysis systems. Eventually the tools should be decoupled from specific data pipelines and stand alone as a means of filtering big data sets. Likewise, the inferences drawn from these data are hypotheses that can inform other evaluation efforts.
Iterative design: From the beginning we have followed an agile, iterative, lean-startup design model for this project. The method and tools have undergone over 15 iterations since 2010, and our preliminary findings are published online and in trade publications so that others can learn from our mistakes:
This is one iteration in a long-term effort to drive down the cost of evaluations while driving up their usefulness, and a core part of GlobalGiving’s mission to make other organisations more effective and sustainable. We have sustained this project since 2009 in three ways: