Gathering Dust on the Web: How relevant are reports on legal tech, and others?


By Laurent Arribe

How do worthy report findings transition from a shelf or inbox into actionable projects and meaningful change? Speaking with academics and practitioners alike, I have come across this question in diverse settings over the past year: a seemingly good idea or technology is put forward in a feasible, well-thought-out proposal, yet never manages to lift off and help those it was made for. While I cannot enumerate all the potential reasons a good idea might fail to be implemented, a couple of major themes have emerged over the past semester: identifying and engaging with all stakeholders, reaching out and disseminating findings, and maintaining communications are all vital activities for project implementation. Still, a frustration continues to build as I listen to my peers make yet another project recommendation: how many of these recommendations and findings will actually be implemented or used to stimulate change?

The World Bank recently released a report[1] that looks into its database's analytics and asks: "Does anybody read our reports?" The Washington Post's Christopher Ingraham succinctly sums up its findings: "Nearly one-third of their PDF reports had never been downloaded, not even once. Another 40 percent of their reports had been downloaded fewer than 100 times. Only 13 percent had seen more than 250 downloads in their lifetimes."[2]




Add to this the output of the hundreds of think tanks across the world, government organizations, donors, consultants and academia, and it is not surprising that many findings get lost in the fray or languish in someone's inbox. The internet is a big place, and the amount of information floating around it can make finding results a daunting task. Yet each research entity offers pros and cons in its approach to a particular subject. For instance, within the confines of academia, one may make assumptions and test hypotheses without needing to worry about negatively impacting stakeholders.[3]

For all these resources, I believe the development field would greatly benefit from a centralized platform that could aggregate reports and lessons learned, helping practitioners and policymakers extract necessary information and learn from past efforts or research. A sort of Google Scholar for development, if you will.

To distinguish a good report from a bad one, we can turn back to the World Bank's introspective report for a few interesting elucidations on what could constitute success:
· Knowledge services must be evaluated against their specific objective (see Table 1).
· Reports should gather feedback on the quality, relevance and impact of the provided knowledge.
· The demand for and use of such reports should be measured through download or citation counts.
· Sustained follow-up beyond dissemination is needed, for instance in the form of increased funding for implementing policy changes or training workshops.



The development objectives are measured on a scale ranging from 0 to 1. A score of 1 indicates that the objective was fully met; a score of 0.75 or above indicates that it was largely met; a score of 0.5 or above indicates that it was partially met; and a score of 0 indicates that the objective was clearly not met.
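As an illustration, the scale above can be expressed as a simple lookup. This is a hypothetical sketch, not part of the World Bank's actual methodology; in particular, treating any score below 0.5 as "not met" is my own assumption, since the report only labels the endpoints and the 0.75 and 0.5 thresholds.

```python
def rate_objective(score: float) -> str:
    """Map a development-objective score (0 to 1) to a rating label.

    Thresholds follow the scale described above; scores strictly
    below 0.5 are assumed here to mean the objective was not met.
    """
    if not 0 <= score <= 1:
        raise ValueError("score must be between 0 and 1")
    if score == 1:
        return "fully met"
    if score >= 0.75:
        return "largely met"
    if score >= 0.5:
        return "partially met"
    return "not met"

# A score of 0.8 falls in the "largely met" band.
print(rate_objective(0.8))  # largely met
```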
Thus, the common thread appears to be communication and active engagement with all affected parties. Among the hundreds, if not thousands, of reports published every year, only a relatively small proportion will ever be used for project implementation. By taking the extra step to strategize and include all stakeholders, good ideas and technologies can be transferred into the field to significantly improve the lives of the people most in need.

As for the World Bank, a few trends emerged regarding their popular reports:
· Complex multi-sector and core diagnostic reports are downloaded more frequently.
· The objective of the report matters: reports whose objective was to inform the public debate were more likely to be downloaded.
· The type of document seems to matter: labelling can change people's perception of it. "One policy report was cited twice within our dataset, but when later published as a working paper with a new title, it was cited over 50 times."

These findings raise the question: should popularity guide research topics? It sounds reasonable to expect demand to guide supply, yet it seems misleading to infer the value of knowledge from download counts, whether a report is downloaded hundreds of times or only once.

Most importantly, it is extremely valuable to perform such self-assessments, reiterating an organization's objective (i.e., the World Bank serving as a Knowledge Bank), whether it is achieving it, and how it could improve. Can you think of other research institutions or think tanks that could benefit from carrying out such exercises?




[1] Doemeland, D., & Trevino, J. (2014). Which World Bank Reports Are Widely Read? The World Bank Operations and Strategy Unit. Retrieved 10 May 2014, from http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2014/05/01/000158349_20140501153249/Rendered/PDF/WPS6851.pdf
[2] Ingraham, C. (2014). The solutions to all our problems may be buried in PDFs that nobody reads. Washington Post. Retrieved 14 May 2014, from http://www.washingtonpost.com/blogs/wonkblog/wp/2014/05/08/the-solutions-to-all-our-problems-may-be-buried-in-pdfs-that-nobody-reads/
[3] For instance, the US Joint Combined Warfighting School has a research document, titled CONOP 8888, planning for a zombie apocalypse: http://www.foreignpolicy.com/articles/2014/05/13/exclusive_the_pentagon_has_a_plan_to_stop_the_zombie_apocalypse
