This methodology review was conducted to assess the effects of different methods for obtaining unpublished studies (data) and missing data from studies to be included in systematic reviews. Six studies met the inclusion criteria: two were randomised studies and four were observational comparative studies evaluating different methods for obtaining missing data.
Five studies assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study). Two studies found that correspondence with study authors by e-mail resulted in the greatest response rate, with the fewest attempts and the shortest time to respond. The difference in effect between a single request for missing information (by e-mail or surface mail) and a multistage approach (pre-notification, request for missing information and active follow-up) was not significant for either response rate or completeness of the information retrieved (one study). Requests for clarification of methods resulted in a greater response than requests for missing data (one study). A well-known signatory had no significant effect on the likelihood of authors responding to a request for unpublished information (one study). One study assessed the number of attempts made to obtain missing data and found that the number of items requested did not influence the probability of a response; in addition, repeated attempts using the same method did not increase the likelihood of a response.
One study assessed methods to obtain unpublished studies (i.e. data from studies that have never been published). Identifying unpublished studies in advance and then asking the drug industry for further specific details proved more fruitful than sending a non-specific request.
Those carrying out systematic reviews should continue to contact authors for missing data, recognising that this might not always be successful, particularly for older studies. Contacting authors by e-mail results in the greatest response rate, with the fewest attempts and the shortest time to respond.
In order to minimise publication bias, authors of systematic reviews often spend considerable time trying to obtain unpublished data. These include data from studies that were conducted but never published, either as an abstract or as a full-text paper (unpublished data), as well as data that were available to the original researchers but not reported in published abstracts or full-text publications (missing data). The effectiveness of the different methods used to obtain unpublished or missing data has not been systematically evaluated.
To assess the effects of different methods for obtaining unpublished studies (data) and missing data from studies to be included in systematic reviews.
We identified primary studies comparing different methods of obtaining unpublished studies (data) or missing data by searching the Cochrane Methodology Register (Issue 1, 2010), MEDLINE and EMBASE (1980 to 28 April 2010). We also checked references in relevant reports and contacted researchers who were known, or thought likely, to have carried out relevant studies. In addition, we used the Science Citation Index and the PubMed 'related articles' feature to find further studies related to those identified by other sources (19 June 2009).
Primary studies comparing different methods of obtaining unpublished studies (data) or missing data in the healthcare setting.
The primary outcome measure was the proportion of unpublished studies (data) or missing data obtained, as defined and reported by the authors of the included studies. Two authors independently assessed the search results, extracted data and assessed risk of bias using a standardised data extraction form. We resolved any disagreements by discussion.
Six studies met the inclusion criteria; two were randomised studies and four were observational comparative studies evaluating different methods for obtaining missing data.
Methods to obtain missing data
Five studies (two randomised studies and three observational comparative studies) assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study).
Two studies found that correspondence with study authors by e-mail resulted in the greatest response rate, with the fewest attempts and the shortest time to respond. The difference in effect between a single request for missing information (by e-mail or surface mail) and a multistage approach (pre-notification, request for missing information and active follow-up) was not significant for either response rate or completeness of the information retrieved (one study). Requests for clarification of methods resulted in a greater response than requests for missing data (one study). A well-known signatory had no significant effect on the likelihood of authors responding to a request for unpublished information (one study). One study assessed the number of attempts made to obtain missing data and found that the number of items requested did not influence the probability of a response; in addition, repeated attempts using the same method did not increase the likelihood of a response.
Methods to obtain unpublished studies
One observational comparative study assessed methods to obtain unpublished studies (i.e. data from studies that have never been published). Identifying unpublished studies in advance and then asking the drug industry for further specific details proved more fruitful than sending a non-specific request.