That's the question asked, but not definitively answered, in a study presented this week in Chicago at the annual Scientific Sessions of the American Heart Association (AHA) and published online in the AHA journal, Circulation.
The wonderfully named "Intention-to-Tweet" study randomized 243 Circulation articles to receive social media promotion (or not) and then compared page views for those articles in the first 30 days after publication.
The result: no statistically significant difference in page views between the promoted and non-promoted groups. For full results, download the PDF.
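The study's core comparison, page views in a promoted group versus a control group, can be illustrated with a quick sketch. This is not the study's data or method; the counts below are made up, and the permutation test is just one reasonable way to compare two groups of view counts using only the Python standard library.

```python
import random
import statistics

# Hypothetical 30-day page-view counts (synthetic numbers, NOT the study's data)
promoted = [120, 95, 210, 88, 150, 99, 175, 130]
control = [110, 100, 190, 92, 140, 105, 160, 125]

def permutation_test(a, b, n_iter=10_000, seed=42):
    """Two-sided permutation test on the difference in mean page views.

    Repeatedly shuffles the pooled counts and asks how often a random
    split produces a mean difference at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter

p = permutation_test(promoted, control)
print(f"p-value: {p:.3f}")
```

With noisy counts and a small difference in means, a test like this typically comes back non-significant, which mirrors the study's headline result.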
It's great to see this kind of research. The authors should be commended for designing and testing an intervention, and reporting findings to peers. And although the rest of this post may seem like criticism, I want to underscore that doing this research is a significant contribution. I'm only clarifying what they did and didn't show through their work.
Every article page, promoted or not, still carried the journal's social sharing buttons. Removing those buttons from the "no promotion" articles would have been a fairer test of social networking's contribution. Facilitating sharing via these icons is itself a social strategy (and, I would submit, a good one). The study also didn't account for other institutions promoting their authors' publications via social media.
What the study did test was the unique contribution of the journal's own social accounts to article views. Articles selected for promotion were tweeted once by @CircAHA and posted once to Circulation's Facebook page. Given the diminishing organic reach of Facebook pages and the short half-life of tweets, it's understandable that posts from a single source might not produce a statistically significant difference.
Lots of others were sharing, which may confound the results. This relates to the previous two points: if hospitals or the studies' authors were tweeting on their own, the absence of @CircAHA tweets would be less noticeable. Circulation had 4,759 Twitter followers at the end of the study period, while academic medical center accounts attract anywhere from several thousand to nearly a million followers.
Isolating variables is good science but not necessarily great communication. The authors wanted to identify the impact of the journal's social accounts, so they excluded studies for which Circulation had issued a news release. Studies that don't get news releases are, in the judgment of professional staff, less newsworthy than those that do. Therefore, the studied articles were almost by definition the least interesting.
The page views endpoint doesn't tell the whole story of impact. Article views and downloads aren't the only meaningful measure: readers of tweets and Facebook posts gain some knowledge of the underlying research whether or not they ever visit the Circulation site.
A good social post condenses the study's main findings. With the explosion of medical knowledge, keeping up with all research is impossible. Social media provides good channels to disseminate key messages to engaged patients as well as to academic audiences, even when posts don't lead to article downloads.
This study is helpful in generating discussions about social media impact and best practices.
Make social media promotion a "standard of care." This study doesn't prove benefit, but given its single endpoint and the confounding factors I've cited, it doesn't rule benefit out either: a tweet or Facebook post about a research article will in fact lead more people to know about it.
Here's how you prove it: Create social media posts for a study and include shortened, trackable links. Count the clicks on those links. If that number is greater than zero, the post increased article readership. Through Twitter and Facebook analytics, you can also show exposure to the synopsis.
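The link-tracking approach above can be sketched in a few lines. This is a minimal illustration, not a real analytics pipeline: the article URL, campaign name, and click log are all hypothetical, and it assumes the common convention of tagging shared links with UTM query parameters so each click can be attributed to the post that produced it.

```python
from collections import Counter
from urllib.parse import urlencode, urlparse, parse_qs

def tagged_url(article_url, source, campaign):
    """Append UTM parameters so clicks can be attributed to a specific post."""
    params = urlencode({
        "utm_source": source,      # e.g. twitter, facebook
        "utm_medium": "social",
        "utm_campaign": campaign,  # hypothetical campaign label
    })
    return f"{article_url}?{params}"

# Hypothetical click log, as a web-analytics export might provide it
click_log = [
    tagged_url("https://example.org/article-123", "twitter", "circ-promo"),
    tagged_url("https://example.org/article-123", "twitter", "circ-promo"),
    tagged_url("https://example.org/article-123", "facebook", "circ-promo"),
]

# Tally clicks by originating channel
clicks_by_source = Counter(
    parse_qs(urlparse(url).query)["utm_source"][0] for url in click_log
)
print(clicks_by_source)  # Counter({'twitter': 2, 'facebook': 1})
```

Any click total above zero is direct evidence that the post sent readers to the article, which is exactly the proof the paragraph above calls for.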
Given the modest incremental effort required to create social media posts for your existing content, why would you not do it?
I've shared my thoughts. What do you think?