D'oh! WebMD Screws Up Reporting on Bisexual HIV Rates

Vague wording in a University of Pittsburgh medical report prompts news outlets to misreport HIV infection rates among bisexual men.

BY Michael Regula

November 13 2013 6:22 PM ET

Based on a confusingly constructed press release from the University of Pittsburgh, multiple media outlets have been reporting misinterpreted data from an HIV study conducted by the university focusing on bisexual men.

Various news and medical media, including Medical Daily and WebMD, have relayed misinformation to the public regarding HIV infection rates among bisexual men, reporting that the HIV risk for bisexual men in the United States is the same as that for heterosexual men.

That would be true only if the rate of HIV infection were the same in both groups (bisexual and straight men), and it is not. The University of Pittsburgh study, based primarily on an analysis of 31 scientific articles detailing HIV prevalence among gay and bisexual men, states that the total number of reported cases is similar between the two groups, not the rates of infection. Because the total population of heterosexual men is significantly larger than that of bisexual men, an equal number of cases translates into a much higher rate of infection among bisexual men.

Pitt's Graduate School of Public Health investigators estimated that bisexual men have an HIV prevalence rate of about 10 percent, meaning that around 120,000 are HIV-positive out of an estimated 1.2 million bisexual men in the United States. Though a similar number of heterosexual men are thought to be living with HIV, it's important to note that the straight population is much greater, meaning that the percentage of those infected is significantly lower than that of bisexuals.
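The distinction the outlets missed, the same case count divided by very different population sizes, can be sketched in a few lines. The 1.2 million and 120,000 figures are the Pitt estimates quoted above; the heterosexual male population figure here is a rough, hypothetical number used only to illustrate the arithmetic, not a figure from the study.

```python
# Same number of HIV cases, very different prevalence rates.
bisexual_men = 1_200_000       # Pitt estimate of bisexual men in the U.S.
bisexual_hiv_cases = 120_000   # Pitt estimate: ~10 percent prevalence

straight_men = 100_000_000     # hypothetical, illustrative figure only
straight_hiv_cases = 120_000   # "a similar number" of cases, per the study

bi_rate = bisexual_hiv_cases / bisexual_men        # prevalence among bisexual men
straight_rate = straight_hiv_cases / straight_men  # prevalence among straight men

print(f"Bisexual prevalence:     {bi_rate:.2%}")
print(f"Heterosexual prevalence: {straight_rate:.2%}")
```

Equal counts, unequal denominators: the prevalence rate among bisexual men comes out many times higher, which is the study's actual finding.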

In misinterpreting these data, both the WebMD and Medical Daily stories inaccurately state the Pitt study's findings, even confusing the amount of data on which the researchers based their analysis. By claiming that researchers drew on more than 3,000 studies to reach their conclusions, they missed the fact that only 31 papers were ultimately used in the analysis (3,474 articles were identified; 31 met the inclusion criteria). This too can be attributed to the confusing wording of the Pitt abstract, which says researchers "reviewed over 3,000 scientific articles" for the study, when in fact they pulled 31 from that pool of roughly 3,000 articles and conducted the analysis on those alone.
