Recent quotes:
How a data detective exposed suspicious medical trials
After seeing so many cases of fraud, alongside typos and mistakes, Carlisle has developed his own theory of what drives some researchers to make up their data. “They think that random chance on this occasion got in the way of the truth, of how they know the Universe really works,” he says. “So they change the result to what they think it should have been.”
The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: the case of depression
Figure 1 demonstrates the cumulative impact of reporting and citation biases. Of 105 antidepressant trials, 53 (50%) trials were considered positive by the FDA and 52 (50%) were considered negative or questionable (Fig. 1a). While all but one of the positive trials (98%) were published, only 25 (48%) of the negative trials were published. Hence, 77 trials were published, of which 25 (32%) were negative (Fig. 1b).
Ten negative trials, however, became ‘positive’ in the published literature, by omitting unfavorable outcomes or switching the status of the primary and secondary outcomes (Fig. 1c). Without access to the FDA reviews, it would not have been possible to conclude that these trials, when analyzed according to protocol, were not positive. Among the remaining 15 (19%) negative trials, five were published with spin in the abstract (i.e. concluding that the treatment was effective). For instance, one article reported non-significant results for the primary outcome (p = 0.10), yet concluded that the trial ‘demonstrates an antidepressant effect for fluoxetine that is significantly more marked than the effect produced by placebo’ (Rickels et al., 1986). Five additional articles contained mild spin (e.g. suggesting the treatment is at least numerically better than placebo). One article lacked an abstract, but the discussion section concluded that there was a ‘trend for efficacy’. Hence, only four (5%) of 77 published trials unambiguously reported that the treatment was not more effective than placebo in that particular trial (Fig. 1d).
Compounding the problem, positive trials were cited three times as frequently as negative trials (92 v. 32 citations in Web of Science, January 2016, p < 0.001, see online Supplementary material for further details) (Fig. 1e). Among negative trials, those with (mild) spin in the abstract received an average of 36 citations, while those with a clearly negative abstract received 25 citations. While this might suggest a synergistic effect between spin and citation biases, where negatively presented negative studies receive especially few citations (de Vries et al., 2016), this difference was not statistically significant (p = 0.50), likely due to the small sample size. Altogether, these results show that the effects of different biases accumulate to hide non-significant results from view.
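The arithmetic of this "funnel" is easy to lose track of. As a rough aid, here is a minimal Python sketch that re-derives the quoted percentages from the counts in the passage; the variable names and stage labels are my own, not the paper's.

    # Re-derive the cumulative counts quoted above (all numbers taken from the text).
    total_trials = 105
    fda_positive = 53                # judged positive by the FDA
    fda_negative = 52                # judged negative or questionable

    published_positive = fda_positive - 1       # all but one positive trial was published
    published_negative = 25                     # 48% of the negative trials
    published_total = published_positive + published_negative   # 77

    switched_to_positive = 10        # negative trials reported as 'positive' via outcome switching
    remaining_negative = published_negative - switched_to_positive   # 15
    spin, mild_spin, no_abstract = 5, 5, 1
    clearly_negative = remaining_negative - spin - mild_spin - no_abstract   # 4

    print(f"Published: {published_total}/{total_trials} ({published_total / total_trials:.0%})")
    print(f"Negative among published: {published_negative} ({published_negative / published_total:.0%})")
    print(f"Unambiguously negative in print: {clearly_negative} ({clearly_negative / published_total:.0%})")

Running this reproduces the 32% and 5% figures given in the text.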
Humans rely more on 'inferred' visual objects than 'real' ones -- ScienceDaily
To make sense of the world, humans and animals need to combine information from multiple sources. This is usually done according to how reliable each piece of information is. For example, to know when to cross the street, we usually rely more on what we see than what we hear -- but this can change on a foggy day.
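The article does not spell out how "reliability" enters the combination; a common textbook formalization is an inverse-variance-weighted average, sketched below in Python with made-up numbers purely for illustration.

    import numpy as np

    def combine_cues(estimates, sigmas):
        """Reliability-weighted average: each cue is weighted by the inverse
        of its variance, as in standard maximum-likelihood cue combination."""
        estimates = np.asarray(estimates, dtype=float)
        weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        weights /= weights.sum()
        return float(weights @ estimates)

    # Clear day: vision (sigma = 1) dominates hearing (sigma = 4).
    print(combine_cues([2.0, 6.0], [1.0, 4.0]))   # close to the visual estimate (~2.2)
    # Foggy day: visual noise rises to sigma = 8, so the estimate shifts toward the auditory cue.
    print(combine_cues([2.0, 6.0], [8.0, 4.0]))   # ~5.2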
"In such situations with the blind spot, the brain 'fills in' the missing information from its surroundings, resulting in no apparent difference in what we see," says senior author Professor Peter König, from the University of Osnabrück's Institute of Cognitive Science. "While this fill-in is normally accurate enough, it is mostly unreliable because no actual information from the real world ever reaches the brain. We wanted to find out if we typically handle this filled-in information differently to real, direct sensory information, or whether we treat it as equal."
To do this, König and his team asked study participants to choose between two striped visual images, both of which were displayed to them using shutter glasses. Each image was displayed either partially inside or completely outside the visual blind spot. Both were perceived as identical and 'continuous' due to the filling-in effect, and participants were asked to select the image they thought represented the real, continuous stimulus.
"We thought people would either make their choice without preference, or with a preference towards the real stimulus, but exactly the opposite happened -- there was in fact a strong bias towards the filled-in stimulus inside the blind spot," says first author Benedikt Ehinger, researcher at the University of Osnabrück. "Additionally, in an explorative analysis of how long the participants took to make their choice, we saw that they were slightly quicker to choose this stimulus than the one outside the blind spot."
So, why are subjects so keen on the blind-spot information when it is essentially the least reliable? The team's interpretation is that subjects compare the internal representation (or 'template') of a continuous stimulus against the incoming sensory input, producing an error signal that represents the mismatch. In the absence of real information there is no deviation, and therefore no error signal (or only a smaller one), ultimately leading to higher credibility at the decision-making stage. This indicates that perceptual decision-making can rely more on inferred than on real information, even when some knowledge of the inferred image's reduced reliability is available in the brain.
"In other words, the implicit knowledge that a filled-in stimulus is less reliable than an external one does not seem to be taken into account for perceptual decision-making," Ehinger explains.