Clinical and Experimental Psychology

ISSN - 2471-2701

Mini Review - (2021) Volume 7, Issue 9

How Wrongdoing Aided the Advancement of Psychological Science

Rama Chandra*
 
*Correspondence: Rama Chandra, Department of Education, Sant Tulsidas P.G. College, Awadh University, India, Email:

Introduction

I was taken aback ten years ago this week when I saw tweets claiming that a former colleague, the Dutch psychologist Diederik Stapel, had admitted to faking and inventing data in scores of studies. E-mails flooded my inbox from other methodologists, researchers who study and improve research procedures and statistical tools. They expressed surprise at the scope of the wrongdoing, but also a sense of impending doom: we were all aware that sloppiness, lax ethical standards and competitiveness were all too common. What followed was inspiring: an open discussion that focused on enhancing research rather than on wrongdoing.

Several researchers, many of whom were still in the early stages of their careers, used social media to advocate for bias-countering measures such as sharing data and analysis plans. That shifted the tone of the discourse. Before 2011, my grant proposals to examine statistical errors and biases in psychology were routinely turned down as low priority. By 2012, I had secured funding and established my current research group.

Another case of data fraud was exposed in August, this time in a 2012 paper by the behavioural-science superstar Dan Ariely, who acknowledges that the data are fabricated but denies that he fabricated them. This case, ironically involving a study of how to foster honesty, is an opportunity to consider how norms of research practice have changed, and how far reform still has to go.

Publication bias, the propensity for results that confirm hypotheses to be published more often than null results, was widely observed as early as the 1950s. In the 1960s and 1970s came warnings that data-analysis decisions could introduce bias, such as the identification of spurious or inflated effects. Those decades also saw a general refusal to share psychology data for verification purposes, which my group documented in 2006.

By the 1990s, methodologists had raised concerns that most studies had insufficient statistical power (the probability of detecting actual effects), and that researchers frequently misrepresented studies as being designed to test a specific hypothesis when they were actually looking for a pattern in exploratory research. At least among methodologists, the high rate of statistical errors was no surprise. The habit of tinkering with and repeating analyses until a statistical threshold (such as P < 0.05) was reached was also common knowledge. In 2005, a modelling study argued that when these biases were combined, most published findings might be false [1]. This controversial message drew a lot of attention, but it did not result in much action.
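To make that tinkering concrete, here is a minimal simulation, my own sketch rather than anything from the original commentary. It illustrates one common variant, optional stopping: testing the accumulating data after every batch of observations and stopping at the first P < 0.05. Even when no real effect exists, this inflates the false-positive rate well above the nominal level. The function name and parameter values are illustrative.

```python
# Minimal sketch (illustration only): optional stopping under a true null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def false_positive_rate(n_studies=2000, batch=10, max_n=100, alpha=0.05):
    """Fraction of null studies that hit P < alpha at some interim test."""
    hits = 0
    for _ in range(n_studies):
        data = []
        while len(data) < max_n:
            # Collect another batch; the true effect is exactly zero.
            data.extend(rng.normal(loc=0.0, scale=1.0, size=batch))
            # Peek at the data: one-sample t-test against a mean of zero.
            if stats.ttest_1samp(data, popmean=0.0).pvalue < alpha:
                hits += 1  # stop and "publish" the significant result
                break
    return hits / n_studies

print("Nominal alpha:                 0.05")
print(f"Empirical false-positive rate: {false_positive_rate():.2f}")
```

With up to ten interim looks, runs of this sketch typically land well above the nominal 5%, roughly three to four times higher. Preregistered sample sizes and analysis plans remove exactly this flexibility.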

Despite this history, researchers before Stapel either were unaware of these issues or dismissed them as minor. A few months before the case became public, a worried colleague and I recommended creating an archive to preserve the data gathered by researchers in our department, to ensure reproducibility and reuse. A council of renowned colleagues dismissed our idea on the grounds that competing departments had no equivalent proposals. Reasonable suggestions we made to promote data sharing were ignored on the spurious grounds that psychology data sets can never be safely anonymized and would be used to attack well-intentioned researchers out of jealousy. I have learned of at least one genuine attempt by senior researchers to have me removed from a session for early-career researchers because it was deemed too critical of substandard procedures.

The term 'P-hacking' was coined by a group of researchers around the time the Stapel case broke; they illustrated how the practice can provide statistical support for implausible hypotheses [2]. Others have worked tirelessly since then to encourage the preregistration of research and to create major collaborative projects that evaluate the replicability of published findings.

Early-career researchers have been at the forefront of much of the lobbying and education. Recent examples highlight how preregistration of experiments, replication, publication of negative results, and sharing of code, materials and data can both empower researchers and discourage questionable research practices and misconduct.

These adjustments must become systemic if they are to stick and spread. We need tenure committees to reward behaviours such as sharing data and publishing rigorous studies with less-than-stellar results. Grant committees and journals should either require preregistration or ask for justification of why it is not necessary. Grant programme officers should be responsible for ensuring that data are made available in compliance with mandates, and PhD committees should insist on verified results. We also need to build a culture in which top research is rigorous and trustworthy as well as innovative and interesting.

The Netherlands is blazing a trail. The Dutch Research Council set aside funding in 2016 to support replication studies and meta-research aimed at boosting methodological rigour. This year, all of the country's universities and major funders are debating how to incorporate open research practices into their evaluations of candidates for tenure, promotion and funding [3-5].

Grassroots interest has produced a slew of academics with a desire to improve methods. Now the system must reassure students that, by using these practices, they will be able to build successful careers. Research integrity must never again become a taboo subject; that would only lead to more untrustworthy research and, eventually, wrongdoing.

References

  1. Ioannidis, J.P.A. “Why Most Published Research Findings are False.” PLoS Medicine. 2.8 (2005): e124.
  2. Simmons, J.P., et al. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science. 22.11 (2011): 1359-1366.
  3. Wicherts, J. “How misconduct helped psychological science to thrive.” Nature. 597.7875 (2021): 153.
  4. Stroebe, W., et al. “Scientific misconduct and the myth of self-correction in science.” Perspectives on Psychological Science. 7.6 (2012): 670-688.
  5. Larkin, I., et al. “The opportunities and challenges of behavioral field research on misconduct.” Organizational Behavior and Human Decision Processes. (2021).

Author Info

Rama Chandra*
 
Department of Education, Sant Tulsidas P.G. College, Awadh University, India
 

Citation: Rama Chandra. How Wrongdoing Aided the Advancement of Psychological Science. Clin Exp Psychol, 2021, 7(9), 273.

Received: 31-Aug-2021; Published: 21-Sep-2021; DOI: 10.35248/2471-2701.21.7.273

Copyright: © 2021 Chandra R. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.