
For most of my life, I struggled with the assumption that people with letters after their name were not only smarter, more powerful, and more successful than me, but that the research they create is gospel. I’m not sure when or how this seed was planted, but it’s led to a lifelong feeling of inadequacy—especially throughout my twenties. Doctors and scientists were busy saving lives and stumbling across eureka moments. Meanwhile, I made silly cupcakes for a living and couldn’t afford health insurance.

Assuming that all doctors and research belonged on a pedestal is also part of why I so easily accepted their mental health diagnosis. I knew I was depressed, but what did I know about how to fix it? A doctor told me that my brain was broken and that the pills I was taking did not have any major side effects. Who was I to question someone who spent 12 years learning how to identify and treat my exact problem? It is only since getting off the antidepressants that I’ve begun to understand how complicated, political, and often corrupt the medical and research system actually is. And this isn’t conspiracy thinking. Bad science exists in every discipline—The Guardian even has an entire vertical dedicated to it.

While researchers are adept at sorting out bad science from the good, regular folk rarely know the difference, which can lead to a plethora of misinformation and ill-informed opinions. But I’ve learned a few basic strategies to help us plebeians suss out the good from the bad when it comes to mental health research. This is by no means a foolproof or comprehensive list, but it’s a start.

Where to find research papers:


PubMed is a free search engine that primarily accesses the MEDLINE (Medical Literature Analysis and Retrieval System Online) database of research on life science and medical topics. It allows you to sort by a variety of matches, including author, publication date, and journal. It also has a nifty search feature that will only give you results that include free full text. Unfortunately, the full text of many research papers is hidden behind paywalls, which leaves the average person stuck with nothing but abstracts.

Google Scholar is…well, the Google of research. Whether you’re looking for research on antidepressants or conifer trees, Google Scholar is the grand poobah of scientific information. However, because Google Scholar is a search engine and not a subject-dedicated database (like PubMed), Google Scholar strives to include as many journals as possible, including junk journals and predatory journals. These predatory journals are known for exploiting the academic publishing business model, not checking journal articles for quality, and pushing agendas even in clear cases of fraudulent science.

All this to say that before a paper is read, the reader needs to do a bit of due diligence to make sure that what they’re reading is legitimate. Even then, we can’t be 100% sure. Case in point: Andrew Wakefield’s fraudulent research claiming that vaccines cause autism.

I know, I know. The number one rule in research is: don’t use Wikipedia as a source. Any old geezer (including you) can log on to Wikipedia and change an entry (any entry) to say anything and everything, which means that Wikipedia is riddled with errors and should not be referenced as truth in a research paper or reported article. But since we’re not reporting for the New York Times, Wikipedia is a good place to start because of the references listed at the bottom of each Wikipedia entry. The Wikipedia page on Antidepressant Discontinuation Syndrome, for example, links directly to 27 different sources on the topic.

But sourcing research is only the first step. With so much junk science out in the world, it’s imperative to learn how to identify the good from the bad. Here’s how:

Check the Citations

Google Scholar is one of my favorite ways to source research, but because Google Scholar is a search engine and not a curated database, articles published in known predatory journals may pop up in your search results.


The quickest way to determine if the article is legit is to check the “Cited by” number at the bottom of the search result. If an article has multiple citations, it means other researchers are referring to the research in their own articles, which indicates legitimacy. It’s rare that articles are cited thousands of times like Eugene Paykel’s excellent study “Life Events and Depression: A Controlled Study.” With 1495 citations, Paykel’s study is the research equivalent of a New York Times bestselling book. But according to academics, even mid-single digits are enough to assume the research isn’t bunk.

Journal Ranking

While citations are a great place to start, they benefit from time in the system. Paykel’s article has been around since 1976, which means it has nearly half a century of research built upon it. New research won’t come with shiny citations, so you need to look at the journal it’s published in to see if it’s legitimate.

Academic journals are ranked for impact and quality by a metric known as the H-Index, which is determined by the number of publications and how often they are cited: a journal has an H-Index of h if it has published h articles that have each been cited at least h times. A higher H-Index indicates a higher ranking. However, note that the H-Index is not standardized across subject areas, so you can’t cross-compare across disciplines.
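For the curious, the definition above translates directly into a few lines of code. This is a minimal sketch, and the citation counts in the example are made up for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h publications have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, count in enumerate(ranked, start=1):
        if count >= i:
            h = i  # at least i papers have i or more citations
        else:
            break
    return h

# Six hypothetical papers: four of them have 4+ citations each
print(h_index([50, 18, 7, 5, 2, 1]))  # → 4
```

Notice that one blockbuster paper with 50 citations barely moves the needle; the index rewards a sustained body of cited work, which is exactly why it’s used as a rough quality signal.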

Find journal rankings by googling the name of the journal and the word “ranking.” The Scimago Journal & Country Rank (SJR) should be one of the first Google results, and that will show you the H-Index of the journal in question.

For layman’s purposes, the H-Index doesn’t matter too much. Think of it like the college system. Harvard isn’t the same as Iowa State, but that doesn’t mean that Iowa State isn’t capable of producing good citizens (and we all know question marks who graduated from top-tier universities). The top journals produce great work, but there is still plenty of meaningful work to be found in smaller journals. A low ranking isn’t necessarily a problem, but no ranking is a problem. Junk publications and predatory journals won’t have an H-Index, so if a publication you’re reading doesn’t have a rating, run far, far away.

Crosscheck Beall’s List

If the journal article doesn’t appear on the SJR, your predatory journal spidey sense should go off. Cross-reference the journal against Beall’s List, an archive of predatory journals created by librarian Jeffrey Beall. The sheer number of journals listed on Beall’s List is astounding, and it’s easy to see how naive readers could be duped.


After 15 years of depression and antidepressants, my mission is to help people find hope in the name of healing. My memoir on the subject, MAY CAUSE SIDE EFFECTS, publishes on September 6, 2022. Pre-order it on Barnes & Noble, Amazon, or wherever books are sold. For the most up-to-date announcements, subscribe to my newsletter HAPPINESS IS A SKILL.


More articles from the blog

see all articles

January 3, 2023

On Living and Breathing Grief

read the article

October 28, 2022

The struggle to kill the serotonin theory of depression in a world of political nonsense

read the article

October 21, 2022

Last Times

read the article

October 14, 2022

Newborn Babies Go Through Antidepressant Withdrawal

read the article

When people describe legitimate research, they tend to preface it with the term “peer review.” Because peer review is a critical part of scholarly publishing, it’s worth taking a few hundred words and diving into its meaning.

What is peer review?

Peer review is exactly what it sounds like: academic peers review an individual’s work in order to determine if the research is strong enough to publish. All articles published in legitimate research journals are peer-reviewed, which is why scholarly journals are deemed a reliable source of information. This is also why predatory journals are a problem. They don’t follow the peer review protocol, which means there aren’t any gatekeepers to stop unethical or fraudulent research from getting out into the public.

How does peer review work?

Peer review follows a standard process:

  • An individual or group of people complete a study, write an article, and send it to a journal. It doesn’t matter if it’s original research or a systematic review. If the work is going to a journal, it will be peer reviewed.
  • The journal editors send the article out to other scientists in the field. Typically, the work is sent blind, which means that the author(s) (and sometimes the reviewers) remain anonymous during the review process. This helps keep bias to a minimum, though it’s not a perfect process. I’ve been at multiple dinners with Justin (my professor boyfriend) and his colleagues when over the course of shooting the shit, they admit that they were reviewers for each other’s work. It didn’t matter that the review was blind. Academic focus is so narrow that it creates tight-knit communities where everyone knows everyone. Topic and writing style can be as good as a name tag.
  • The reviewers provide feedback for the author and tell the journal editor whether or not they think the article is fit for publication.
  • If the work is considered to be of high quality, the authors are invited to revise and resubmit the article for consideration.
  • In theory, only articles that meet scientific standards are considered for publication. This means the work must be ethical, acknowledge other work in the field, be backed by evidence, be well reasoned, and disclose any conflicts of interest.

Is all research peer-reviewed?

If you find research in a reputable journal, the article has been peer reviewed. However, sometimes researchers bypass the peer review process and instead submit research directly to their university or for use at an industry conference.

How difficult is it to get published?

Having watched Justin go through multiple rounds of article submission, I feel the need to highlight the difficulty and glacial pace of publication. This shit is hard and slow. Justin has work he finished years ago that has only recently been accepted. It’s not that it takes all that long to read a paper, but because reviewers aren’t paid and they have other things to do, sometimes the work gets lost in the slush.

One survey suggested that 50% of articles are ultimately published, but only 9% are accepted without a revise and resubmit. While 50/50 odds aren’t the worst, the competition for publication in top journals is vicious. The journal Science only accepts 8% of submissions, while the New England Journal of Medicine publishes just 6%.

Is peer review a perfect system?

In short, no. Critics of the peer review system say it’s slow and expensive, inconsistent and subjective, and often filled with bias and abuse. However, with no viable alternative, both researchers and the general public must continue to believe in the system. The irony, of course, was summed up by peer review critic Richard Smith: “How odd that science should be rooted in belief.”




Part of the reason why I’m able to learn what I’m learning is that my partner, Justin, is an academic. He’s built his career on reading, writing, and analyzing journal articles, which means he’s my first stop on the understanding research train. This is both great and terrible for me. On the one hand, I have an expert at my disposal. On the other hand, I have an expert at my disposal. What I think are straightforward questions turn into twenty-minute tirades that leave me more confused than before. No answer is ever simple, and I’ve been forced to accept that “it depends” is a valid conclusion.

“The more research you read, the more you’ll understand that every single study is fundamentally flawed,” he said to me yesterday. “Be careful about assumptions, because research studies are full of caveats and exceptions. They’re looking at one little sliver of one thing, and there’s no easy way to accurately translate that into something digestible and catchy for the media.”

All this because I asked him what n meant in a paper.

What is “n”?

I assumed the n operated like it does in algebra, standing for a constant throughout the entire paper. As it turns out, that is entirely incorrect. There are big Ns and little ns. The big N typically stands for the size of the whole population, while the little n stands for the size of a sample or subgroup drawn from it. For example, if there are 1000 people in a school but only 200 of them were chosen for a study, N=1000 and n=200.
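The school example can be written out in a few lines of code. This is a toy sketch assuming simple random sampling; the numbers are the invented ones from the example above:

```python
import random

# The whole school: every student gets an ID from 0 to 999
population = list(range(1000))
N = len(population)

# 200 students are randomly chosen for the hypothetical study
sample = random.sample(population, 200)
n = len(sample)

print(N, n)  # → 1000 200
```

The key point survives even in this toy version: N describes the group you started with, n describes the subset actually studied, and neither value tells you anything without the surrounding context.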

However, the n does not necessarily refer to human subjects, and the meaning of that n can change with context. Using the paper from yesterday’s post as my example, we can see that there are a variety of values for n throughout different parts of the article. The first shows up in the abstract: n=16.

Reading the sentence before it, “antidepressants were significantly better than placebo in trials that had a low risk of bias,” this little n refers to the number of studies analyzed that had a low risk of bias (16 studies). Why they can’t just say, “In the 16 trials that had a low risk of bias…” I don’t know.

Further down the paper, n shows up again, this time in the chart of included studies.

To understand what these ns represent, we need to read for context. The previous page states, “The literature searches from databases and additional resources identified 2890 relevant titles.” In this case, n has to do with the number of studies analyzed, and the chart breaks down how the researchers began with 2890 studies (2864 records identified through database searching + 26 records identified through other sources) and whittled their relevant studies down to the 28 included in the meta-analysis.

To sum up: An n is not an interpretation of the data but instead communicates some sort of numerical value. That value changes depending on what it’s referring to, so it’s always necessary to read for context.


When I first began speaking openly about long-term antidepressant use and antidepressant withdrawal, it didn’t take long for me to be faced with a wall of academic journals and research papers. At first, my instinct was to read the abstract, get the gist of what I was trying to understand, and move on. But much like sourcing all your information exclusively from Fox News, that approach left me a dangerous kind of dumb. I had just enough information to confirm my bias but zero original thoughts surrounding the source, scope of work, journal reputation, limitations of the study, and industry response.

When it dawned on me that just reading the abstract was no better than just reading sensational news headlines and deeming yourself informed, I began to read the studies in full. At least, I tried. For those of us who haven’t spent our entire adult lives in research and academia, these papers are a nightmare.

While I understand that there are longstanding reasons why academic papers are written the way they’re written, it bothers me that only people with a PhD are taught to comprehend this sort of work. How can individuals be expected to do their own research and make their own decisions for their own wellness if they can’t understand the research that policy and marketing are built upon?

Which brings me to the first installment of How to Read a Scientific Paper. I’m tired of taking other people’s word on research as gospel, so I’m going to learn how to do it myself and chronicle the journey here. Hopefully, I can beef up the entertainment factor, because damn these articles are dry.

I’m going to begin with a recent article spearheaded by psychiatrist Saeed Farooq and published in the Journal of Affective Disorders, entitled, “Pharmacological interventions for prevention of depression in high risk conditions: Systematic review and meta-analysis.”

I first found out about the study thanks to a Keele University tweet that said, “The study, led by Professor Saeed Farooq, found that using antidepressants as a pre-emptive measure could help to prevent depression in patients considered to be at high risk of developing the condition, for example following stroke or heart attack.” The tweet linked not to the article, but an in-house blog post that feels a bit too much like propaganda. The fact that we’re even considering doping people up on antidepressants before they become depressed deeply concerns me, so I want to learn more about it before I go full oh no you di’n’t! on the topic.

In reality, this was not a research study or clinical trial, but a systematic review and meta-analysis. And for us to learn to read journal articles, we must understand the difference.

What is a research article?

A research article is a study designed and performed by the paper’s author or authors. It will explain the methodology of the study—or rather, the methods and systems used to conduct the study—and clarify what the results mean. All of the steps are listed in detail in order to allow other researchers to conduct similar experiments.

One of the best ways to tell if you’re reading a research article is to look for phrases like “we found” or “I measured” or “we tested.” This indicates that the authors who are writing the article are the ones who also conducted the research.

Next, look at the formatting of the article. Research papers include sections that are listed in a particular order: abstract, introduction, methods, results, discussion, and references.

What is a review?

Review papers do not include original research conducted by the author(s). Instead, the author(s) give their thoughts on existing research papers for the purpose of identifying patterns or forming potential new conclusions based on a variety of research studies. For example, a researcher may look at a study performed in 1980 and compare it to a similar study from 2010 in order to provide an overview of the topic as a whole.

Reviews are particularly useful for people looking to get background information on a topic before diving into detailed or technical research papers. However, there is no formal process to dictate which articles must be included in a review, which gives authors the freedom to overlook existing research that may not fit their agenda. Thus, it can be difficult to determine if the author’s conclusions are biased.

What is a systematic review?

Systematic reviews were developed to eliminate that bias by requiring multiple authors to track down all available studies on a particular topic and execute high-level analysis of existing research in order to answer a clearly defined, clinical question. Systematic reviews can take months or years to complete, whereas standard reviews may only take a few weeks.

Systematic reviews contain a lot of data and, to the untrained eye, can look a lot like original research. Systematic reviews are held in the same echelon as original research and are often presented to the public as if the research were new (like in the Keele University tweet). This strikes me as potentially misleading, not because the research isn’t valid or useful, but because of the language used to promote the research.

For example, Farooq’s article concludes that based on his analysis, “Prevention of depression may be possible in patients who have high-risk conditions but the strategy requires complete risk and benefits analysis before it can be considered for clinical practice.” However, not a single clinical study has been conducted to support or disprove that statement, and the tweet says nothing about that, instead presenting the research as if it were a new, exciting discovery.

What is meta-analysis?

Meta-analysis is a research process used to manage and interpret all the data for a systematic review. In layman’s terms, meta-analysis is how researchers make sense of the data in hundreds or thousands of individual papers. After extracting the data, analysts use a variety of statistical methods to account for differences between studies, such as sample size and variations in study approach, that may affect the overall findings of the systematic review.

Frankly, I don’t understand a lick of how meta-analysis works. But, I’ve learned that I don’t have to understand it as long as I understand what role it plays in research: meta-analysis pools the data sets from different studies into a single statistical set of data in order to analyze it and come to a single conclusion.
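To make “pooling the data sets into a single conclusion” slightly less abstract, here is a toy sketch of one common approach, fixed-effect inverse-variance weighting, where each study’s effect estimate counts more when its variance (its uncertainty) is smaller. The three studies and their numbers below are invented for illustration:

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: weight each study's effect
    estimate by the inverse of its variance, then average."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: (effect size, variance)
effect, var = pooled_effect([0.40, 0.20, 0.30], [0.04, 0.01, 0.02])
print(round(effect, 3))  # → 0.257
```

Note how the pooled estimate lands closest to the middle study, which had the smallest variance and therefore the most weight. Real meta-analyses layer far more on top of this (random-effects models, heterogeneity tests, publication-bias checks), but this is the basic idea of combining many studies into one number.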

*  *  *

For those of you who like visuals, check out this article by Concordia University that visually breaks down the structure of various journal articles so you can recognize what you’re reading.
