6 Tips to make your screening easier with review software

Once the process of identifying potentially relevant articles in the databases for our systematic review is finished, the review team faces its first serious challenge: title and abstract screening. The overwhelming number of references to screen often stirs strong emotions, especially in beginners. A tedious process, disagreements, and misinterpretations lead to mistakes and, as a result, incomplete search results and poor-quality reviews.

Unfortunately, we still cannot skip this step, but we can make it less burdensome. Here are some tips that can help:

Tip #1 Consider the assistance of AI

Are AI screening tools really necessary?

Some may say that an Excel spreadsheet, or even a piece of paper, is enough to perform title/abstract and full-text screening. Strictly speaking, that's true. However, what matters is not only that the screening gets done, but also how long it takes and how good it is.

The number of published studies is growing rapidly, which generates a huge number of abstracts to screen. As we mentioned in the previous article, according to the project management triangle, it is impossible to optimize all of a project's dimensions (i.e., quality, time, scope, and resources) at the same time. Tools supported by artificial intelligence and machine learning help to speed up the screening process without compromising quality. Moreover, they provide comprehensive auditability of the process and support updates when conducting living systematic reviews.


For instance, in Laser AI, the machine learning model uses your initial decisions to sort the records by their probability of being relevant to your review. This way, you – the screener – can start by analyzing the records with the highest likelihood of inclusion, followed by those with a lower probability.
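
To make the idea concrete, here is a minimal sketch of how such prioritization can work in principle. Laser AI's actual model is not public, so the TF-IDF features, the logistic regression classifier, and the example abstracts and decisions below are all illustrative assumptions rather than the product's implementation.

    # Minimal sketch of priority screening: train a classifier on the
    # screener's initial include/exclude decisions, then sort the remaining
    # records by their predicted probability of inclusion.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented example data: abstracts already screened, and unscreened ones.
    screened = ["Randomized controlled trial of drug X in adults ...",
                "In vitro assay of enzyme Y inhibition ..."]
    decisions = [1, 0]  # 1 = include, 0 = exclude
    unscreened = ["Randomized trial of drug X in children ...",
                  "Mouse model of disease Z ..."]

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(screened), decisions)

    # Rank the unscreened records from most to least likely to be relevant.
    probabilities = model.predict_proba(vectorizer.transform(unscreened))[:, 1]
    for abstract, p in sorted(zip(unscreened, probabilities),
                              key=lambda pair: pair[1], reverse=True):
        print(f"{p:.2f}  {abstract}")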

Tip #2 Turn on the Focus Mode

Artificial intelligence alone is not enough to ensure comfortable screening. How many times have you struggled with an Excel spreadsheet to see the abstract, the authors, and the title on the same screen? Many reviewers may not be aware that the way abstracts are presented affects their attitude toward screening. Laser AI lets users screen records one by one in a so-called Focus Mode, so they can concentrate on the abstract rather than on adjusting cell widths.

Tip #3 Highlight your keywords

Even if we screen abstracts one by one, after a while the whole process becomes tedious, and screeners tend to lose focus. Keyword highlighting may be a solution to this inconvenience. Reviewers can use predefined highlights during screening to quickly see whether a particular record contains desired or undesired keywords. The more desired words are highlighted in an abstract, the higher the probability that the abstract is relevant.
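
Under the hood, this kind of highlighting amounts to simple pattern matching over the abstract text. Below is a minimal sketch of the idea, assuming plain-text abstracts; the keyword lists and the markers are invented for illustration, not taken from Laser AI.

    # Minimal sketch of keyword highlighting over a plain-text abstract.
    # The keyword lists and the **/~~ markers are invented for illustration.
    import re

    desired = ["randomized", "placebo", "double-blind"]
    undesired = ["in vitro", "mouse"]

    def highlight(text, keywords, marker):
        for keyword in keywords:
            # \b word boundaries avoid matching inside longer words.
            pattern = rf"\b({re.escape(keyword)})\b"
            text = re.sub(pattern, rf"{marker}\1{marker}", text, flags=re.IGNORECASE)
        return text

    abstract = "A randomized, double-blind, placebo-controlled trial in adults."
    print(highlight(highlight(abstract, desired, "**"), undesired, "~~"))
    # A **randomized**, **double-blind**, **placebo**-controlled trial in adults.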

Tip #4 Filter your records

Additional filters allow reviewers to screen separate clusters of records. For example, we can filter records by date, specific words, keywords, or abstract availability. Imagine excluding all irrelevant in vitro studies in a few mouse clicks: this is possible by filtering records whose titles contain the phrase "in vitro", for instance, and then excluding them in a batch after verification.
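
As a rough illustration of what such a filter does, here is a minimal sketch in code. The record structure and the exclusion label are invented; in Laser AI the same operation happens through the interface rather than a script.

    # Minimal sketch of filtering records by a phrase in the title and
    # batch-excluding the cluster after verification.
    records = [
        {"title": "In vitro activity of compound A", "decision": None},
        {"title": "Randomized trial of compound A in adults", "decision": None},
        {"title": "In vitro toxicity screening of compound B", "decision": None},
    ]

    phrase = "in vitro"
    cluster = [r for r in records if phrase in r["title"].lower()]

    # After a reviewer verifies the filtered cluster, exclude it in one batch.
    for record in cluster:
        record["decision"] = "excluded: in vitro study"

    print([r["decision"] for r in records])
    # ['excluded: in vitro study', None, 'excluded: in vitro study']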

If you want to know more, check out our webinar about Laser AI.

Tip #5 Create a screening guide

Are guides always useless?

Why should one create a screening guide when, after all, we want to save time? Wouldn't it be enough to give reviewers the protocol and some tips?

Most of us don't like to follow guides or instructions anyway; this simple rule applies both to assembling IKEA furniture and to following instructions while screening. Whereas skipping the instructions when assembling a chair may result in a leftover screw, in the case of systematic reviews the consequences can be far more serious, and here's why.

Ask short and precise questions

Let's assume that we are review team leaders and we want our reviewers to include only abstracts indicating that a high-quality design was used. Unfortunately, "high quality" is subjective, and reviewers may understand it differently, which leads to conflicts throughout the whole screening. Instead, short and precise questions should be asked, e.g., "Does the study use a randomized controlled trial design?". Now there is no doubt that "high quality" refers to an RCT.

Organize your guide hierarchically

Especially in large systematic reviews with many reviewers, the screening process must be organized around concise, unambiguous questions so that relevant studies are identified efficiently while the risk of bias is minimized. But there is another threat here: without the correct order, even the simplest questions may become useless. Best practice guidelines for conducting systematic reviews recommend organizing screening guides hierarchically, usually with the easiest questions at the beginning of the tool. Asking the most difficult questions first may increase screening time, cause confusion, and raise the risk of conflicts between screeners. Instead, it is worth starting with simple questions that even a beginner can answer, e.g., the year of publication or the type of study (in vitro / in vivo / clinical study).
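
One simple way to picture a hierarchical guide is as an ordered list of yes/no questions, where the first "no" excludes the record and records the reason. The sketch below assumes that shape; the questions and exclusion reasons are invented examples, not a prescribed guide.

    # Minimal sketch of a hierarchically ordered screening guide:
    # easiest questions first; the first "no" excludes the record
    # with a recorded reason. Questions and reasons are invented.
    screening_guide = [
        ("Was the study published in 2010 or later?", "excluded: publication date"),
        ("Is this a clinical study (not in vitro / in vivo)?", "excluded: study type"),
        ("Does the study use a randomized controlled trial design?",
         "excluded: not an RCT"),
    ]

    def screen(answers):
        """answers: one boolean per question, in guide order."""
        for (question, reason), answer in zip(screening_guide, answers):
            if not answer:
                return reason  # stop at the first failed criterion
        return "included"

    print(screen([True, True, False]))  # excluded: not an RCT
    print(screen([True, True, True]))   # included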

Get rid of discrepancies

Even if we do our best to create a clear and simple screening guide, we cannot predict the screeners' mindset. To ensure consistency in decision-making, once the screening guide is ready, the team leader should clarify each question and discuss any uncertainties with the team. Even the simplest question may raise concerns and lead to misinterpretation. For instance, when the population of interest is patients with sinusitis, should we include all patients with appropriate symptoms, or only those with imaging test results and analysis of samples from nasal discharge?

Tip #6 Conduct introductory abstract screening (pilot testing)

While conducting introductory abstract screening (also called pilot testing), reviewers first go over the screening guide with the review manager and then simultaneously screen a small number of abstracts chosen from the list, e.g., 20 to 30. After screening, the review team leaders analyze the disagreements. These discrepancies are valuable information because they expose the weaknesses of the screening guide, which can then be fixed before the main stage. If the differences are significant, another round (or rounds) of pilot screening can be conducted until the discrepancies are negligible.
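
The post does not prescribe how to quantify those disagreements, but a common chance-corrected agreement statistic for two screeners is Cohen's kappa; here is a minimal sketch with invented pilot decisions.

    # Minimal sketch of quantifying pilot-round disagreement with
    # Cohen's kappa. The decisions below are invented: 1 = include, 0 = exclude.
    from sklearn.metrics import cohen_kappa_score

    reviewer_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
    reviewer_b = [1, 0, 0, 0, 1, 0, 1, 1, 1, 1]

    kappa = cohen_kappa_score(reviewer_a, reviewer_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 mean strong agreement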

As you can see, the devil is not as black as he is painted. It is, of course, up to you whether to use the methods described above, but there is evidence that, with them, both title/abstract and full-text screening can be conducted relatively quickly without compromising quality. Currently, many methodological groups are working to provide understandable and useful guidelines that support reviewers and protect against inaccurate decision-making.

If you want to know more about good practices in systematic review conduct, follow our blog and social media to stay up to date. If you are interested in the automation of literature reviews and the application of artificial intelligence in evidence synthesis, we highly encourage you to join the LinkedIn group on AI in Evidence-Based Healthcare.

Ewelina Sadowska
MSc, Pharmacist

Evidence Synthesis Specialist at Evidence Prime. She is responsible for testing new solutions in Laser AI and conducting evidence synthesis research.

Related webinars:

The Data Screening Process - Can we / Laser AI do it better?

During the webinar, you will learn practical information about best practices in title/abstract screening and how Laser AI can support the screening process.


Related blog posts:

Living Systematic Review – Oh, No! Again? Complete them Quicker!

Are you interested in the automation of living systematic literature reviews and the application of artificial intelligence in evidence synthesis?
