
LLM-based Academic Paper Intelligent Review

M.Sc. Student: Haowen Xue | Advisor: Jimmy Chih-Hsien Peng | Project Duration: 2024-2025


With the number of scientific articles published each year growing exponentially, academia faces mounting pressure in peer review and knowledge management. The contribution claims of different papers often overlap to some extent, yet this overlap is difficult to identify without reading every related work in full. To make literature review more efficient, particularly in fast-moving fields such as artificial intelligence and natural language processing, a tool is needed that can quantitatively evaluate the contributions of scientific articles.


This project leverages recent advances in large language models (LLMs) to build a tool that evaluates the quality of a given paper, helping researchers and reviewers screen and assess the literature more efficiently.


Overview

Technical Principles

According to Liang et al.'s previous work, more than half (57.4%) of users found GPT-4-generated feedback helpful or very helpful, and 82.4% found it more beneficial than feedback from at least some human reviewers.

The review generation module uses carefully designed prompt templates to guide the LLM in generating professional review opinions covering several key sections. Below is an example of the prompt template used for generating reviews:

[Figure: example prompt template for review generation]
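As a rough illustration of how such a template might be wired up, the sketch below fills a template with the paper text and queries the model through the OpenAI chat API. The section headings, model name, and function names here are illustrative assumptions, not the project's actual template:

```python
# A minimal sketch of the review-generation step. The template wording and
# section headings below are illustrative assumptions; the project's actual
# template is the one shown in the figure above.
from openai import OpenAI

REVIEW_PROMPT = """You are an expert reviewer for an academic venue.
Read the paper below and write a structured review with these sections:
1. Summary of the paper
2. Main contributions and their significance
3. Strengths
4. Weaknesses
5. Questions for the authors

Paper:
{paper_text}
"""

def generate_review(paper_text: str, model: str = "gpt-4") -> str:
    """Fill the template with the paper text and query the LLM."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": REVIEW_PROMPT.format(paper_text=paper_text)}],
        temperature=0.2,  # low temperature for more consistent reviews
    )
    return response.choices[0].message.content
```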

Scoring Model Fine-tuning

Training proceeds well: the loss decreases steadily, the learning rate decays as scheduled, and the run remains stable overall despite several gradient spikes.
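A minimal sketch of what such a fine-tuning loop could look like is shown below. The optimizer, schedule, and hyperparameters are assumptions; gradient clipping is included as one common way to contain the kind of gradient spikes observed during training:

```python
# A hedged sketch of a scoring-model fine-tuning loop. The base model,
# data loader, and hyperparameters are illustrative assumptions; the point
# is the scheduled learning-rate decay and the gradient clipping that
# limits the effect of occasional gradient spikes.
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

def fine_tune(model, train_loader, epochs=3, lr=2e-5, max_grad_norm=1.0):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs * len(train_loader))
    for epoch in range(epochs):
        for batch in train_loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss  # e.g. regression loss on review scores
            optimizer.zero_grad()
            loss.backward()
            # Clip spikes so a few large gradients do not destabilize training.
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
            optimizer.step()
            scheduler.step()
    return model
```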

Installation Guide

Open Source

The code developed in this project is available for download here.



© 2025 by Power Engineering Laboratory
