Huang, Huang; Ye, Xiaohong; Mustafa, Mumtaz Begum; Dong, Qiyuan; Li, Yu; Asemi, Adeleh and Asemi, Asefeh
ORCID: https://orcid.org/0000-0003-1667-4408
(2025)
GPT-based lifelong learning and ANFIS-driven reply memory ratio prediction for aspect-based sentiment analysis.
Complex & Intelligent Systems, 11.
DOI: 10.1007/s40747-025-02086-2
Official URL: https://doi.org/10.1007/s40747-025-02086-2
Abstract
GPT is among the most powerful large language models (LLMs), known for its versatility and strong performance across a wide range of tasks. However, its substantial computational demands and limited cross-domain generalization present challenges, particularly in resource-sensitive applications such as Aspect-Based Sentiment Analysis (ABSA). Existing ABSA models are typically domain-specific and suffer from catastrophic forgetting, losing previously acquired knowledge when sequentially trained on new domains, which results in poor scalability and knowledge retention. To address these issues, we propose a novel framework that integrates GPT-2 with a replay-based lifelong learning mechanism to support incremental, multi-domain ABSA while mitigating forgetting. The model is sequentially fine-tuned using real-data replay across four diverse ABSA domains: Laptops, Restaurants, Tweets, and Finance. Experimental results show that the proposed model achieves an average accuracy of 0.85 and a Backward Transfer (BWT) score of -0.09, significantly outperforming the baseline model without lifelong learning (accuracy of 0.70, BWT of -0.64); a t-test confirms the statistical significance of the improvements. We further design an ANFIS-based model, trained on the experimental results, to predict a suitable replay memory ratio for new datasets, enabling more effective and adaptive lifelong learning in GPT-based architectures. In addition, we apply a multi-level data augmentation pipeline that significantly improves performance across domains (p = 0.0049), enhancing both retention and generalization under constrained memory. We also conduct a domain sequence permutation study to test robustness to task-order sensitivity, and evaluate generalization on a simulated fifth domain constructed from a mixture of all original domains. These components validate the model's scalability and its ability to generalize beyond the training distributions.
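The BWT metric referenced above is not defined in this page; a minimal sketch, assuming the standard formulation (average change in accuracy on earlier tasks after training on the final one), is given below. The accuracy matrix shown is purely illustrative, not the paper's per-domain results.

```python
def backward_transfer(acc):
    """Backward Transfer under the common definition:
    acc[i][j] = accuracy on task j measured after training on task i (0-indexed).
    BWT = mean over earlier tasks j of (final accuracy on j) - (accuracy on j
    right after it was learned). Negative values indicate forgetting."""
    T = len(acc)
    return sum(acc[T - 1][j] - acc[j][j] for j in range(T - 1)) / (T - 1)

# Illustrative 3-task run (hypothetical numbers):
acc = [
    [0.90, 0.00, 0.00],
    [0.85, 0.88, 0.00],
    [0.82, 0.84, 0.87],
]
print(backward_transfer(acc))  # -0.06: mild forgetting on the first two tasks
```

A BWT near zero (as the -0.09 reported here) means the model retains most of its earlier-domain accuracy after sequential fine-tuning, whereas a strongly negative score (the baseline's -0.64) signals severe catastrophic forgetting.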
An ablation study was conducted to isolate the contributions of each module, including replay, ANFIS prediction, and data augmentation. The results show that removing any single component leads to a measurable drop in performance, confirming that each part of the framework contributes meaningfully to overall effectiveness. A post-hoc comparison with a distillation-based lifelong learning baseline shows improved performance for our replay + ANFIS approach; we acknowledge that additional comparisons with stronger baselines such as GEM or A-GEM are needed to further substantiate this claim, which we leave for future work. Overall, this study demonstrates the effectiveness of combining GPT-2 with replay-based lifelong learning and adaptive memory control, offering a scalable and robust solution for continuous, multi-domain sentiment analysis. The proposed model provides a practical foundation for future research on dynamic, cross-domain ABSA systems.
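The abstract does not spell out how the predicted memory ratio is applied during replay; the sketch below is a generic interpretation, assuming uniform sampling from an episodic memory of earlier-domain examples. The function name `build_replay_mixture` and the exact mixing rule are hypothetical, not the paper's implementation; the ANFIS model would supply `memory_ratio` per dataset.

```python
import random

def build_replay_mixture(new_data, memory, memory_ratio, seed=0):
    """Mix stored examples from earlier domains into the current domain's
    fine-tuning set, so that roughly `memory_ratio` of the result is replay.
    new_data: examples from the current domain.
    memory:   episodic buffer of real examples from previously seen domains.
    """
    rng = random.Random(seed)
    if memory_ratio >= 1.0:
        n_replay = len(memory)
    else:
        # Solve n / (len(new_data) + n) = memory_ratio for n.
        n_replay = round(len(new_data) * memory_ratio / (1 - memory_ratio))
    replayed = rng.sample(memory, min(n_replay, len(memory)))
    mixed = list(new_data) + replayed
    rng.shuffle(mixed)
    return mixed

# Hypothetical usage: 80 current-domain examples, ratio 0.2 -> 20 replayed.
mixed = build_replay_mixture(list(range(80)), list(range(1000, 1050)), 0.2)
print(len(mixed))  # 100
```

Under this reading, the ANFIS-predicted ratio trades off plasticity (more new-domain data) against stability (more replay), which is exactly the balance the reported BWT improvement measures.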
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Aspect-Based sentiment analysis (ABSA) ; GPT ; Lifelong learning ; ANFIS (Adaptive Neuro-Fuzzy Inference System) ; Data augmentation ; Replay memory ratio prediction |
| Divisions: | Institute of Data Analytics and Information Systems |
| Subjects: | Automation, mechanization; Computer science |
| DOI: | 10.1007/s40747-025-02086-2 |
| ID Code: | 11919 |
| Deposited By: | MTMT SWORD |
| Deposited On: | 14 Oct 2025 12:04 |
| Last Modified: | 14 Oct 2025 12:04 |

