Understanding the BERT Model and Its Implementation for Sentiment Analysis of Game Reviews
Keywords:
BERT, sentiment analysis, game reviews, Natural Language Processing, Systematic Literature Review
Abstract
This Systematic Literature Review (SLR) analyzes the use of the BERT (Bidirectional Encoder Representations from Transformers) model for sentiment analysis of game reviews. The review identifies methods of implementing BERT, evaluates its effectiveness against other models, and explores challenges and opportunities in this context. Results show that BERT consistently outperforms traditional models, with an average accuracy improvement of 3-7%. Variations of the BERT architecture, adaptive fine-tuning techniques, and pre-processing tailored to gaming language proved effective. Practical applications include predicting game sales and identifying the features players prefer. Key challenges include high computational requirements and handling game-specific context. Future research directions include domain-specific model development, multi-modal integration, and improved interpretability. This SLR provides a foundation for further research on the use of advanced language models for sentiment analysis of game reviews.
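The abstract highlights fine-tuning BERT as the core technique behind these results. Below is a minimal sketch of that idea, assuming the Hugging Face transformers and PyTorch libraries and a hypothetical two-class (positive/negative) game-review corpus; the toy texts, labels, and hyperparameters are illustrative, not the paper's exact setup.

```python
# Minimal sketch: fine-tuning BERT for binary sentiment classification of
# game reviews with Hugging Face Transformers. Dataset and labels are
# hypothetical placeholders, not the corpus used in the reviewed studies.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

class ReviewDataset(Dataset):
    """Wraps raw review texts and 0/1 sentiment labels as model inputs."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 2 classes: negative / positive

# Toy examples; real work would load a labeled game-review corpus.
train_texts = ["Great gameplay, but the servers are terrible.",
               "Loved every minute of the campaign!"]
train_labels = [0, 1]  # 0 = negative, 1 = positive
train_ds = ReviewDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(output_dir="bert-game-sentiment",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)  # small LR, typical for BERT
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

A small learning rate (around 2e-5) over a few epochs is the conventional starting point for BERT fine-tuning; the domain adaptation and game-specific pre-processing the abstract mentions would happen before this step, for example by continuing pre-training on unlabeled game reviews or normalizing gaming slang in the input texts.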
License
Copyright (c) 2024 Seminar Nasional Sistem Informasi dan Teknologi (SISFOTEK)
This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0).