APPLICATION OF ARTIFICIAL INTELLIGENCE FOR ANALYSIS OF THE FINANCIAL SECURITY OF AN ENTERPRISE
Abstract
Modern enterprises operate under conditions of high uncertainty, rapid market change and the growing influence of digital transformation, which significantly complicates the timely identification of financial instability. Traditional statistical approaches to bankruptcy prediction often fail to capture complex nonlinear relationships between financial indicators and therefore offer limited practical value for management decision-making. Machine learning models improve predictive accuracy, but their practical implementation is constrained by low interpretability, as such models frequently behave as "black boxes". This study proposes an explainable artificial intelligence approach to bankruptcy prediction and financial security assessment of Ukrainian enterprises based on financial and digital indicators. The predictive core is the XGBoost algorithm, which demonstrated the highest classification accuracy among the machine learning methods compared. To ensure transparency of the results, explainable AI techniques were applied, including feature importance analysis and Partial Dependence Plots. These tools made it possible to identify key risk factors and to determine the direction, strength and threshold effects of their impact across the dataset. The revealed dependencies correspond to economic logic and allow statistical patterns to be transformed into meaningful managerial conclusions. In addition, the research extends the interpretability stage through the integration of large language models. The proposed procedure includes generating structured descriptions of indicators, constructing Partial Dependence Plots, forming a specialized prompt and transferring the analytical outputs to a large language model for automated economic interpretation. This significantly simplifies analytical work and makes advanced analytics more accessible to enterprises with limited analytical resources.
The obtained results confirm that the combination of machine learning models, explainable AI methods and large language models creates a new level of practical applicability of predictive analytics. The proposed approach enables early detection of financial risks, identification of critical indicator ranges and prioritization of anti-crisis managerial actions, transforming artificial intelligence from a predictive instrument into a decision-support tool for financial security management.
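The LLM-based interpretation step can be sketched as a prompt-assembly routine: structured indicator descriptions and PDP summaries are packed into a single prompt that is then passed to any large language model environment. The function name, indicator names and PDP summaries below are illustrative assumptions, not the authors' actual implementation, and no specific LLM API is invoked.

```python
def build_interpretation_prompt(indicators, pdp_summaries):
    """Assemble an LLM prompt from indicator descriptions and PDP summaries."""
    lines = [
        "You are a financial analyst. Interpret the model findings below.",
        "",
        "Indicators:",
    ]
    for name, description in indicators.items():
        lines.append(f"- {name}: {description}")
    lines += ["", "Partial dependence findings:"]
    for name, summary in pdp_summaries.items():
        lines.append(f"- {name}: {summary}")
    lines += ["", "Explain the economic meaning of each dependency and "
                  "suggest managerial actions."]
    return "\n".join(lines)

prompt = build_interpretation_prompt(
    {"debt_ratio": "total liabilities / total assets"},
    {"debt_ratio": "predicted bankruptcy risk rises sharply above 0.6"},
)
```

The resulting string is model-agnostic: it can be sent to any LLM chat endpoint, which is what makes the approach accessible to enterprises without in-house analytical tooling.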

