Using Generative AI to Build Dynamic Financial Forecasting Dashboards
Abstract