Pub Date: 2025-08-01. Epub Date: 2024-03-25. DOI: 10.1089/big.2023.0130
Zhenzhen Yang, Zelong Lin, Yongpeng Yang, Jiaqi Li
Link prediction, which has important applications in many fields, predicts the likelihood of a link between two nodes in a graph. Link prediction based on Graph Neural Networks (GNNs) obtains node representations and graph structure through a GNN and has attracted growing attention recently. However, existing GNN-based link prediction approaches have some shortcomings. On the one hand, because a graph contains different types of nodes, aggregating information and learning a node's representation from its neighbor nodes is challenging. On the other hand, the attention mechanism has been an effective instrument for enhancing link prediction performance; however, the traditional attention mechanism is always monotonic for query nodes, which limits its benefit to link prediction. To address these two problems, a Dual-Path Graph Neural Network (DPGNN) for link prediction is proposed in this study. First, we propose a novel Local Random Features Augmentation for Graph Convolution Network as the baseline of one path. Meanwhile, Graph Attention Network version 2 (GATv2), based on a dynamic attention mechanism, is adopted as the baseline of the other path. We then capture more meaningful node representations and more accurate link features by concatenating the information of these two paths. In addition, we propose an adaptive auxiliary module to better balance the weights of auxiliary tasks, which brings further benefit to link prediction. Finally, extensive experiments verify the effectiveness and superiority of the proposed DPGNN for link prediction.
{"title":"Dual-Path Graph Neural Network with Adaptive Auxiliary Module for Link Prediction.","authors":"Zhenzhen Yang, Zelong Lin, Yongpeng Yang, Jiaqi Li","doi":"10.1089/big.2023.0130","DOIUrl":"10.1089/big.2023.0130","url":null,"abstract":"<p><p>Link prediction, which has important applications in many fields, predicts the possibility of the link between two nodes in a graph. Link prediction based on Graph Neural Network (GNN) obtains node representation and graph structure through GNN, which has attracted a growing amount of attention recently. However, the existing GNN-based link prediction approaches possess some shortcomings. On the one hand, because a graph contains different types of nodes, it leads to a great challenge for aggregating information and learning node representation from its neighbor nodes. On the other hand, the attention mechanism has been an effect instrument for enhancing the link prediction performance. However, the traditional attention mechanism is always monotonic for query nodes, which limits its influence on link prediction. To address these two problems, a Dual-Path Graph Neural Network (DPGNN) for link prediction is proposed in this study. First, we propose a novel Local Random Features Augmentation for Graph Convolution Network as a baseline of one path. Meanwhile, Graph Attention Network version 2 based on dynamic attention mechanism is adopted as a baseline of the other path. And then, we capture more meaningful node representation and more accurate link features by concatenating the information of these two paths. In addition, we propose an adaptive auxiliary module for better balancing the weight of auxiliary tasks, which brings more benefit to link prediction. Finally, extensive experiments verify the effectiveness and superiority of our proposed DPGNN for link prediction.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"333-343"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extracting meaningful patterns of human mobility from accumulating trajectories is essential for understanding human behavior. However, previous works identify human mobility patterns based on the spatial co-occurrence of trajectories, ignoring the effect of activity content and leaving challenges in effectively extracting and understanding patterns. To bridge this gap, this study incorporates the activity content of trajectories to extract human mobility patterns and proposes a content-aware mobility pattern model. The model first embeds the activity content in a distributed continuous vector space by taking the point-of-interest as an agent, and then extracts representative and interpretable mobility patterns from human trajectory sets using a derived topic model. To investigate the performance of the proposed model, several evaluation metrics are developed, including pattern coherence, pattern similarity, and manual scoring. A real-world case study is conducted, and its experimental results show that the proposed model improves interpretability and helps in understanding mobility patterns. This study provides not only a novel solution and several evaluation metrics for human mobility patterns but also a methodological reference for fusing the content semantics of human activities into trajectory analysis and mining.
{"title":"Content-Aware Human Mobility Pattern Extraction.","authors":"Shengwen Li, Chaofan Fan, Tianci Li, Renyao Chen, Qingyuan Liu, Junfang Gong","doi":"10.1089/big.2022.0281","DOIUrl":"10.1089/big.2022.0281","url":null,"abstract":"<p><p>Extracting meaningful patterns of human mobility from accumulating trajectories is essential for understanding human behavior. However, previous works identify human mobility patterns based on the spatial co-occurrence of trajectories, which ignores the effect of activity content, leaving challenges in effectively extracting and understanding patterns. To bridge this gap, this study incorporates the activity content of trajectories to extract human mobility patterns, and proposes acontent-aware mobility pattern model. The model first embeds the activity content in distributed continuous vector space by taking point-of-interest as an agent and then extracts representative and interpretable mobility patterns from human trajectory sets using a derived topic model. To investigate the performance of the proposed model, several evaluation metrics are developed, including pattern coherence, pattern similarity, and manual scoring. A real-world case study is conducted, and its experimental results show that the proposed model improves interpretability and helps to understand mobility patterns. This study provides not only a novel solution and several evaluation metrics for human mobility patterns but also a method reference for fusing content semantics of human activities for trajectory analysis and mining.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"269-284"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-01. Epub Date: 2024-07-27. DOI: 10.1089/big.2023.0016
Yinuo Qian, Fuzhong Nian, Zheming Wang, Yabing Yao
Dynamic propagation affects changes in network structure, and different networks are affected by the iterative propagation of information to different degrees. The iterative propagation of information in a network changes the connection strength of the chain edges between nodes. Most studies on temporal networks build networks based on temporal characteristics, and the iterative propagation of information in a network can also reflect the temporal characteristics of network evolution. The change of network structure is a macro-level manifestation of these temporal characteristics, whereas the dynamics within the network are a micro-level manifestation. How to concretely visualize the changes in network structure driven by propagation dynamics is the focus of this article. The appearance of chain edges is a micro-level change of network structure, and the division of communities is a macro-level change. Based on this, node participation is proposed to quantify the influence of different users on information propagation in the network, and it is simulated in different types of networks. By analyzing the iterative propagation of information, weighted versions of different networks based on this propagation are constructed. Finally, the chain edges and community division in the network are analyzed to quantify the influence of network propagation on complex network structure.
{"title":"Research on the Influence of Information Iterative Propagation on Complex Network Structure.","authors":"Yinuo Qian, Fuzhong Nian, Zheming Wang, Yabing Yao","doi":"10.1089/big.2023.0016","DOIUrl":"10.1089/big.2023.0016","url":null,"abstract":"<p><p>Dynamic propagation will affect the change of network structure. Different networks are affected by the iterative propagation of information to different degrees. The iterative propagation of information in the network changes the connection strength of the chain edge between nodes. Most studies on temporal networks build networks based on time characteristics, and the iterative propagation of information in the network can also reflect the time characteristics of network evolution. The change of network structure is a macromanifestation of time characteristics, whereas the dynamics in the network is a micromanifestation of time characteristics. How to concretely visualize the change of network structure influenced by the characteristics of propagation dynamics has become the focus of this article. The appearance of chain edge is the micro change of network structure, and the division of community is the macro change of network structure. Based on this, the node participation is proposed to quantify the influence of different users on the information propagation in the network, and it is simulated in different types of networks. By analyzing the iterative propagation of information, the weighted network of different networks based on the iterative propagation of information is constructed. Finally, the chain edge and community division in the network are analyzed to achieve the purpose of quantifying the influence of network propagation on complex network structure.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"319-332"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141789804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study aims to promote the informatization of hospital human resource management and advance the application of hospital information technology. The application of deep learning (DL) technologies in health care, particularly in hospital settings, has shown significant promise in enhancing decision-making processes for nursing staff. A hospital management decision support system based on data warehouse theory and business intelligence technology is utilized to achieve multidimensional analysis and display of data. This research explores the development and implementation of a DL-Based Clinical Decision Support System (DL-CDSS) tailored for nurses in hospitals. The DL-CDSS utilizes advanced neural network architectures to analyze complex clinical data, including patient records, vital signs, and diagnostic reports, aiming to assist nurses in making informed decisions regarding patient care. By leveraging large-scale datasets from Hospital Information Systems, the DL-CDSS provides real-time recommendations for treatment plans, medication administration, and patient monitoring. The system's effectiveness is demonstrated through improved accuracy in clinical decision-making, reduction in medication errors, and optimized workflow efficiency. The system analyzes and displays nurses' data from hospitals in terms of quantity, distribution, structure, forecasting, analysis reports, and peer comparisons, providing head nurses with multilevel, multiperspective data mining analysis results. Challenges such as data integration, model interpretability, and user interface design are addressed to ensure seamless integration into nursing practice. The article concludes with insights into the potential benefits of the DL-CDSS in promoting patient safety, enhancing health care quality, and supporting nursing professionals in delivering optimal care.
{"title":"Deep Learning-Based Decision Support System for Nurse Staff in Hospitals.","authors":"Jieyu Chen, Feilong He, Lihua Tang, Lingli Gu","doi":"10.1089/big.2024.0122","DOIUrl":"https://doi.org/10.1089/big.2024.0122","url":null,"abstract":"<p><p>To promote the informatization management of hospital human resources and advance the application of hospital information technology. The application of deep learning (DL) technologies in health care, particularly in hospital settings, has shown significant promise in enhancing decision-making processes for nurse staff. Utilizing a hospital management decision support system based on data warehouse theory and business intelligence technology to achieve multidimensional analysis and display of data. This research explores the development and implementation of a DL-Based Clinical Decision Support System (DL-CDSS) tailored for nurses in hospitals. DL-CDSS utilizes advanced neural network architectures to analyze complex clinical data, including patient records, vital signs, and diagnostic reports, aiming to assist nurses in making informed decisions regarding patient care. By leveraging large-scale datasets from Hospital Information Systems, DL-CDSS provides real-time recommendations for treatment plans, medication administration, and patient monitoring. The system's effectiveness is demonstrated through improved accuracy in clinical decision-making, reduction in medication errors, and optimized workflow efficiency. The system analyzes and displays nurses data from hospitals in terms of quantity, distribution, structure, forecasting, analysis reports, and peer comparisons, providing head nurses with multilevel, multiperspective data mining analysis results. Challenges such as data integration, model interpretability, and user interface design are addressed to ensure seamless integration into nursing practice, also concludes with insights into the potential benefits of DL-CDSS in promoting patient safety, enhancing health care quality, and supporting nursing professionals in delivering optimal care.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144210204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01. Epub Date: 2023-12-20. DOI: 10.1089/big.2023.0015
Pingkan Mayosi Fitriana, Jumadil Saputra, Zairihan Abdul Halim
Spanning developing and industrialized nations, the G20 economies account for roughly two-thirds of the world's population and are the largest economies globally. Public emergencies have occasionally arisen due to the rapid global spread of COVID-19, impacting many people's lives, especially in G20 countries. This study therefore investigates the impact of the COVID-19 pandemic on stock market performance in G20 countries. It uses daily stock market data of G20 countries from January 1, 2019, to June 30, 2020. The stock market data were divided into G7 countries and non-G7 countries and analyzed using a Long Short-Term Memory with a Recurrent Neural Network (LSTM-RNN) approach. The results indicate a gap between the actual stock market index and the forecasted time series that would have occurred without COVID-19. Owing to movement restrictions, this study found that stock markets in six countries, namely Argentina, China, South Africa, Turkey, Saudi Arabia, and the United States, were affected negatively. In addition, movement restrictions in the G7 countries (excluding the United States) and the non-G7 countries (excluding Argentina, China, South Africa, Turkey, and Saudi Arabia) significantly impact stock market performance. In general, the LSTM predictions are accurate in relative terms, except for stock market performance in the United Kingdom, the Republic of Korea, South Africa, and Spain. Stock market performance in the United Kingdom and Spain declined significantly during and after the occurrence of COVID-19. These results indicate that the COVID-19 pandemic considerably influenced the stock markets of 14 G20 countries, while impacting the 6 remaining countries less severely. In conclusion, our empirical evidence shows that the pandemic had restricted effects on stock market performance in G20 countries.
{"title":"The Impact of the COVID-19 Pandemic on Stock Market Performance in G20 Countries: Evidence from Long Short-Term Memory with a Recurrent Neural Network Approach.","authors":"Pingkan Mayosi Fitriana, Jumadil Saputra, Zairihan Abdul Halim","doi":"10.1089/big.2023.0015","DOIUrl":"10.1089/big.2023.0015","url":null,"abstract":"<p><p>In light of developing and industrialized nations, the G20 economies account for a whopping two-thirds of the world's population and are the largest economies globally. Public emergencies have occasionally arisen due to the rapid spread of COVID-19 globally, impacting many people's lives, especially in G20 countries. Thus, this study is written to investigate the impact of the COVID-19 pandemic on stock market performance in G20 countries. This study uses daily stock market data of G20 countries from January 1, 2019 to June 30, 2020. The stock market data were divided into G7 countries and non-G7 countries. The data were analyzed using Long Short-Term Memory with a Recurrent Neural Network (LSTM-RNN) approach. The result indicated a gap between the actual stock market index and a forecasted time series that would have happened without COVID-19. Owing to movement restrictions, this study found that stock markets in six countries, including Argentina, China, South Africa, Turkey, Saudi Arabia, and the United States, are affected negatively. Besides that, movement restrictions in the G7 countries, excluding the United States, and the non-G20 countries, excluding Argentina, China, South Africa, Turkey, and Saudi, significantly impact the stock market performance. Generally, LSTM prediction estimates relative terms, except for stock market performance in the United Kingdom, the Republic of Korea, South Africa, and Spain. The stock market performance in the United Kingdom and Spain countries has significantly reduced during and after the occurrence of COVID-19. It indicates that the COVID-19 pandemic considerably influenced the stock markets of 14 G20 countries, whereas less severely impacting 6 remaining countries. In conclusion, our empirical evidence showed that the pandemic had restricted effects on the stock market performance in G20 countries.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"219-242"},"PeriodicalIF":2.6,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138832891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01. Epub Date: 2023-05-08. DOI: 10.1089/big.2022.0302
Asefeh Asemi, Adeleh Asemi, Andrea Ko
In this research, we propose an automatic recommender system that provides investment-type suggestions to investors. The system is based on a new intelligent approach using an adaptive neuro-fuzzy inference system (ANFIS) that works with four key decision factors (KDFs) of potential investors: system value, environmental awareness factors, expectation of high return, and expectation of low return. The proposed system provides a new model for investment recommender systems (IRSs) based on KDF data and data related to the type of investment. Fuzzy neural inference and investment-type selection are used to provide advice and support the investor's decision. The system also works with incomplete data, and expert opinions can be applied based on feedback provided by investors who use the system. The proposed system reliably provides investment-type suggestions and can predict investors' investment decisions based on their KDFs when selecting among different investment types. It uses the K-means technique in JMP for preprocessing the data and ANFIS for evaluating the data. We also compare the proposed system with other existing IRSs and evaluate the system's accuracy and effectiveness using the root mean squared error (RMSE) method. Overall, the proposed system is an effective and reliable IRS that can be used by potential investors to make better investment decisions.
{"title":"Investment Recommender System Model Based on the Potential Investors' Key Decision Factors.","authors":"Asefeh Asemi, Adeleh Asemi, Andrea Ko","doi":"10.1089/big.2022.0302","DOIUrl":"10.1089/big.2022.0302","url":null,"abstract":"<p><p>In this research, we propose an automatic recommender system for providing investment-type suggestions offered to investors. This system is based on a new intelligent approach using an adaptive neuro-fuzzy inference system (ANFIS) that works with four potential investors' key decision factors (KDFs), which are system value, environmental awareness factors, the expectation of high return, and expectation of low return. The proposed system provides a new model for investment recommender systems (IRSs), which is based on the data of KDFs, and the data related to the type of investment. The solution of fuzzy neural inference and choosing the type of investment is used to provide advice and support the investor's decision. This system also works with incomplete data. It is also possible to apply expert opinions based on feedback provided by investors who use the system. The proposed system is a reliable system for providing suggestions for the type of investment. It can predict the investors' investment decisions based on their KDFs in the selection of different investment types. This system uses the K-means technique in JMP for preprocessing the data and ANFIS for evaluating the data. We also compare the proposed system with other existing IRSs and evaluate the system's accuracy and effectiveness using the root mean squared error method. Overall, the proposed system is an effective and reliable IRS that can be used by potential investors to make better investment decisions.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"197-218"},"PeriodicalIF":2.6,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9432264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The stock market is heavily influenced by global sentiment, which is full of uncertainty and is characterized by extreme values and by linear and nonlinear variables. High-frequency data generally refer to data collected at a very fast rate, by day, hour, minute, or even second. Stock prices fluctuate rapidly, and even to extremes, along with changes in the variables that affect stock fluctuations. Research on stock market investment risk estimation that can identify extreme values, handle nonlinearity, remain reliable in multivariate cases, and use high-frequency data is therefore very important. The extreme value theory (EVT) approach can detect extreme values; this method is reliable in univariate cases and very complicated in multivariate cases. The purpose of this research was to collect, characterize, and analyze the investment risk estimation literature to identify research gaps. The literature was selected by applying the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and sourced from the Sciencedirect.com and Scopus databases. A total of 1107 articles were produced from the search at the identification stage, reduced to 236 at the eligibility stage, and to 90 articles in the included-studies set. The bibliometric networks were visualized using the VOSviewer software, and the main keyword used as the search criterion was "VaR." The visualization showed that EVT, Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models, and historical simulation are the models most often used to estimate investment risk, while machine learning (ML)-based investment risk estimation models are rarely applied. There has been no research using a combination of EVT and ML to estimate investment risk. The results showed that hybrid models produce better Value-at-Risk (VaR) accuracy under uncertainty and nonlinear conditions. Generally, models only use daily return data as model input. Based on these research gaps, a hybrid model framework for estimating risk measures is proposed using a combination of EVT and ML, with multivariable and high-frequency data to identify extreme values in the distribution of the data. The goal is to produce an accurate and flexible estimated risk value against extreme changes and shocks in the stock market. Mathematics Subject Classification: 60G25; 62M20; 6245; 62P05; 91G70.
{"title":"Modeling of Machine Learning-Based Extreme Value Theory in Stock Investment Risk Prediction: A Systematic Literature Review.","authors":"Melina Melina, Sukono, Herlina Napitupulu, Norizan Mohamed","doi":"10.1089/big.2023.0004","DOIUrl":"10.1089/big.2023.0004","url":null,"abstract":"<p><p>The stock market is heavily influenced by global sentiment, which is full of uncertainty and is characterized by extreme values and linear and nonlinear variables. High-frequency data generally refer to data that are collected at a very fast rate based on days, hours, minutes, and even seconds. Stock prices fluctuate rapidly and even at extremes along with changes in the variables that affect stock fluctuations. Research on investment risk estimation in the stock market that can identify extreme values is nonlinear, reliable in multivariate cases, and uses high-frequency data that are very important. The extreme value theory (EVT) approach can detect extreme values. This method is reliable in univariate cases and very complicated in multivariate cases. The purpose of this research was to collect, characterize, and analyze the investment risk estimation literature to identify research gaps. The literature used was selected by applying the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and sourced from Sciencedirect.com and Scopus databases. A total of 1107 articles were produced from the search at the identification stage, reduced to 236 in the eligibility stage, and 90 articles in the included studies set. The bibliometric networks were visualized using the VOSviewer software, and the main keyword used as the search criteria is \"VaR.\" The visualization showed that EVT, the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models, and historical simulation are models often used to estimate the investment risk; the application of the machine learning (ML)-based investment risk estimation model is low. There has been no research using a combination of EVT and ML to estimate the investment risk. The results showed that the hybrid model produced better Value-at-Risk (VaR) accuracy under uncertainty and nonlinear conditions. Generally, models only use daily return data as model input. Based on research gaps, a hybrid model framework for estimating risk measures is proposed using a combination of EVT and ML, using multivariable and high-frequency data to identify extreme values in the distribution of data. The goal is to produce an accurate and flexible estimated risk value against extreme changes and shocks in the stock market. Mathematics Subject Classification: 60G25; 62M20; 6245; 62P05; 91G70.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"161-180"},"PeriodicalIF":2.6,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139486846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01. Epub Date: 2024-01-29. DOI: 10.1089/big.2022.0264
Sajid Yousuf Bhat, Muhammad Abulaish
Owing to the increasing size of real-world networks, their processing using classical techniques has become infeasible. The amount of storage and central processing unit time required for processing large networks is far beyond the capabilities of a high-end computing machine. Moreover, real-world network data are generally distributed in nature because they are collected and stored on distributed platforms. This has popularized the use of MapReduce, a distributed data processing framework, for analyzing real-world network data. Existing MapReduce-based methods for connected-component detection mainly struggle to minimize the number of MapReduce rounds and the amount of data generated and forwarded to subsequent rounds. This article presents an efficient MapReduce-based approach for finding connected components, which does not forward the complete set of connected components to subsequent rounds; instead, it writes them to the Hadoop Distributed File System as soon as they are found, reducing the amount of data forwarded to subsequent rounds. It also presents an application of the proposed method in contact tracing. The proposed method is evaluated on several network data sets and compared with two state-of-the-art methods. The empirical results reveal that the proposed method performs significantly better and scales to find connected components in large-scale networks.
{"title":"A MapReduce-Based Approach for Fast Connected Components Detection from Large-Scale Networks.","authors":"Sajid Yousuf Bhat, Muhammad Abulaish","doi":"10.1089/big.2022.0264","DOIUrl":"10.1089/big.2022.0264","url":null,"abstract":"<p><p>Owing to increasing size of the real-world networks, their processing using classical techniques has become infeasible. The amount of storage and central processing unit time required for processing large networks is far beyond the capabilities of a high-end computing machine. Moreover, real-world network data are generally distributed in nature because they are collected and stored on distributed platforms. This has popularized the use of the MapReduce, a distributed data processing framework, for analyzing real-world network data. Existing MapReduce-based methods for connected components detection mainly struggle to minimize the number of MapReduce rounds and the amount of data generated and forwarded to the subsequent rounds. This article presents an efficient MapReduce-based approach for finding connected components, which does not forward the complete set of connected components to the subsequent rounds; instead, it writes them to the Hadoop Distributed File System as soon as they are found to reduce the amount of data forwarded to the subsequent rounds. It also presents an application of the proposed method in contact tracing. The proposed method is evaluated on several network data sets and compared with two state-of-the-art methods. The empirical results reveal that the proposed method performs significantly better and is scalable to find connected components in large-scale networks.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"243-268"},"PeriodicalIF":2.6,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139571864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01. Epub Date: 2024-02-13. DOI: 10.1089/big.2023.0026
Jumadil Saputra, Kasypi Mokhtar, Anuar Abu Bakar, Siti Marsila Mhd Ruslan
In the last 2 years, there has been a significant upswing in oil prices, leading to a decline in economic activity and demand. This trend holds substantial implications for the global economy, particularly within the emerging business landscape. Among the influential risk factors impacting the returns of shipping stocks, none looms larger than the volatility of oil prices. Yet only a limited number of studies have explored the complex relationship between oil price shocks and the dynamics of the liner shipping industry, with a specific focus on uncertainty linkages and potential diversification strategies. This study investigates the co-movements and asymmetric associations between oil prices (specifically, West Texas Intermediate and Brent) and the stock returns of three prominent shipping companies from Germany, South Korea, and Taiwan. The results unequivocally highlight the indispensable role of oil prices in shaping both short-term and long-term shipping stock returns. In addition, the research underscores the statistical significance of exchange rates and interest rates in influencing these returns, with their effects varying across different time horizons. Notably, shipping stock prices exhibit heightened sensitivity to positive movements in oil prices, while exchange rates and interest rates exert contrasting impacts, one positive and the other negative. These findings collectively illuminate the profound influence of market sentiment regarding crucial economic indicators within the global shipping sector.
{"title":"Investigating the Co-Movement and Asymmetric Relationships of Oil Prices on the Shipping Stock Returns: Evidence from Three Shipping-Flagged Companies from Germany, South Korea, and Taiwan.","authors":"Jumadil Saputra, Kasypi Mokhtar, Anuar Abu Bakar, Siti Marsila Mhd Ruslan","doi":"10.1089/big.2023.0026","DOIUrl":"10.1089/big.2023.0026","url":null,"abstract":"<p><p>In the last 2 years, there has been a significant upswing in oil prices, leading to a decline in economic activity and demand. This trend holds substantial implications for the global economy, particularly within the emerging business landscape. Among the influential risk factors impacting the returns of shipping stocks, none looms larger than the volatility in oil prices. Yet, only a limited number of studies have explored the complex relationship between oil price shocks and the dynamics of the liner shipping industry, with specific focus on uncertainty linkages and potential diversification strategies. This study aims to investigate the co-movements and asymmetric associations between oil prices (specifically, West Texas Intermediate and Brent) and the stock returns of three prominent shipping companies from Germany, South Korea, and Taiwan. The results unequivocally highlight the indispensable role of oil prices in shaping both short-term and long-term shipping stock returns. In addition, the research underscores the statistical significance of exchange rates and interest rates in influencing these returns, with their effects varying across different time horizons. Notably, shipping stock prices exhibit heightened sensitivity to positive movements in oil prices, while exchange rates and interest rates exert contrasting impacts, one being positive and the other negative. These findings collectively illuminate the profound influence of market sentiment regarding crucial economic indicators within the global shipping sector.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"181-196"},"PeriodicalIF":2.6,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139736755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Organizations have been investing in analytics relying on internal and external data to gain a competitive advantage. However, the legal and regulatory acts imposed nationally and internationally have become a challenge, especially for highly regulated sectors such as health or finance/banking. Data handlers such as Facebook and Amazon have already sustained considerable fines or are under investigation due to violations of data governance. The era of big data has further intensified the challenges of minimizing the risk of data loss by introducing the dimensions of Volume, Velocity, and Variety into confidentiality. Although Volume and Velocity have been extensively researched, Variety, "the ugly duckling" of big data, is often neglected and difficult to solve, thus increasing the risk of data exposure and data loss. To mitigate the risk of data exposure and data loss, this article proposes a framework that utilizes algorithmic classification and workflow capabilities to provide a consistent approach toward data evaluation across organizations. A rule-based system implementing the corporate data classification policy minimizes the risk of exposure by helping users identify the approved guidelines and enforce them quickly. The framework includes an exception-handling process with appropriate approval for extenuating circumstances. The system was implemented in a proof-of-concept working prototype to showcase the capabilities and provide hands-on experience. The information system was evaluated and accredited by a diverse audience of academics and senior business executives in the fields of security and data management. The audience had an average experience of ∼25 years and amassed a total experience of almost three centuries (294 years). The results confirmed that the 3Vs are of concern and that Variety, cited by 90% of the commentators, is the most troubling. In addition, approximately 60% on average confirmed that appropriate policies, procedures, and prerequisites for classification are in place, while implementation tools are lagging.
{"title":"Big Data Confidentiality: An Approach Toward Corporate Compliance Using a Rule-Based System.","authors":"Georgios Vranopoulos, Nathan Clarke, Shirley Atkinson","doi":"10.1089/big.2022.0201","DOIUrl":"10.1089/big.2022.0201","url":null,"abstract":"<p><p>Organizations have been investing in analytics relying on internal and external data to gain a competitive advantage. However, the legal and regulatory acts imposed nationally and internationally have become a challenge, especially for highly regulated sectors such as health or finance/banking. Data handlers such as Facebook and Amazon have already sustained considerable fines or are under investigation due to violations of data governance. The era of big data has further intensified the challenges of minimizing the risk of data loss by introducing the dimensions of Volume, Velocity, and Variety into confidentiality. Although Volume and Velocity have been extensively researched, Variety, \"the ugly duckling\" of big data, is often neglected and difficult to solve, thus increasing the risk of data exposure and data loss. In mitigating the risk of data exposure and data loss in this article, a framework is proposed to utilize algorithmic classification and workflow capabilities to provide a consistent approach toward data evaluations across the organizations. A rule-based system, implementing the corporate data classification policy, will minimize the risk of exposure by facilitating users to identify the approved guidelines and enforce them quickly. The framework includes an exception handling process with appropriate approval for extenuating circumstances. The system was implemented in a proof of concept working prototype to showcase the capabilities and provide a hands-on experience. The information system was evaluated and accredited by a diverse audience of academics and senior business executives in the fields of security and data management. The audience had an average experience of ∼25 years and amasses a total experience of almost three centuries (294 years). The results confirmed that the 3Vs are of concern and that Variety, with a majority of 90% of the commentators, is the most troubling. In addition to that, with an approximate average of 60%, it was confirmed that appropriate policies, procedure, and prerequisites for classification are in place while implementation tools are lagging.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"90-110"},"PeriodicalIF":2.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71415222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}