In this study, we investigate different epidemic control scenarios through theoretical analysis and numerical simulations. To account for two important types of control at the early ascending stage of an outbreak, nonmedical interventions and medical treatments, a compartmental model is considered in which the first control lowers the disease transmission rate through behavioral changes and the second control shortens the period of infectiousness by means of antiviral medications and other forms of medical care. In all experiments, the implementation of control strategies reduces the daily cumulative number of cases and successfully “flattens the curve”. The reduction in cumulative cases is achieved by eliminating or delaying new cases. This delay is particularly valuable, as it gives public health organizations more time to advance antiviral treatments and devise alternative preventive measures. The main theoretical result of the paper, Theorem 1, shows that the two optimal control functions may be increasing initially; beyond a certain point, however, both controls decline (possibly causing the number of newly infected people to grow). The numerical simulations confirm the theoretical findings, indicating that, ideally, around the time that early interventions become less effective, the control strategy should be upgraded with new and improved tools, such as vaccines, therapeutics, testing, and air ventilation, in order to successfully battle the virus going forward.
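The abstract does not state the specific compartmental model, so the following is a minimal sketch of the idea under stated assumptions: an SIR-type model in which a control u1 scales down the transmission rate (nonmedical interventions) and a control u2 shortens the infectious period (medical treatment). The function name, parameter values, and the constant-control simplification are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def simulate_sir_with_controls(beta=0.3, gamma=0.1, u1=0.4, u2=0.5,
                               N=1_000_000, I0=10, days=180, dt=0.1):
    """Forward-Euler simulation of a controlled SIR model.

    u1 in [0, 1): fractional reduction of the transmission rate beta
                  (nonmedical interventions / behavioral changes).
    u2 >= 0:      relative increase of the removal rate gamma
                  (medical treatment shortening the infectious period).
    Both controls are held constant here; in the optimal-control setting
    they would be time-dependent functions chosen to minimize a cost.
    """
    S, I, R = N - I0, float(I0), 0.0
    cumulative_cases = [float(I0)]
    for _ in range(int(days / dt)):
        new_infections = (1.0 - u1) * beta * S * I / N * dt
        recoveries = (1.0 + u2) * gamma * I * dt
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries
        cumulative_cases.append(cumulative_cases[-1] + new_infections)
    return np.array(cumulative_cases)

uncontrolled = simulate_sir_with_controls(u1=0.0, u2=0.0)
controlled = simulate_sir_with_controls(u1=0.4, u2=0.5)
print(f"final cumulative cases, no control:   {uncontrolled[-1]:,.0f}")
print(f"final cumulative cases, with control: {controlled[-1]:,.0f}")
```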
{"title":"Optimal Epidemic Control with Nonmedical and Medical Interventions","authors":"Alexandra Smirnova, Mona Baroonian, Xiaojing Ye","doi":"10.3390/math12182811","DOIUrl":"https://doi.org/10.3390/math12182811","url":null,"abstract":"In this study, we investigate different epidemic control scenarios through theoretical analysis and numerical simulations. To account for two important types of control at the early ascending stage of an outbreak, nonmedical interventions, and medical treatments, a compartmental model is considered with the first control aimed at lowering the disease transmission rate through behavioral changes and the second control set to lower the period of infectiousness by means of antiviral medications and other forms of medical care. In all experiments, the implementation of control strategies reduces the daily cumulative number of cases and successfully “flattens the curve”. The reduction in the cumulative cases is achieved by eliminating or delaying new cases. This delay is incredibly valuable, as it provides public health organizations with more time to advance antiviral treatments and devise alternative preventive measures. The main theoretical result of the paper, Theorem 1, concludes that the two optimal control functions may be increasing initially. However, beyond a certain point, both controls decline (possibly causing the number of newly infected people to grow). The numerical simulations conducted by the authors confirm theoretical findings, which indicates that, ideally, around the time that early interventions become less effective, the control strategy must be upgraded through the addition of new and improved tools, such as vaccines, therapeutics, testing, air ventilation, and others, in order to successfully battle the virus going forward.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"16 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate predictions of parking occupancy are vital for navigation and autonomous transport systems. This research introduces a deep learning model, AGCRU, which integrates Adaptive Graph Convolutional Networks (GCNs) with Gated Recurrent Units (GRUs) to predict on-street parking occupancy. Leveraging real-world data from Melbourne, the proposed model uses on-street parking sensors to capture both the temporal and the spatial dynamics of parking behavior. The AGCRU model is further enhanced with Points of Interest (POIs) and housing data to refine its predictive accuracy based on spatial relationships and parking habits. Notably, the model achieves a mean absolute error (MAE) of 0.0156 at 15 min, 0.0330 at 30 min, and 0.0558 at 60 min; the root mean square error (RMSE) values are 0.0244, 0.0665, and 0.1003 for these intervals, respectively, and the mean absolute percentage error (MAPE) values are 1.5561%, 3.3071%, and 5.5810%. These metrics, considerably lower than those of traditional and competing models, indicate the high efficiency and accuracy of the AGCRU model in an urban setting and establish it as a tool for enhancing urban parking management and planning strategies.
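The reported error metrics are standard and easy to reproduce for any forecast; the snippet below is a generic sketch of how MAE, RMSE, and MAPE would be computed from predicted occupancy rates (the sample arrays are placeholders, not the Melbourne sensor data).

```python
import numpy as np

def occupancy_metrics(y_true, y_pred, eps=1e-8):
    """MAE, RMSE, and MAPE (%) for predicted parking-occupancy rates."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err) / np.maximum(np.abs(y_true), eps))
    return mae, rmse, mape

# Placeholder occupancy rates in [0, 1] for a single prediction horizon.
y_true = np.array([0.62, 0.55, 0.71, 0.40, 0.83])
y_pred = np.array([0.60, 0.57, 0.69, 0.42, 0.81])
print("MAE=%.4f RMSE=%.4f MAPE=%.2f%%" % occupancy_metrics(y_true, y_pred))
```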
{"title":"Enhancing Predictive Models for On-Street Parking Occupancy: Integrating Adaptive GCN and GRU with Household Categories and POI Factors","authors":"Xiaohang Zhao, Mingyuan Zhang","doi":"10.3390/math12182823","DOIUrl":"https://doi.org/10.3390/math12182823","url":null,"abstract":"Accurate predictions of parking occupancy are vital for navigation and autonomous transport systems. This research introduces a deep learning mode, AGCRU, which integrates Adaptive Graph Convolutional Networks (GCNs) with Gated Recurrent Units (GRUs) for predicting on-street parking occupancy. By leveraging real-world data from Melbourne, the proposed model utilizes on-street parking sensors to capture both temporal and spatial dynamics of parking behaviors. The AGCRU model is enhanced with the inclusion of Points of Interest (POIs) and housing data to refine its predictive accuracy based on spatial relationships and parking habits. Notably, the model demonstrates a mean absolute error (MAE) of 0.0156 at 15 min, 0.0330 at 30 min, and 0.0558 at 60 min; root mean square error (RMSE) values are 0.0244, 0.0665, and 0.1003 for these intervals, respectively. The mean absolute percentage error (MAPE) for these intervals is 1.5561%, 3.3071%, and 5.5810%. These metrics, considerably lower than those from traditional and competing models, indicate the high efficiency and accuracy of the AGCRU model in an urban setting. This demonstrates the model as a tool for enhancing urban parking management and planning strategies.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"1 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breast cancer is one of the most lethal and widespread diseases affecting women worldwide. As a result, it is necessary to diagnose breast cancer accurately and efficiently using the most cost-effective and widely available methods. In this research, we demonstrated that synthetically created high-quality ultrasound data outperformed conventional augmentation strategies for efficiently diagnosing breast cancer with deep learning. We trained a deep-learning model based on the EfficientNet-B7 architecture using a large dataset of 3186 ultrasound images acquired from multiple publicly available sources, together with 10,000 synthetic images generated with a generative adversarial network (StyleGAN3). The model was trained with five-fold cross-validation and evaluated using four metrics: accuracy, recall, precision, and the F1 score. The results showed that integrating synthetically produced data into the training set increased classification performance, measured by the F1 score, from 88.72% to 92.01%, demonstrating the power of generative models to expand and improve the quality of training datasets in medical-imaging applications. Training the model on the larger dataset that includes synthetic images thus improved its performance by more than 3% over the genuine dataset with common augmentation. Various data augmentation procedures were also investigated to improve the training set’s diversity and representativeness. This research highlights the relevance of modern artificial intelligence and machine-learning technologies in medical imaging by providing an effective strategy for classifying ultrasound images, which may lead to increased diagnostic accuracy and better treatment options. The proposed techniques are highly promising and have strong potential for future clinical application in the diagnosis of breast cancer.
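The abstract only outlines the training protocol, so the following is a hedged sketch of one plausible way to fold synthetic samples into the real training split of each cross-validation fold while keeping validation purely real; the random placeholder features and the logistic-regression stand-in replace the EfficientNet-B7 classifier and StyleGAN3 generator, which are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Placeholder data: "real" features/labels and "synthetic" ones.
# In the paper's setting these would be ultrasound images fed to a CNN;
# a linear classifier on random features keeps the sketch runnable.
rng = np.random.default_rng(0)
X_real, y_real = rng.normal(size=(300, 64)), rng.integers(0, 2, 300)
X_syn,  y_syn  = rng.normal(size=(1000, 64)), rng.integers(0, 2, 1000)

scores = []
for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                          random_state=0).split(X_real, y_real):
    # Augment only the training portion with synthetic samples;
    # validation stays purely real so the metric is not inflated.
    X_train = np.vstack([X_real[train_idx], X_syn])
    y_train = np.concatenate([y_real[train_idx], y_syn])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores.append(f1_score(y_real[val_idx], clf.predict(X_real[val_idx])))

print("mean F1 over 5 folds: %.4f" % np.mean(scores))
```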
{"title":"Next-Generation Diagnostics: The Impact of Synthetic Data Generation on the Detection of Breast Cancer from Ultrasound Imaging","authors":"Hari Mohan Rai, Serhii Dashkevych, Joon Yoo","doi":"10.3390/math12182808","DOIUrl":"https://doi.org/10.3390/math12182808","url":null,"abstract":"Breast cancer is one of the most lethal and widespread diseases affecting women worldwide. As a result, it is necessary to diagnose breast cancer accurately and efficiently utilizing the most cost-effective and widely used methods. In this research, we demonstrated that synthetically created high-quality ultrasound data outperformed conventional augmentation strategies for efficiently diagnosing breast cancer using deep learning. We trained a deep-learning model using the EfficientNet-B7 architecture and a large dataset of 3186 ultrasound images acquired from multiple publicly available sources, as well as 10,000 synthetically generated images using generative adversarial networks (StyleGAN3). The model was trained using five-fold cross-validation techniques and validated using four metrics: accuracy, recall, precision, and the F1 score measure. The results showed that integrating synthetically produced data into the training set increased the classification accuracy from 88.72% to 92.01% based on the F1 score, demonstrating the power of generative models to expand and improve the quality of training datasets in medical-imaging applications. This demonstrated that training the model using a larger set of data comprising synthetic images significantly improved its performance by more than 3% over the genuine dataset with common augmentation. Various data augmentation procedures were also investigated to improve the training set’s diversity and representativeness. This research emphasizes the relevance of using modern artificial intelligence and machine-learning technologies in medical imaging by providing an effective strategy for categorizing ultrasound images, which may lead to increased diagnostic accuracy and optimal treatment options. The proposed techniques are highly promising and have strong potential for future clinical application in the diagnosis of breast cancer.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"7 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, by extending some $L^p$-norm inequalities to analogous inequalities for the Orlicz norm ($L^\Phi$-norm), we provide equivalent conditions for composition operators to have the shadowing property on the Orlicz space $L^\Phi(\mu)$. Additionally, we show that for composition operators on Orlicz spaces, the concepts of generalized hyperbolicity and the shadowing property are equivalent. These results extend similar findings on $L^p$-spaces to Orlicz spaces.
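For context, the Orlicz space $L^\Phi(\mu)$ mentioned above is typically equipped with the Luxemburg norm; the following standard definition (a background fact, not taken from the paper) shows the sense in which it generalizes the $L^p$ norm:

```latex
\[
  \|f\|_{L^{\Phi}(\mu)}
    \;=\;
    \inf\Bigl\{\, k>0 \;:\; \int_X \Phi\!\Bigl(\frac{|f|}{k}\Bigr)\,d\mu \le 1 \Bigr\},
\]
% where \Phi is a Young function; the choice \Phi(t)=t^p recovers the usual
% L^p norm, which is why L^p-space results appear as a special case of the
% Orlicz-space results above.
```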
{"title":"Orlicz Spaces and Their Hyperbolic Composition Operators","authors":"Mohammed Said Al Ghafri, Yousef Estaremi, Zhidong Huang","doi":"10.3390/math12182809","DOIUrl":"https://doi.org/10.3390/math12182809","url":null,"abstract":"In this paper, by extending some Lp-norm inequalities to similar inequalities for Orlicz space (LΦ-norm), we provide equivalent conditions for composition operators to have the shadowing property on the Orlicz space LΦ(μ). Additionally, we show that for composition operators on Orlicz spaces, the concepts of generalized hyperbolicity and the shadowing property are equivalent. These results extend similar findings on Lp-spaces to Orlicz spaces.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"59 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article presents a causal modeling approach for analyzing the processes of an academic institution. Academic processes consist of activities that are treated as self-managed systems and are defined as management transactions (MTs). The purpose of this article is to present a method for the causal modeling of organizational processes that helps to determine the internal model of the process under consideration, its activities, and the causal dependencies among processes in the management hierarchy of the institution, as well as the horizontal and vertical coordination interactions and their content. Internal models of the identified activities were created in accordance with the MT framework. In the second step, a taxonomy of characteristics is derived from the causal model; it helps to systematize process quality assessment and ensures the completeness of the characteristics and indicators. Predefined structures of characteristic types form the basis of activity content description templates. Based on the proposed method, two causal models are created: a “to-be” causal model of the target study process (based on expert knowledge) and an “as-is” documented model of the existing study process, which is used to evaluate the quality of the study process. The principles of, and examples for, comparing the created “to-be” causal model with the existing study process monitoring method are presented, enabling the detection of shortcomings in the existing method for assessing academic performance. Causal modeling allows existing interactions to be rethought and the interactions necessary to improve the quality of studies to be identified. The comparison based on causal modeling allows for a systematic analysis of regulations and the consistent identification of new characteristics (indicators) that evaluate relevant aspects of academic processes and activities.
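As a purely hypothetical illustration of the management-transaction view (the field names below are my own, not the authors' taxonomy of characteristics), a causal model of this kind could be held as a small set of typed records:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    """A self-managed activity inside an academic process (one MT)."""
    name: str
    inputs: List[str] = field(default_factory=list)      # information received
    outputs: List[str] = field(default_factory=list)     # information produced
    indicators: List[str] = field(default_factory=list)  # quality characteristics

@dataclass
class ManagementTransaction:
    """A process activity together with its causal coordination links."""
    process: str
    activity: Activity
    vertical_links: List[str] = field(default_factory=list)    # management hierarchy
    horizontal_links: List[str] = field(default_factory=list)  # peer processes

# Example fragment of a "to-be" study-process model (illustrative content only).
grading = Activity("Assessment of student work",
                   inputs=["submitted work", "grading criteria"],
                   outputs=["grades", "feedback"],
                   indicators=["on-time grading rate"])
mt = ManagementTransaction("Study process", grading,
                           vertical_links=["Programme committee"],
                           horizontal_links=["Module planning"])
print(mt.process, "->", mt.activity.name)
```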
{"title":"Causal Modeling of Academic Activity and Study Process Management","authors":"Saulius Gudas, Vitalijus Denisovas, Jurij Tekutov","doi":"10.3390/math12182810","DOIUrl":"https://doi.org/10.3390/math12182810","url":null,"abstract":"This article presents a causal modeling approach for analyzing the processes of an academic institution. Academic processes consist of activities that are considered self-managed systems and are defined as management transactions (MTs). The purpose of this article is to present a method of causal modeling of organizational processes, which helps to determine the internal model of the current process under consideration, its activities, and the processes’ causal dependencies in the management hierarchy of the institution, as well as horizontal and vertical coordination interactions and their content. Internal models of the identified activities were created, corresponding to the MT framework. In the second step, based on the causal model, a taxonomy of characteristics is presented, which helps to systematize the process quality assessment and ensures the completeness of the characteristics and indicators. Predefined structures of characteristic types are the basis of activity content description templates. Based on the proposed method, two causal models are created: the “to-be” causal model of the target study process (based on expert knowledge) and the “as-is” documented (existing) model of the study process used to evaluate the study process’s quality. The principles and examples of comparing the created “to-be” causal model with the existing study process monitoring method are presented, enabling the detection of the shortcomings in the existing method for assessing academic performance. Causal modeling allows for the rethinking of existing interactions and the identification of necessary interactions to improve the quality of studies. The comparison based on causal modeling allows for a systematic analysis of regulations and the consistent identification of new characteristics (indicators) that evaluate relevant aspects of academic processes and activities.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"23 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problems of uneven load and low operating efficiency in the oil-gathering systems of old oilfields lead to higher operating costs. In order to reduce operating costs, a layout-reconfiguration optimization model is established with the minimum comprehensive investment as the objective function. Multiple constraints are considered, such as the current state of the oil-gathering system, the processing capacity, the possibility of pipeline failure, and obstacles. A hybrid arithmetic–fireworks optimization algorithm (AFOA) is proposed to solve the model. Drawing on experience with hybrid metaheuristics, the arithmetic optimization algorithm (AOA) is combined with operators of the fireworks algorithm (FWA): several improved FWA operators are integrated into AOA to form the new algorithm, AFOA, which achieves better solution quality. Compared with 11 other algorithms, AFOA has better solution efficiency. The method is applied to a real case of an old oilfield. The optimized scheme increases the average load rate of the stations by 15.9% and reduces operating costs by 38.1% per year. Overall, the reconstruction costs will be recovered within a short period.
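The abstract does not specify how the AOA and FWA operators are combined, so the following sketch assumes the standard AOA position-update rules with a fireworks-style Gaussian "spark" mutation around the incumbent best solution; the parameter values, the simplified update formulas, and the sphere test objective are illustrative assumptions only.

```python
import numpy as np

def afoa_sketch(objective, dim=10, pop=30, iters=200, lb=-10.0, ub=10.0,
                alpha=5.0, mu=0.5, n_sparks=5, seed=0):
    """Rough AOA-plus-fireworks hybrid: not the authors' exact algorithm."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    fit = np.apply_along_axis(objective, 1, X)
    best = X[fit.argmin()].copy()

    for t in range(1, iters + 1):
        moa = 0.2 + t * (0.8 / iters)                     # accelerated function
        mop = 1.0 - (t ** (1 / alpha)) / (iters ** (1 / alpha))
        r1, r2, r3 = rng.random((3, pop, dim))
        span = mu * (ub - lb)
        explore = np.where(r2 < 0.5,
                           best / (mop + 1e-12) * span,   # division operator
                           best * mop * span)             # multiplication operator
        exploit = np.where(r3 < 0.5,
                           best - mop * span,             # subtraction operator
                           best + mop * span)             # addition operator
        X = np.clip(np.where(r1 > moa, explore, exploit), lb, ub)

        # Fireworks-style local search: Gaussian sparks around the best point.
        X[:n_sparks] = np.clip(
            best + rng.normal(0, 0.1 * (ub - lb), (n_sparks, dim)), lb, ub)

        fit = np.apply_along_axis(objective, 1, X)
        if fit.min() < objective(best):
            best = X[fit.argmin()].copy()
    return best, objective(best)

best, val = afoa_sketch(lambda x: np.sum(x ** 2))         # sphere test function
print("best value found:", val)
```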
{"title":"Layout Reconstruction Optimization Method of Oil-Gathering Systems for Oilfields in the Mid to Late Stage of Development Based on the Arithmetic–Fireworks Optimization Algorithm","authors":"Shuangqing Chen, Shanlong Wang, Minghu Jiang, Yuchun Li, Lan Meng, Bing Guan, Ze Yu","doi":"10.3390/math12182819","DOIUrl":"https://doi.org/10.3390/math12182819","url":null,"abstract":"The problems of uneven load and low operating efficiency in the oil-gathering system of old oilfields lead to higher operating costs. In order to reduce operating costs, the layout-reconfiguration optimization model is established, and the minimum comprehensive investment is taken as the objective function. The multi-constraint conditions, such as the current situation of the oil-gathering system, the processing capacity, the possibility of pipeline failure, and the obstacles, are considered. The hybrid arithmetic–fireworks optimization algorithm (AFOA) is proposed to solve the model. Combined with the experience of the hybrid metaheuristic algorithm, using hybrid metaheuristics, the hybrid of the arithmetic optimization algorithm (AOA) and the operator of the fireworks algorithm (FWA) is considered, and some improved operators of FWA are integrated into AOA to form a new algorithm (AFOA) to achieve a better solution effect. Compared with the 11 other algorithms, AFOA has better solution efficiency. This method is applied to the actual case of an old oilfield. The optimized scheme increases the average load rate of the station by 15.9% and reduces the operating costs by 38.1% per year. Overall, the reconstruction costs will be recovered in a short period.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"14 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge graph embedding (KGE) has been identified as an effective method for link prediction, which involves predicting missing relations or entities based on existing entities or relations. KGE is an important method for implementing knowledge representation and, as such, has been widely used to drive intelligent applications such as question-answering systems, recommendation systems, and relationship extraction. Models based on convolutional neural networks (CNNs) have achieved good results in link prediction. However, as the coverage of knowledge graphs expands, the increasing volume of information significantly limits the performance of these models. This article introduces a triple-attention-based multi-channel CNN model, named ConvAMC, for the KGE task. In the embedding representation module, entities and relations are embedded into a complex space, and the embeddings are arranged in an alternating pattern; this approach helps capture richer semantic information and enhances the expressive power of the model. In the encoding module, a multi-channel approach is employed to extract more comprehensive interaction features. A triple attention mechanism and max pooling layers ensure that interactions between spatial dimensions and output tensors are captured during the subsequent tensor concatenation and reshaping, which preserves local and detailed information. Finally, feature vectors are transformed into prediction targets through the Hadamard product of the feature mapping and reshaping matrices. Extensive experiments evaluating ConvAMC on three benchmark datasets against state-of-the-art (SOTA) models demonstrate that the proposed model outperforms all compared models across all evaluation metrics on two of the datasets and achieves advanced link prediction results on most evaluation metrics on the third.
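The full ConvAMC architecture (complex-valued alternating embeddings, triple attention) is not reproduced here; the PyTorch sketch below shows only the general ConvE-style pipeline the abstract builds on: stacked entity/relation feature maps, a multi-channel 2D convolution with max pooling, a projection back to embedding space, and scoring against all entities. Layer sizes and names are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class TinyConvKGE(nn.Module):
    """Minimal multi-channel convolutional scorer for link prediction."""
    def __init__(self, n_entities, n_relations, dim=200, channels=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.proj = nn.Linear(channels * 10 * 10, dim)

    def forward(self, head_idx, rel_idx):
        h = self.ent(head_idx).view(-1, 1, 10, 20)   # embedding as a 2D map
        r = self.rel(rel_idx).view(-1, 1, 10, 20)
        x = torch.cat([h, r], dim=2)                 # stack head/relation maps
        x = torch.relu(self.conv(x))                 # multi-channel features
        x = self.pool(x).flatten(1)                  # keep local detail, downsample
        x = self.proj(x)                             # back to embedding space
        return x @ self.ent.weight.t()               # score against every entity

model = TinyConvKGE(n_entities=1000, n_relations=50)
scores = model(torch.tensor([3, 7]), torch.tensor([1, 4]))
print(scores.shape)   # torch.Size([2, 1000])
```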
{"title":"Knowledge Graph Embedding Using a Multi-Channel Interactive Convolutional Neural Network with Triple Attention","authors":"Lin Shi, Weitao Liu, Yafeng Wu, Chenxu Dai, Zhanlin Ji, Ivan Ganchev","doi":"10.3390/math12182821","DOIUrl":"https://doi.org/10.3390/math12182821","url":null,"abstract":"Knowledge graph embedding (KGE) has been identified as an effective method for link prediction, which involves predicting missing relations or entities based on existing entities or relations. KGE is an important method for implementing knowledge representation and, as such, has been widely used in driving intelligent applications w.r.t. question-answering systems, recommendation systems, and relationship extraction. Models based on convolutional neural networks (CNNs) have achieved good results in link prediction. However, as the coverage areas of knowledge graphs expand, the increasing volume of information significantly limits the performance of these models. This article introduces a triple-attention-based multi-channel CNN model, named ConvAMC, for the KGE task. In the embedding representation module, entities and relations are embedded into a complex space and the embeddings are performed in an alternating pattern. This approach helps in capturing richer semantic information and enhances the expressive power of the model. In the encoding module, a multi-channel approach is employed to extract more comprehensive interaction features. A triple attention mechanism and max pooling layers are used to ensure that interactions between spatial dimensions and output tensors are captured during the subsequent tensor concatenation and reshaping process, which allows preserving local and detailed information. Finally, feature vectors are transformed into prediction targets for embedding through the Hadamard product of feature mapping and reshaping matrices. Extensive experiments were conducted to evaluate the performance of ConvAMC on three benchmark datasets compared with state-of-the-art (SOTA) models, demonstrating that the proposed model outperforms all compared models across all evaluation metrics on two of the datasets, and achieves advanced link prediction results on most evaluation metrics on the third dataset.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"64 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stock index fluctuations are characterized by high noise, and their accurate prediction is extremely challenging. To address this challenge, this study proposes a spatial–temporal–bidirectional long short-term memory (STBL) model incorporating spatiotemporal attention mechanisms. The model enhances the analysis of temporal dependencies in the data by introducing graph attention networks with multi-hop neighbor nodes, while incorporating the temporal attention mechanism of long short-term memory (LSTM) to effectively address potential interdependencies in the data structure. In addition, by assigning different learning weights to different neighbor nodes, the model can better integrate the correlations between node features. To verify its accuracy, this study used the closing prices of the Hong Kong Hang Seng Index (HSI) from 31 December 1986 to 31 December 2023. In comparison with nine other forecasting models, the experimental results show that the STBL model achieves more accurate predictions of the closing price for short-term, medium-term, and long-term forecasts of the stock index.
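The STBL model couples multi-hop graph attention with a bidirectional LSTM; the sketch below illustrates only the temporal-attention pooling over BiLSTM outputs, which is the part that can be shown compactly. The layer sizes, window length, and regression head are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BiLSTMTemporalAttention(nn.Module):
    """BiLSTM encoder followed by learned attention pooling over time."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # one score per time step
        self.head = nn.Linear(2 * hidden, 1)     # closing-price regression head

    def forward(self, x):                        # x: (batch, time, n_features)
        h, _ = self.lstm(x)                      # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # temporal attention weights
        context = (w * h).sum(dim=1)             # weighted sum over time steps
        return self.head(context).squeeze(-1)    # predicted next closing price

model = BiLSTMTemporalAttention(n_features=8)
y = model(torch.randn(16, 30, 8))                # 16 windows of 30 trading days
print(y.shape)                                   # torch.Size([16])
```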
{"title":"Research on Stock Index Prediction Based on the Spatiotemporal Attention BiLSTM Model","authors":"Shengdong Mu, Boyu Liu, Jijian Gu, Chaolung Lien, Nedjah Nadia","doi":"10.3390/math12182812","DOIUrl":"https://doi.org/10.3390/math12182812","url":null,"abstract":"Stock index fluctuations are characterized by high noise and their accurate prediction is extremely challenging. To address this challenge, this study proposes a spatial–temporal–bidirectional long short-term memory (STBL) model, incorporating spatiotemporal attention mechanisms. The model enhances the analysis of temporal dependencies between data by introducing graph attention networks with multi-hop neighbor nodes while incorporating the temporal attention mechanism of long short-term memory (LSTM) to effectively address the potential interdependencies in the data structure. In addition, by assigning different learning weights to different neighbor nodes, the model can better integrate the correlation between node features. To verify the accuracy of the proposed model, this study utilized the closing prices of the Hong Kong Hang Seng Index (HSI) from 31 December 1986 to 31 December 2023 for analysis. By comparing it with nine other forecasting models, the experimental results show that the STBL model achieves more accurate predictions of the closing prices for short-term, medium-term, and long-term forecasts of the stock index.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"383 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The simplification of complex networks is a research field closely related to graph theory in discrete mathematics. Existing methods are typically limited to simplifying series sub-networks, parallel sub-networks, diagonal sub-networks, and nested simple sub-networks; at present, no available methods can handle complex sub-networks and nested complex sub-networks. In this paper, we propose an efficient and automatic equivalence simplification method for arbitrary complex ventilation networks. The method enables, for the first time, the maximum possible equivalence simplification of nested simple sub-networks and nested complex sub-networks. To avoid the NP-hard problem caused by searching for simplifiable sub-networks, the intrinsic topological relationship between simplifiable sub-networks and spanning sub-graphs is analyzed to optimize the search process. One of our main contributions is an efficient search method for arbitrarily nested reducible sub-networks based on a bidirectional traversal of a directed tree. The method optimizes the search for simplifiable node pairs by combining the characteristics of a directed tree with the judgment rules for simplifiable sub-networks. Moreover, by deriving a formula for the equivalent air resistance of complex sub-networks, we present an equivalent calculation and simplification method for arbitrarily complex sub-networks based on the principle of energy conservation. The basic idea is to calculate the equivalent air resistance using the ventilation network solution of constructed virtual sub-networks. We implement the simplification method for arbitrarily complex mine ventilation networks and validate its reliability by comparing the air distribution results obtained with the network solution method before and after simplification. With appropriate modifications to meet specific requirements, the proposed method is also applicable to the equivalent simplification of other types of complex networks. The analysis of several real-world mine ventilation network examples further verifies the effectiveness of the proposed method, which satisfactorily meets the requirements for simplifying complex networks.
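The paper's energy-conservation formula for complex sub-networks is not reproduced here; the snippet below shows only the two textbook special cases to which any such equivalence must reduce under the Atkinson square law h = R * Q**2: series branches add their resistances, and parallel branches combine through reciprocal square roots.

```python
def series_resistance(resistances):
    """Equivalent resistance of branches in series: R_eq = sum(R_i)."""
    return sum(resistances)

def parallel_resistance(resistances):
    """Equivalent resistance of parallel branches under the square law
    h = R * Q**2:  1/sqrt(R_eq) = sum(1/sqrt(R_i))."""
    return sum(r ** -0.5 for r in resistances) ** -2

# Two parallel branches followed by one branch in series (units: N*s^2/m^8).
r_eq = series_resistance([parallel_resistance([0.8, 1.2]), 0.5])
print(f"equivalent resistance: {r_eq:.4f}")
```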
{"title":"An Efficient and Automatic Simplification Method for Arbitrary Complex Networks in Mine Ventilation","authors":"Deyun Zhong, Lixue Wen, Lin Bi, Yulong Liu","doi":"10.3390/math12182815","DOIUrl":"https://doi.org/10.3390/math12182815","url":null,"abstract":"The simplification of complex networks is a research field closely related to graph theory in discrete mathematics. The existing methods are typically limited to simplifying the series sub-networks, parallel sub-networks, diagonal sub-networks, and nested simple sub-networks. From the current perspective, there are no available methods that can handle complex sub-networks and nested complex sub-networks. In this paper, we innovatively propose an efficient and automatic equivalence simplification method for arbitrary complex ventilation networks. The method enables, for the first time, the maximum possible equivalence simplification of nested simple sub-networks and nested complex sub-networks. In order to avoid the NP-hard problem caused by the searching of simplifiable sub-networks, it is necessary to analyze the intrinsic topology relationship between simplifiable sub-networks and spanning sub-graphs to optimize the searching process. One of our main contributions is that we present an efficient searching method for arbitrarily nested reducible sub-networks based on the bidirectional traversal process of a directed tree. The method optimizes the searching process for simplifiable node pairs by combining the characteristics of a directed tree with the judgment rules of simplifiable sub-networks. Moreover, by deriving the formula of an equivalent air resistance calculation for complex sub-networks, another one of our main contributions is that we present an equivalent calculation and simplification method for arbitrarily complex sub-networks based on the principle of energy conservation. The basic idea of the method is to calculate the equivalent air resistance using the ventilation network resolution of the constructed virtual sub-networks. We realize the simplification method of arbitrarily complex mine ventilation networks, and we validate the reliability of the simplification method by comparing the air distribution results using the network solution method before and after simplification. It can be determined that, with appropriate modifications to meet specific requirements, the proposed method can also be applicable to equivalent simplification instances of other types of complex networks. Based on the results analysis of several real-world mine ventilation network examples, the effectiveness of the proposed method is further verified, which can satisfactorily meet the requirements for simplifying complex networks.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"37 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Let $M_g^+f$ be the one-sided Hardy–Littlewood maximal function, let $\varphi_1$ be a nonnegative and nondecreasing function on $[0,\infty)$, let $\gamma$ be a positive and nondecreasing function defined on $[0,\infty)$, let $\varphi_2$ be a quasi-convex function, and let $u,v,w$ be three weight functions. In this paper, we present necessary and sufficient conditions on the weight functions $(u,v,w)$ such that the inequality
$$\varphi_1(\lambda)\int_{\{M_g^+f>\lambda\}}u(x)\,g(x)\,dx\;\le\;C\int_{-\infty}^{+\infty}\varphi_2\!\left(\frac{C\,|f(x)|\,v(x)}{\gamma(\lambda)}\right)w(x)\,g(x)\,dx$$
holds. We then unify the weak-type and extra-weak-type one-sided Hardy–Littlewood maximal inequalities within the above inequality.
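As a gloss on the unification claim (this is my reading, assuming the displayed inequality carries $\gamma(\lambda)$ as the divisor inside $\varphi_2$, which is how the fraction appears to have been flattened in extraction), two natural specializations of the auxiliary functions recover the two classical one-sided forms that the paper unifies:

```latex
% Specialization 1: take \gamma(\lambda) \equiv 1 and \varphi_1 = \varphi_2 = \Phi:
\[
  \Phi(\lambda)\int_{\{M_g^+f>\lambda\}} u(x)\,g(x)\,dx
    \;\le\; C\int_{-\infty}^{+\infty}\Phi\bigl(C\,|f(x)|\,v(x)\bigr)\,w(x)\,g(x)\,dx .
\]
% Specialization 2: take \varphi_1 \equiv 1 and \gamma(\lambda) = \lambda:
\[
  \int_{\{M_g^+f>\lambda\}} u(x)\,g(x)\,dx
    \;\le\; C\int_{-\infty}^{+\infty}\varphi_2\!\Bigl(\frac{C\,|f(x)|\,v(x)}{\lambda}\Bigr)\,w(x)\,g(x)\,dx .
\]
```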
{"title":"A Unified Version of Weighted Weak-Type Inequalities for the One-Sided Hardy–Littlewood Maximal Function in Orlicz Classes","authors":"Erxin Zhang","doi":"10.3390/math12182814","DOIUrl":"https://doi.org/10.3390/math12182814","url":null,"abstract":"Let Mg+f be the one-sided Hardy–Littlewood maximal function, φ1 be a nonnegative and nondecreasing function on [0,∞), γ be a positive and nondecreasing function defined on [0,∞); let φ2 be a quasi-convex function and u,v,w be three weight functions. In this paper, we present necessary and sufficient conditions on weight functions (u,v,w) such that the inequality φ1(λ)∫{Mg+f>λ}u(x)g(x)dx≤C∫−∞+∞φ2(C|f(x)|v(x)γ(λ))w(x)g(x)dx holds. Then, we unify the weak and extra-weak-type one-sided Hardy–Littlewood maximal inequalities in the above inequality.","PeriodicalId":18303,"journal":{"name":"Mathematics","volume":"9 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142177081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}