E-business enterprises (EBEs) may commit legal offences by perpetrating cybercrime in the course of their commercial activity. According to the findings, several safeguards can deter cybercrime in accounting. The study examined existing regulations on accounting policy elements and identified the aspects that should be included in the administrative document governing e-business enterprise accounting policies. To succeed, e-businesses must prevent cyber-crime (CC), which damages a company's brand and diminishes client loyalty. The study found that the information and control functions of accounting can help prevent cyber-crime in the bookkeeping system by enriching the content of individual internal rules. The authors aimed to make online payments for EBEs as safe, easy, and fast as possible. However, the internet is known for making its users feel anonymous, and e-commerce (EC) transactions remain vulnerable to cybercrime, resulting in considerable losses of money and personal information.
{"title":"Accountancy for E-Business Enterprises Based on Cyber Security","authors":"Yu Yang, Zecheng Yin","doi":"10.4018/ijdwm.320227","DOIUrl":"https://doi.org/10.4018/ijdwm.320227","url":null,"abstract":"E-businesses (EBEs) may commit legal offenses due to perpetrating cybercrime while doing the commercial activity. According to the findings, various obstacles might deter cybercrime throughout accounting. The study examined the present laws for accounting policy elements and determined those aspects that should be included in the administrative document for e-business enterprise accounting policies. E-businesses must avoid cyber-crime (CC), which has a detrimental influence on the company's brand and diminishes client loyalty to ensure their success. According to the study's findings, the use of information and control functions of accounting can help prevent cyber-crime in the bookkeeping system by increasing the content of individual internal rules. The authors intended to make online payments for EBE-CC as safe, easy, and fast as possible. However, the internet is known for making its users feel anonymous. E-commerce (EC) transactions are vulnerable to cybercrime, resulting in considerable money and personal information losses.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88833110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-negative matrix factorization (NMF) has gained sustained attention due to its compact learning ability. Cancer subtyping is important for cancer prognosis analysis and clinical precision treatment, and integrating multi-omics data for subtyping helps uncover the characteristics of cancer at the system level. A unified multi-view clustering method via adaptive graph and sparsity-regularized non-negative matrix factorization (multi-GSNMF) was developed for cancer subtyping. The local geometrical structure of each omics dataset was incorporated into the learning of the common consensus matrix, and sparsity constraints were used to reduce the effect of noise and outliers in bioinformatics datasets. The performance of multi-GSNMF was evaluated on ten cancer datasets. Compared with 10 state-of-the-art multi-view clustering algorithms, multi-GSNMF performed best, yielding significantly different survival groups in 7 of the 10 cancer datasets, the highest among all the compared methods.
{"title":"A Unified Multi-View Clustering Method Based on Non-Negative Matrix Factorization for Cancer Subtyping","authors":"Zhanpeng Huang, Jiekang Wu, Jinlin Wang, Yu Lin, Xiaohua Chen","doi":"10.4018/ijdwm.319956","DOIUrl":"https://doi.org/10.4018/ijdwm.319956","url":null,"abstract":"Non-negative matrix factorization (NMF) has gained sustaining attention due to its compact leaning ability. Cancer subtyping is important for cancer prognosis analysis and clinical precision treatment. Integrating multi-omics data for cancer subtyping is beneficial to uncover the characteristics of cancer at the system-level. A unified multi-view clustering method was developed via adaptive graph and sparsity regularized non-negative matrix factorization (multi-GSNMF) for cancer subtyping. The local geometrical structures of each omics data were incorporated into the procedures of common consensus matrix learning, and the sparsity constraints were used to reduce the effect of noise and outliers in bioinformatics datasets. The performances of multi-GSNMF were evaluated on ten cancer datasets. Compared with 10 state-of-the-art multi-view clustering algorithms, multi-GSNMF performed better by providing significantly different survival in 7 out of 10 cancer datasets, the highest among all the compared methods.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82153195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
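The multi-GSNMF pipeline above can be illustrated with a much simpler stand-in: plain per-view NMF followed by an averaged consensus and K-means. This is a minimal sketch, not the authors' adaptive-graph, sparsity-regularized method; the synthetic two-view data and all parameter choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic non-negative "omics" views for the same 60 samples (hypothetical data)
views = [np.abs(rng.normal(size=(60, 100))), np.abs(rng.normal(size=(60, 80)))]

def consensus_nmf_subtypes(views, n_subtypes=3):
    """Factorize each view as X ~ W H, average the row-normalized sample
    factors W across views, and cluster the consensus matrix into subtypes."""
    factors = []
    for X in views:
        model = NMF(n_components=n_subtypes, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(X)                        # samples x components
        factors.append(W / (W.sum(axis=1, keepdims=True) + 1e-12))
    consensus = np.mean(factors, axis=0)                  # naive consensus across views
    return KMeans(n_clusters=n_subtypes, n_init=10, random_state=0).fit_predict(consensus)

labels = consensus_nmf_subtypes(views)
print(labels.shape)  # one subtype label per sample
```

In the paper's method, the consensus matrix is learned jointly with graph and sparsity regularizers rather than averaged after the fact; the averaging step here only shows where multi-view information is merged.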
In the modern era, nursing intervention reflects an increased commitment to patient quality and safety that allows nurses to make evidence-based healthcare decisions. Deep venous thrombosis (DVT) and respiratory embolism (RE) are significant conditions in high-risk patients that can lead to severe post-operative injury and death. In this article, hybrid machine learning (HML) is applied to senile patients with lower-extremity fractures during the perioperative period, assessing the clinical effectiveness of an early-stage nursing protocol for deep venous thrombosis for both patients and nurses. A three-dimensional shape model in the user interface shows the examined vessels, with compression measurements mapped to the surface as colors and a virtual image-plane representation of DVT. The comprehension measures were validated by segmentation experts using the HML model and compared with paired f-tests, with the aim of reducing the incidence of lower-extremity deep venous thrombosis.
{"title":"Effect Analysis of Nursing Intervention on Lower Extremity Deep Venous Thrombosis in Patients","authors":"Xuanyue Zhang","doi":"10.4018/ijdwm.319948","DOIUrl":"https://doi.org/10.4018/ijdwm.319948","url":null,"abstract":"In the modern era, nursing intervention is an increased commitment to patient quality and protection that allows nurses to make evidence-based healthcare decisions. The challenging characteristic of patients such as high deep venous thrombosis (DVT) and respiratory embolisms (RE) are significant health conditions that lead to post-operative severe injury and death. In this article, hybrid machine learning (HML) is used for senile patients with lower extremity fractures during the perioperative time and the clinical effectiveness of early stages nursing protocol for deep venous thrombosis of patients and nurses. A three-dimensional shape model of the user interface is shown the examined vessels, which have compression measurements mapped to the surface as colors and virtual image plane representation of DVT. The measures of comprehension have been validated using HML model segmentation experts and contrasted with paired f-tests to reduce the incidence of lower extremity deep venous thrombosis in patients and nurses.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82746459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heng Liu, Rui Liu, Zhimei Liu, Xuena Han, Kaixuan Wang, Li Yang, Fuguo Yang
The electronic health record (EHR) is a patient care database that helps doctors and nurses analyse comprehensive patient healthcare through health-cart (h-cart) assistance. Electronic health (e-Health) services enable efficient, geo-location-based sharing of patient information, so that nurses, doctors, and other healthcare practitioners can reach patients promptly and without delay in an emergency. In e-Health services, nurses act as data holders who store and maintain patients' health records on the cloud h-cart platform to analyse patient data effectively. Nurses therefore need to share data safely and manage access to it within the healthcare system, a need that calls for robust solutions. However, data authenticity and response time remain challenging characteristics of the e-health system. Hence, this paper proposes an improved e-health service model (IeHSM) based on cloud computing technology to improve the authenticity, reliability, and access time of healthcare information.
{"title":"A Data Management Framework for Nurses Using E-Health as a Service (eHaaS)","authors":"Heng Liu, Rui Liu, Zhimei Liu, Xuena Han, Kaixuan Wang, Li Yang, Fuguo Yang","doi":"10.4018/ijdwm.319736","DOIUrl":"https://doi.org/10.4018/ijdwm.319736","url":null,"abstract":"The electronic health record (EHR) is a patient care database, which helps doctors or nurses to analyse comprehensive patient healthcare through health-cart (h-cart) assistance. Electronic health (e-Health) services offer efficient sharing of the patient's information based on geo-location in which nurses, doctors, or health care practitioners access the patients, promptly and without time delay in case of emergency. In e-Health services, nurses are considered as the data holder who can store and maintain patient's health records in the cloud h-cart platform to analyses patient's data effectively. Therefore, nurses need to safely share and manage access to data in the healthcare system; this need required prominent solutions. However, data authenticity and response time are considered as challenging characteristics in the e-health care system. Hence, in this paper, an improved e-health service model (IeHSM) has been proposed based on cloud computing technology to improve the data authenticity, reliability, and accessibility time of the healthcare information.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83800184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investors can learn a great deal about the health of a firm by examining its financial performance (FP). FP gives investors a view of the company's financial health and a forecast of its future stock performance. Criteria such as liquidity, ownership, maturity, and size have been linked to financial success. Blockchain provides several benefits in the logistics business, including increased trust in the system owing to improved transparency and traceability, and cost savings from removing manual, paper-based administration. The study uses the FP-BCT technique, a new approach to measuring company performance. E-business also expands the scope and quantity of data exchange: improved processing capabilities affect the macroeconomic and financial environment, streamlining economic activity, ensuring the timely use of information, and decreasing costs.
{"title":"An Evaluation of the Financial Impact on Business Performance of the Adoption of E-Business via Blockchain Technology","authors":"Zecheng Yin, Yu Yang","doi":"10.4018/ijdwm.319970","DOIUrl":"https://doi.org/10.4018/ijdwm.319970","url":null,"abstract":"Investors can learn a lot about the health of a firm by looking at its FP (financial performance). For investors, it offers a glimpse into the company's financial health and performance, as well as a forecast for the stock's performance in the future. Certain criteria, including liquidity, ownership, maturity, and size, have been linked to financial success. Blockchain provides several benefits in the logistics business, including increased trust in the system owing to improved transparency and traceability and cost savings by removing manual and paper-based administration. The study uses the FP-BCT technique, a new approach to measuring company performance. However, e-business helps expand data exchange, aspects, and data quantity. Improving processing capabilities impacts the macroeconomic and financial environments, reducing economic activity, ensuring timely implementation of information, and decreasing costs.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85274455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aspect-based sentiment analysis (ABSA) aims to classify the sentiment polarity of a given aspect in a sentence or document, a fine-grained task in natural language processing. Recent ABSA methods mainly focus on exploiting syntactic information, semantic information, or both. Research on cognition theory reveals that syntax and semantics affect each other. In this work, a graph convolutional network (GCN)-based model is proposed that fuses syntactic and semantic information in line with this cognitive practice. To start with, a GCN extracts syntactic information over the syntax dependency tree. Then, a semantic graph is constructed via a multi-head self-attention mechanism and encoded by a GCN. Furthermore, a parameter-sharing GCN is developed to capture the information common to semantics and syntax. Experiments conducted on three benchmark datasets (Laptop14, Restaurant14, and Twitter) validate that the proposed model achieves compelling performance compared with state-of-the-art models.
{"title":"Fusing Syntax and Semantics-Based Graph Convolutional Network for Aspect-Based Sentiment Analysis","authors":"Jinhui Feng, Shaohua Cai, Kuntao Li, Yifan Chen, Qianhua Cai, Hongya Zhao","doi":"10.4018/ijdwm.319803","DOIUrl":"https://doi.org/10.4018/ijdwm.319803","url":null,"abstract":"Aspect-based sentiment analysis (ABSA) aims to classify the sentiment polarity of a given aspect in a sentence or document, which is a fine-grained task of natural language processing. Recent ABSA methods mainly focus on exploiting the syntactic information, the semantic information and both. Research on cognition theory reveals that the syntax and the semantics have effects on each other. In this work, a graph convolutional network-based model that fuses the syntactic information and semantic information in line with the cognitive practice is proposed. To start with, the GCN is taken to extract syntactic information on the syntax dependency tree. Then, the semantic graph is constructed via a multi-head self-attention mechanism and encoded by GCN. Furthermore, a parameter-sharing GCN is developed to capture the common information between the semantics and the syntax. Experiments conducted on three benchmark datasets (Laptop14, Restaurant14 and Twitter) validate that the proposed model achieves compelling performance comparing with the state-of-the-art models.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85826520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
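The two branches of this design, a GCN over the syntactic dependency graph and a GCN over a self-attention-derived semantic graph, can be sketched with plain NumPy. This is a toy single-layer, single-head version under assumed shapes and random weights; the paper's multi-head attention, parameter-sharing GCN, and training procedure are omitted.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def semantic_adjacency(X):
    """Self-attention-style semantic graph: row-wise softmax of scaled
    dot-product scores, used as a weighted adjacency matrix."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    return scores / scores.sum(axis=1, keepdims=True)

# Tiny example: 4 tokens, 5-dim features; syntactic edges from a hypothetical dependency tree
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 5))
A_syn = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]], float)
W = rng.normal(size=(5, 3))

H_syn = gcn_layer(A_syn, X, W)                      # syntax branch
H_sem = gcn_layer(semantic_adjacency(X), X, W)      # semantics branch
fused = (H_syn + H_sem) / 2                         # naive fusion of the two branches
print(fused.shape)
```

The averaging in the last line only marks where fusion happens; the actual model learns the combination (and shares parameters between branches) rather than averaging fixed outputs.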
Network representation learning is an important task in analyzing network information. Its purpose is to learn a vector for each node of the network and map it into a vector space whose dimensionality is much smaller than the number of nodes. Most current work considers only local structural features and ignores other features of the network, such as attribute features. To address this, the paper proposes a novel mechanism that combines network topology with node text information and node clustering information, and then constrains the network representation learning process to obtain optimal node vectors. The method is experimentally verified on three datasets: Citeseer (M10), DBLP (V4), and SDBLP. Experimental results show that the proposed method outperforms algorithms based on network topology and text features alone, verifying the feasibility of the algorithm.
{"title":"CTNRL: A Novel Network Representation Learning With Three Feature Integrations","authors":"Yanlong Tang, Zhonglin Ye, Haixing Zhao, Yi Ji","doi":"10.4018/ijdwm.318696","DOIUrl":"https://doi.org/10.4018/ijdwm.318696","url":null,"abstract":"Network representation learning is one of the important works of analyzing network information. Its purpose is to learn a vector for each node in the network and map it into the vector space, and the resulting number of node dimensions is much smaller than the number of nodes in the network. Most of the current work only considers local features and ignores other features in the network, such as attribute features. Aiming at such problems, this paper proposes novel mechanisms of combining network topology, which models node text information and node clustering information on the basis of network structure and then constrains the learning process of network representation to obtain the optimal network node vector. The method is experimentally verified on three datasets: Citeseer (M10), DBLP (V4), and SDBLP. Experimental results show that the proposed method is better than the algorithm based on network topology and text feature. Good experimental results are obtained, which verifies the feasibility of the algorithm and achieves the expected experimental results.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70455633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
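The core idea of combining topology with attribute features can be illustrated in miniature: derive a structural embedding from the adjacency matrix and concatenate it with per-node text features. This is a toy sketch under assumed data, not CTNRL itself, which jointly constrains the learning process with text and clustering information rather than concatenating fixed vectors.

```python
import numpy as np

def fuse_node_features(A, text_feats, n_struct_dims=2):
    """Structural embedding from a truncated SVD of the adjacency matrix,
    concatenated with text features as the fused node representation."""
    U, S, _ = np.linalg.svd(A)
    struct = U[:, :n_struct_dims] * S[:n_struct_dims]   # scale by singular values
    return np.hstack([struct, text_feats])

# Hypothetical 4-node graph with 2-dim bag-of-words text features per node
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
text = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)
emb = fuse_node_features(A, text)
print(emb.shape)
```

Concatenation is the weakest form of integration; the paper's point is that letting the text and clustering signals constrain the structural learning itself produces better vectors than bolting features together afterwards.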
The higher-order and temporal characteristics of tweet sequences are often ignored in rumor detection. In this paper, a new rumor detection method (T-BiGAT) is proposed that captures the temporal features between tweets by combining a graph attention network (GAT) and a gated recurrent neural network (GRU). First, timestamps are calculated for each tweet within the same event. For each timestamp, two different propagation subgraphs are constructed according to the response relationships between tweets. Then, a GRU captures intralayer dependencies between sibling nodes in the subtree, and global features of each subtree are extracted using an improved GAT. Furthermore, the GRU is reused to capture the temporal dependencies of the subgraphs across timestamps. Finally, weights are assigned to the global feature vectors of the subtrees at different timestamps for aggregation, and a mapping function classifies the aggregated vectors.
{"title":"Research on Rumor Detection Based on a Graph Attention Network With Temporal Features","authors":"Xiaohui Yang, Hailong Ma, Miao Wang","doi":"10.4018/ijdwm.319342","DOIUrl":"https://doi.org/10.4018/ijdwm.319342","url":null,"abstract":"The higher-order and temporal characteristics of tweet sequences are often ignored in the field of rumor detection. In this paper, a new rumor detection method (T-BiGAT) is proposed to capture the temporal features between tweets by combining a graph attention network (GAT) and gated recurrent neural network (GRU). First, timestamps are calculated for each tweet within the same event. On the premise of the same timestamp, two different propagation subgraphs are constructed according to the response relationship between tweets. Then, GRU is used to capture intralayer dependencies between sibling nodes in the subtree; global features of each subtree are extracted using an improved GAT. Furthermore, GRU is reused to capture the temporal dependencies of individual subgraphs at different timestamps. Finally, weights are assigned to the global feature vectors of different timestamp subtrees for aggregation, and a mapping function is used to classify the aggregated vectors.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86331233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
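The final aggregation step, weighting per-timestamp subtree vectors before classification, amounts to attention pooling. A minimal NumPy sketch follows, with random vectors and a random query standing in for the learned subtree features and learned weights; the GAT/GRU machinery that produces those vectors is not reproduced here.

```python
import numpy as np

def aggregate_subtrees(subtree_vecs, query):
    """Attention-weighted pooling: score each timestamp's subtree vector
    against a query, softmax the scores, and take the weighted sum."""
    scores = subtree_vecs @ query
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ subtree_vecs, weights

rng = np.random.default_rng(2)
subtrees = rng.normal(size=(6, 8))   # hypothetical: 6 timestamps, 8-dim subtree features
query = rng.normal(size=8)
pooled, w = aggregate_subtrees(subtrees, query)
print(pooled.shape, round(float(w.sum()), 6))
```

In T-BiGAT the pooled vector would then pass through a mapping function (e.g., a softmax classifier) to produce the rumor/non-rumor decision.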
Sundus Naji Alaziz, Bakr Albayati, A. A. El-Bagoury, Wasswa Shafik
The COVID-19 pandemic is one of the current universal threats to humanity, and the entire world is cooperating persistently to find ways to decrease its effect. Time series are one of the basic tools for developing an accurate prediction model of the expansion of this infectious virus. The authors discuss the goals of the study, its problems, definitions, and previous studies. They also address the theoretical aspects of multi-time-series clustering using both K-means and time-series clustering. Finally, ARIMA is used to build a prototype that gives specific predictions of the impact of the COVID-19 pandemic over a horizon of 90 to 140 days. Modeling and prediction are carried out on the data set available from the Saudi Ministry of Health for Riyadh, Jeddah, Makkah, and Dammam over the previous four months, and the model is implemented and evaluated in Python. Based on the proposed method, the authors present their conclusions.
{"title":"Clustering of COVID-19 Multi-Time Series-Based K-Means and PCA With Forecasting","authors":"Sundus Naji Alaziz, Bakr Albayati, A. A. El-Bagoury, Wasswa Shafik","doi":"10.4018/ijdwm.317374","DOIUrl":"https://doi.org/10.4018/ijdwm.317374","url":null,"abstract":"The COVID-19 pandemic is one of the current universal threats to humanity. The entire world is cooperating persistently to find some ways to decrease its effect. The time series is one of the basic criteria that play a fundamental part in developing an accurate prediction model for future estimations regarding the expansion of this virus with its infective nature. The authors discuss in this paper the goals of the study, problems, definitions, and previous studies. Also they deal with the theoretical aspect of multi-time series clusters using both the K-means and the time series cluster. In the end, they apply the topics, and ARIMA is used to introduce a prototype to give specific predictions about the impact of the COVID-19 pandemic from 90 to 140 days. The modeling and prediction process is done using the available data set from the Saudi Ministry of Health for Riyadh, Jeddah, Makkah, and Dammam during the previous four months, and the model is evaluated using the Python program. Based on this proposed method, the authors address the conclusions.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90155954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
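The clustering half of this pipeline, PCA to compress each city's case series followed by K-means to group similar trajectories, can be sketched as below. The four-city daily-case data here is synthetic and purely illustrative (the real study uses Saudi Ministry of Health data), and the ARIMA forecasting stage is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
days = np.arange(120)
# Hypothetical daily-case series: two fast-growing and two slow-growing cities
cities = {
    "Riyadh": 200 + 5.0 * days + rng.normal(0, 20, 120),
    "Jeddah": 180 + 4.5 * days + rng.normal(0, 20, 120),
    "Makkah": 50 + 0.5 * days + rng.normal(0, 10, 120),
    "Dammam": 60 + 0.6 * days + rng.normal(0, 10, 120),
}
X = np.array(list(cities.values()))          # one row per city, one column per day

# PCA compresses each 120-day series to 2 dims; K-means groups similar trajectories
Z = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(dict(zip(cities, labels)))
```

With trajectories this different, the fast-growing pair and the slow-growing pair land in separate clusters; in the study, a per-cluster ARIMA model would then produce the 90-to-140-day forecasts.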
W. Yang, Xianghan Zheng, Qiongxia Huang, Yu Liu, Yimi Chen, ZhiGang Song
It is widely known that long non-coding RNA (lncRNA) plays an important role in gene expression and regulation. However, characteristics of lncRNA data (e.g., huge volume, high dimensionality, and a lack of annotated samples) make it nearly impossible to identify key lncRNAs closely related to a specific disease. In this paper, the authors propose a computational method to predict key lncRNAs closely related to a corresponding disease. The proposed solution implements a BPSO-based intelligent algorithm to select candidate optimal lncRNA subsets and then uses an ML-ELM-based deep learning model to evaluate each subset. A wrapper feature extraction method then selects, from massive data, the lncRNAs closely related to the pathophysiology of the disease. Experiments on three typical open datasets demonstrate the feasibility and efficiency of the proposed solution, which achieves above 93% accuracy, the highest among the compared methods.
{"title":"Combining BPSO and ELM Models for Inferring Novel lncRNA-Disease Associations","authors":"W. Yang, Xianghan Zheng, Qiongxia Huang, Yu Liu, Yimi Chen, ZhiGang Song","doi":"10.4018/ijdwm.317092","DOIUrl":"https://doi.org/10.4018/ijdwm.317092","url":null,"abstract":"It has been widely known that long non-coding RNA (lncRNA) plays an important role in gene expression and regulation. However, due to a few characteristics of lncRNA (e.g., huge amounts of data, high dimension, lack of noted samples, etc.), identifying key lncRNA closely related to specific disease is nearly impossible. In this paper, the authors propose a computational method to predict key lncRNA closely related to its corresponding disease. The proposed solution implements a BPSO based intelligent algorithm to select possible optimal lncRNA subset, and then uses ML-ELM based deep learning model to evaluate each lncRNA subset. After that, wrapper feature extraction method is used to select lncRNAs, which are closely related to the pathophysiology of disease from massive data. Experimentation on three typical open datasets proves the feasibility and efficiency of our proposed solution. This proposed solution achieves above 93% accuracy, the best ever.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2023-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75810785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
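The BPSO wrapper loop can be sketched in isolation: binary particle positions encode feature subsets, a sigmoid transfer function turns velocities into flip probabilities, and a fitness function scores each subset. The toy fitness below (overlap with a known "informative" set, minus a size penalty) stands in for the paper's ML-ELM evaluation; the particle counts, coefficients, and data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_features = 20
informative = {2, 5, 11, 17}   # hypothetical ground-truth "key lncRNAs" in this toy setup

def fitness(mask):
    """Toy stand-in for the ML-ELM evaluation: reward informative features,
    penalize subset size."""
    chosen = set(np.flatnonzero(mask))
    return len(chosen & informative) - 0.1 * len(chosen)

def bpso(n_particles=30, n_iter=60):
    X = rng.integers(0, 2, size=(n_particles, n_features))   # binary positions
    V = rng.normal(0, 0.1, size=(n_particles, n_features))   # velocities
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
        prob = 1.0 / (1.0 + np.exp(-V))                      # sigmoid transfer function
        X = (rng.random(X.shape) < prob).astype(int)         # resample binary positions
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest

best = bpso()
print(sorted(np.flatnonzero(best)))
```

In the actual method, `fitness` would train and score an ML-ELM on the selected lncRNAs, which is what makes this a wrapper (rather than filter) approach to feature selection.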