{"title":"Changes in Reasons for Visits to Primary Care as a Result of the COVID-19 Pandemic: by INTRePID","authors":"Karen Tu, M. Lapadula","doi":"10.1370/afm.22.s1.5425","DOIUrl":"https://doi.org/10.1370/afm.22.s1.5425","url":null,"abstract":"","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"11 1","pages":""},"PeriodicalIF":4.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139301044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breast cancer screening during the COVID-19 Pandemic in the United States: Results from real-world health records data","authors":"William Curry, Wen-Jan Tuan, Qiushi Chen, Andrew Chung","doi":"10.1370/afm.22.s1.4885","DOIUrl":"https://doi.org/10.1370/afm.22.s1.4885","url":null,"abstract":"","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"48 1","pages":""},"PeriodicalIF":4.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139292120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Method for Utilizing Electronic Health Record Data in Condition-specific Research","authors":"Tarin Clay, Melissa Filippi, Elise Robertson, Cory B. Lutgen, Elisabeth F. Callen","doi":"10.1370/afm.22.s1.4955","DOIUrl":"https://doi.org/10.1370/afm.22.s1.4955","url":null,"abstract":"","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"12 1","pages":""},"PeriodicalIF":4.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139294842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harmonized Healthcare Database across Family Medicine Institutions","authors":"Chance R. Strenth, David Schneider, U. Sambamoorthi, Sravan Mattevada, Kimberly Fulda, Bhaskar Thakur, Anna Espinoza","doi":"10.1370/afm.22.s1.5404","DOIUrl":"https://doi.org/10.1370/afm.22.s1.5404","url":null,"abstract":"","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"14 1","pages":""},"PeriodicalIF":4.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139291188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying the Factors Associated with the Accumulation of Diabetes Complications to Inform a Prediction Tool","authors":"Winston R. Liaw, Ben King, Omolola E. Adepoju, Jiangtao Luo, Ioannis Kakadiaris, Todd Prewitt, Jessica Dobbins, Pete Womack","doi":"10.1370/afm.22.s1.5071","DOIUrl":"https://doi.org/10.1370/afm.22.s1.5071","url":null,"abstract":"","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"23 1","pages":""},"PeriodicalIF":4.6,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139291940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Big Data Confidentiality: An Approach Toward Corporate Compliance Using a Rule-Based System.","authors":"Georgios Vranopoulos, Nathan Clarke, Shirley Atkinson","doi":"10.1089/big.2022.0201","DOIUrl":"https://doi.org/10.1089/big.2022.0201","url":null,"abstract":"<p><p>Organizations have been investing in analytics relying on internal and external data to gain a competitive advantage. However, the legal and regulatory acts imposed nationally and internationally have become a challenge, especially for highly regulated sectors such as health or finance/banking. Data handlers such as Facebook and Amazon have already sustained considerable fines or are under investigation due to violations of data governance. The era of big data has further intensified the challenges of minimizing the risk of data loss by introducing the dimensions of Volume, Velocity, and Variety into confidentiality. Although Volume and Velocity have been extensively researched, Variety, \"the ugly duckling\" of big data, is often neglected and difficult to solve, thus increasing the risk of data exposure and data loss. To mitigate the risk of data exposure and data loss, this article proposes a framework that uses algorithmic classification and workflow capabilities to provide a consistent approach to data evaluation across the organization. A rule-based system implementing the corporate data classification policy minimizes the risk of exposure by helping users identify the approved guidelines and enforce them quickly. The framework includes an exception-handling process with appropriate approval for extenuating circumstances. The system was implemented in a proof-of-concept working prototype to showcase its capabilities and provide hands-on experience. The information system was evaluated and accredited by a diverse audience of academics and senior business executives in the fields of security and data management. The audience had an average of ∼25 years of experience, amounting to a combined total of almost three centuries (294 years). The results confirmed that the 3Vs are of concern and that Variety, singled out by 90% of the commentators, is the most troubling. In addition, approximately 60% of respondents confirmed that appropriate policies, procedures, and prerequisites for classification are in place, while implementation tools are lagging.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":""},"PeriodicalIF":4.6,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71415222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
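The rule-based classification idea in the abstract above can be illustrated with a minimal sketch. The policy rules, field names, and sensitivity labels below are hypothetical examples, not the authors' actual system: the point is only that a record is matched against an ordered corporate policy and assigned the strictest triggered label.

```python
# Minimal sketch of rule-based data classification (labels and rules are
# illustrative assumptions, not the paper's corporate policy).

SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

# Each rule pairs a predicate over a record's fields with a sensitivity label.
RULES = [
    (lambda r: "ssn" in r or "health_record" in r, "restricted"),
    (lambda r: "email" in r or "full_name" in r, "confidential"),
    (lambda r: r.get("source") == "internal_report", "internal"),
]

def classify(record: dict) -> str:
    """Return the highest sensitivity label triggered by any matching rule."""
    matched = [label for predicate, label in RULES if predicate(record)]
    if not matched:
        return "public"
    # The strictest matched label wins, per the ordering above.
    return max(matched, key=SENSITIVITY_ORDER.index)

print(classify({"email": "a@b.com", "ssn": "123-45-6789"}))  # restricted
print(classify({"title": "press release"}))                  # public
```

An exception-handling path, as the framework describes, would sit on top of such a classifier: a user requesting a lower label than `classify` returns would trigger an approval workflow rather than a silent override.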
{"title":"Consumer Segmentation Based on Location and Timing Dimensions Using Big Data from Business-to-Customer Retailing Marketplaces.","authors":"Fatemeh Ehsani, Monireh Hosseini","doi":"10.1089/big.2022.0307","DOIUrl":"10.1089/big.2022.0307","url":null,"abstract":"<p><p>Consumer segmentation is an electronic marketing practice that involves dividing consumers into groups with similar features to discover their preferences. In the business-to-customer (B2C) retailing industry, marketers explore big data to segment consumers based on various dimensions. However, among these dimensions, the motives of location and time of shopping have received relatively less attention. In this study, we use the recency, frequency, monetary, and tenure (RFMT) method to segment consumers into 10 groups based on their time and geographical features. To explore location, we investigate market distribution, revenue distribution, and consumer distribution. Geographical coordinates and peculiarities are estimated based on consumer density. Regarding time exploration, we evaluate the accuracy of product delivery and the timing of promotions. To pinpoint the target consumers, we display the main hotspots on the distribution heatmap. Furthermore, we identify the optimal time for purchase and the most densely populated locations of beneficial consumers. In addition, we evaluate product distribution to determine the most popular product categories. Based on the RFMT segmentation and product popularity, we have developed a product recommender system to assist marketers in attracting and engaging potential consumers. Through a case study using data from massive B2C retailing, we conclude that the proposed segmentation provides superior insights into consumer behavior and improves product recommendation performance.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":""},"PeriodicalIF":4.6,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71415223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
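The RFMT features named in the abstract above are straightforward to compute from a transaction log. This is a hedged sketch only: the transaction fields, the reference date, and the tiny example data are illustrative assumptions, not the authors' B2C data set.

```python
from datetime import date

# Illustrative RFMT (recency, frequency, monetary, tenure) computation;
# data and field names are toy assumptions, not the paper's marketplace data.

transactions = [
    {"customer": "c1", "date": date(2023, 9, 20), "amount": 120.0},
    {"customer": "c1", "date": date(2023, 6, 1), "amount": 80.0},
    {"customer": "c2", "date": date(2023, 1, 15), "amount": 15.0},
]
today = date(2023, 10, 1)  # assumed analysis date

def rfmt(txns, customer):
    """Per-customer RFMT features from a flat transaction list."""
    rows = [t for t in txns if t["customer"] == customer]
    dates = [t["date"] for t in rows]
    return {
        "recency": (today - max(dates)).days,        # days since last purchase
        "frequency": len(rows),                      # number of purchases
        "monetary": sum(t["amount"] for t in rows),  # total spend
        "tenure": (today - min(dates)).days,         # days since first purchase
    }

print(rfmt(transactions, "c1"))
# {'recency': 11, 'frequency': 2, 'monetary': 200.0, 'tenure': 122}
```

Segmenting into the 10 groups the study describes would then amount to clustering or bucketing customers on these four features, optionally joined with the location density estimates.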
Pub Date: 2023-10-01 | Epub Date: 2022-01-24 | DOI: 10.1089/big.2021.0274
{"title":"Service Level Agreement Monitoring as a Service: An Independent Monitoring Service for Service Level Agreements in Clouds.","authors":"Afzal Badshah, Ateeqa Jalal, Umar Farooq, Ghani-Ur Rehman, Shahab S Band, Celestine Iwendi","doi":"10.1089/big.2021.0274","DOIUrl":"10.1089/big.2021.0274","url":null,"abstract":"<p><p>The cloud network is rapidly growing due to a massive increase in interconnected devices and the emergence of technologies such as the Internet of things, fog computing, and artificial intelligence. In response, cloud computing needs reliable dealings among service providers, brokers, and consumers. Existing cloud monitoring frameworks such as Amazon Cloud Watch, Paraleap Azure Watch, and Rack Space Cloud Kick operate under the control of service providers. They work well; however, this may create dissatisfaction among customers over Service Level Agreement (SLA) violations. Customers' dissatisfaction may drastically reduce the business of service providers. To address this issue, and in line with the cloud philosophy, a completely independent Monitoring as a Service (MaaS) is needed to observe and regulate cloud businesses. However, existing MaaS frameworks do not address comprehensive SLAs for customer satisfaction and penalty management. This article proposes a reliable framework for monitoring a provider's services by adopting third-party monitoring services with a clear-cut SLA and penalty management. Since this framework monitors the SLA as a cloud monitoring service, it is named SLA-MaaS. On violation, it penalizes those found in breach of the terms and conditions listed in the SLA. Simulation results confirmed that the proposed framework adequately satisfies customers as well as service providers. This helps develop a trustworthy relationship among cloud partners and increases customer attention and retention.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"339-354"},"PeriodicalIF":4.6,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39857084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
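The penalty-management step of an independent SLA monitor can be sketched in a few lines. The uptime thresholds and service-credit schedule below are hypothetical illustrations, not SLA-MaaS's actual terms: the point is that a third party applies a pre-agreed schedule to a measured figure, outside the provider's control.

```python
# Illustrative SLA penalty lookup; thresholds and credits are assumptions,
# not the paper's SLA-MaaS terms.

# (min_uptime_percent, service_credit_percent) - ordered strictest first.
PENALTY_SCHEDULE = [
    (99.9, 0),    # SLA met: no credit owed
    (99.0, 10),   # minor breach
    (95.0, 25),   # major breach
]

def service_credit(measured_uptime: float) -> int:
    """Credit (% of monthly fee) owed for an independently measured uptime."""
    for threshold, credit in PENALTY_SCHEDULE:
        if measured_uptime >= threshold:
            return credit
    return 50  # catastrophic breach below every listed threshold

print(service_credit(99.95))  # 0
print(service_credit(98.2))   # 25
```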
Pub Date: 2023-10-01 | Epub Date: 2023-01-19 | DOI: 10.1089/big.2021.0333
{"title":"HUTNet: An Efficient Convolutional Neural Network for Handwritten Uchen Tibetan Character Recognition.","authors":"Guowei Zhang, Weilan Wang, Ce Zhang, Penghai Zhao, Mingkai Zhang","doi":"10.1089/big.2021.0333","DOIUrl":"10.1089/big.2021.0333","url":null,"abstract":"<p><p>Recognition of handwritten Uchen Tibetan character input has been considered an efficient way of acquiring mass data in the digital era. However, it still faces considerable challenges due to heavily touching letters and the varied morphological features of identical characters. Deeper neural networks are thus required to achieve decent recognition accuracy, making an efficient, lightweight model design important to balance the inevitable trade-off between accuracy and latency. To reduce the learnable parameters of the network as much as possible while maintaining acceptable accuracy, we introduce an efficient model named HUTNet based on the internal relationship between floating-point operations (FLOPs) and memory access cost. The proposed network achieves ResNet-18-level accuracy of 96.86%, with only a tenth of the parameters. Pruning and knowledge distillation strategies were then applied to further reduce the inference latency of the model. Experiments on the test set (Handwritten Uchen Tibetan Data set by Wang [HUTDW]), containing 562 classes and 42,068 samples, show that the compressed model achieves 96.83% accuracy while maintaining lower FLOPs and fewer parameters. To verify the effectiveness of HUTNet, we tested it on the Chinese handwriting data set Handwriting Database 1.1 (HWDB1.1), on which HUTNet achieved an accuracy of 97.24%, higher than that of ResNet-18 and ResNet-34. In general, we conduct extensive experiments on resource-accuracy trade-offs and show stronger performance compared with other well-known models on HUTDW and HWDB1.1. This unlocks the critical bottleneck for handwritten Uchen Tibetan recognition on low-power computing devices.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"387-398"},"PeriodicalIF":4.6,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10543391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
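The knowledge-distillation strategy the abstract above mentions is commonly the Hinton-style soft-target objective; the sketch below shows that objective in NumPy. The temperature, weighting, and toy logits are assumptions, not HUTNet's actual training configuration.

```python
import numpy as np

# Hedged sketch of a soft-target knowledge-distillation loss (Hinton-style);
# T and alpha below are illustrative choices, not the paper's settings.

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T  # temperature-scaled logits
    e = np.exp(z - z.max())             # shift for numerical stability
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    """alpha * soft-target term + (1 - alpha) * hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # Soft term scaled by T^2 so its gradient magnitude matches the hard term.
    soft = -np.sum(p_teacher * np.log(p_student)) * T * T
    hard = -np.log(softmax(student_logits)[true_label])
    return float(alpha * soft + (1 - alpha) * hard)

print(distill_loss([2.0, 0.5, 0.1], [1.8, 0.6, 0.2], true_label=0))
```

In the compression pipeline the abstract describes, the pruned student would be trained against the full HUTNet's logits with a loss of this shape.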
Pub Date: 2023-10-01 | Epub Date: 2023-01-19 | DOI: 10.1089/big.2021.0365
{"title":"Prediction and Big Data Impact Analysis of Telecom Churn by Backpropagation Neural Network Algorithm from the Perspective of Business Model.","authors":"Jiabing Xu, Jiarui Liu, Tianen Yao, Yang Li","doi":"10.1089/big.2021.0365","DOIUrl":"10.1089/big.2021.0365","url":null,"abstract":"<p><p>This study aims to transform existing telecom operators from traditional Internet operators into digital-driven services and improve the overall competitiveness of telecom enterprises. Data mining is applied to telecom user classification, processing existing telecom user data through data integration, cleaning, standardization, and transformation. Although existing algorithms ensure accuracy on a big-data telecom user analysis platform, they do not overcome the limitations of single-machine computing and cannot effectively improve the training efficiency of the model. To solve this problem, this article establishes a telecom customer churn prediction model using the backpropagation neural network (BPNN) algorithm and deploys the MapReduce programming framework on the Hadoop platform. Using data from a telecom company, this article analyzes telecom customer churn in a big-data environment. The research shows that the accuracy of the BPNN telecom customer churn prediction model is 82.12%. After deploying large data sets, the learning and training time of the model is greatly shortened. When the number of nodes is 8, the model's runtime levels off at 60 seconds. Under big data, the telecom user analysis platform not only ensures the accuracy of the algorithm but also overcomes the limitations of single-machine computing and effectively improves the training efficiency of the model. Compared with existing research, the accuracy of the model is improved by 25.36%, and the running time is roughly halved. This BPNN-based business model has clear advantages in processing larger data sets and offers substantial reference value for the digital-driven business model transformation of the telecommunications industry.</p>","PeriodicalId":51314,"journal":{"name":"Big Data","volume":" ","pages":"355-368"},"PeriodicalIF":4.6,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10549823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
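A single-hidden-layer backpropagation network of the kind the abstract above describes can be sketched in plain NumPy. Everything here is a toy illustration: the synthetic features, the labeling rule, and the layer sizes are assumptions, and the sketch omits the paper's Hadoop/MapReduce distribution entirely.

```python
import numpy as np

# Toy single-hidden-layer BPNN sketch for churn prediction; data, labeling
# rule, and sizes are illustrative, not the paper's telecom data set.

rng = np.random.default_rng(0)
X = rng.random((64, 3))                           # e.g. tenure, usage, complaints
y = (X[:, 2] > 0.5).astype(float).reshape(-1, 1)  # toy rule: high complaints -> churn

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (3, 8))  # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (8, 1))  # hidden -> output weights

for _ in range(2000):                   # plain batch gradient descent
    h = sigmoid(X @ W1)                 # hidden activations
    p = sigmoid(h @ W2)                 # predicted churn probability
    d2 = (p - y) * p * (1 - p)          # output delta (squared-error loss)
    d1 = (d2 @ W2.T) * h * (1 - h)      # backpropagated hidden delta
    W2 -= 0.5 * h.T @ d2 / len(X)
    W1 -= 0.5 * X.T @ d1 / len(X)

accuracy = float(((p > 0.5) == (y > 0.5)).mean())
print(f"training accuracy: {accuracy:.2f}")
```

The MapReduce deployment in the paper parallelizes training of this kind of model across Hadoop nodes; the gradient steps themselves are what each worker would compute on its data shard.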