Pub Date: 2023-10-01 | Epub Date: 2023-01-19 | DOI: 10.1089/big.2021.0333
Guowei Zhang, Weilan Wang, Ce Zhang, Penghai Zhao, Mingkai Zhang
Recognition of handwritten Uchen Tibetan character input is considered an efficient way of acquiring mass data in the digital era. However, it still faces considerable challenges due to heavily touching letters and the varied morphological features of identical characters. Deeper neural networks are therefore required to achieve decent recognition accuracy, making an efficient, lightweight model design important for balancing the inevitable trade-off between accuracy and latency. To reduce the learnable parameters of the network as much as possible while maintaining acceptable accuracy, we introduce an efficient model named HUTNet, based on the internal relationship between floating-point operations (FLOPs) and memory access cost. The proposed network achieves a ResNet-18-level accuracy of 96.86% with only a tenth of the parameters. Pruning and knowledge distillation strategies were then applied to further reduce the inference latency of the model. Experiments on the test set of the Handwritten Uchen Tibetan Data set by Wang (HUTDW), containing 562 classes and 42,068 samples, show that the compressed model achieves 96.83% accuracy while maintaining lower FLOPs and fewer parameters. To verify the effectiveness of HUTNet, we also tested it on the Chinese Handwriting Database 1.1 (HWDB1.1), where it achieved an accuracy of 97.24%, higher than that of ResNet-18 and ResNet-34. Overall, we conduct extensive experiments on the resource-accuracy trade-off and show stronger performance than other well-known models on HUTDW and HWDB1.1. The model also eases a critical bottleneck for handwritten Uchen Tibetan recognition on low-power computing devices.
Title: HUTNet: An Efficient Convolutional Neural Network for Handwritten Uchen Tibetan Character Recognition.
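The compression pipeline above combines pruning with knowledge distillation. As a generic illustration (not the paper's training code; the temperature value and function names are assumptions), the standard softened-logit distillation loss looks like this:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence from the teacher's softened outputs to the
    student's, scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the large model
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# A student that matches the teacher incurs no loss; one that
# disagrees is penalized in proportion to the divergence.
print(distillation_loss([2.0, 0.5], [2.0, 0.5]))        # 0.0
print(distillation_loss([2.0, 0.5], [0.5, 2.0]) > 0.0)  # True
```

In practice this term is mixed with the ordinary cross-entropy on ground-truth labels, so the compact student inherits the larger model's soft decision boundaries.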
Pub Date: 2023-10-01 | Epub Date: 2023-01-19 | DOI: 10.1089/big.2021.0365
Jiabing Xu, Jiarui Liu, Tianen Yao, Yang Li
This study aims to help telecom operators transform from traditional Internet operators into providers of digitally driven services, improving the overall competitiveness of telecom enterprises. Data mining is applied to telecom user classification, processing existing telecom user data through integration, cleaning, standardization, and transformation. Although existing algorithms maintain accuracy on big-data telecom user analysis platforms, they do not overcome the limitations of single-machine computing and cannot effectively improve model training efficiency. To solve this problem, this article builds a telecom customer churn prediction model using the backpropagation neural network (BPNN) algorithm and deploys it with the MapReduce programming framework on the Hadoop platform. Using data from a telecom company, the article analyzes customer churn in a big data environment. The research shows that the BPNN churn prediction model reaches an accuracy of 82.12%. After deployment on large data sets, model training time is greatly shortened; with 8 nodes, training time levels off at about 60 seconds. Under big data, the telecom user analysis platform not only preserves the algorithm's accuracy but also overcomes the limitations of single-machine computing and effectively improves training efficiency. Compared with existing research, the model's accuracy is improved by 25.36% and its running time is roughly halved. This BPNN-based business model has clear advantages in processing larger data sets and offers valuable reference for the digitally driven business model transformation of the telecommunications industry.
Title: Prediction and Big Data Impact Analysis of Telecom Churn by Backpropagation Neural Network Algorithm from the Perspective of Business Model.
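The backpropagation mechanism behind such a churn model can be sketched in miniature (a toy network on made-up data; the paper's actual features, architecture, and MapReduce deployment are far larger):

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBPNN:
    """One-hidden-layer backpropagation network (illustrative only):
    n_in inputs -> n_hid sigmoid units -> 1 sigmoid output."""
    def __init__(self, n_in, n_hid, lr=0.5):
        self.W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
        self.lr = lr

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.W1]
        self.y = sigmoid(sum(w * hi for w, hi in zip(self.W2, self.h)))
        return self.y

    def train_step(self, x, t):
        y = self.forward(x)
        d_out = (y - t) * y * (1 - y)            # output-layer delta
        for j, hj in enumerate(self.h):
            d_hid = d_out * self.W2[j] * hj * (1 - hj)  # hidden delta (old W2)
            self.W2[j] -= self.lr * d_out * hj
            for i, xi in enumerate(x):
                self.W1[j][i] -= self.lr * d_hid * xi
        return 0.5 * (y - t) ** 2

# Hypothetical "churn" samples: [tenure, monthly_complaints] -> churned?
data = [([0.1, 0.9], 1.0), ([0.9, 0.2], 0.0),
        ([0.2, 0.8], 1.0), ([0.8, 0.1], 0.0)]
net = TinyBPNN(2, 4)
first = sum(net.train_step(x, t) for x, t in data)
for _ in range(200):
    last = sum(net.train_step(x, t) for x, t in data)
print(first > last)  # repeated backprop reduces the squared error
```

The Hadoop contribution of the paper is orthogonal to this: the same gradient computation is distributed across mappers so training no longer depends on a single machine.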
Pub Date: 2023-10-01 | Epub Date: 2022-01-06 | DOI: 10.1089/big.2021.0279
Aisha Batool, Muhammad Wasif Nisar, Jamal Hussain Shah, Muhammad Attique Khan, Ahmed A Abd El-Latif
Traffic sign detection (TSD) in real-time environments is of great importance for applications such as automated driving. The large variety of traffic signs, with differing appearances and spatial representations, causes huge intraclass variation. In this article, a model based on an extreme learning machine (ELM), a convolutional neural network (CNN), and scale transformation (ST), called the improved extreme learning machine network (iELMNet), is proposed to detect traffic signs in real time. The proposed model comprises a custom DenseNet-based CNN architecture, an improved region proposal network called the accurate anchor prediction model (A2PM), ST, and an ELM module. The CNN architecture uses handcrafted features such as the scale-invariant feature transform and Gabor filters to enhance the edges of traffic signs. The A2PM minimizes redundancy among extracted features to make the model efficient, and ST enables the model to detect traffic signs of different sizes. The ELM module improves efficiency by reshaping the features. The proposed model is tested on three publicly available data sets: Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR), Tsinghua-Tencent 100K, and the German Traffic Sign Detection Benchmark. It achieves average precisions of 93.31%, 95.22%, and 99.45%, respectively. These results indicate that the proposed model is more efficient than state-of-the-art sign detection techniques.
Title: iELMNet: Integrating Novel Improved Extreme Learning Machine and Convolutional Neural Network Model for Traffic Sign Detection.
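The ELM component can be sketched in a few lines: hidden weights are drawn at random and frozen, and only the linear output layer is solved in closed form. This is the generic ELM recipe, not the exact iELMNet module; the layer sizes and the toy regression target are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=20):
    """Extreme learning machine sketch: random, fixed input weights;
    only the output weights are fitted, by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random hidden weights, never trained
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a smooth 1-D function and measure the training fit.
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(3 * X).ravel()
W, b, beta = elm_train(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Because no gradient descent touches the hidden layer, training reduces to one pseudoinverse, which is the source of the efficiency the abstract attributes to the ELM module.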
Pub Date: 2023-10-01 | Epub Date: 2023-01-27 | DOI: 10.1089/big.2021.0343
Chen Tao
Anomaly detection is crucial in a variety of domains, such as fraud detection, disease diagnosis, and equipment defect detection. With the development of deep learning, anomaly detection with Bayesian neural networks (BNNs) has become a novel research topic in recent years. This article proposes a widely applicable method of outlier detection (a category of anomaly detection) using BNNs, based on uncertainty measurement. Three kinds of uncertainty arise in the predictions of BNNs: epistemic uncertainty, aleatoric uncertainty, and (model) misspecification uncertainty. While approaches from previous studies are adopted to measure epistemic and aleatoric uncertainty, this article proposes a new method of using loss functions to quantify misspecification uncertainty. These three uncertainty sources are then merged by specific combination models to construct the total prediction uncertainty. The key idea is that observations with high total prediction uncertainty should correspond to outliers in the data. The method is applied in experiments on the Modified National Institute of Standards and Technology (MNIST) dataset and the Taxi dataset. The results show that if the network is appropriately constructed and well trained, and the model parameters are carefully tuned, most anomalous images in the MNIST dataset and all abnormal traffic periods in the Taxi dataset can be detected. In addition, the method's performance is compared with previously proposed BNN anomaly detection methods and with the classical Local Outlier Factor and Density-Based Spatial Clustering of Applications with Noise methods. This study links the classification of uncertainties directly to anomaly detection and is the first to combine different uncertainty sources to refine detection outcomes instead of using only a single uncertainty at a time.
Title: Applications of Bayesian Neural Networks in Outlier Detection.
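The decompose-and-combine idea can be sketched for classification with Monte Carlo forward passes. This is a simplified illustration in which epistemic uncertainty is the mutual information and aleatoric uncertainty the expected entropy; the paper's loss-based misspecification term and its richer combination models are not reproduced:

```python
import math

def entropy(p):
    """Shannon entropy of a probability vector, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def uncertainties(mc_probs):
    """Decompose predictive uncertainty for ONE observation.
    mc_probs: class-probability vectors from repeated stochastic
    forward passes of a BNN (e.g., sampled weights)."""
    n = len(mc_probs)
    mean_p = [sum(p[c] for p in mc_probs) / n for c in range(len(mc_probs[0]))]
    predictive = entropy(mean_p)                          # total predictive entropy
    aleatoric = sum(entropy(p) for p in mc_probs) / n     # expected entropy
    epistemic = predictive - aleatoric                    # mutual information
    return epistemic, aleatoric

def total_uncertainty(mc_probs, w_e=1.0, w_a=1.0):
    """Weighted sum as a stand-in for the paper's combination models."""
    e, a = uncertainties(mc_probs)
    return w_e * e + w_a * a

inlier  = [[0.95, 0.05], [0.94, 0.06], [0.96, 0.04]]  # passes agree, confident
outlier = [[0.90, 0.10], [0.20, 0.80], [0.55, 0.45]]  # passes disagree
print(total_uncertainty(outlier) > total_uncertainty(inlier))  # True
```

Flagging the observations whose combined score exceeds a threshold (or the top-k scores) then yields the outlier set, matching the article's key idea that high total prediction uncertainty marks outliers.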
Pub Date: 2023-10-01 | DOI: 10.1089/big.2023.29062.editorial
Chinmay Chakraborty, Muhammad Khurram Khan
Title: Big Data-Driven Futuristic Fabric System in Societal Digital Transformation.
Pub Date: 2023-09-04 | DOI: 10.1089/big.2022.0021
Xinjun Lai, Guitao Huang, Ziyue Zhao, Shenhe Lin, Sheng Zhang, Huiyu Zhang, Qingxin Chen, Ning Mao
This study investigates customers' product design requirements through online comments from social media and quickly translates these needs into product design specifications. First, an exponential discriminative snowball sampling method was proposed to generate a product-related subnetwork. Second, natural language processing (NLP) was utilized to mine user-generated comments, and the Graph SAmple and aggreGatE (GraphSAGE) method was employed to embed each user's node neighborhood information in the network to jointly define a user persona. Clustering was used for market and product model segmentation. Finally, a deep learning framework of bidirectional long short-term memory with conditional random fields was introduced for opinion mining. A comment frequency-inverse group frequency indicator was proposed to quantify each user group's positive and negative opinions on various specifications of different product functions. A case study of smartphone design analysis is presented with data from Baidu Tieba, a large Chinese online community. Eleven layers of social relationships were snowball sampled, yielding 14,018 users and 30,803 comments. The proposed method produced a more reasonable user group clustering result than the conventional method. With our approach, each user group's dominant likes and dislikes for specifications could be identified immediately, and the similar and differing preferences for product features across user groups were instantly revealed. Managerial and engineering insights were also discussed.
Title: Social Listening for Product Design Requirement Analysis and Segmentation: A Graph Analysis Approach with User Comments Mining.
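The comment frequency-inverse group frequency indicator is described only by name in the abstract; a TF-IDF-style reading of it might look like the sketch below. The formula, the example data, and all names are assumptions for illustration, not the paper's definition:

```python
import math

def cf_igf(counts, group, term):
    """Comment frequency-inverse group frequency, assuming a TF-IDF-style
    construction: frequency of a term within one user group's comments,
    downweighted when the term appears across many groups.
    counts: {group: {term: number of comments mentioning term}}.
    Assumes the term occurs in at least one group."""
    total = sum(counts[group].values())
    cf = counts[group].get(term, 0) / total                 # within-group frequency
    groups_with_term = sum(1 for g in counts if counts[g].get(term, 0) > 0)
    igf = math.log(len(counts) / groups_with_term)          # rarer across groups -> higher
    return cf * igf

# Hypothetical smartphone-comment counts per user group.
comments = {
    "students":      {"battery": 40, "camera": 10},
    "commuters":     {"battery": 30, "screen": 20},
    "photographers": {"camera": 45, "screen": 5},
}
# "camera" dominates photographers' comments and is absent from commuters',
# so it scores as a distinctive concern for that group.
score = cf_igf(comments, "photographers", "camera")
```

Computing this score separately over positive and negative comments would give the two-sided opinion profile per group that the abstract describes.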
Pub Date: 2023-08-21 | DOI: 10.3390/engproc2023038091
Manying Shi, Fang Luo, Hanping Ke, Shiliang Zhang
Title: Design and Analysis of Education Personalized Recommendation System under Vision of System Science Communication.
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220569
Ramahlapane Lerato Moila, M. Velempini
This study proposes an optimised routing scheme, called OCS-AODV, for Cognitive Radio Ad Hoc Networks (CRAHNs) to enhance Quality of Service (QoS). The scheme applies the Cuckoo Search (CS) algorithm, optimised with a fitness function, to improve the performance of the Ad Hoc On-Demand Distance Vector (AODV) protocol. The objective of the study is to evaluate the proposed scheme's performance with respect to delay, packet loss, packet delivery ratio (PDR), and throughput. The literature review shows that existing routing protocols have limitations that degrade performance in dynamic environments. The proposed OCS-AODV scheme addresses these limitations by selecting reliable paths based on a fitness function that considers node lifetime, reliability, and available buffer capacity. Simulation results show that OCS-AODV outperforms the CS-DSDV and ACO-AODV schemes in terms of PDR, packet loss, delay, and throughput. The study concludes that the proposed scheme improves the QoS of routing in CRAHNs. However, a single fitness function may not be optimal for all network scenarios; multiple fitness functions may be considered in future work, and the scheme evaluated in real-world CRAHNs.
Title: Optimising the Cuckoo Search Algorithm for Improved Quality of Service in Cognitive Radio ad hoc Networks.
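The path-selection idea can be sketched as a fitness function over the criteria the abstract lists: node lifetime, reliability, and available buffer capacity. The weights, normalisation, and min/product aggregation below are assumptions for illustration, not the paper's actual formula:

```python
def path_fitness(path, w_life=0.4, w_rel=0.4, w_buf=0.2):
    """Fitness of a candidate route from the criteria named in the study
    (node lifetime, reliability, available buffer); weights and
    aggregation are illustrative assumptions.
    path: list of dicts with 'lifetime' (s), 'reliability' in [0, 1],
    and 'buffer' (free fraction) in [0, 1]."""
    lifetime = min(n["lifetime"] for n in path)   # path lives only as long as its weakest node
    reliability = 1.0
    for n in path:
        reliability *= n["reliability"]           # end-to-end delivery probability
    buffer = min(n["buffer"] for n in path)       # bottleneck queue headroom
    norm_life = min(lifetime / 100.0, 1.0)        # clamp lifetime to a [0, 1] scale
    return w_life * norm_life + w_rel * reliability + w_buf * buffer

# A stable route should score above a fragile one of equal length.
stable  = [{"lifetime": 90, "reliability": 0.98, "buffer": 0.7}] * 3
fragile = [{"lifetime": 15, "reliability": 0.80, "buffer": 0.3}] * 3
print(path_fitness(stable) > path_fitness(fragile))  # True
```

In a CS-based optimiser, each cuckoo "nest" would hold one candidate path and this score would rank nests, so AODV route discovery converges on paths that are reliable rather than merely shortest.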
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220457
A. Periola, M. Sumbwanyambe
Ice melting in the Arctic enables underwater neutrino astronomy in new regions with maritime resources. This research proposes a novel underwater network, integrated with terrestrial computing entities, to obtain underwater astronomy-associated data. The proposed network architecture also enhances underwater neutrino astronomy by increasing the number of potential neutrino presence points. Analysis shows that using the Arctic region in addition to the existing region of Lake Baikal, compared with the existing case where only Lake Baikal is utilized, increases the potential neutrino presence points by an average of 28.3-65.7%.
Title: An Underwater Network for Mini-Submarine Underwater Observatory.