Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220551
Steven Lububu, Boniface Kabaso
African swine fever (ASF) is a virulent infectious disease of pigs. It can infect domestic and wild pigs, causing severe economic and production losses. The virus can be spread through live or dead pigs and through pork products. Since there is currently no vaccine or treatment, ASF poses a major challenge and threat to the pig industry once it breaks out. The investigation shows that most existing solutions use laboratory tests to diagnose suspected ASF cases. Various machine learning (ML) techniques have also been used to diagnose ASF. However, a review of recent years shows that laboratories have difficulty diagnosing ASF with the required accuracy because of a lack of correlation between causes and effects. Inaccurate and incorrect laboratory diagnoses of ASF have proven to be a major problem for pig welfare. Consequently, misdiagnosis of ASF can result in severe direct and indirect economic losses for farmers, especially those whose income derives primarily from pig production. While several researchers have proposed the use of ML for ASF diagnosis, the application of cause-effect relationships between specific viruses and symptoms to ASF diagnosis is still missing. In this systematic literature review, we examine the methods, limitations, and approaches in the existing ML and laboratory literature on ASF diagnosis, and we evaluate the performance of ML and laboratory techniques. We also compare ML techniques with other approaches, such as causal ML and computer vision, for ASF diagnosis, and summarize the strengths and weaknesses of ML and laboratory techniques. A thorough search of relevant databases was performed, and the selected studies were examined using predefined inclusion and exclusion criteria.
Nevertheless, the study also identifies an area for improvement: the accuracy of ASF diagnosis. The study recommends combining causal reasoning with ML to develop a causal ML model capable of establishing relationships between viruses and symptoms, improving the accuracy of ASF diagnosis. The application of causal ML is presented as an alternative to laboratory diagnosis of ASF, which contributes to the field of study. Further research could investigate the possible characteristics of ASF, including virus variants originating from the ASF family. The review could provide essential information on ASF datasets based on the interpretation of results obtained from appropriate samples and validated tests, combined with laboratory information on ASF epidemiology, scenarios, clinical signs, and lesions produced by strains of different virulence. This review concludes that more studies are needed to improve the accuracy and implementation of the causal ML model for ASF diagnosis in real-time surveillance.
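The causal-reasoning recommendation can be illustrated with a minimal sketch: a small conditional-probability model linking a cause (the virus) to its observed effects (symptoms), with Bayes' rule inverting the cause-effect relationship at diagnosis time. All causes, symptoms, and probabilities below are invented placeholders, not real ASF epidemiology, and this is only one simple form a causal ML model could take.

```python
# Invented conditional probabilities P(symptom | cause) for illustration only.
P_SYMPTOM_GIVEN_CAUSE = {
    "ASF":       {"fever": 0.95, "skin_lesions": 0.80, "loss_of_appetite": 0.90},
    "swine_flu": {"fever": 0.85, "skin_lesions": 0.10, "loss_of_appetite": 0.60},
    "healthy":   {"fever": 0.02, "skin_lesions": 0.01, "loss_of_appetite": 0.05},
}
P_CAUSE = {"ASF": 0.05, "swine_flu": 0.15, "healthy": 0.80}  # invented priors

def posterior(observed):
    """Posterior P(cause | observed symptoms) via Bayes' rule,
    assuming symptoms are conditionally independent given the cause."""
    scores = {}
    for cause, prior in P_CAUSE.items():
        p = prior
        for symptom, present in observed.items():
            ps = P_SYMPTOM_GIVEN_CAUSE[cause][symptom]
            p *= ps if present else (1.0 - ps)
        scores[cause] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

# Diagnose a hypothetical pig showing all three symptoms.
post = posterior({"fever": True, "skin_lesions": True, "loss_of_appetite": True})
diagnosis = max(post, key=post.get)
```

Because the model encodes the virus-to-symptom direction explicitly, each diagnosis is traceable back to the cause-effect assumptions, which is the property the review argues laboratory correlations alone lack.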
Title: A Systematic Literature Review on Machine Learning and Laboratory Techniques for the Diagnosis of African Swine Fever (ASF) (Big Data, pp. 1-8)
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220485
Isaiah O. Adebayo, M. Adigun, P. Mudali
The advent of artificial intelligence and big data makes it nearly impossible for large-scale networks to be managed manually. To this end, software-defined networking (SDN) was introduced to provide network operators with the infrastructure for achieving greater flexibility and fine-grained control over networks. However, a critical issue to consider when incorporating SDN technology over large-scale networks such as wide area networks (WANs) is the allocation of switches to controllers. In this paper, we address the switch-to-controller allocation problem while accounting for the heterogeneity of controller capacities. Specifically, we propose two neighbourhood centrality-based algorithms that aim to minimize switch-to-controller latency. We also introduce a weighted centrality function that enables fair distribution of load across capacitated controllers. The proposed algorithms use centrality-based measures and heuristics to determine switch-to-controller allocations that consider the propagating capacity of suitable controller nodes. We evaluate the performance of the proposed algorithms on the Internet2 topology. The results show that considering the heterogeneity of controller capacities reduces load imbalance significantly. Moreover, limiting the exploration of local centrality for each node to at most two-hop neighbours reduces the complexity of the proposed algorithm, making it suitable for implementation in real-world SD-WANs.
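The two ideas the abstract describes, a local centrality score limited to two-hop neighbours and a capacity-aware greedy assignment, can be sketched as follows. The toy topology, hop distances, and capacities are invented for illustration; the paper's actual algorithms and the Internet2 topology are not reproduced here.

```python
from collections import deque

def two_hop_centrality(adj, node):
    """Local centrality: number of nodes reachable within two hops."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == 2:
            continue  # cap exploration at two-hop neighbours
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return len(seen) - 1

def allocate(adj, hop_dist, controllers, capacity):
    """Greedy allocation: each switch goes to the nearest controller (by hop
    distance) that still has spare capacity; higher centrality breaks ties."""
    load = {c: 0 for c in controllers}
    assignment = {}
    for s in sorted(adj):
        if s in controllers:
            continue
        candidates = [c for c in controllers if load[c] < capacity[c]]
        best = min(candidates,
                   key=lambda c: (hop_dist[(s, c)], -two_hop_centrality(adj, c)))
        assignment[s] = best
        load[best] += 1
    return assignment, load

# Toy topology: two controllers (c1, c2) with heterogeneous capacities
# and three switches.
adj = {
    "c1": ["s1", "s2"], "c2": ["s3"],
    "s1": ["c1"], "s2": ["c1", "s3"], "s3": ["s2", "c2"],
}
hop_dist = {("s1", "c1"): 1, ("s1", "c2"): 3,
            ("s2", "c1"): 1, ("s2", "c2"): 2,
            ("s3", "c1"): 2, ("s3", "c2"): 1}
controllers = ["c1", "c2"]
capacity = {"c1": 1, "c2": 2}

assignment, load = allocate(adj, hop_dist, controllers, capacity)
```

In this toy run, s2 spills over to c2 once c1 reaches its capacity of one switch, which is the load-balancing effect the abstract attributes to modelling heterogeneous controller capacities.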
Title: Neighbourhood Centrality Based Algorithms for Switch-to-Controller Allocation in SD-WANs (Big Data, pp. 1-6)
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220478
Sithembiso Dyubele, S. Soobramoney, D. Heukelman
Increased functionalities of smartphones, such as easy access to the internet, have offered multiple learning opportunities, especially during unprecedented periods such as the COVID-19 pandemic. Despite these benefits, academics still have significant concerns about whether students use these devices effectively for learning. This paper examines the factors affecting the use of smartphones for learning. The study used a quantitative method to pursue its aim and objectives. Data were gathered from 80 academic staff members from five departments in the Faculty of Accounting & Informatics. A stratified sampling approach was applied to obtain a more realistic and accurate estimate of the population; a simple random sample was then drawn according to the number of academic staff members in each department. The data were analysed to ensure reliability and validity, descriptive statistics were applied, and correlations were identified to develop the proposed model. The outcomes indicate that academic staff members believe that Attitudes towards Smartphones, Facilitating Conditions, Perceived Ease of Use, Perceived Usefulness, and Performance Expectations significantly affect the use of smartphones for learning. This study was limited to academic staff from five departments of a single faculty at a South African University of Technology.
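As a hedged illustration of the kind of analysis described (descriptive statistics plus correlations between survey constructs), the sketch below computes a Pearson correlation between two hypothetical Likert-scale constructs; the scores are made up and do not come from the study.

```python
from statistics import mean, pstdev

# Hypothetical per-respondent construct means on a 5-point Likert scale.
perceived_usefulness = [4.2, 3.8, 4.5, 2.9, 3.5, 4.8, 3.1, 4.0]
use_for_learning     = [4.0, 3.5, 4.6, 2.7, 3.6, 4.9, 3.0, 4.1]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

r = pearson(perceived_usefulness, use_for_learning)
```

A strong positive r between a construct and reported use is the kind of correlation that would support including that construct in the proposed model.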
Title: Factors Affecting the Use of Smartphones for Learning: A Proposed Model (Big Data, pp. 1-7)
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220492
Bh Chiloane, S. Akilimalissiga, N. Sukdeo, I. Ohiomah
As the world changes with technological innovation, the retail industry strives to keep up with emerging technologies to remain relevant in the market. Most industries are shifting towards more automated environments framed by IoT applications, and the retail industry is not immune to these innovations as it seeks to meet consumers' ever-changing needs and preferences. The South African retail industry is expected to upgrade its systems and advance to technologically sophisticated retail systems, which have already been implemented in various countries globally. With the implementation of IoT technologies around the world, South African retailers are expected to follow suit and face the challenges that may arise as a result. IoT technologies, through digital transformation, have been portrayed worldwide as an advantageous practice and a competition-leveraging tool to promote business agility and capabilities, improve business processes, and, ultimately, enhance customer satisfaction. The purpose of this paper is to assess the readiness of the South African retail industry to move away from a conventional functional system to one dominated by advanced technology-based practices. This paper also examines the specifics and challenges of adopting IoT applications from the South African retail industry's standpoint. The analysis of the acquired results reveals that South African retail's readiness still has ground to cover before IoT integration can be executed, a state orchestrated by various factors.
Title: Evaluating the Readiness of Integrating IoT into the South African Retail Industry (Big Data, pp. 1-6)
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220449
G. Nhinda, Fungai Bhunu Shava
Globally, Information Communication Technology (ICT) device usage has seen a steep rise over the last few years. This also holds in developing countries, which have embarked on connecting the unconnected or previously disadvantaged parts of their populations. This connectivity enables people to interact with cyberspace, which brings both opportunities and challenges. Opportunities include the ability to conduct business online, attend online education, and perform online banking; challenges include the cost of Internet access and, more worryingly, cyber-risks and the potential for exploitation. There remain pockets of communities that experience sporadic connectivity to cyberspace. These communities tend to be more susceptible to cyber-attacks due to a lack of, or limited, awareness of cyber-secure practices, an existing culture that might be exploited by cybercriminals, and, overall, a lacklustre approach to cyber-hygiene. We present a qualitative study conducted in rural Northern Namibia. Our findings indicate that both secure and insecure cybersecurity practices exist. However, through the Ubuntu and Uushiindaism Afrocentric lenses, practices such as sharing mobile devices without passwords mirror community unity, even though mainstream research would consider them insecure. We also propose interrogating the universal applicability of "common" secure cybersecurity practices.
Title: Cybersecurity Practices of Rural Underserved Communities in Africa: A Case Study from Northern Namibia (Big Data, pp. 1-7)
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220544
Joseph Tafataona Mtetwa, K. Ogudo, S. Pudaruth
This paper presents a novel coupled Generative Adversarial Network (GAN) for the optimization of algorithmic trading techniques, termed the Visio-Temporal Conditional Generative Adversarial Network (VTCGAN). The VTCGAN combines an image GAN and a multivariate time series GAN, offering an innovative approach for producing realistic, high-quality financial time series and chart patterns. By utilizing the generated synthetic data, the resilience and flexibility of algorithmic trading models can be enhanced, leading to improved decision-making and decreased risk exposure. Although empirical analyses have not yet been conducted, the VTCGAN shows promise as a valuable tool for optimizing algorithmic trading techniques, potentially leading to better performance and generalizability when applied to actual financial records.
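A minimal structural sketch of the coupled, conditional generation the abstract describes might look like the following: a shared latent code and a condition label feed two heads, one emitting a multivariate time series and one emitting a chart-like image. The dimensions and single-layer heads are invented stand-ins; a real VTCGAN would use deep generators trained adversarially against discriminators for each modality.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, COND, T, FEATURES, H, W = 16, 4, 32, 5, 8, 8

# Randomly initialised weights stand in for trained generator parameters.
W_ts  = rng.standard_normal((LATENT + COND, T * FEATURES)) * 0.1
W_img = rng.standard_normal((LATENT + COND, H * W)) * 0.1

def generate(z, cond_onehot):
    """Map one shared (latent, condition) pair to both modalities so the
    generated series and chart image correspond to the same scenario."""
    x = np.concatenate([z, cond_onehot])
    series = np.tanh(x @ W_ts).reshape(T, FEATURES)  # multivariate time series
    image  = np.tanh(x @ W_img).reshape(H, W)        # chart-pattern image
    return series, image

z = rng.standard_normal(LATENT)
cond = np.eye(COND)[1]  # hypothetical chart-pattern class label
series, image = generate(z, cond)
```

The key design point illustrated is the shared conditioning: both outputs are functions of the same latent code and class label, so the synthetic time series and its chart pattern stay mutually consistent.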
Title: VTCGAN: A Proposed Multimodal Approach to Financial Time Series and Chart Pattern Generation for Algorithmic Trading (Big Data, pp. 1-5)
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220564
Zahir Toufie, Boniface Kabaso
Web browsers have long sought to host and execute feature-rich, compute-intensive, and complex applications, or simply Compute-Intensive Applications (CIAs), within their Execution Environment (EE) with native desktop performance. There have been Adobe Shockwave, Macromedia Flash, Java Applets, the JavaScript programming language (JS) and, recently, the WebAssembly programming language (WASM), as well as short-lived technologies such as Microsoft ActiveX, Silverlight and Apple QuickTime. One hindrance to web browsers hosting and executing CIAs with native desktop performance is that there is currently no web browser technology with a software architecture and design that can support them. This paper reviews the evolution of the Web as an application platform since the rise of WASM over the last decade or so, in the context of application performance relative to native desktop application performance, and proposes where researchers should focus their efforts to advance the Web as an application platform capable of executing CIAs.
In future work, we plan to extend our study to include theoretical contributions, such as insights into how to improve the performance of web applications based on various software architectures and designs for web browser EEs; methodological contributions, such as methods and approaches developed, adapted or enhanced that detail software architectures and designs for web browser EEs with higher performance than currently available; and practical contributions that lay the groundwork for a production-ready web browser EE based on the prototype produced by our study.
Title: The Next Evolution of Web Browser Execution Environment Performance (Big Data, pp. 1-7)
Pub Date: 2023-08-03 | DOI: 10.1109/icABCD59051.2023.10220479
Tinashe Crispen Gadzirai, W. T. Vambe
Pneumonia remains the most common reason for inpatient stays and fatalities among adults and children in the world, and it became worse during the COVID-19 pandemic. Many African countries, such as South Africa, were and still are seriously affected. The situation is worse in rural areas for several reasons, among them a shortage of X-ray machines and having no or few radiologists to analyze and interpret X-ray images to determine whether they show a normal chest or pneumonia. The ability to accurately classify the two types of pneumonia can guarantee effective treatment, which boosts survival chances. Artificial Intelligence (AI) is a cost-effective approach and can play a pivotal role in analyzing and interpreting X-ray images. This research used the CRoss Industry Standard Process for Data Mining (CRISP-DM) methodology to develop a simple REST API model that classifies a chest X-ray image as normal, pneumonia caused by bacteria, or pneumonia caused by a virus. The Multi-Layer Perceptron (MLP) model had a training accuracy of 73.89%, a validation accuracy of 75.46%, and a test accuracy of 75.46%, whereas LeNet achieved 78.49%, 76.51%, and 76.51%, respectively. This study demonstrated to the public that AI models can be developed to aid health professionals in the early diagnosis, classification, analysis, and interpretation of X-ray images for pneumonia. In the future, the model should translate the English interpretations into South African local languages such as isiXhosa, Zulu, and Venda.
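The classification step behind such a REST API can be sketched, under assumptions, as a single MLP forward pass over a flattened X-ray mapping to the three labels; the input size and weights below are random stand-ins rather than the paper's trained MLP or LeNet.

```python
import numpy as np

rng = np.random.default_rng(42)
IN, HIDDEN, CLASSES = 28 * 28, 64, 3  # hypothetical input resolution
LABELS = ["normal", "bacterial_pneumonia", "viral_pneumonia"]

# Random stand-ins for trained weight matrices.
W1 = rng.standard_normal((IN, HIDDEN)) * 0.05
W2 = rng.standard_normal((HIDDEN, CLASSES)) * 0.05

def classify(image):
    """One MLP forward pass: ReLU hidden layer, then softmax over labels."""
    h = np.maximum(0.0, image.flatten() @ W1)   # hidden activations
    logits = h @ W2
    probs = np.exp(logits - logits.max())        # numerically stable softmax
    probs /= probs.sum()
    return LABELS[int(probs.argmax())], probs

# A REST endpoint would decode the uploaded image, call classify(),
# and return the label and probabilities as JSON.
label, probs = classify(rng.random((28, 28)))
```

In a deployed service this function would sit behind the API route, with the response carrying both the predicted label and the per-class probabilities so clinicians can see the model's confidence.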
A Rest API to Classify Pneumonia Infection From Chest X-ray Images Using Multi-Layer Perceptron and LeNet
Pub Date : 2023-08-03DOI: 10.1109/icABCD59051.2023.10220512
Lusani Mamushiane, A. Lysko, H. Kobo, Joyce B. Mwangama
Field trials and experimentation are crucial for accelerating the adoption of standalone (SA) 5G in Africa. Traditionally, only network operators and vendors had the opportunity for practical experimentation, due to proprietary systems and licensing restrictions. However, the emergence of open-source cellular stacks and affordable software-defined radio (SDR) systems is changing this landscape. Although these technologies are not yet fully developed for complete 5G systems, their progress is rapid, and the research community is using them to test use cases such as network slicing. Building a 5G network is complex, especially in uncontrolled RF environments with fluctuating physical conditions such as noise and interference, which necessitates proper RF planning and performance optimization. The complexity is further compounded by the variety of 5G end-user devices, each with unique configurations and integration requirements. Some devices are network-locked and require rooting to connect to a 5G testbed, while others need expert APN configuration or have specific compatibility requirements such as sub-carrier spacing (SCS) and duplex mode. Unfortunately, vendors often provide limited information about RF compatibility, making trial-and-error techniques necessary to uncover compatibility details. This paper presents best practices for deploying and configuring a 5G SA testbed, focusing on the integration challenges of consumer-grade devices, specifically 5G mobile phones connected to a 5G testbed. Additionally, the paper offers solutions for troubleshooting integration errors and performance issues, as well as a brief discussion on the realization of basic network slicing in a 5G SA network.
Deploying a Stable 5G SA Testbed Using srsRAN and Open5GS: UE Integration and Troubleshooting Towards Network Slicing
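The trial-and-error UE integration the abstract describes can be partly front-loaded by checking a device's published capabilities against the cell configuration before attaching it. The sketch below is hypothetical: the field names and the capability-sheet format are illustrative inventions, not an srsRAN or Open5GS API, and real compatibility also depends on factors vendors rarely document.

```python
# Hypothetical pre-flight check: compare a gNB cell configuration against a
# UE capability sheet to flag obvious mismatches (band, SCS, duplex mode)
# before spending lab time on trial-and-error attach attempts.
from dataclasses import dataclass

@dataclass
class CellConfig:
    band: str       # NR band of the cell, e.g. "n78"
    scs_khz: int    # sub-carrier spacing in kHz, e.g. 30
    duplex: str     # "TDD" or "FDD"

@dataclass
class UeCapability:
    bands: set      # NR bands the UE advertises
    scs_khz: set    # SCS values the UE supports
    duplex: set     # duplex modes the UE supports

def compatibility_issues(cell, ue):
    """Return human-readable mismatches between a cell config and a UE
    capability sheet; an empty list means no obvious conflict (attach may
    still fail for undocumented reasons, e.g. network locking)."""
    issues = []
    if cell.band not in ue.bands:
        issues.append(f"UE does not list band {cell.band}")
    if cell.scs_khz not in ue.scs_khz:
        issues.append(f"UE does not support {cell.scs_khz} kHz SCS")
    if cell.duplex not in ue.duplex:
        issues.append(f"UE does not support {cell.duplex} operation")
    return issues
```

Running such a check against each candidate phone narrows the pool before any APN configuration or rooting is attempted.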
Pub Date : 2023-08-03DOI: 10.1109/icABCD59051.2023.10220560
Yolo Madani, Adeyinka K. Akanbi, Mpho Mbele, M. Masinde
The application of modern technologies in the environmental monitoring domain, through the deployment of interconnected Internet of Things (IoT) sensors, legacy systems, and enterprise networks, has become an invaluable component of realising an efficient environmental monitoring system. Monitoring requirements differ widely across environments, leading to ad-hoc implementations and the integration of heterogeneous systems and applications. The resulting distributed systems lack flexibility and suffer from inherent issues such as data incompatibility, lack of data integration, and poor system interoperability. Semantic representation of data is necessary to combine data from heterogeneous sources, consolidate it into meaningful and valuable information, and unlock the reusability of data between monitoring systems. Using a Multi-Hazard Early Warning System (MHEWS) as a case study, this research explores how a scalable semantic framework can ensure data representation in machine-readable languages for seamless data integration and interoperability across heterogeneous sub-systems. The study hypothesises that the challenges of data representation, data integration, and system interoperability within an MHEWS can be overcome through the application of semantic middleware.
A Scalable Semantic Framework for an Integrated Multi-Hazard Early Warning System
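The core idea of semantic middleware, normalising heterogeneous source data into a shared machine-readable vocabulary so sub-systems can query it uniformly, can be illustrated with subject-predicate-object triples. This is a toy sketch: the vocabulary terms and sensor names are invented for illustration, and a production MHEWS would use RDF/OWL with a shared ontology rather than plain Python tuples.

```python
# Toy semantic mapping layer: two sources with different field names are
# normalised into (subject, predicate, object) triples over a shared
# vocabulary, so downstream sub-systems query one representation.

def to_triples(source_id, reading):
    """Map a source-specific reading dict onto shared-vocabulary triples."""
    subject = f"sensor:{source_id}"
    # Per-source field names differ; this mapping hides the heterogeneity.
    mapping = {"temp_c": "obs:temperature", "t": "obs:temperature",
               "rain_mm": "obs:rainfall", "precip": "obs:rainfall"}
    return [(subject, mapping[k], v) for k, v in reading.items() if k in mapping]

def query(store, predicate):
    """All (subject, value) pairs for one shared predicate, regardless of
    which source the data originally came from."""
    return [(s, o) for (s, p, o) in store if p == predicate]

# An IoT weather station and a legacy logger report the same phenomena
# under different field names; after mapping, both are queryable together.
store = []
store += to_triples("weather-station-1", {"temp_c": 31.5, "rain_mm": 2.0})
store += to_triples("legacy-logger-7", {"t": 30.8, "precip": 0.0})
```

An early-warning rule can then be expressed once against the shared predicates (e.g. all `obs:rainfall` readings above a threshold) instead of once per source format, which is the interoperability gain the framework targets.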