Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220482
Eneia Filipe Vilanculos, T. Shongwe, Ali N. Hasan
Identifying and classifying vegetables on big farms is a challenge, especially when the vegetables are similar in colour and shape. Manual identification of vegetables takes time and is prone to errors. Automatic classification in precision farming, which increasingly uses image processing and pattern recognition to identify fruits and vegetables, is therefore becoming essential for identifying and classifying vegetables on big farms. In this paper, an automatic system for the identification and classification of green leafy vegetables similar in colour and shape was evaluated using five deep learning models: a baseline CNN, MobileNet, VGG-16, Inception V3 and ResNet 50. The accuracies achieved by these models vary from 67% to 99%, with MobileNet achieving the highest accuracy.
{"title":"Identification and classification of Green Leafy Vegetables using CNN models","authors":"Eneia Filipe Vilanculos, T. Shongwe, Ali N. Hasan","doi":"10.1109/icABCD59051.2023.10220482","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220482","url":null,"abstract":"Identifying and classifying vegetables in big farms is a challenge, especially when the vegetables are similar in colour and shape. Manual identification of vegetables takes time and is prone to errors. Therefore, the automatic classification process of the precision farming, increasingly using image processing and pattern recognition to identify fruits and vegetables, is becoming essential to identify and classify vegetables in big farms. In this paper, an automatic system for the identification and classification of green leafy vegetables, similar in colour and shape was evaluated using five different deep learning models such as CNN, MobileNet, VGG-16, Inception V3 and ResNet 50. The accuracies of these models achieved in this paper vary from 67% to 99%. The model with the highest accuracy is the MobileNet.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"122 5","pages":"1-6"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72376715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
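The paper's model comparison boils down to scoring each classifier on a common held-out test set and picking the best. A minimal stdlib-only sketch of that tabulation step (the labels and predictions below are illustrative, not the paper's data):

```python
# Compute per-model accuracy on a shared test set and select the top model,
# mirroring the paper's comparison of five networks (67% to 99% accuracy).
def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def best_model(results, y_true):
    """results maps model name -> list of predicted labels."""
    scores = {name: accuracy(y_true, preds) for name, preds in results.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Illustrative vegetable labels only (not the paper's dataset).
y_true = ["spinach", "kale", "chard", "kale"]
results = {
    "MobileNet": ["spinach", "kale", "chard", "kale"],
    "VGG-16":    ["spinach", "kale", "kale", "kale"],
}
best, scores = best_model(results, y_true)
print(best, scores[best])  # MobileNet 1.0
```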
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220459
V. Maphosa
Artificial Intelligence (AI) is increasingly ubiquitous, transforming our everyday lives. AI is expected to improve human life amid growing concerns that unregulated AI could lead to disastrous outcomes. AI algorithms have become complex and more challenging to follow. The disruptive nature of AI is seen in state power through surveillance, facial recognition, and deployment of lethal autonomous weapons systems by superpowers. This review paper analyses how AI is deployed for state power to enhance policing and military operations. AI deployment by the police and army increases operational excellence and efficiency and offers simulated training, and predictive capabilities, while unregulated use raises ethical and human rights violations. Given the foreseeable pervasiveness and rapid AI development, more research is required to restrict coercive state power. This review paper raises awareness of AI's affordances and contributes to emergent literature on constraints and ethical and legal issues. It raises interest among scholars, policymakers, and practitioners for collaborative research. AI will reinforce the technology divide as developing countries face infrastructural, financial and digital skills barriers. The review concludes with future research implications.
{"title":"Artificial Intelligence and State Power","authors":"V. Maphosa","doi":"10.1109/icABCD59051.2023.10220459","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220459","url":null,"abstract":"Artificial Intelligence (AI) is increasingly ubiquitous, transforming our everyday lives. AI is expected to improve human life amid growing concerns that unregulated AI could lead to disastrous outcomes. AI algorithms have become complex and more challenging to follow. The disruptive nature of AI is seen in state power through surveillance, facial recognition, and deployment of lethal autonomous weapons systems by superpowers. This review paper analyses how AI is deployed for state power to enhance policing and military operations. AI deployment by the police and army increases operational excellence and efficiency and offers simulated training, and predictive capabilities, while unregulated use raises ethical and human rights violations. Given the foreseeable pervasiveness and rapid AI development, more research is required to restrict coercive state power. This review paper raises awareness of AI's affordances and contributes to emergent literature on constraints and ethical and legal issues. It raises interest among scholars, policymakers, and practitioners for collaborative research. AI will reinforce the technology divide as developing countries face infrastructural, financial and digital skills barriers. The review concludes with future research implications.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"20 1","pages":"1-5"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78064363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220537
Mlungisi Molefe, K. Sibiya, B. Nleya
As the future of networking enters a new era in which every physical device is connected to the internet, termed the Internet of Things (IoT), the number of online connected devices is increasing rapidly, leading to more bandwidth-hungry and data-consuming devices. The fifth generation (5G) of mobile communication has already been deployed in multiple countries, so researchers have shifted their focus to the sixth generation (6G) of mobile communication to cater for extensive coverage and a massive number of IoT devices. A promising architecture and technology for coping with a massive number of online devices and extensive coverage is a joint optical wireless transport network, which offers comparably ultra-high system capacity and extremely low latency while maintaining improved quality of service. Furthermore, an optical wireless transport network can accommodate high-speed mobility for frequently moving end-user devices, which is essential for 6G. In this paper, our focus is to explore and propose an optical wireless transport network architecture scheme that caters for IoT as well as networks beyond 5G. We thus propose an innovative Optical-Backhaul and Wireless Access (OBWA) network architecture as a favourable solution for future networks. We further present a joint channel and route allocation (JCRA) scheme for achieving optimal quality of experience. Performance evaluation of the proposed JCRA scheme for the OBWA network architecture shows a significant improvement in network throughput as well as end-to-end delay under varying traffic loads and varying flow channels.
{"title":"A Resources Allocation Scheme For Joint Optical Wireless Transport Networks","authors":"Mlungisi Molefe, K. Sibiya, B. Nleya","doi":"10.1109/icABCD59051.2023.10220537","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220537","url":null,"abstract":"As the future of networking dives into a new era of connecting every single physical device into the internet termed Internet of Things (IoT), this significantly means a rapid increase in the number of online connected devices, which leads to more bandwidth hungry and data consuming devices. The fifth generation (5G) of mobile communication has been deployed already in multiple countries, therefore researchers have migrated their focus to the sixth generation (6G) of mobile communication to cater for extensive coverage and massive number of IoT devices. A promising architecture and technology to cope with massive number of online devices and extensive coverage is a joint optical wireless transport network which offers comparably ultra-high systems capacity and extremely low latency while maintaining an improved quality of service. Furthermore, an optical wireless transport network can accommodate high speed mobility for frequently moving end user devices which is essential for 6G. In this paper our focus is to explore and propose an ultimate optical wireless transport network architecture scheme that will cater for IoT as well as networks beyond 5G. We thus propose an innovative Optical-Backhaul and Wireless Access (OBWA) network architecture as a favorable solution for future networks. We further present a joint channel and route allocation (JCRA) scheme for achieving optimal quality of experience. Performance evaluation of the proposed JCRA scheme for OBWA network architecture show a significant improvement in the network throughput as well as the network end-to-end delay despite varying load traffic or varying flow channels.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"28 1","pages":"1-7"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84572092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
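The abstract does not detail the JCRA algorithm itself, but a joint channel-and-route allocation can be sketched greedily: each flow takes the lowest-delay route, then the least-loaded channel on that route. This is a hypothetical toy, not the paper's scheme; the route names, delays, and channel counts are invented:

```python
# Greedy joint channel-and-route allocation sketch (illustrative only):
# pick the lowest-delay route per flow, then the least-loaded channel on it.
def allocate(flows, routes, n_channels):
    """routes maps route name -> delay; returns flow -> (route, channel)."""
    load = {(r, c): 0 for r in routes for c in range(n_channels)}
    assignment = {}
    for flow in flows:
        route = min(routes, key=routes.get)            # lowest-delay route
        channel = min(range(n_channels),
                      key=lambda c: load[(route, c)])  # least-loaded channel
        load[(route, channel)] += 1
        assignment[flow] = (route, channel)
    return assignment

alloc = allocate(["f1", "f2", "f3"], {"r1": 2.0, "r2": 5.0}, n_channels=2)
```

A real scheme would also model per-route capacity and blocking; this sketch only shows the joint selection order.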
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220487
Dephney Mathebula
Future computing entities should be capable of accessing computing resources for data-intensive algorithm execution. This should be realizable in operational contexts where internet access to the cloud is challenging, a scenario typical of developing contexts. In addition, future computing entities make use of multiple operating systems in contexts where computing resources are reduced by the use of partitions. Partitioning is recognized to reduce the number of accessible computing resources and increase the overhead associated with computing resource allocation. The presented research proposes an architecture in which an operating system is logically stowed and selectively activated without the use of partitions. This frees up computing resources previously locked in different partitions and reduces the computing resource overhead. Analysis shows that the proposed framework increases the accessible computing resources by 14.6% on average and reduces the computing resource overhead by 21% on average.
{"title":"Disease Motivated Model for Future Dynamic Computing","authors":"Dephney Mathebula","doi":"10.1109/icABCD59051.2023.10220487","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220487","url":null,"abstract":"Future computing entities should be capable of accessing computing resources for data-intensive algorithm execution. This should be realizable in operational contexts where internet accessibility to cloud contexts becomes challenging. Such a scenario describes developing contexts. In addition, future computing entities also make use of multiple operating systems in a context where the computing resources are reduced due to the use of partitions. The use of partitions is recognized to reduce the number of accessible computing resources and increase the overhead associated with computing resource allocation. The presented research proposes an architecture where an operating system is logically stowed and selectively activated without involving the use of partition. This frees up the number of computing resources previously locked in different partition systems and reduces the computing resource overhead. Analysis shows that the proposed framework increases the accessible computing resources by 14.6% on average. In addition, the computing resource overhead is reduced by 21 % on average.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"32 1","pages":"1-6"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89283720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advancement of network-based devices, communication networks are growing rapidly and becoming more complex; the resulting large and heterogeneous network architectures have brought many challenges in network management. Managing the network has therefore become increasingly challenging given existing network architectures. In this study, we investigated how network operators operate, maintain and secure telecommunications networks. The study also investigated the effectiveness of Software Defined Networking (SDN) in improving network management, and how the architecture minimizes the challenges users face. To improve network management with SDN using the OpenFlow protocol, we created network topologies and configured devices using Graphical Network Simulator-3, Oracle VM VirtualBox Manager, and the Mininet VM. Our approach implemented both Git and Ansible in a centralized network architecture to solve the problems facing existing network architectures amid the rapid growth of network-based devices on the Internet. This paper shows how to use Ansible playbooks to manage a network and overcome these challenges. The simulation results show that the proposed scheme performs better in terms of efficiency and flexibility than the traditional OpenFlow protocol. These improvements are achieved through the separation of the control and data planes, allowing more centralized network management and easier implementation of network policies.
{"title":"Improving Network Management with Software Defined Networking using OpenFlow Protocol","authors":"Koketso Molemane Rodney Mokoena, Ramahlapane Lerato Moila, Prof Mthulisi Velempini","doi":"10.1109/icABCD59051.2023.10220519","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220519","url":null,"abstract":"With the advancement of network-based devices, resulting in communication networks also growing rapidly and becoming more complex, resulting in large and heterogeneous network architecture has brought a lot of challenges in network management. Therefore, managing the network has become an increasingly a challenge given the existing network architectures. In this study, we have investigated how network operators operate, maintain and secure telecommunications networks. The study has also investigated the effectiveness of Software Defined Networking (SDN) in improving network management. The study has also investigated how the architecture minimizes the challenges users face. To improve network management with SDN using the OpenFlow protocol, we created network topologies and configured devices using the graphical network simulator 3, Oracle VM VirtualBox Manager, and Mininet VM. Our approach implemented both Git and Ansible in a centralized network architecture to solve the problems facing existing network architectures with the rapid growth of network-based devices on the Internet. This research paper has shown how to use Ansible playbooks to manage your network and overcome the challenges you face. The simulation results shows that the proposed scheme performs better in terms of efficiency and flexibility than the traditional OpenFlow protocol. These improvements have been achieved through the separation of the control and data planes, allowing for more centralized network management and easier implementation of network policies.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"33 1","pages":"1-5"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87963992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
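As a concrete illustration of the playbook-driven management the study describes, a minimal Ansible playbook might point a group of Open vSwitch-based switches at an OpenFlow controller. The inventory group name, bridge name, and controller address below are hypothetical placeholders, not taken from the paper; `ovs-vsctl set-controller` is the standard Open vSwitch command and 6653 the standard OpenFlow port.

```yaml
# Hypothetical playbook: group, bridge, and address are placeholders.
- name: Point access switches at the OpenFlow controller
  hosts: openflow_switches        # assumed inventory group
  gather_facts: false
  vars:
    controller_ip: 192.0.2.10     # documentation-range address
  tasks:
    - name: Set the controller on bridge br0
      ansible.builtin.command: >
        ovs-vsctl set-controller br0 tcp:{{ controller_ip }}:6653
```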
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220553
T. Sefara, Mapitsi Rangata
Twitter is one of the microblogging sites with millions of daily users. Broadcast companies use Twitter to share short messages to engage audiences or share opinions about a particular topic or product. With the large number of conversations available on Twitter, it is difficult to identify the category of topics in the broadcasting domain. This paper proposes the use of unsupervised learning to generate topics from unlabelled tweet data sets in the broadcasting domain using the latent Dirichlet allocation (LDA) method. Approximately six groups of topics were generated, and each group was assigned a label or category. These labels were used to label the data by taking the dominant label in each tweet as its main category. Supervised learning was then conducted to train six machine learning models: multinomial logistic regression, XGBoost, decision trees, random forest, support vector machines, and a multilayer perceptron (MLP). The models learned from the data to predict the category of each tweet in the testing data and were evaluated using accuracy and the F1 score. The linear support vector machine and the MLP obtained better classification results than the other trained models.
{"title":"Topic Classification of Tweets in the Broadcasting Domain using Machine Learning Methods","authors":"T. Sefara, Mapitsi Rangata","doi":"10.1109/icABCD59051.2023.10220553","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220553","url":null,"abstract":"Twitter is one of the microblogging sites with millions of daily users. Broadcast companies use Twitter to share short messages to engage or share opinions about a particular topic or product. With a large number of conversations available on Twitter, it is difficult to identify the category of topics in the broadcasting domain. This paper proposes the use of unsupervised learning to generate topics from unlabelled tweet data sets in the broadcasting domain using the latent Dirichlet allocation (LDA) method. Approximately six groups of topics were generated and each group was assigned a label or category. These labels were used to label the data by finding the dominating label in each tweet as the main category. Supervised learning was conducted to train six machine learning models which are multinomial logistic regression, XGBoost, decision trees, random forest, support vector machines, and multilayer perceptron (MLP). The models were able to learn from the data to predict the category of each tweet from the testing data. The models were evaluated using accuracy and the f1 score. Linear support vector machine and MLP obtained better classification results compared to other trained models.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"61 1","pages":"1-6"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74084993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
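The labelling step described in this abstract, reducing each tweet's LDA topic distribution to its dominant topic, can be sketched in a few lines. The topic names and probabilities below are illustrative, not the paper's actual six categories:

```python
# Assign each tweet the category of its dominant LDA topic (argmax over the
# per-tweet topic probability distribution), as the paper's labelling step does.
TOPIC_NAMES = ["sport", "news", "music", "politics", "weather", "adverts"]

def dominant_label(topic_dist):
    """topic_dist: per-topic probabilities for one tweet (sums to ~1)."""
    best = max(range(len(topic_dist)), key=topic_dist.__getitem__)
    return TOPIC_NAMES[best]

dist = [0.05, 0.60, 0.10, 0.10, 0.05, 0.10]
print(dominant_label(dist))  # news
```

These per-tweet labels then serve as targets for the supervised classifiers.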
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220556
Makhabane Molapo, Chunling Tu, Deao Du Plessis, Shengzhi Du
Livestock management and monitoring systems play a crucial role in farm operations. This paper proposes a system for the management and monitoring of livestock on a farm using deep learning techniques. Traditional methods of monitoring livestock involve manual observation, which can be time-consuming and unreliable. Various systems have been developed; however, challenges remain in livestock classification and counting, including occlusion, animal overlap, and shadows. To address these challenges, this paper presents a monitoring system for livestock under different conditions based on the end-to-end deep learning model You Only Look Once version 5 (YOLOv5). The model performs feature extraction on the original image with the original YOLOv5 backbone network and detects livestock of different sizes for counting in each anchor frame. The model also identifies and tracks individual animals. A Kaggle dataset containing different animals, collected in real time, is used to validate the proposed system, as YOLOv5 relies heavily on data augmentation to improve its detection and tracking performance. Scaling, resizing, and splitting of the dataset are done with the Roboflow application. Additionally, this paper demonstrates recent research in Faster Region-based Convolutional Neural Networks (Faster R-CNN) and compares its backbones with the original YOLOv5 backbone. The TensorBoard graphs from Colab show that the proposed system outperformed the R-CNN variants, achieving 93% mAP@0.5, making it a promising option for intelligent farm monitoring and management.
{"title":"Management and Monitoring of Livestock in the Farm Using Deep Learning","authors":"Makhabane Molapo, Chunling Tu, Deao Du Plessis, Shengzhi Du","doi":"10.1109/icABCD59051.2023.10220556","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220556","url":null,"abstract":"Livestock management and monitoring system play a crucial role in farm operations. This paper proposes a system for the management and monitoring of livestock on a farm using deep learning techniques. Traditional methods of monitoring livestock involve manual observation, which can be time-consuming and unreliable. Various systems have been developed, however, there are still challenges existing in present livestock classification and counting, including occlusion, animal overlapping, shadow, etc. To improve all these challenges, this paper presents a monitoring system of livestock under different conditions by the end-to-end deep learning model of You Only Look Once version 5 (YOLOv5). The suggested model conducts feature extraction on the original image with the original YOLOv5 backbone network and detects livestock of different sizes for counting on each anchor frame. Additionally, this model identifies and tracks individual animals. The Kaggle dataset collected in real-time containing different animals is used as YOLOv5 relies heavily on data augmentation to improve its detection and tracking performance and validate the proposed system. The scaling, resizing, and manipulation of the splitting dataset are done by the Roboflow application. Additionally, this paper seeks to demonstrate the latest research in utilizing Faster Regional convolutional neural networks (R-CNN) and compare its backbones with the original YOLOv5 backbone. The tensor board graphs from Colab show that this proposed system outperformed other R-CNN, achieving an accuracy of 93% on mAP@0.5, making it a promising option for intelligent farm monitoring and managing.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"6 1","pages":"1-6"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79674554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
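The counting stage of such a detector reduces to filtering detections by confidence and tallying per class. A stdlib-only sketch, assuming simplified (class, confidence) detection pairs; real YOLOv5 output also carries bounding-box coordinates, which are omitted here:

```python
# Tally YOLO-style detections per livestock class after a confidence cut.
from collections import Counter

def count_livestock(detections, conf_threshold=0.5):
    """detections: iterable of (class_name, confidence) pairs."""
    kept = [cls for cls, conf in detections if conf >= conf_threshold]
    return Counter(kept)

# Illustrative detections only (not the paper's data).
dets = [("cow", 0.92), ("sheep", 0.81), ("cow", 0.40), ("cow", 0.77)]
counts = count_livestock(dets)
print(dict(counts))  # {'cow': 2, 'sheep': 1}
```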
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220542
Fati Tahiru, Steven. Parbanath
Understanding student behaviour is crucial for creating personalised learning and other interventions. Educational stakeholders continue to investigate diverse solutions for improving student learning behaviour in higher education institutions. One solution that stands out is to gain insights and identify trends and patterns in data about students' learning behaviour for decision-making. Exploratory Data Analysis (EDA) is a method for analysing and summarising data in order to gain insights and recognise patterns or trends about an entity. This study utilises Exploratory Data Analysis of students' logs in a virtual learning environment to distinguish the characteristics and habits of students who graduate from those who do not graduate from higher education institutions. The process flow for implementing EDA can act as a helpful guide for educational stakeholders. The study findings indicate that graduated students revise much more frequently than non-graduated students. However, there were no differences in habits of early access to the learning materials before the start of the programme. Academic stakeholders can utilise the approach to make better decisions when assessing students' behaviour and trends in the virtual environment.
{"title":"Using an Exploratory Analytical Approach to Distinguish the Habits of Graduating and Non-Graduating Students in a Virtual Learning Environment","authors":"Fati Tahiru, Steven. Parbanath","doi":"10.1109/icABCD59051.2023.10220542","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220542","url":null,"abstract":"Understanding student behaviour is crucial for creating personalised learning and other interventions. Educational stakeholders continue investigating diverse solutions to improve student learning behaviour in higher educational institutions. One solution that stands out is to gain insights and identify the trends and patterns in data about students learning behaviour for decision-making. Exploratory Data Analysis (EDA) is a method for analysing and summarising data in order to get insights and recognise patterns or trends about an entity. This study seeks to utilise Exploratory Data Analysis to analyse students' logs in the virtual learning environment to distinguish the characteristics/habits of students who graduate and students who do not graduate from higher educational institutions. The process flow for implementing EDA can act as a helpful guide for educational stakeholders. The study findings indicate that the revision trend of graduated students is much more frequent than that of non-graduated students. However, there were no differences in habits in the early access to the learning materials before the start of the program. Academic stakeholders can utilise the approach to enable them to make better decisions when assessing students' behaviour and trends in the virtual environment.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"187 1","pages":"1-8"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80670616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
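One EDA step behind the revision-trend finding can be sketched with the standard library: given a log of (student, action) click events and a set of graduated students, compare mean revision clicks per group. The log schema and data below are hypothetical, not the study's dataset:

```python
# Compare mean revision activity between graduated and non-graduated students
# from a simple (student_id, action) click log.
from collections import Counter

def mean_revisions(log, graduated):
    per_student = Counter(sid for sid, action in log if action == "revision")
    grads = [n for sid, n in per_student.items() if sid in graduated]
    non_grads = [n for sid, n in per_student.items() if sid not in graduated]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(grads), avg(non_grads)

# Illustrative log: s1 graduated, s2 and s3 did not.
log = [("s1", "revision"), ("s1", "revision"), ("s2", "revision"),
       ("s1", "revision"), ("s3", "revision")]
g, n = mean_revisions(log, graduated={"s1"})
print(g, n)  # 3.0 1.0
```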
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220467
Patrick Mwansa, Boniface Kabaso
Blockchain technology in electronic voting has emerged as an alternative to other electronic and paper-based voting systems to minimize inconsistencies and redundancies. However, past experiences indicate limited success due to scalability, speed, and privacy issues. This systematic literature review examines the methods, constraints, and approaches in the existing literature on blockchain-based electoral vote-counting solutions. A thorough search of pertinent databases was performed, and selected studies were assessed based on predefined inclusion and exclusion criteria. The review's findings reveal that most existing solutions employ smart contracts and various cryptographic algorithms to create secure and transparent voting systems. However, the study also pinpoints areas that require improvement, such as scalability, privacy, and accessibility. The review recommends exploring different combinations of blockchain platforms, cryptographic algorithms, and programming languages to develop secure and transparent voting systems. Additionally, future research could investigate the potential benefits and challenges of incorporating Internet of Things (IoT) devices, consensus mechanisms, and other technologies into the voting process. The review concludes that more research is needed to enhance the security and transparency of blockchain-based voting systems.
{"title":"Blockchain Electoral Vote Counting Solutions: A Comparative Analysis of Methods, Constraints, and Approaches","authors":"Patrick Mwansa, Boniface Kabaso","doi":"10.1109/icABCD59051.2023.10220467","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220467","url":null,"abstract":"Blockchain technology in electronic voting has emerged as an alternative to other electronic and paper-based voting systems to minimize inconsistencies and redundancies. However, past experiences indicate limited success due to scalability, speed, and privacy issues. This systematic literature review examines the methods, constraints, and approaches in the existing literature on blockchain-based electoral vote-counting solutions. A thorough search of pertinent databases was performed, and selected studies were assessed based on predefined inclusion and exclusion criteria. The review's findings reveal that most existing solutions employ smart contracts and various cryptographic algorithms to create secure and transparent voting systems. However, the study also pinpoints areas that require improvement, such as scalability, privacy, and accessibility. The review recommends exploring different combinations of blockchain platforms, cryptographic algorithms, and programming languages to develop secure and transparent voting systems. Additionally, future research could investigate the potential benefits and challenges of incorporating Internet of Things (IoT) devices, consensus mechanisms, and other technologies into the voting process. The review concludes that more research is needed to enhance the security and transparency of blockchain-based voting systems.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"1 1","pages":"1-10"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77640701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
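The tamper-evidence property the reviewed systems rely on can be illustrated with a toy hash-chained vote ledger using only the standard library. This is a pedagogical sketch only: real systems use smart contracts and distributed consensus, neither of which this single-process chain models:

```python
# Toy append-only vote ledger: each record commits to the previous record's
# hash, so any later edit breaks verification of the chain.
import hashlib
import json
from collections import Counter

def append_vote(chain, candidate):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"candidate": candidate, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        body = {"candidate": rec["candidate"], "prev": prev}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
for c in ["A", "B", "A"]:
    append_vote(chain, c)
tally = Counter(rec["candidate"] for rec in chain)
print(verify(chain), dict(tally))
```

Editing any earlier record changes its recomputed hash, so `verify` fails for every chain suffix after the tampered entry.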
Pub Date : 2023-08-03 DOI: 10.1109/icABCD59051.2023.10220503
Z. C. Khan, Thulile Mkhwanazi, M. Masango
As cyber attacks increase in South Africa, organisations need to strengthen cyber security controls. Cyber Threat Intelligence is an essential component of a cybersecurity programme but is often overlooked; it can assist in identifying future and potential cyber threats. Organisations process large volumes of data containing Cyber Threat Intelligence, but this is often not collected, processed, or treated as Cyber Threat Intelligence. South African organisations will continue to feel the repercussions of cyber attacks if action is not taken. To bring clarity and allow South African organisations to leverage Cyber Threat Intelligence, this work aims to categorize Cyber Threat Intelligence for organisations. Several characteristics of Cyber Threat Intelligence are discussed, and a model is then presented. The applicability of the model is demonstrated with a short use case.
{"title":"A Model for Cyber Threat Intelligence for Organisations","authors":"Z. C. Khan, Thulile Mkhwanazi, M. Masango","doi":"10.1109/icABCD59051.2023.10220503","DOIUrl":"https://doi.org/10.1109/icABCD59051.2023.10220503","url":null,"abstract":"As cyber attacks are increasing in South Africa, organisations need to strengthen cyber security controls. Cyber Threat Intelligence is an essential component of a Cybersecurity program but is often overlooked. It can assist to identify future and potential cyber threats. Organisations process large volumes of data containing Cyber Threat Intelligence, but this is often not collected, processed, or considered as Cyber Threat Intelligence. South African organizations will continue to feel the repercussions of cyber-attacks if actions are not taken. To bring clarity and allow South African organizations to leverage on Cyber Threat Intelligence, this work aims to categorize Cyber Threat Intelligence for organizations. Several characteristics of Cyber Threat Intelligence are discussed, and thereafter a model is presented. The applicability of this model is demonstrated by a short use-case.","PeriodicalId":51314,"journal":{"name":"Big Data","volume":"47 1","pages":"1-7"},"PeriodicalIF":4.6,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73718574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}