Efficient Algebraic Method for Testing the Invertibility of Finite State Machines
Pub Date: 2023-06-28. DOI: 10.3390/computation11070125
Zineb Lotfi, Hamid Khalifi, Faissal Ouardi
The emergence of new embedded system technologies, such as the IoT, requires the design of new lightweight cryptosystems that meet various hardware restrictions. In this context, Finite State Machines (FSMs) offer a robust solution through cryptosystems based on finite automata, known as FAPKC (Finite Automaton Public Key Cryptosystems) and introduced by Renji Tao. These cryptosystems have been proposed as alternatives to traditional public key cryptosystems such as RSA. They are based on composing two private keys, namely two FSMs M1 and M2 that are invertible with finite delay, to obtain the composed FSM M = M1 ∘ M2, which serves as the public key. The inverse process (factoring the composition) is hard to compute. Unfortunately, these cryptosystems have not been widely adopted in real-world applications, mainly due to the lack of in-depth studies of the FAPKC key space and of a random key generator. In this paper, we first introduce an efficient algebraic method, based on the notion of a testing table, to compute the invertibility delay of an FSM. Then, we carry out a statistical study of the number of FSMs invertible with finite delay, varying the number of states as well as the number of output symbols. This allows us to estimate the landscape of the space of invertible FSMs, which we consider a first step toward the design of a random key generator.
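To make the key-construction step concrete, the sketch below composes two toy Mealy machines in series, the operation that yields the public key M = M1 ∘ M2 from the private pair (M1, M2). It is an illustration only: the example machines and alphabets are hypothetical, and the paper's testing-table method for computing the invertibility delay is not reproduced here.

```python
# Minimal sketch (not the paper's testing-table algorithm): serial composition
# of two Mealy machines, M = M2 after M1, as used to build an FAPKC public key.
# The example machines and alphabets below are hypothetical.

def step(machine, state, symbol):
    """machine: dict mapping (state, input_symbol) -> (next_state, output_symbol)."""
    return machine[(state, symbol)]

def compose_run(m1, s1, m2, s2, word):
    """Feed `word` through M1 and pipe M1's outputs into M2; return M2's output word."""
    out = []
    for x in word:
        s1, y = step(m1, s1, x)   # M1 consumes the plaintext symbol
        s2, z = step(m2, s2, y)   # M2 consumes M1's output
        out.append(z)
    return out

# Toy binary machines (hypothetical): M1 delays its input by one step, M2 keeps a running XOR.
M1 = {(q, x): (x, q) for q in (0, 1) for x in (0, 1)}          # output = previous input
M2 = {(q, x): (q ^ x, q ^ x) for q in (0, 1) for x in (0, 1)}  # output = state XOR input

print(compose_run(M1, 0, M2, 0, [1, 0, 1, 1]))   # -> [0, 1, 1, 0]
```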
{"title":"Efficient Algebraic Method for Testing the Invertibility of Finite State Machines","authors":"Zineb Lotfi, Hamid Khalifi, Faissal Ouardi","doi":"10.3390/computation11070125","DOIUrl":"https://doi.org/10.3390/computation11070125","url":null,"abstract":"The emergence of new embedded system technologies, such as IoT, requires the design of new lightweight cryptosystems to meet different hardware restrictions. In this context, the concept of Finite State Machines (FSMs) can offer a robust solution when using cryptosystems based on finite automata, known as FAPKC (Finite Automaton Public Key Cryptosystems), introduced by Renji Tao. These cryptosystems have been proposed as alternatives to traditional public key cryptosystems, such as RSA. They are based on composing two private keys, which are two FSMs M1 and M2 with the property of invertibility with finite delay to obtain the composed FSM M=M1oM2, which is the public key. The invert process (factorizing) is hard to compute. Unfortunately, these cryptosystems have not really been adopted in real-world applications, and this is mainly due to the lack of profound studies on the FAPKC key space and a random generator program. In this paper, we first introduce an efficient algebraic method based on the notion of a testing table to compute the delay of invertibility of an FSM. Then, we carry out a statistical study on the number of invertible FSMs with finite delay by varying the number of states as well as the number of output symbols. This allows us to estimate the landscape of the space of invertible FSMs, which is considered a first step toward the design of a random generator.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"7 1","pages":"125"},"PeriodicalIF":0.0,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86750240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Operation of Determining the Sign of a Number in RNS Using the Akushsky Core Function
Pub Date: 2023-06-28. DOI: 10.3390/computation11070124
E. Shiriaev, N. Kucherov, M. Babenko, A. Nazarov
This article presents a study aimed at increasing the performance of distributed computing systems. Fog computing relies on so-called edge devices; these devices are low-power and are therefore extremely sensitive to the computational complexity of the methods used. This article aims to improve the efficiency of calculations, while maintaining an appropriate level of reliability, by applying methods based on the Residue Number System (RNS). We investigate methods for determining the sign of a number in the RNS based on the core function, in order to develop a new, fast method. As a result, a fast method for determining the sign of a number, based on the Akushsky core function and using approximate calculations, is obtained. The article also surveys methods for ensuring reliability in distributed computing. This result is of interest for fog computing, since it allows a distributed system of edge devices to maintain high reliability with only a slight increase in the computational complexity of non-modular operations.
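For context, the sketch below shows the naive baseline that core-function approaches are designed to avoid: determining the sign of an RNS number by full CRT reconstruction and comparison against M/2. The moduli are illustrative values, and this is not the paper's Akushsky core-function method.

```python
# Naive baseline for context (not the paper's Akushsky core-function method):
# sign determination by full CRT reconstruction and comparison against M/2.
# The moduli below are illustrative.
from math import prod

MODULI = (7, 11, 13, 17)              # pairwise coprime moduli (example values)
M = prod(MODULI)                      # dynamic range; x in [-M//2, M//2) maps onto [0, M)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def crt(residues):
    """Reconstruct the unique representative in [0, M) from its residues."""
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

def is_negative(residues):
    """Sign test: representatives in [M/2, M) encode negative numbers."""
    return crt(residues) >= M // 2

print(is_negative(to_rns(-5)), is_negative(to_rns(42)))   # True, False
```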
{"title":"Fast Operation of Determining the Sign of a Number in RNS Using the Akushsky Core Function","authors":"E. Shiriaev, N. Kucherov, M. Babenko, A. Nazarov","doi":"10.3390/computation11070124","DOIUrl":"https://doi.org/10.3390/computation11070124","url":null,"abstract":"This article presents a study related to increasing the performance of distributed computing systems. The essence of fog computing lies in the use of so-called edge devices. These devices are low-power, so they are extremely sensitive to the computational complexity of the methods used. This article is aimed at improving the efficiency of calculations while maintaining an appropriate level of reliability by applying the methods of the Residue Number System (RNS). We are investigating methods for determining the sign of a number in the RNS based on the core function in order to develop a new, fast method. As a result, a fast method for determining the sign of a number based on the Akushsky core function, using approximate calculations, is obtained. Thus, in the course of this article, a study of methods for ensuring reliability in distributed computing is conducted. A fast method for determining the sign of a number in the RNS based on the core function using approximate calculations is also proposed. This result is interesting from the point of view of nebulous calculations, since it allows maintaining high reliability of a distributed system of edge devices with a slight increase in the computational complexity of non-modular operations.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"14 1","pages":"124"},"PeriodicalIF":0.0,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74995689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Dynamic Software-Defined Networking Approach to Neutralize Traffic Burst
Pub Date: 2023-06-27. DOI: 10.3390/computers12070131
Aakanksha Sharma, V. Balasubramanian, J. Kamruzzaman
Software-defined networking (SDN) provides a holistic view of the network. It is highly suitable for handling dynamic loads in a traditional network with minimal updates to the network infrastructure. However, the control plane of the standard SDN architecture, whether built on a single controller or on multiple distributed controllers, faces severe bottleneck issues. Our initial research created a reference model for the traditional network using standard SDN (referred to as SDN hereafter) in the NetSim network simulator. Based on the network traffic, the reference models consisted of light, modest and heavy networks, depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed for the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than standard SDN. However, this enhancement is suitable only for small-scale networks because, in a large-scale network, eSDN does not support dynamic SDN controller mapping; often the same SDN controller becomes overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment, without considering flow fluctuations and traffic bursts, which leads to a lack of real-time load balancing among the SDN controllers and eventually increases network latency. Therefore, to maintain Quality of Service (QoS) in the network, it becomes imperative to neutralise on-the-fly traffic bursts that static controller placement cannot absorb. Thus, we propose a novel dynamic controller mapping algorithm with multiple-controller placement, named dynamic SDN (dSDN), to solve the identified issues. In dSDN, the SDN controllers are mapped dynamically as the load fluctuates: if any SDN controller reaches its maximum threshold, the remaining traffic is diverted to another controller, significantly reducing delay and enhancing overall performance. Our technique considers the latency and load fluctuation in the network and manages situations where static mapping cannot deal with dynamic flow variation.
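The following sketch illustrates the threshold-and-divert idea in a few lines of Python. The class names, capacities and threshold value are assumptions for illustration, not the dSDN implementation evaluated in NetSim.

```python
# Illustrative sketch only (names and thresholds are assumptions, not the paper's dSDN code):
# keep a switch on its current controller while that controller stays under a load
# threshold; otherwise divert the traffic to the least-loaded controller.
from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    capacity: float                   # max flow-setup requests it can absorb
    load: float = 0.0                 # cumulative request count (simplification)
    switches: list = field(default_factory=list)

def assign(controllers, switch_id, demand, threshold=0.8):
    """Return the controller name that handles this burst of `demand` requests."""
    best = min(controllers, key=lambda c: c.load / c.capacity)   # least-loaded fallback
    for c in controllers:
        if switch_id in c.switches and (c.load + demand) / c.capacity <= threshold:
            best = c                  # stay on the home controller while under threshold
            break
    if switch_id not in best.switches:
        for c in controllers:         # detach from any previous controller
            if switch_id in c.switches:
                c.switches.remove(switch_id)
        best.switches.append(switch_id)
    best.load += demand
    return best.name

ctrls = [Controller("C1", 1000.0), Controller("C2", 1000.0)]
for sw, d in [("s1", 500), ("s2", 400), ("s1", 400)]:   # s1's second burst crosses C1's threshold
    print(sw, "->", assign(ctrls, sw, d))                # s1 -> C1, s2 -> C2, s1 -> C2
```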
{"title":"A Novel Dynamic Software-Defined Networking Approach to Neutralize Traffic Burst","authors":"Aakanksha Sharma, V. Balasubramanian, J. Kamruzzaman","doi":"10.3390/computers12070131","DOIUrl":"https://doi.org/10.3390/computers12070131","url":null,"abstract":"Software-defined networks (SDN) has a holistic view of the network. It is highly suitable for handling dynamic loads in the traditional network with a minimal update in the network infrastructure. However, the standard SDN architecture control plane has been designed for single or multiple distributed SDN controllers facing severe bottleneck issues. Our initial research created a reference model for the traditional network, using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm is proposed in the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for the small-scale network because, in a large-scale network, the eSDN does not support dynamic SDN controller mapping. Often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment without considering flow fluctuations and traffic bursts that lead to a lack of load balancing among the SDN controllers in real-time, eventually increasing the network latency. Therefore, to maintain the Quality of Service (QoS) in the network, it becomes imperative for the static SDN controller to neutralise the on-the-fly traffic burst. Thus, our novel dynamic controller mapping algorithm with multiple-controller placement in the SDN is critical to solving the identified issues. In dSDN, the SDN controllers are mapped dynamically with the load fluctuation. If any SDN controller reaches its maximum threshold, the rest of the traffic will be diverted to another controller, significantly reducing delay and enhancing the overall performance. Our technique considers the latency and load fluctuation in the network and manages the situations where static mapping is ineffective in dealing with the dynamic flow variation.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"1 1","pages":"131"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88779611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A FEM Structural Analysis of a Francis Turbine Blade Parametrized Using Piecewise Bernstein Polynomials
Pub Date: 2023-06-26. DOI: 10.3390/computation11070123
Heriberto Arias-Rojas, Miguel A. Rodríguez-Velázquez, Ángel Cerriteño-Sánchez, F. Domínguez-Mota, S. Galván-González
Several methodologies have successfully described the runner blade shape as a set of discrete sections joining the hub and shroud, defined by 3D geometrical forms of considerable complexity. This task requires an appropriate parametric approach for its accurate reconstruction. Among such approaches, piecewise Bernstein polynomials have been used to parametrize twisted runner blades by extracting cross-sectional hydrofoil profiles from reference CAD data and approximating them with such polynomials. Using the interpolating polynomial coefficients as parameters, additional profiles are generated by Lagrangian techniques. The generated profiles are then stacked along the spanwise direction of the blade via transfinite interpolation to obtain a smooth and continuous representation of the reference blade. This versatile approach makes it possible to describe a range of different blade shapes within the required accuracy and, furthermore, to design new blade shapes. However, even though new blade shapes can be defined with this parametrization, a remaining question is whether the parametrized blades are suitable replacements for the ones currently in use. Assessing the mechanical feasibility of the new shapes requires several stages of analysis. In this paper, bearing in mind the standard hydraulic test conditions of the hydrofoil test case of the Norwegian Hydropower Center, we present a structural stress–strain analysis of the reparametrization of a Francis blade, showing its adequate computational performance in two model tests.
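The building block of such a parametrization is the Bernstein basis itself; the sketch below evaluates a Bernstein-basis (Bézier) curve for one hydrofoil section. The control values are hypothetical and are not the reference CAD data or the Norwegian Hydropower Center test case.

```python
# Minimal sketch of the basic ingredient: evaluating a Bernstein-basis (Bezier)
# curve for one blade section. The control points below are illustrative only.
from math import comb

def bernstein(i, n, t):
    """B_{i,n}(t) = C(n,i) * t^i * (1-t)^(n-i), with t in [0, 1]."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bezier_point(control_points, t):
    """Evaluate a Bernstein-basis curve at parameter t."""
    n = len(control_points) - 1
    x = sum(px * bernstein(i, n, t) for i, (px, _) in enumerate(control_points))
    y = sum(py * bernstein(i, n, t) for i, (_, py) in enumerate(control_points))
    return x, y

# Toy camber-line control points for one hydrofoil section (hypothetical values).
ctrl = [(0.0, 0.0), (0.3, 0.08), (0.7, 0.05), (1.0, 0.0)]
print(bezier_point(ctrl, 0.5))
```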
{"title":"A FEM Structural Analysis of a Francis Turbine Blade Parametrized Using Piecewise Bernstein Polynomials","authors":"Heriberto Arias-Rojas, Miguel A. Rodríguez-Velázquez, Ángel Cerriteño-Sánchez, F. Domínguez-Mota, S. Galván-González","doi":"10.3390/computation11070123","DOIUrl":"https://doi.org/10.3390/computation11070123","url":null,"abstract":"Several methodologies have successfully described the runner blade shape as a set of discrete sections joining the hub and shroud, defined by 3D geometrical forms of considerable complexity. This task requires an appropriate parametric approach for its accurate reconstruction. Among them, piecewise Bernstein polynomials have been used to create parametrizations of twisted runner blades by extracting some cross-sectional hydrofoil profiles from reference CAD data to be approximated by such polynomials. Using the interpolating polynomial coefficients as parameters, more profiles are generated by Lagrangian techniques. The generated profiles are then stacked along the spanwise direction of the blade via transfinite interpolation to obtain a smooth and continuous representation of the reference blade. This versatile approach makes the description of a range of different blade shapes possible within the required accuracy and, furthermore, the design of new blade shapes. However, even though it is possible to redefine new blade shapes using the aforementioned parametrization, a remaining question is whether the parametrized blades are suitable as a replacement for the currently used ones. In order to assess the mechanical feasibility of the new shapes, several stages of analysis are required. In this paper, bearing in mind the standard hydraulic test conditions of the hydrofoil test case of the Norwegian Hydropower Center, we present a structural stress–strain analysis of the reparametrization of a Francis blade, thus showing its adequate computational performance in two model tests.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"110 1","pages":"123"},"PeriodicalIF":0.0,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73037742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stealth Literacy Assessments via Educational Games
Pub Date: 2023-06-25. DOI: 10.3390/computers12070130
Ying Fang, Tong Li, Linh Huynh, Katerina Christhilf, Rod D. Roscoe, D. McNamara
Literacy assessment is essential for effective literacy instruction and training. However, traditional paper-based literacy assessments are typically decontextualized and may cause stress and anxiety for test takers. In contrast, serious games and game environments allow for the assessment of literacy in more authentic and engaging ways, which has some potential to increase the assessment’s validity and reliability. The primary objective of this study is to examine the feasibility of a novel approach for stealthily assessing literacy skills using games in an intelligent tutoring system (ITS) designed for reading comprehension strategy training. We investigated the degree to which learners’ game performance and enjoyment predicted their scores on standardized reading tests. Amazon Mechanical Turk participants (n = 211) played three games in iSTART and self-reported their level of game enjoyment after each game. Participants also completed the Gates–MacGinitie Reading Test (GMRT), which includes vocabulary knowledge and reading comprehension measures. The results indicated that participants’ performance in each game as well as the combined performance across all three games predicted their literacy skills. However, the relations between game enjoyment and literacy skills varied across games. These findings suggest the potential of leveraging serious games to assess students’ literacy skills and improve the adaptivity of game-based learning environments.
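As a rough illustration of the predictive analysis, the snippet below fits an ordinary least-squares model relating game performance and enjoyment ratings to a reading score. The data are synthetic placeholders, not the iSTART/GMRT sample reported in the study.

```python
# Illustrative only: relating game performance and self-reported enjoyment to a
# standardized reading score with ordinary least squares. The data are synthetic,
# not the iSTART/GMRT sample from the study.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 211
game_scores = rng.uniform(0, 100, size=(n, 3))                # three games
enjoyment = rng.integers(1, 7, size=(n, 3))                   # per-game enjoyment ratings
gmrt = 0.4 * game_scores.mean(axis=1) + rng.normal(0, 5, n)   # synthetic reading outcome

X = np.hstack([game_scores, enjoyment])
model = LinearRegression().fit(X, gmrt)
print("R^2 =", round(model.score(X, gmrt), 3))
```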
{"title":"Stealth Literacy Assessments via Educational Games","authors":"Ying Fang, Tong Li, Linh Huynh, Katerina Christhilf, Rod D. Roscoe, D. McNamara","doi":"10.3390/computers12070130","DOIUrl":"https://doi.org/10.3390/computers12070130","url":null,"abstract":"Literacy assessment is essential for effective literacy instruction and training. However, traditional paper-based literacy assessments are typically decontextualized and may cause stress and anxiety for test takers. In contrast, serious games and game environments allow for the assessment of literacy in more authentic and engaging ways, which has some potential to increase the assessment’s validity and reliability. The primary objective of this study is to examine the feasibility of a novel approach for stealthily assessing literacy skills using games in an intelligent tutoring system (ITS) designed for reading comprehension strategy training. We investigated the degree to which learners’ game performance and enjoyment predicted their scores on standardized reading tests. Amazon Mechanical Turk participants (n = 211) played three games in iSTART and self-reported their level of game enjoyment after each game. Participants also completed the Gates–MacGinitie Reading Test (GMRT), which includes vocabulary knowledge and reading comprehension measures. The results indicated that participants’ performance in each game as well as the combined performance across all three games predicted their literacy skills. However, the relations between game enjoyment and literacy skills varied across games. These findings suggest the potential of leveraging serious games to assess students’ literacy skills and improve the adaptivity of game-based learning environments.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"15 1","pages":"130"},"PeriodicalIF":0.0,"publicationDate":"2023-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85447216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Bug Assignment and Developer Allocation in Software Engineering through Interpretable Machine Learning Models
Pub Date: 2023-06-23. DOI: 10.3390/computers12070128
Mina Samir, N. Sherief, W. Abdelmoez
Software engineering is a comprehensive process that requires developers and team members to collaborate across multiple tasks. In software testing, bug triaging is a tedious and time-consuming process. Assigning bugs to the appropriate developers can save time and maintain their motivation; however, without knowledge of a bug’s class, triaging is difficult. Motivated by this challenge, this paper focuses on the problem of assigning a suitable developer to a new bug by analyzing the history of developers’ profiles and the history of bugs for all developers using machine learning-based recommender systems. Explainable AI (XAI) is AI that humans can understand; it contrasts with “black box” AI, which even its designers cannot explain. By providing appropriate explanations for results, users can better comprehend the insight behind the outcomes, boosting the recommender system’s effectiveness, transparency, and confidence. The trained model is used in the recommendation stage to calculate relevance scores for developers based on expertise and past bug-handling performance, ultimately presenting the developers with the highest scores as recommendations for new bugs. This approach aims to strike a balance between computational efficiency and accurate predictions, enabling efficient bug assignment while considering developer expertise and historical performance. In this paper, we propose two explainable recommendation models. The first is an explainable recommender model for developers, generated from bug history, that identifies each developer’s preferred type of bug. The second is an explainable recommender model based on bugs that identifies, from the bug history, the most suitable developer for each bug.
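A minimal sketch of the relevance-scoring idea is shown below: each developer's fixed-bug history forms a text profile, and a new bug is matched against the profiles by cosine similarity. The bug texts are invented, and this is the generic scoring scheme such systems build on, not the paper's interpretable models.

```python
# Minimal sketch of relevance scoring (not the paper's exact models): build a text
# profile per developer from the bugs they have fixed, then rank developers for a
# new bug by TF-IDF cosine similarity. The bug texts below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

developer_history = {
    "alice": "null pointer crash in login form authentication token",
    "bob": "memory leak in image cache thumbnail rendering",
    "carol": "race condition in database connection pool timeout",
}

new_bug = "crash when authentication token expires on login"

vectorizer = TfidfVectorizer()
profiles = vectorizer.fit_transform(list(developer_history.values()))
query = vectorizer.transform([new_bug])

scores = cosine_similarity(query, profiles).ravel()
ranking = sorted(zip(developer_history, scores), key=lambda kv: -kv[1])
print(ranking)   # developers with the highest relevance scores come first
```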
{"title":"Improving Bug Assignment and Developer Allocation in Software Engineering through Interpretable Machine Learning Models","authors":"Mina Samir, N. Sherief, W. Abdelmoez","doi":"10.3390/computers12070128","DOIUrl":"https://doi.org/10.3390/computers12070128","url":null,"abstract":"Software engineering is a comprehensive process that requires developers and team members to collaborate across multiple tasks. In software testing, bug triaging is a tedious and time-consuming process. Assigning bugs to the appropriate developers can save time and maintain their motivation. However, without knowledge about a bug’s class, triaging is difficult. Motivated by this challenge, this paper focuses on the problem of assigning a suitable developer to a new bug by analyzing the history of developers’ profiles and analyzing the history of bugs for all developers using machine learning-based recommender systems. Explainable AI (XAI) is AI that humans can understand. It contrasts with “black box” AI, which even its designers cannot explain. By providing appropriate explanations for results, users can better comprehend the underlying insight behind the outcomes, boosting the recommender system’s effectiveness, transparency, and confidence. The trained model is utilized in the recommendation stage to calculate relevance scores for developers based on expertise and past bug handling performance, ultimately presenting the developers with the highest scores as recommendations for new bugs. This approach aims to strike a balance between computational efficiency and accurate predictions, enabling efficient bug assignment while considering developer expertise and historical performance. In this paper, we propose two explainable models for recommendation. The first is an explainable recommender model for personalized developers generated from bug history to know what the preferred type of bug is for each developer. The second model is an explainable recommender model based on bugs to identify the most suitable developer for each bug from bug history.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"24 1","pages":"128"},"PeriodicalIF":0.0,"publicationDate":"2023-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74660946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tiny Deep Learning Architectures Enabling Sensor-Near Acoustic Data Processing and Defect Localization
Pub Date: 2023-06-23. DOI: 10.3390/computers12070129
Giacomo Donati, F. Zonzini, L. Marchi
The timely diagnosis of defects at their incipient stage of formation is crucial to extending the life-cycle of technical appliances. This is the case for mechanically induced stress, whether due to long aging degradation processes (e.g., corrosion) or to in-operation forces (e.g., impact events), which might provoke detrimental damage such as cracks, disbonding or delaminations, most commonly followed by the release of acoustic energy. The localization of these sources can be successfully fulfilled via the adoption of acoustic emission (AE)-based inspection techniques through the computation of the time of arrival (ToA), namely the time at which the mechanical wave released at the occurrence of the acoustic event arrives at the acquisition unit. However, the accurate estimation of the ToA may be hampered by poor signal-to-noise ratios (SNRs). In these conditions, standard statistical methods typically fail. In this work, two alternative deep learning methods are proposed for ToA retrieval in AE signal processing, namely a dilated convolutional neural network (DilCNN) and a capsule neural network for ToA (CapsToA). These methods have the additional benefit of being portable to resource-constrained microprocessors. Their performance has been extensively studied on both synthetic and experimental data, focusing on the problem of ToA identification for the case of a metallic plate. Results show that the two methods can achieve localization errors which are up to 70% more precise than those yielded by conventional strategies, even when the SNR is severely compromised (i.e., down to 2 dB). Moreover, DilCNN and CapsToA have been implemented in a tiny machine learning environment and then deployed on microcontroller units, showing a negligible loss of performance with respect to offline realizations.
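The schematic below shows what a small dilated 1-D convolutional picker for the ToA can look like: each sample of the waveform receives a logit, and the argmax is taken as the arrival index. Layer sizes and the random input are placeholders; this is not the DilCNN or CapsToA architecture from the paper.

```python
# Schematic only: a small dilated 1-D CNN that scores each sample of an AE waveform
# as a candidate time of arrival. Layer sizes are placeholders, not the paper's
# DilCNN/CapsToA architectures.
import torch
import torch.nn as nn

class ToAPicker(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, dilation=1, padding=3),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=7, dilation=2, padding=6),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=7, dilation=4, padding=12),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=1),     # per-sample ToA logit
        )

    def forward(self, x):                 # x: (batch, 1, samples)
        return self.net(x).squeeze(1)     # (batch, samples)

wave = torch.randn(8, 1, 4096)            # batch of synthetic AE traces
logits = ToAPicker()(wave)
toa_index = logits.argmax(dim=1)          # predicted arrival sample per trace
print(toa_index.shape)                    # torch.Size([8])
```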
{"title":"Tiny Deep Learning Architectures Enabling Sensor-Near Acoustic Data Processing and Defect Localization","authors":"Giacomo Donati, F. Zonzini, L. Marchi","doi":"10.3390/computers12070129","DOIUrl":"https://doi.org/10.3390/computers12070129","url":null,"abstract":"The timely diagnosis of defects at their incipient stage of formation is crucial to extending the life-cycle of technical appliances. This is the case of mechanical-related stress, either due to long aging degradation processes (e.g., corrosion) or in-operation forces (e.g., impact events), which might provoke detrimental damage, such as cracks, disbonding or delaminations, most commonly followed by the release of acoustic energy. The localization of these sources can be successfully fulfilled via adoption of acoustic emission (AE)-based inspection techniques through the computation of the time of arrival (ToA), namely the time at which the induced mechanical wave released at the occurrence of the acoustic event arrives to the acquisition unit. However, the accurate estimation of the ToA may be hampered by poor signal-to-noise ratios (SNRs). In these conditions, standard statistical methods typically fail. In this work, two alternative deep learning methods are proposed for ToA retrieval in processing AE signals, namely a dilated convolutional neural network (DilCNN) and a capsule neural network for ToA (CapsToA). These methods have the additional benefit of being portable on resource-constrained microprocessors. Their performance has been extensively studied on both synthetic and experimental data, focusing on the problem of ToA identification for the case of a metallic plate. Results show that the two methods can achieve localization errors which are up to 70% more precise than those yielded by conventional strategies, even when the SNR is severely compromised (i.e., down to 2 dB). Moreover, DilCNN and CapsNet have been implemented in a tiny machine learning environment and then deployed on microcontroller units, showing a negligible loss of performance with respect to offline realizations.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"30 1 1","pages":"129"},"PeriodicalIF":0.0,"publicationDate":"2023-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80491759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Control Framework for a Secure Internet of Things within Small-, Medium-, and Micro-Sized Enterprises in a Developing Economy
Pub Date: 2023-06-22. DOI: 10.3390/computers12070127
Tebogo Mhlongo, J. A. V. D. Poll, Tebogo Sethibe
Small and medium enterprises (SMEs) play a critical role in the economic growth of a nation, and their significance is increasingly acknowledged. In mature economies, SMEs account for more than 90% of commercial establishments, almost 70% of jobs, and 55% of GDP. Additionally, this sector accounts for 70% of employment possibilities and up to 40% of GDP in developing countries. Technologically, the Internet of Things (IoT) enables multiple connected devices, i.e., “things”, to add value to businesses, as they can communicate and send messages or signals promptly. In this article, we investigate the various challenges SMEs experience in adopting IoT to further their businesses. Amongst others, the challenges elicited include IoT considerations for SMEs, data, financial availability, and challenges related to the SME environment. Having analysed the challenges, a three-tiered solution framework, coined the Secure IoT Control Framework (SIoTCF), is developed to address them and is briefly validated through a theoretical analysis of its elements. It is hoped that the proposed framework will assist with aspects of design, governance, and maintenance in enhancing the security of IoT adoption and usage in SMEs, especially start-ups or less experienced SMEs. Future work in this area will involve surveying SME owners and ICT staff to further validate the utility of the SIoTCF. The study adds to the body of knowledge by developing a secure IoT control framework, a paradigm expected to be useful for academics, researchers, and students in the field of ICT.
{"title":"A Control Framework for a Secure Internet of Things within Small-, Medium-, and Micro-Sized Enterprises in a Developing Economy","authors":"Tebogo Mhlongo, J. A. V. D. Poll, Tebogo Sethibe","doi":"10.3390/computers12070127","DOIUrl":"https://doi.org/10.3390/computers12070127","url":null,"abstract":"Small and medium enterprises (SMEs) play a critical role in the economic growth of a nation, and their significance is increasingly acknowledged. More than 90% of commercial establishments, almost 70f% of jobs, and 55% of the GDP are held by SMEs in mature economies. Additionally, this sector accounts for 70% of employment possibilities and up to 40% of the GDP in developing countries. Technologically, the Internet of Things (IoT) enables multiple connected devices, i.e., “things”, to add value to businesses, as they can communicate and send messages or signals promptly. In this article, we investigate various challenges SMEs experience in IoT adoption to further their businesses. Amongst others, the challenges elicited include IoT considerations for SMEs, data, financial availability, and challenges related to the SME environment. Having analysed the challenges, a three-tiered solution framework coined the Secure IoT Control Framework (SIoTCF) to address the said challenges is developed and briefly validated through a theoretical analysis of the elements of the framework. It is hoped that the proposed framework will assist with aspects of design, governance, and maintenance in enhancing the security levels of IoT adoption and usage in SMEs, especially start-ups or less experienced SMEs. Future work in this area will involve surveying SME owners and ICT staff to validate the utility of the SIoTCF further. The study adds to the body of knowledge in general by developing a secure IoT control framework. In the field of ICT, this paradigm is expected to be useful for academics, researchers, and students.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"72 1","pages":"127"},"PeriodicalIF":0.0,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82869159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Limit computability and ultrafilters
Pub Date: 2023-06-21. DOI: 10.3233/com-170176
U. Andrews, Mingzhong Cai, David Diamondstone, N. Schweber
We study a class of operators on Turing degrees arising naturally from ultrafilters. Suppose U is a nonprincipal ultrafilter on ω. We can then view a sequence of sets A = (A_i)_{i∈ω} as an “approximation” of a set B produced by amalgamating the A_i via U: we set lim_U(A) = {x : {i : x ∈ A_i} ∈ U}. This can be extended to the Turing degrees by defining δ_U(a) = {lim_U(A) : A = (A_i)_{i∈ω} ∈ a}. The δ_U – which we call “ultrafilter jumps” – resemble classical limit computability in certain ways. In particular, δ_U(a) is always a Turing ideal containing Δ⁰₂(a). However, they are also closely tied to Scott sets: δ_U(a) is always a Scott set containing a′. (This yields an alternate proof of the standard result in reverse mathematics that Weak König’s Lemma is strictly weaker than arithmetic comprehension.) Our main result is that the converse also holds: if S is a countable Scott set containing a′, then there is some ultrafilter U with δ_U(a) = S. We then turn to the problem of controlling the action of an ultrafilter jump δ_U on two degrees simultaneously, and for example show that there are nontrivial degrees which are “low” for some ultrafilter jump. Finally, we study the structure on the set of ultrafilters arising from the construction U ↦ δ_U; in particular, we introduce a natural preordering on this set and show that it is connected with the classical Rudin–Keisler ordering of ultrafilters. We end by presenting two directions for further research.
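For readability, the abstract's two central definitions are restated below in LaTeX; the notation is taken directly from the abstract above, with nothing added.

```latex
% Restatement of the abstract's definitions (notation only).
\[
  \lim_U(A) \;=\; \{\, x : \{\, i : x \in A_i \,\} \in U \,\},
  \qquad
  \delta_U(\mathbf{a}) \;=\; \{\, \lim_U(A) : A = (A_i)_{i \in \omega} \in \mathbf{a} \,\}.
\]
```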
{"title":"Limit computability and ultrafilters","authors":"U. Andrews, Mingzhong Cai, David Diamondstone, N. Schweber","doi":"10.3233/com-170176","DOIUrl":"https://doi.org/10.3233/com-170176","url":null,"abstract":"We study a class of operators on Turing degrees arising naturally from ultrafilters. Suppose U is a nonprincipal ultrafilter on ω. We can then view a sequence of sets A = ( A i ) i ∈ ω as an “approximation” of a set B produced by amalgamating the A i via U: we set lim U ( A ) = { x : { i : x ∈ A i } ∈ U }. This can be extended to the Turing degrees, by defining δ U ( a ) = { lim U ( A ) : A = ( A i ) i ∈ ω ∈ a }. The δ U – which we call “ultrafilter jumps” – resemble classical limit computability in certain ways. In particular, δ U ( a ) is always a Turing ideal containing Δ 2 0 ( a ). However, they are also closely tied to Scott sets: δ U ( a ) is always a Scott set containing a ′ . (This yields an alternate proof of the standard result in reverse mathematics that Weak Konig’s Lemma is strictly weaker than arithmetic comprehension.) Our main result is that the converse also holds: if S is a countable Scott set containing a ′ , then there is some ultrafilter U with δ U ( a ) = S. We then turn to the problem of controlling the action of an ultrafilter jump δ U on two degrees simultaneously, and for example show that there are nontrivial degrees which are “low” for some ultrafilter jump. Finally, we study the structure on the set of ultrafilters arising from the construction U ↦ δ U ; in particular, we introduce a natural preordering on this set and show that it is connected with the classical Rudin–Keisler ordering of ultrafilters. We end by presenting two directions for further research.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"45 1","pages":"101-115"},"PeriodicalIF":0.0,"publicationDate":"2023-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76166182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation of Multi-Phase Flow to Test the Effectiveness of the Casting Yard Aspiration System
Pub Date: 2023-06-20. DOI: 10.3390/computation11060121
S. Lobov, Yevhen Pylypko, Viktoriya Kruchyna, Ihor Bereshko
The metallurgical industry ranks second among all industries in terms of emissions into the atmosphere, and air pollution is the main environmental problem arising from the activities of metallurgical enterprises. In some existing systems for the localization, trapping and removal of dust emissions from the tapholes and cast-iron gutters of foundries, the air flow parameters may differ from those optimal for solving aspiration problems. The largest emissions are observed in the area of the taphole (40–60%) and from the ladles during their filling (35–50%). In this paper, we consider a variant of the blast furnace aspiration system with the simultaneous supply of a dust–gas–air mixture from two side smoke exhausters and two upper hoods, with two simultaneously operating tapholes, that is, when the blast furnace operates in the maximum-emissions mode. The effectiveness of the modernized blast furnace aspiration system is assessed using computer CFD modeling, and its main parameters are given. It is shown that the dust collection efficiency of the proposed system exceeds 90%, and the speed of the gas–dust mixture is no lower than 20 m/s, which prevents gravitational settling on the walls. The distribution fields of temperatures and velocities are obtained for further engineering analysis and possible improvement of aspiration systems.
{"title":"Simulation of Multi-Phase Flow to Test the Effectiveness of the Casting Yard Aspiration System","authors":"S. Lobov, Yevhen Pylypko, Viktoriya Kruchyna, Ihor Bereshko","doi":"10.3390/computation11060121","DOIUrl":"https://doi.org/10.3390/computation11060121","url":null,"abstract":"The metallurgical industry is in second place among all other industries in terms of emissions into the atmosphere, and air pollution is the main cause of environmental problems arising from the activities of metallurgical enterprises. In some existing systems for localization, in the trapping and removal of dust emissions from tapholes and cast-iron gutters of foundries, air flow parameters may differ from the optimal ones for solving aspiration problems. The largest emissions are observed in the area of the taphole (40–60%) and from the ladles during their filling (35–50%). In this paper, it is proposed to consider a variant of the blast furnace aspiration system with the simultaneous supply of a dust–gas–air mixture from two-side smoke exhausters and two upper hoods with two simultaneously operating tapholes, that is, when the blast furnace operates in the maximum emissions mode. This article proposes an assessment of the effectiveness of the modernized blast furnace aspiration system using computer CFD modeling, where its main parameters are given. It is shown that the efficiency of dust collection in the proposed system is more than 90%, and the speed of the gas–dust mixture is no lower than 20 m/s, which prevents gravitational settling on the walls. The distribution fields of temperatures and velocities are obtained for further engineering analysis and the possible improvement of aspiration systems.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"68 1","pages":"121"},"PeriodicalIF":0.0,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81209175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}