Model and Fuzzy Controller Design Approaches for Stability of Modern Robot Manipulators
Pub Date: 2023-09-23 | DOI: 10.3390/computers12100190
Shabnom Mustary, Mohammod Abul Kashem, Mohammad Asaduzzaman Chowdhury, Jia Uddin
Robotics is a crucial technology of Industry 4.0 that offers a diverse array of applications in the industrial sector. However, the quality of a robot’s manipulator is contingent on its stability, which is a function of the manipulator’s parameters. Previous studies have evaluated stability using only a small number of manipulator parameters, so little is known about how combinations of manipulator parameters jointly affect stability. In this study, a mathematical model of a modern manipulator is developed through Lagrangian mechanics and the consideration of multiple parameters. The model accounts for motor acceleration, moment of inertia, and deflection in order to assess the stability of a six-degrees-of-freedom ABB robot manipulator. A novel mathematical approach to stability is developed in which stability is correlated with motor acceleration, moment of inertia, and deflection. In addition, fuzzy logic inference principles are employed to determine the stability status. The numerical data for the different manipulator parameters are verified using mathematical approaches. The results indicate that stability increases as motor acceleration increases, and decreases as moment of inertia and deflection increase. It is anticipated that implementing these findings will increase industrial output.
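As a rough illustration of the kind of fuzzy inference the abstract describes, the sketch below hand-rolls a Mamdani-style rule base in Python. The membership functions, rule set, normalized input ranges, and crisp consequents are all invented for the example; they mirror only the reported trends (stability rises with motor acceleration and falls with moment of inertia and deflection), not the paper's calibrated model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises over [a, b], falls over [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def stability(accel, inertia, deflection):
    """Mamdani-style inference with weighted-average defuzzification.
    All inputs are assumed pre-normalized to [0, 1] (illustrative only)."""
    # Rule 1 (min as AND): high acceleration AND low inertia AND low
    # deflection -> stable.
    r_stable = min(high(accel), low(inertia), low(deflection))
    # Rule 2 (max as OR): low acceleration OR high inertia OR high
    # deflection -> unstable.
    r_unstable = max(low(accel), high(inertia), high(deflection))
    # Crisp consequents: stable -> 1.0, unstable -> 0.0.
    return (r_stable * 1.0 + r_unstable * 0.0) / (r_stable + r_unstable + 1e-9)

print(stability(0.9, 0.1, 0.1))  # near 1: stable regime
print(stability(0.2, 0.8, 0.7))  # near 0: unstable regime
```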
{"title":"Model and Fuzzy Controller Design Approaches for Stability of Modern Robot Manipulators","authors":"Shabnom Mustary, Mohammod Abul Kashem, Mohammad Asaduzzaman Chowdhury, Jia Uddin","doi":"10.3390/computers12100190","DOIUrl":"https://doi.org/10.3390/computers12100190","url":null,"abstract":"Robotics is a crucial technology of Industry 4.0 that offers a diverse array of applications in the industrial sector. However, the quality of a robot’s manipulator is contingent on its stability, which is a function of the manipulator’s parameters. In previous studies, stability has been evaluated based on a small number of manipulator parameters; as a result, there is not much information about the integration/optimal arrangement/combination of manipulator parameters toward stability. Through Lagrangian mechanics and the consideration of multiple parameters, a mathematical model of a modern manipulator is developed in this study. In this mathematical model, motor acceleration, moment of inertia, and deflection are considered in order to assess the level of stability of the ABB Robot manipulator of six degrees of freedom. A novel mathematical approach to stability is developed in which stability is correlated with motor acceleration, moment of inertia, and deflection. In addition to this, fuzzy logic inference principles are employed to determine the status of stability. The numerical data of different manipulator parameters are verified using mathematical approaches. Results indicated that as motor acceleration increases, stability increases, while stability decreases as moment of inertia and deflection increase. It is anticipated that the implementation of these findings will increase industrial output.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135966892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementing Tensor-Organized Memory for Message Retrieval Purposes in Neuromorphic Chips
Pub Date: 2023-09-22 | DOI: 10.3390/computers12100189
Arash Khajooei Nejad, Mohammad (Behdad) Jamshidi, Shahriar B. Shokouhi
This paper introduces Tensor-Organized Memory (TOM), a novel neuromorphic architecture inspired by the structural and functional principles of the human brain. Utilizing spike-timing-dependent plasticity (STDP) and Hebbian rules, TOM exhibits cognitive behaviors similar to those of the human brain, and it showcases robust performance, even in noisy conditions, compared to conventional architectures built on a simplified leaky integrate-and-fire (LIF) neuron model. TOM’s adaptability and unique organizational structure, rooted in the Columnar-Organized Memory (COM) framework, position it as a transformative digital memory processing solution, while its innovative neural architecture, advanced recognition mechanisms, and integration of synaptic plasticity rules enhance its cognitive capabilities. We compared the TOM architecture with a conventional floating-point architecture using a simplified LIF neuron model, and we implemented tests with varying noise levels and partially erased messages to evaluate its robustness. Despite a slight degradation in performance once message noise exceeds 30%, the TOM architecture performed appreciably under less-than-ideal conditions. This exploration reveals TOM’s potential as a framework for future neuromorphic systems and lays the groundwork for implementing neuromorphic chips in high-performance intelligent edge devices, thereby revolutionizing industries and enhancing user experiences through the power of artificial intelligence.
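For readers unfamiliar with the building blocks named here, the following is a minimal numpy sketch of a leaky integrate-and-fire layer trained with a pair-based STDP rule. All sizes, time constants, and learning rates are arbitrary illustrative choices and are unrelated to the TOM architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal leaky integrate-and-fire (LIF) layer with pair-based STDP.
n_in, n_out = 16, 4
w = rng.uniform(0.0, 0.5, size=(n_out, n_in))    # synaptic weights
v = np.zeros(n_out)                              # membrane potentials
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
a_plus, a_minus, tau_trace = 0.01, 0.012, 20.0
pre_trace, post_trace = np.zeros(n_in), np.zeros(n_out)

for t in range(200):
    spikes_in = (rng.random(n_in) < 0.05).astype(float)  # Poisson-like input
    v += (-v / tau) + w @ spikes_in                      # leak + integrate
    spikes_out = (v >= v_thresh).astype(float)
    v[spikes_out == 1] = v_reset                         # reset after a spike
    # Exponentially decaying eligibility traces of recent spikes.
    pre_trace  = pre_trace  * np.exp(-1.0 / tau_trace) + spikes_in
    post_trace = post_trace * np.exp(-1.0 / tau_trace) + spikes_out
    # STDP: pre-before-post potentiates, post-before-pre depresses.
    w += a_plus  * np.outer(spikes_out, pre_trace)
    w -= a_minus * np.outer(post_trace, spikes_in)
    w = np.clip(w, 0.0, 1.0)
```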
{"title":"Implementing Tensor-Organized Memory for Message Retrieval Purposes in Neuromorphic Chips","authors":"Arash Khajooei Nejad, Mohammad (Behdad) Jamshidi, Shahriar B. Shokouhi","doi":"10.3390/computers12100189","DOIUrl":"https://doi.org/10.3390/computers12100189","url":null,"abstract":"This paper introduces Tensor-Organized Memory (TOM), a novel neuromorphic architecture inspired by the human brain’s structural and functional principles. Utilizing spike-timing-dependent plasticity (STDP) and Hebbian rules, TOM exhibits cognitive behaviors similar to the human brain. Compared to conventional architectures using a simplified leaky integrate-and-fire (LIF) neuron model, TOM showcases robust performance, even in noisy conditions. TOM’s adaptability and unique organizational structure, rooted in the Columnar-Organized Memory (COM) framework, position it as a transformative digital memory processing solution. Innovative neural architecture, advanced recognition mechanisms, and integration of synaptic plasticity rules enhance TOM’s cognitive capabilities. We have compared the TOM architecture with a conventional floating-point architecture, using a simplified LIF neuron model. We also implemented tests with varying noise levels and partially erased messages to evaluate its robustness. Despite the slight degradation in performance with noisy messages beyond 30%, the TOM architecture exhibited appreciable performance under less-than-ideal conditions. This exploration into the TOM architecture reveals its potential as a framework for future neuromorphic systems. This study lays the groundwork for future applications in implementing neuromorphic chips for high-performance intelligent edge devices, thereby revolutionizing industries and enhancing user experiences within the power of artificial intelligence.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136061554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Video Games as Tools for Education on Fake News and Misinformation
Pub Date: 2023-09-21 | DOI: 10.3390/computers12090188
Ruth S. Contreras-Espinosa, Jose Luis Eguia-Gomez
Despite access to reliable information being essential for equal opportunities in our society, current school curricula include only limited notions of media literacy. It is therefore necessary to create scenarios for reflection on, and well-founded analysis of, misinformation. Video games may be an effective approach to fostering these skills: they can seamlessly integrate learning content into their design, enabling multiple learning outcomes to be achieved and building competencies that transfer to real-life situations. We analyzed 24 video games about media literacy, studying their content, design, and the characteristics that may affect their implementation in learning settings. Even though not all of the learning outcomes considered were equally addressed, the results show that media literacy video games currently on the market could serve as effective tools for achieving critical learning goals, allowing users to understand, practice, and implement skills to fight misinformation, regardless of the complexity of their game mechanics. However, we found that certain characteristics of video games may affect their implementation in learning environments, such as their availability, estimated playing time, approach, and whether they depict real or fictional worlds; these variables should be considered further by both developers and educators.
{"title":"Evaluating Video Games as Tools for Education on Fake News and Misinformation","authors":"Ruth S. Contreras-Espinosa, Jose Luis Eguia-Gomez","doi":"10.3390/computers12090188","DOIUrl":"https://doi.org/10.3390/computers12090188","url":null,"abstract":"Despite access to reliable information being essential for equal opportunities in our society, current school curricula only include some notions about media literacy in a limited context. Thus, it is necessary to create scenarios for reflection on and a well-founded analysis of misinformation. Video games may be an effective approach to foster these skills and can seamlessly integrate learning content into their design, enabling achieving multiple learning outcomes and building competencies that can transfer to real-life situations. We analyzed 24 video games about media literacy by studying their content, design, and characteristics that may affect their implementation in learning settings. Even though not all learning outcomes considered were equally addressed, the results show that media literacy video games currently on the market could be used as effective tools to achieve critical learning goals and may allow users to understand, practice, and implement skills to fight misinformation, regardless of their complexity in terms of game mechanics. However, we detected that certain characteristics of video games may affect their implementation in learning environments, such as their availability, estimated playing time, approach, or whether they include real or fictional worlds, variables that should be further considered by both developers and educators.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136154316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Addressing Uncertainty in Tool Wear Prediction with Dropout-Based Neural Network
Pub Date: 2023-09-19 | DOI: 10.3390/computers12090187
Arup Dey, Nita Yodo, Om P. Yadav, Ragavanantham Shanmugam, Monsuru Ramoni
Data-driven algorithms have been widely applied to tool wear prediction because of their high predictive performance, the availability of data sets, and recent advances in computing capabilities. Although most algorithms are expected to generate outcomes with high precision and accuracy, this is not always true in practice. Uncertainty arises at distinct phases of applying data-driven algorithms, owing to noise and randomness in the data, the presence of redundant and irrelevant features, and model assumptions. Uncertainty due to noise and missing data is known as data uncertainty; model assumptions and imperfections, on the other hand, give rise to model uncertainty. In this paper, both types of uncertainty are considered in tool wear prediction. Empirical mode decomposition is applied to reduce the uncertainty in the raw data, and the Monte Carlo dropout technique is used while training a neural network to incorporate model uncertainty. The unique feature of the proposed method is that it estimates tool wear as an interval, with the width of the interval representing the degree of uncertainty. Different performance metrics are used to compare the proposed method, and it is shown that the proposed approach can predict tool wear with higher accuracy.
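The interval-estimation idea rests on Monte Carlo dropout: keep dropout stochastic at prediction time, run many forward passes, and read a point estimate and an uncertainty interval off the sample of predictions. A minimal PyTorch sketch follows; the network shape, dropout rate, and 90% interval are invented for illustration and are not the paper's architecture.

```python
import torch
import torch.nn as nn

# Small regression net with dropout layers; dropout stays active at
# inference time so repeated forward passes sample different subnetworks.
model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_interval(model, x, n_samples=100, q=0.05):
    model.train()  # keep dropout stochastic at prediction time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    lower = preds.quantile(q, dim=0)       # 5th percentile
    upper = preds.quantile(1 - q, dim=0)   # 95th percentile
    return preds.mean(dim=0), lower, upper  # point estimate + 90% interval

x = torch.randn(5, 8)  # e.g., 5 cutting-condition feature vectors
mean, lower, upper = mc_dropout_interval(model, x)
```

A wide interval for a given input then signals high model uncertainty about that tool-wear estimate.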
{"title":"Addressing Uncertainty in Tool Wear Prediction with Dropout-Based Neural Network","authors":"Arup Dey, Nita Yodo, Om P. Yadav, Ragavanantham Shanmugam, Monsuru Ramoni","doi":"10.3390/computers12090187","DOIUrl":"https://doi.org/10.3390/computers12090187","url":null,"abstract":"Data-driven algorithms have been widely applied in predicting tool wear because of the high prediction performance of the algorithms, availability of data sets, and advancements in computing capabilities in recent years. Although most algorithms are supposed to generate outcomes with high precision and accuracy, this is not always true in practice. Uncertainty exists in distinct phases of applying data-driven algorithms due to noises and randomness in data, the presence of redundant and irrelevant features, and model assumptions. Uncertainty due to noise and missing data is known as data uncertainty. On the other hand, model assumptions and imperfection are reasons for model uncertainty. In this paper, both types of uncertainty are considered in the tool wear prediction. Empirical mode decomposition is applied to reduce uncertainty from raw data. Additionally, the Monte Carlo dropout technique is used in training a neural network algorithm to incorporate model uncertainty. The unique feature of the proposed method is that it estimates tool wear as an interval, and the interval range represents the degree of uncertainty. Different performance measurement matrices are used to compare the proposed method. It is shown that the proposed approach can predict tool wear with higher accuracy.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135063141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video Summarization Based on Feature Fusion and Data Augmentation
Pub Date: 2023-09-15 | DOI: 10.3390/computers12090186
Theodoros Psallidas, Evaggelos Spyrou
During the last few years, several technological advances have led to an increase in the creation and consumption of audiovisual multimedia content. Users are overexposed to videos via social media, video sharing websites, and mobile phone applications. For efficient browsing, searching, and navigation across multimedia collections and repositories, e.g., for finding videos relevant to a particular topic or interest, this ever-increasing content should be described by informative yet concise content representations. A common solution is to construct a brief summary of a video that can be presented to users instead of the full video, so that they can decide whether to watch or skip the whole video. Such summaries are ideally more expressive than alternatives such as brief textual descriptions or keywords. In this work, video summarization is approached as a supervised classification task that relies on the feature fusion of audio and visual data. Specifically, the goal is to generate dynamic video summaries, i.e., compositions of parts of the original video that include its most essential segments while preserving the original temporal sequence. The work relies on datasets annotated on a per-frame basis, wherein parts of videos are labeled as “informative” or “noninformative”, with the latter being excluded from the produced summary. The novelties of the proposed approach are that (a) prior to classification, a transfer-learning strategy is employed whereby deep features from pretrained models are used as input to the classifiers, making them more intuitive and robust, and (b) the training dataset is augmented with other publicly available datasets. The proposed approach is evaluated on three datasets of user-generated videos, and it is demonstrated that deep features and data augmentation improve the accuracy of video summaries relative to human annotations. Moreover, the approach is domain independent, can be used on any video, and could be extended to rely on richer feature representations or include other data modalities.
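A minimal sketch of the overall pipeline shape, early fusion of per-frame visual and audio features followed by a supervised informative/noninformative classifier, is given below. The feature dimensions, random placeholder data, and the choice of scikit-learn's MLPClassifier are assumptions made for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assume per-frame deep visual features (e.g., from a pretrained CNN) and
# audio features have already been extracted for each video frame.
n_frames = 1000
visual = np.random.randn(n_frames, 512)     # placeholder pretrained-CNN features
audio  = np.random.randn(n_frames, 64)      # placeholder audio features
labels = np.random.randint(0, 2, n_frames)  # 1 = informative, 0 = noninformative

# Early (feature-level) fusion: concatenate the two modalities per frame.
fused = np.concatenate([visual, audio], axis=1)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(fused, labels)

# Dynamic summary = informative frames only, kept in original temporal order.
keep = clf.predict(fused) == 1
summary_frame_indices = np.flatnonzero(keep)
```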
{"title":"Video Summarization Based on Feature Fusion and Data Augmentation","authors":"Theodoros Psallidas, Evaggelos Spyrou","doi":"10.3390/computers12090186","DOIUrl":"https://doi.org/10.3390/computers12090186","url":null,"abstract":"During the last few years, several technological advances have led to an increase in the creation and consumption of audiovisual multimedia content. Users are overexposed to videos via several social media or video sharing websites and mobile phone applications. For efficient browsing, searching, and navigation across several multimedia collections and repositories, e.g., for finding videos that are relevant to a particular topic or interest, this ever-increasing content should be efficiently described by informative yet concise content representations. A common solution to this problem is the construction of a brief summary of a video, which could be presented to the user, instead of the full video, so that she/he could then decide whether to watch or ignore the whole video. Such summaries are ideally more expressive than other alternatives, such as brief textual descriptions or keywords. In this work, the video summarization problem is approached as a supervised classification task, which relies on feature fusion of audio and visual data. Specifically, the goal of this work is to generate dynamic video summaries, i.e., compositions of parts of the original video, which include its most essential video segments, while preserving the original temporal sequence. This work relies on annotated datasets on a per-frame basis, wherein parts of videos are annotated as being “informative” or “noninformative”, with the latter being excluded from the produced summary. The novelties of the proposed approach are, (a) prior to classification, a transfer learning strategy to use deep features from pretrained models is employed. These models have been used as input to the classifiers, making them more intuitive and robust to objectiveness, and (b) the training dataset was augmented by using other publicly available datasets. The proposed approach is evaluated using three datasets of user-generated videos, and it is demonstrated that deep features and data augmentation are able to improve the accuracy of video summaries based on human annotations. Moreover, it is domain independent, could be used on any video, and could be extended to rely on richer feature representations or include other data modalities.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135437880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Counterfeit Detection with Multi-Features on Secure 2D Grayscale Codes
Pub Date: 2023-09-14 | DOI: 10.3390/computers12090183
Bimo Sunarfri Hantono, Syukron Abu Ishaq Alfarozi, Azkario Rizky Pratama, Ahmad Ataka Awwalur Rizqi, I Wayan Mustika, Mardhani Riasetiawan, Anna Maria Sri Asih
Counterfeit products have become a pervasive problem in the global marketplace, necessitating effective strategies to protect both consumers and brands. This study examines the role of cybersecurity in addressing counterfeiting, focusing specifically on a multi-level grayscale watermark-based authentication system. The system comprises a generator responsible for creating a secure 2D code and an authenticator designed to extract the watermark information and verify product authenticity. To authenticate the secure 2D code, we propose various features, including analyses of the spatial domain, the frequency domain, and the grayscale watermark distribution. Furthermore, we emphasize the importance of selecting appropriate interpolation methods to enhance counterfeit detection. Our proposed approach demonstrates remarkable performance, achieving precision, recall, and specificity surpassing 84.8%, 83.33%, and 84.5%, respectively, across different datasets.
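To make the multi-feature idea concrete, here is a toy Python sketch that derives spatial-domain statistics, a grayscale histogram, and a low-frequency energy ratio from a grayscale 2D code via numpy's FFT. The specific features and the 128x128 random stand-in image are illustrative assumptions, not the paper's authenticator.

```python
import numpy as np

def code_features(img):
    """Toy spatial- and frequency-domain features for a grayscale 2D code.
    `img` is a 2D float array with values in [0, 1]."""
    # Spatial domain: grayscale distribution statistics.
    mean, std = img.mean(), img.std()
    hist, _ = np.histogram(img, bins=16, range=(0.0, 1.0), density=True)
    # Frequency domain: share of energy in low spatial frequencies.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8].sum()
    ratio = low / (spectrum.sum() + 1e-9)
    return np.concatenate([[mean, std, ratio], hist])

features = code_features(np.random.rand(128, 128))  # stand-in for a scanned code
```

A classifier trained on such feature vectors from genuine and reprinted codes could then flag counterfeits, since reprinting tends to distort the grayscale distribution and frequency content.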
{"title":"Enhancing Counterfeit Detection with Multi-Features on Secure 2D Grayscale Codes","authors":"Bimo Sunarfri Hantono, Syukron Abu Ishaq Alfarozi, Azkario Rizky Pratama, Ahmad Ataka Awwalur Rizqi, I Wayan Mustika, Mardhani Riasetiawan, Anna Maria Sri Asih","doi":"10.3390/computers12090183","DOIUrl":"https://doi.org/10.3390/computers12090183","url":null,"abstract":"Counterfeit products have become a pervasive problem in the global marketplace, necessitating effective strategies to protect both consumers and brands. This study examines the role of cybersecurity in addressing counterfeiting issues, specifically focusing on a multi-level grayscale watermark-based authentication system. The system comprises a generator responsible for creating a secure 2D code, and an authenticator designed to extract watermark information and verify product authenticity. To authenticate the secure 2D code, we propose various features, including the analysis of the spatial domain, frequency domain, and grayscale watermark distribution. Furthermore, we emphasize the importance of selecting appropriate interpolation methods to enhance counterfeit detection. Our proposed approach demonstrates remarkable performance, achieving precision, recall, and specificities surpassing 84.8%, 83.33%, and 84.5%, respectively, across different datasets.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134912001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Specification Mining over Temporal Data
Pub Date: 2023-09-14 | DOI: 10.3390/computers12090185
Giacomo Bergami, Samuel Appleby, Graham Morgan
Current specification mining algorithms for temporal data rely on exhaustive search approaches, which become detrimental in real data settings where a plethora of distinct temporal behaviours are recorded over prolonged observations. This paper proposes a novel algorithm, Bolt2, based on a refined heuristic search of our previous algorithm, Bolt. Our experiments show that the proposed approach not only surpasses exhaustive search methods in terms of running time but also guarantees a minimal description that captures the overall temporal behaviour. This is achieved through a hypothesis lattice search that exploits support metrics. Our novel specification mining algorithm also outperforms the results achieved in our previous contribution.
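For intuition about the support metrics such an algorithm can exploit, the toy sketch below computes the support of a single Declare-style Response(a, b) template over a log of traces. Bolt2's hypothesis-lattice search is not reproduced here, and the template choice is purely illustrative.

```python
from typing import List

def response_holds(trace: List[str], a: str, b: str) -> bool:
    """Declare-style Response(a, b): every a is eventually followed by a b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

def support(log: List[List[str]], a: str, b: str) -> float:
    """Fraction of traces in the log that satisfy Response(a, b)."""
    return sum(response_holds(t, a, b) for t in log) / len(log)

log = [["a", "c", "b"], ["a", "b", "a", "b"], ["a", "c"]]
print(support(log, "a", "b"))  # 2/3: the third trace leaves an 'a' unanswered
```

A heuristic miner can prune a candidate clause (and everything it implies in the lattice) as soon as its support falls below a threshold, avoiding the exhaustive enumeration the abstract criticizes.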
{"title":"Specification Mining over Temporal Data","authors":"Giacomo Bergami, Samuel Appleby, Graham Morgan","doi":"10.3390/computers12090185","DOIUrl":"https://doi.org/10.3390/computers12090185","url":null,"abstract":"Current specification mining algorithms for temporal data rely on exhaustive search approaches, which become detrimental in real data settings where a plethora of distinct temporal behaviours are recorded over prolonged observations. This paper proposes a novel algorithm, Bolt2, based on a refined heuristic search of our previous algorithm, Bolt. Our experiments show that the proposed approach not only surpasses exhaustive search methods in terms of running time but also guarantees a minimal description that captures the overall temporal behaviour. This is achieved through a hypothesis lattice search that exploits support metrics. Our novel specification mining algorithm also outperforms the results achieved in our previous contribution.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134914439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Process-Oriented Requirements Definition and Analysis of Software Components in Critical Systems
Pub Date: 2023-09-14 | DOI: 10.3390/computers12090184
Benedetto Intrigila, Giuseppe Della Penna, Andrea D’Ambrogio, Dario Campagna, Malina Grigore
Requirements management is a key aspect in the development of software components, since complex systems are often subject to frequent updates due to continuously changing requirements. This is especially true in critical systems, i.e., systems whose failure or malfunctioning may lead to severe consequences. This paper proposes a three-step approach that incrementally refines a critical system specification, from a lightweight high-level model targeted to stakeholders, down to a formal standard model that links requirements, processes and data. The resulting model provides the requirements specification used to feed the subsequent development, verification and maintenance activities, and can also be seen as a first step towards the development of a digital twin of the physical system.
{"title":"Process-Oriented Requirements Definition and Analysis of Software Components in Critical Systems","authors":"Benedetto Intrigila, Giuseppe Della Penna, Andrea D’Ambrogio, Dario Campagna, Malina Grigore","doi":"10.3390/computers12090184","DOIUrl":"https://doi.org/10.3390/computers12090184","url":null,"abstract":"Requirements management is a key aspect in the development of software components, since complex systems are often subject to frequent updates due to continuously changing requirements. This is especially true in critical systems, i.e., systems whose failure or malfunctioning may lead to severe consequences. This paper proposes a three-step approach that incrementally refines a critical system specification, from a lightweight high-level model targeted to stakeholders, down to a formal standard model that links requirements, processes and data. The resulting model provides the requirements specification used to feed the subsequent development, verification and maintenance activities, and can also be seen as a first step towards the development of a digital twin of the physical system.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134912194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multispectral Image Generation from RGB Based on WSL Color Representation: Wavelength, Saturation, and Lightness
Pub Date: 2023-09-13 | DOI: 10.3390/computers12090182
Vaclav Skala
Image processing techniques rely almost exclusively on the RGB (red–green–blue) representation, which is significantly shaped by technological considerations. An RGB triplet represents a mixture of the wavelength, saturation, and lightness values of light, and this mixture leads to unexpected chromaticity artifacts during processing. Processing based directly on wavelength, saturation, and lightness should therefore be more resistant to the introduction of color artifacts. However, converting RGB values to corresponding wavelengths is not straightforward. This contribution describes a novel, simple, and accurate method for extracting the wavelength, saturation, and lightness of a color represented by an RGB triplet. The conversion relies on the known RGB values of the rainbow spectrum and accommodates variations in color saturation.
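As a crude stand-in for the idea (not the paper's spectrum-table interpolation), the sketch below maps an RGB triplet to hue, saturation, and lightness and then linearly stretches the hue onto a visible range of roughly 380-700 nm. Real spectral loci are not linear in hue, which is precisely why the paper interpolates known rainbow-spectrum RGB values instead.

```python
import colorsys

def rgb_to_wsl(r, g, b):
    """Crude WSL approximation: map hue linearly onto the visible spectrum.
    r, g, b are floats in [0, 1]. This linear hue mapping is only a rough
    stand-in for the paper's rainbow-spectrum interpolation."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    # Hue 0 (red) -> ~700 nm, hue ~0.75 (violet) -> ~380 nm.
    h = min(h, 0.75)  # hues past violet (magentas) have no spectral wavelength
    wavelength = 700.0 - (h / 0.75) * (700.0 - 380.0)
    return wavelength, s, l

print(rgb_to_wsl(1.0, 0.0, 0.0))  # ~700 nm, fully saturated red
```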
{"title":"Multispectral Image Generation from RGB Based on WSL Color Representation: Wavelength, Saturation, and Lightness","authors":"Vaclav Skala","doi":"10.3390/computers12090182","DOIUrl":"https://doi.org/10.3390/computers12090182","url":null,"abstract":"Image processing techniques are based nearly exclusively on RGB (red–green–blue) representation, which is significantly influenced by technological issues. The RGB triplet represents a mixture of the wavelength, saturation, and lightness values of light. It leads to unexpected chromaticity artifacts in processing. Therefore, processing based on the wavelength, saturation, and lightness should be more resistant to the introduction of color artifacts. The proposed process of converting RGB values to corresponding wavelengths is not straightforward. In this contribution, a novel simple and accurate method for extracting the wavelength, saturation, and lightness of a color represented by an RGB triplet is described. The conversion relies on the known RGB values of the rainbow spectrum and accommodates variations in color saturation.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135740183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building an Expert System through Machine Learning for Predicting the Quality of a Website Based on Its Completion
Pub Date: 2023-09-11 | DOI: 10.3390/computers12090181
Vishnu Priya Biyyapu, Sastry Kodanda Rama Jammalamadaka, Sasi Bhanu Jammalamadaka, Bhupati Chokara, Bala Krishna Kamesh Duvvuri, Raja Rao Budaraju
The Internet is now the main channel for disseminating information, and users have differing expectations of a website’s calibre with respect to the content it posts and presents. A website’s quality is influenced by up to 120 factors, each represented by two to fifteen attributes. Quantifying these features and evaluating the quality of a website from the feature counts is a major challenge. One of the aspects that determines a website’s quality is its completeness, which concerns the existence of all the required objects and their connections with one another. Because it is not easy to build an expert model that evaluates website quality from feature counts, this paper focuses on that challenge. Both a methodology for calculating a website’s quality and a parser-based approach for measuring feature counts are offered. We provide a multi-layer perceptron model that serves as an expert model for forecasting website quality from the "completeness" perspective. The accuracy of its predictions is 98%, whilst the accuracy of the nearest model is 87%.
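A minimal sketch of such an expert model, a multi-layer perceptron trained on parser-derived feature counts, appears below using scikit-learn. The feature matrix, label encoding, and network shape are placeholders invented for the example; the paper's 120 factors and their attributes are not modeled.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: one row of parser-derived feature counts per website
# (e.g., counts of objects and the links between them), with a quality label.
X = np.random.randint(0, 15, size=(500, 120)).astype(float)
y = np.random.randint(0, 2, size=500)  # 1 = complete, 0 = incomplete

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out accuracy of the quality predictor
```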
{"title":"Building an Expert System through Machine Learning for Predicting the Quality of a Website Based on Its Completion","authors":"Vishnu Priya Biyyapu, Sastry Kodanda Rama Jammalamadaka, Sasi Bhanu Jammalamadaka, Bhupati Chokara, Bala Krishna Kamesh Duvvuri, Raja Rao Budaraju","doi":"10.3390/computers12090181","DOIUrl":"https://doi.org/10.3390/computers12090181","url":null,"abstract":"The main channel for disseminating information is now the Internet. Users have different expectations for the calibre of websites regarding the posted and presented content. The website’s quality is influenced by up to 120 factors, each represented by two to fifteen attributes. A major challenge is quantifying the features and evaluating the quality of a website based on the feature counts. One of the aspects that determines a website’s quality is its completeness, which focuses on the existence of all the objects and their connections with one another. It is not easy to build an expert model based on feature counts to evaluate website quality, so this paper has focused on that challenge. Both a methodology for calculating a website’s quality and a parser-based approach for measuring feature counts are offered. We provide a multi-layer perceptron model that is an expert model for forecasting website quality from the \"completeness\" perspective. The accuracy of the predictions is 98%, whilst the accuracy of the nearest model is 87%.","PeriodicalId":46292,"journal":{"name":"Computers","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136071085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}