
Latest Publications in Computers

Model and Fuzzy Controller Design Approaches for Stability of Modern Robot Manipulators
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-23 DOI: 10.3390/computers12100190
Shabnom Mustary, Mohammod Abul Kashem, Mohammad Asaduzzaman Chowdhury, Jia Uddin
Robotics is a crucial technology of Industry 4.0 that offers a diverse array of applications in the industrial sector. However, the quality of a robot’s manipulator is contingent on its stability, which is a function of the manipulator’s parameters. In previous studies, stability has been evaluated based on a small number of manipulator parameters; as a result, there is not much information about the integration/optimal arrangement/combination of manipulator parameters toward stability. Through Lagrangian mechanics and the consideration of multiple parameters, a mathematical model of a modern manipulator is developed in this study. In this mathematical model, motor acceleration, moment of inertia, and deflection are considered in order to assess the level of stability of the ABB Robot manipulator of six degrees of freedom. A novel mathematical approach to stability is developed in which stability is correlated with motor acceleration, moment of inertia, and deflection. In addition to this, fuzzy logic inference principles are employed to determine the status of stability. The numerical data of different manipulator parameters are verified using mathematical approaches. Results indicated that as motor acceleration increases, stability increases, while stability decreases as moment of inertia and deflection increase. It is anticipated that the implementation of these findings will increase industrial output.
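As an illustration of the fuzzy-inference idea described in this abstract, the following minimal sketch maps the three parameters named above (motor acceleration, moment of inertia, deflection) to a stability score with a Mamdani-style rule base. All membership ranges, rules, and units are hypothetical placeholders, not the authors' controller.

```python
# Illustrative Mamdani-style fuzzy inference for manipulator stability.
# Membership ranges and rules are hypothetical, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def stability_score(accel, inertia, deflection):
    # Fuzzify each input into a "high" degree (hypothetical ranges).
    accel_high = tri(accel, 0.0, 5.0, 10.0)        # rad/s^2
    inertia_high = tri(inertia, 0.0, 0.5, 1.0)     # kg*m^2
    deflect_high = tri(deflection, 0.0, 2.5, 5.0)  # mm

    # Rule strengths paired with crisp stability outputs in [0, 1].
    rules = [
        (accel_high, 0.9),                       # higher acceleration -> more stable
        (max(inertia_high, deflect_high), 0.2),  # high inertia or deflection -> less stable
        (1.0 - max(accel_high, inertia_high, deflect_high), 0.5),  # moderate inputs -> medium
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den  # weighted-average defuzzification

if __name__ == "__main__":
    print(round(stability_score(accel=6.0, inertia=0.2, deflection=1.0), 3))
```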
Citations: 0
Implementing Tensor-Organized Memory for Message Retrieval Purposes in Neuromorphic Chips
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-22 DOI: 10.3390/computers12100189
Arash Khajooei Nejad, Mohammad (Behdad) Jamshidi, Shahriar B. Shokouhi
This paper introduces Tensor-Organized Memory (TOM), a novel neuromorphic architecture inspired by the human brain’s structural and functional principles. Utilizing spike-timing-dependent plasticity (STDP) and Hebbian rules, TOM exhibits cognitive behaviors similar to the human brain. Compared to conventional architectures using a simplified leaky integrate-and-fire (LIF) neuron model, TOM showcases robust performance, even in noisy conditions. TOM’s adaptability and unique organizational structure, rooted in the Columnar-Organized Memory (COM) framework, position it as a transformative digital memory processing solution. Innovative neural architecture, advanced recognition mechanisms, and integration of synaptic plasticity rules enhance TOM’s cognitive capabilities. We have compared the TOM architecture with a conventional floating-point architecture, using a simplified LIF neuron model. We also implemented tests with varying noise levels and partially erased messages to evaluate its robustness. Despite the slight degradation in performance with noisy messages beyond 30%, the TOM architecture exhibited appreciable performance under less-than-ideal conditions. This exploration into the TOM architecture reveals its potential as a framework for future neuromorphic systems. This study lays the groundwork for future applications in implementing neuromorphic chips for high-performance intelligent edge devices, thereby revolutionizing industries and enhancing user experiences within the power of artificial intelligence.
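The architecture above is built on a simplified leaky integrate-and-fire (LIF) neuron. The snippet below is a generic discrete-time LIF simulation, included only to make that neuron model concrete; the time constants and threshold are arbitrary placeholders and it is not the TOM implementation.

```python
import numpy as np

# Generic discrete-time leaky integrate-and-fire (LIF) neuron.
# Parameters are illustrative placeholders, not those used in TOM.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r_m=1.0):
    v = v_rest
    spikes = np.zeros_like(input_current)
    trace = np.empty_like(input_current)
    for t, i_t in enumerate(input_current):
        # Leaky integration of the membrane potential (forward Euler step).
        v += dt / tau * (-(v - v_rest) + r_m * i_t)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes[t] = 1.0
            v = v_reset            # reset after spiking
        trace[t] = v
    return spikes, trace

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    current = 1.5 + 0.5 * rng.standard_normal(1000)  # noisy constant drive
    spikes, _ = simulate_lif(current)
    print("spike count:", int(spikes.sum()))
```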
Citations: 0
Evaluating Video Games as Tools for Education on Fake News and Misinformation
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-21 DOI: 10.3390/computers12090188
Ruth S. Contreras-Espinosa, Jose Luis Eguia-Gomez
Despite access to reliable information being essential for equal opportunities in our society, current school curricula only include some notions about media literacy in a limited context. Thus, it is necessary to create scenarios for reflection on and a well-founded analysis of misinformation. Video games may be an effective approach to foster these skills and can seamlessly integrate learning content into their design, enabling achieving multiple learning outcomes and building competencies that can transfer to real-life situations. We analyzed 24 video games about media literacy by studying their content, design, and characteristics that may affect their implementation in learning settings. Even though not all learning outcomes considered were equally addressed, the results show that media literacy video games currently on the market could be used as effective tools to achieve critical learning goals and may allow users to understand, practice, and implement skills to fight misinformation, regardless of their complexity in terms of game mechanics. However, we detected that certain characteristics of video games may affect their implementation in learning environments, such as their availability, estimated playing time, approach, or whether they include real or fictional worlds, variables that should be further considered by both developers and educators.
Citations: 0
Addressing Uncertainty in Tool Wear Prediction with Dropout-Based Neural Network
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-19 DOI: 10.3390/computers12090187
Arup Dey, Nita Yodo, Om P. Yadav, Ragavanantham Shanmugam, Monsuru Ramoni
Data-driven algorithms have been widely applied in predicting tool wear because of the high prediction performance of the algorithms, availability of data sets, and advancements in computing capabilities in recent years. Although most algorithms are supposed to generate outcomes with high precision and accuracy, this is not always true in practice. Uncertainty exists in distinct phases of applying data-driven algorithms due to noises and randomness in data, the presence of redundant and irrelevant features, and model assumptions. Uncertainty due to noise and missing data is known as data uncertainty. On the other hand, model assumptions and imperfection are reasons for model uncertainty. In this paper, both types of uncertainty are considered in the tool wear prediction. Empirical mode decomposition is applied to reduce uncertainty from raw data. Additionally, the Monte Carlo dropout technique is used in training a neural network algorithm to incorporate model uncertainty. The unique feature of the proposed method is that it estimates tool wear as an interval, and the interval range represents the degree of uncertainty. Different performance measurement matrices are used to compare the proposed method. It is shown that the proposed approach can predict tool wear with higher accuracy.
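Monte Carlo dropout, mentioned above, is a standard way to expose model uncertainty: dropout stays active at inference and repeated stochastic forward passes yield an interval rather than a point estimate. The sketch below shows that mechanic on a generic PyTorch regressor with a placeholder architecture and random input features; it is not the authors' network or data.

```python
import torch
import torch.nn as nn

# Generic Monte Carlo dropout regressor; architecture and data are placeholders.
model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_interval(model, x, n_samples=100, z=1.96):
    """Run stochastic forward passes with dropout enabled to get an interval."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])  # (T, N, 1)
    mean = preds.mean(dim=0)
    std = preds.std(dim=0)
    return mean - z * std, mean + z * std  # interval width reflects model uncertainty

if __name__ == "__main__":
    features = torch.randn(5, 8)  # e.g., sensor-derived features per cut
    lo, hi = predict_interval(model, features)
    print(torch.cat([lo, hi], dim=1))
```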
Citations: 0
Video Summarization Based on Feature Fusion and Data Augmentation
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-15 DOI: 10.3390/computers12090186
Theodoros Psallidas, Evaggelos Spyrou
During the last few years, several technological advances have led to an increase in the creation and consumption of audiovisual multimedia content. Users are overexposed to videos via several social media or video sharing websites and mobile phone applications. For efficient browsing, searching, and navigation across several multimedia collections and repositories, e.g., for finding videos that are relevant to a particular topic or interest, this ever-increasing content should be efficiently described by informative yet concise content representations. A common solution to this problem is the construction of a brief summary of a video, which could be presented to the user, instead of the full video, so that she/he could then decide whether to watch or ignore the whole video. Such summaries are ideally more expressive than other alternatives, such as brief textual descriptions or keywords. In this work, the video summarization problem is approached as a supervised classification task, which relies on feature fusion of audio and visual data. Specifically, the goal of this work is to generate dynamic video summaries, i.e., compositions of parts of the original video, which include its most essential video segments, while preserving the original temporal sequence. This work relies on annotated datasets on a per-frame basis, wherein parts of videos are annotated as being “informative” or “noninformative”, with the latter being excluded from the produced summary. The novelties of the proposed approach are, (a) prior to classification, a transfer learning strategy to use deep features from pretrained models is employed. These models have been used as input to the classifiers, making them more intuitive and robust to objectiveness, and (b) the training dataset was augmented by using other publicly available datasets. The proposed approach is evaluated using three datasets of user-generated videos, and it is demonstrated that deep features and data augmentation are able to improve the accuracy of video summaries based on human annotations. Moreover, it is domain independent, could be used on any video, and could be extended to rely on richer feature representations or include other data modalities.
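As a schematic of the supervised per-frame formulation described above, the sketch below trains a classifier on placeholder "deep feature" vectors and keeps only the frames predicted as informative, preserving temporal order. The features, labels, and classifier choice are illustrative assumptions, not the paper's fused audio-visual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: per-frame deep feature vectors and 0/1 "informative" labels.
rng = np.random.default_rng(1)
features = rng.normal(size=(2000, 512))      # stand-in for fused audio-visual embeddings
labels = (features[:, 0] > 0).astype(int)    # synthetic annotation for the sketch

clf = LogisticRegression(max_iter=1000).fit(features[:1500], labels[:1500])

# Classify unseen frames and keep the informative ones, preserving temporal order.
test_features = features[1500:]
keep_mask = clf.predict(test_features) == 1
summary_frame_indices = np.flatnonzero(keep_mask) + 1500
print("frames kept for the summary:", summary_frame_indices[:10], "...")
```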
Citations: 0
Enhancing Counterfeit Detection with Multi-Features on Secure 2D Grayscale Codes
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-14 DOI: 10.3390/computers12090183
Bimo Sunarfri Hantono, Syukron Abu Ishaq Alfarozi, Azkario Rizky Pratama, Ahmad Ataka Awwalur Rizqi, I Wayan Mustika, Mardhani Riasetiawan, Anna Maria Sri Asih
Counterfeit products have become a pervasive problem in the global marketplace, necessitating effective strategies to protect both consumers and brands. This study examines the role of cybersecurity in addressing counterfeiting issues, specifically focusing on a multi-level grayscale watermark-based authentication system. The system comprises a generator responsible for creating a secure 2D code, and an authenticator designed to extract watermark information and verify product authenticity. To authenticate the secure 2D code, we propose various features, including the analysis of the spatial domain, frequency domain, and grayscale watermark distribution. Furthermore, we emphasize the importance of selecting appropriate interpolation methods to enhance counterfeit detection. Our proposed approach demonstrates remarkable performance, achieving precision, recall, and specificities surpassing 84.8%, 83.33%, and 84.5%, respectively, across different datasets.
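The abstract names three feature families (spatial domain, frequency domain, grayscale distribution) without detailing them here; the toy sketch below only illustrates what such a multi-feature descriptor for a grayscale 2D code could look like, using NumPy. Every choice of statistic and bin count is a placeholder.

```python
import numpy as np

def code_features(img):
    """Toy multi-feature descriptor for a grayscale 2D code (values in [0, 255])."""
    img = img.astype(np.float64)

    # Spatial-domain features: global statistics and local contrast.
    spatial = [img.mean(), img.std(),
               np.abs(np.diff(img, axis=0)).mean(),
               np.abs(np.diff(img, axis=1)).mean()]

    # Frequency-domain feature: share of spectral energy near the centre (low frequencies).
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    low = spectrum[h // 4: 3 * h // 4, w // 4: 3 * w // 4].sum()
    freq = [low / spectrum.sum()]

    # Grayscale-distribution features: a coarse normalized histogram.
    hist, _ = np.histogram(img, bins=16, range=(0, 255))
    dist = (hist / hist.sum()).tolist()

    return np.array(spatial + freq + dist)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_code = rng.integers(0, 256, size=(128, 128))
    print(code_features(fake_code).shape)  # feature vector for a downstream authenticator
```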
Citations: 0
Specification Mining over Temporal Data
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-14 DOI: 10.3390/computers12090185
Giacomo Bergami, Samuel Appleby, Graham Morgan
Current specification mining algorithms for temporal data rely on exhaustive search approaches, which become detrimental in real data settings where a plethora of distinct temporal behaviours are recorded over prolonged observations. This paper proposes a novel algorithm, Bolt2, based on a refined heuristic search of our previous algorithm, Bolt. Our experiments show that the proposed approach not only surpasses exhaustive search methods in terms of running time but also guarantees a minimal description that captures the overall temporal behaviour. This is achieved through a hypothesis lattice search that exploits support metrics. Our novel specification mining algorithm also outperforms the results achieved in our previous contribution.
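Specification miners of this kind typically rank candidate temporal constraints by a support metric over a log of traces. The toy example below computes the support of one "response" constraint (every A is eventually followed by B); it sketches only that general notion of support, not Bolt2's heuristic lattice search.

```python
# Toy support metric for a "response" constraint: every A is eventually followed by B.
def satisfies_response(trace, a, b):
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending  # no A left waiting for a later B

def support(log, a, b):
    """Fraction of traces in the log that satisfy the constraint."""
    return sum(satisfies_response(t, a, b) for t in log) / len(log)

if __name__ == "__main__":
    log = [["A", "C", "B"], ["A", "A", "B"], ["C", "B"], ["A", "C"]]
    print(support(log, "A", "B"))  # 0.75: three of four traces satisfy the constraint
```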
Citations: 0
Process-Oriented Requirements Definition and Analysis of Software Components in Critical Systems
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-14 DOI: 10.3390/computers12090184
Benedetto Intrigila, Giuseppe Della Penna, Andrea D’Ambrogio, Dario Campagna, Malina Grigore
Requirements management is a key aspect in the development of software components, since complex systems are often subject to frequent updates due to continuously changing requirements. This is especially true in critical systems, i.e., systems whose failure or malfunctioning may lead to severe consequences. This paper proposes a three-step approach that incrementally refines a critical system specification, from a lightweight high-level model targeted to stakeholders, down to a formal standard model that links requirements, processes and data. The resulting model provides the requirements specification used to feed the subsequent development, verification and maintenance activities, and can also be seen as a first step towards the development of a digital twin of the physical system.
Citations: 0
Multispectral Image Generation from RGB Based on WSL Color Representation: Wavelength, Saturation, and Lightness
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-13 DOI: 10.3390/computers12090182
Vaclav Skala
Image processing techniques are based nearly exclusively on RGB (red–green–blue) representation, which is significantly influenced by technological issues. The RGB triplet represents a mixture of the wavelength, saturation, and lightness values of light. It leads to unexpected chromaticity artifacts in processing. Therefore, processing based on the wavelength, saturation, and lightness should be more resistant to the introduction of color artifacts. The proposed process of converting RGB values to corresponding wavelengths is not straightforward. In this contribution, a novel simple and accurate method for extracting the wavelength, saturation, and lightness of a color represented by an RGB triplet is described. The conversion relies on the known RGB values of the rainbow spectrum and accommodates variations in color saturation.
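The paper's calibrated conversion is not reproduced in this listing; the fragment below only illustrates the general idea of mapping an RGB triplet to an approximate dominant wavelength plus saturation and lightness via known spectral anchor points. The hue-to-wavelength table is a rough, made-up placeholder.

```python
import colorsys

# Tiny illustrative lookup of approximate dominant wavelengths (nm) by hue.
# These anchor points are rough placeholders, not the paper's calibrated spectrum.
HUE_TO_WAVELENGTH = [
    (0.00, 645.0),   # red
    (0.17, 580.0),   # yellow
    (0.33, 510.0),   # green
    (0.50, 490.0),   # cyan
    (0.67, 460.0),   # blue
    (0.83, 430.0),   # violet-ish
]

def rgb_to_wsl(r, g, b):
    """Approximate (wavelength, saturation, lightness) for an RGB triplet in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    # Piecewise-linear interpolation of wavelength over the hue anchors.
    for (h0, w0), (h1, w1) in zip(HUE_TO_WAVELENGTH, HUE_TO_WAVELENGTH[1:]):
        if h0 <= h <= h1:
            t = (h - h0) / (h1 - h0)
            return w0 + t * (w1 - w0), s, l
    # Non-spectral hues beyond the last anchor (e.g., magenta) are clamped here.
    return HUE_TO_WAVELENGTH[-1][1], s, l

if __name__ == "__main__":
    print(rgb_to_wsl(0.2, 0.8, 0.3))  # a saturated green
```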
Citations: 0
Building an Expert System through Machine Learning for Predicting the Quality of a Website Based on Its Completion
Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2023-09-11 DOI: 10.3390/computers12090181
Vishnu Priya Biyyapu, Sastry Kodanda Rama Jammalamadaka, Sasi Bhanu Jammalamadaka, Bhupati Chokara, Bala Krishna Kamesh Duvvuri, Raja Rao Budaraju
The main channel for disseminating information is now the Internet. Users have different expectations for the calibre of websites regarding the posted and presented content. The website’s quality is influenced by up to 120 factors, each represented by two to fifteen attributes. A major challenge is quantifying the features and evaluating the quality of a website based on the feature counts. One of the aspects that determines a website’s quality is its completeness, which focuses on the existence of all the objects and their connections with one another. It is not easy to build an expert model based on feature counts to evaluate website quality, so this paper has focused on that challenge. Both a methodology for calculating a website’s quality and a parser-based approach for measuring feature counts are offered. We provide a multi-layer perceptron model that is an expert model for forecasting website quality from the "completeness" perspective. The accuracy of the predictions is 98%, whilst the accuracy of the nearest model is 87%.
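The expert model itself is not available from this listing; the sketch below only shows the general shape of such a pipeline with scikit-learn's MLPClassifier trained on made-up completeness feature counts (the dataset, labels, and layer sizes are assumptions, not the authors' 98%-accurate model).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Made-up data: each row holds completeness-related feature counts for one website,
# and the label is a discretized quality class. Not the authors' dataset.
rng = np.random.default_rng(42)
X = rng.integers(0, 15, size=(500, 120)).astype(float)   # ~120 quality factors
y = (X.mean(axis=1) > 7).astype(int)                     # synthetic quality label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```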
Citations: 0