
Journal of Intelligent Systems: latest publications

A systematic literature review of undiscovered vulnerabilities and tools in smart contract technology
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2023-0038
Oualid Zaazaa, Hanan El Bakkali
Abstract In recent years, smart contract technology has garnered significant attention due to its ability to address trust issues that traditional technologies have long struggled with. However, like any evolving technology, smart contracts are not immune to vulnerabilities, and some remain underexplored, often eluding detection by existing vulnerability assessment tools. In this article, we perform a systematic literature review of the scientific research published between 2016 and 2021. The main objective of this work is to identify which vulnerabilities and smart contract technologies have not been well studied. In addition, we list all the datasets used by previous researchers, which can help in building more efficient machine-learning models in the future. Furthermore, comparisons are drawn among smart contract analysis tools by considering various features. Finally, future directions in the field of smart contracts are discussed that can help researchers set the direction for future research in this domain.
Citations: 0
Analyzing SQL payloads using logistic regression in a big data environment
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2023-0063
O. Shareef, Rehab Flaih Hasan, Ammar Hatem Farhan
Abstract Protecting big data from attacks on large organizations is essential because of how vital such data are to organizations and individuals. Moreover, such data can be put at risk when attackers gain unauthorized access to information and use it in illegal ways. One of the most common such attacks is the structured query language injection attack (SQLIA). This is a vulnerability attack that allows attackers to illegally access a database quickly and easily by manipulating structured query language (SQL) queries, especially in a big data environment. To address these risks, this study aims to build an approach that acts as a middle protection layer between the client and database server layers and reduces the time consumed to classify the SQL payload sent from the user layer. The proposed method trains a logistic regression model using a machine learning (ML) technique with the Spark ML library, which handles big data. An experiment was conducted using the SQLI dataset. Results show that the proposed approach achieved an accuracy of 99.04%, a precision of 98.87%, a recall of 99.89%, and an F-score of 99.04%. The time taken to identify and prevent SQLIA is 0.05 s. Our approach can protect the data by using the middle layer. Moreover, using the Spark ML library with ML algorithms gives better accuracy and shortens the time required to determine the type of request sent from the user layer.
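The classification step described above can be sketched in miniature. The following stdlib-Python toy (not the study's Spark ML pipeline; the keyword features, learning rate, and sample payloads are illustrative assumptions) trains a logistic regression to separate malicious from benign payloads:

```python
import math

# Toy sketch: logistic regression over keyword-presence features of SQL
# payloads. Feature set and training data are illustrative assumptions.
KEYWORDS = ["union", "select", "or 1=1", "--", "drop"]

def features(payload):
    p = payload.lower()
    # bias term plus one binary feature per suspicious keyword
    return [1.0] + [1.0 if k in p else 0.0 for k in KEYWORDS]

def train(samples, labels, lr=0.5, epochs=200):
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x))
            pred = 1.0 / (1.0 + math.exp(-z))
            for i in range(len(w)):  # stochastic gradient step
                w[i] += lr * (y - pred) * x[i]
    return w

def predict(w, payload):
    z = sum(wi * xi for wi, xi in zip(w, features(payload)))
    return 1 if z > 0 else 0

malicious = ["' OR 1=1 --", "1; DROP TABLE users",
             "a' UNION SELECT password FROM users"]
benign = ["john.doe@example.com", "blue suede shoes", "order 42"]
X = [features(p) for p in malicious + benign]
y = [1] * len(malicious) + [0] * len(benign)
w = train(X, y)
```

A production version would replace the hand-crafted features with the study's feature extraction and fit the model over a distributed DataFrame (e.g. with `pyspark.ml.classification.LogisticRegression`).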
Citations: 0
Anti-leakage method of network sensitive information data based on homomorphic encryption
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0281
Junlong Shi, Xiaofeng Zhao
Abstract With the development of artificial intelligence, people have begun to pay attention to the protection of sensitive information and data. Therefore, a homomorphic encryption framework based on efficient integer vectors is proposed and applied to deep learning to protect user privacy in a binary convolutional neural network model. The results show that the model can achieve high accuracy: training accuracy is 93.75% on the MNIST dataset and 89.24% on the original dataset. Because of the confidentiality of the data, the training accuracy on the training set is only 86.77%. After extending the training period, the accuracy converged after about 300 cycles, finally reaching about 86.39%. In addition, after taking the absolute value of the elements in the encryption matrix, the training accuracy of the model is 88.79% and the test accuracy is 85.12%. The improved model is also compared with the traditional model: it reduces storage consumption during model calculation and effectively improves calculation speed, with little impact on accuracy. Specifically, the improved model is 58 times faster than the traditional CNN model, and its storage consumption is 1/32 of the traditional CNN model's. Therefore, homomorphic encryption can be applied to information encryption in the context of big data, preserving the privacy of the neural network.
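The additive property such schemes rely on can be illustrated with a deliberately insecure toy: each vector component is masked by a secret key component, and adding ciphertexts component-wise corresponds to adding plaintexts. This demonstrates only the homomorphic-addition idea, not the integer-vector scheme used in the paper, and offers no real security:

```python
import random

# Deliberately insecure toy for illustrating additive homomorphism only.
M = 2**31 - 1  # toy modulus

def keygen(n):
    return [random.randrange(M) for _ in range(n)]

def encrypt(vec, key):
    return [(v + k) % M for v, k in zip(vec, key)]

def add_cipher(c1, c2):
    # component-wise addition of ciphertexts == addition of plaintexts
    return [(a + b) % M for a, b in zip(c1, c2)]

def decrypt(c, key, times=1):
    # `times` = number of ciphertexts summed (the key was added that often)
    return [(ci - times * k) % M for ci, k in zip(c, key)]

key = keygen(3)
c1 = encrypt([1, 2, 3], key)
c2 = encrypt([10, 20, 30], key)
plain_sum = decrypt(add_cipher(c1, c2), key, times=2)
```

Here `plain_sum` recovers the component-wise sum of the two plaintext vectors without either plaintext ever being exposed to the party doing the addition.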
Citations: 0
A new method for writer identification based on historical documents
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0244
A. Gattal, Chawki Djeddi, Faycel Abbas, I. Siddiqi, Bouderah Brahim
Abstract Identifying the writer of a handwritten document has remained an interesting pattern classification problem for document examiners, forensic experts, and paleographers. While mature identification systems have been developed for handwriting in contemporary documents, the problem remains challenging for historical manuscripts. Design and development of expert systems that can identify the writer of a questioned manuscript, or retrieve samples belonging to a given writer, can greatly help paleographers in their practice. In this context, the current study exploits the textural information in handwriting to characterize the writers of historical documents. More specifically, we employ oBIF (oriented basic image features) and hinge features and introduce a novel moment-based matching method to compare the feature vectors extracted from writing samples. Classification is based on minimizing a similarity criterion using the proposed moment distance. A comprehensive series of experiments using the International Conference on Document Analysis and Recognition 2017 historical writer identification dataset reported promising results and validated the ideas put forward in this study.
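The moment-based matching idea can be sketched as follows. The feature histograms, moment order, and nearest-neighbour decision rule here are illustrative assumptions, not the paper's exact formulation:

```python
# Toy sketch: compare writers by the raw statistical moments of their
# (assumed, already-extracted) feature histograms, and assign a query to
# the reference writer at minimum moment distance.
def moments(hist, order=3):
    # first `order` raw moments of a normalized 1-D feature histogram
    total = sum(hist) or 1.0
    p = [h / total for h in hist]
    return [sum((i ** k) * pi for i, pi in enumerate(p))
            for k in range(1, order + 1)]

def moment_distance(h1, h2, order=3):
    m1, m2 = moments(h1, order), moments(h2, order)
    return sum(abs(a - b) for a, b in zip(m1, m2))

def identify(query_hist, reference):
    # nearest-neighbour writer assignment by minimum moment distance
    return min(reference, key=lambda w: moment_distance(query_hist, reference[w]))

# illustrative reference histograms for two writers
refs = {"writer_A": [5, 9, 2, 0, 1], "writer_B": [0, 1, 4, 9, 6]}
```

A query histogram concentrated in the low bins, such as `[4, 8, 3, 1, 0]`, is assigned to `writer_A`, whose moments it matches far more closely than `writer_B`'s.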
Citations: 1
Reinforcement learning with Gaussian process regression using variational free energy
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0205
Kiseki Kameda, F. Tanaka
Abstract The essential part of existing reinforcement learning algorithms that use Gaussian process regression involves a complicated online Gaussian process regression algorithm. Our study proposes online and mini-batch Gaussian process regression algorithms that are easier to implement and faster to estimate for reinforcement learning. In our algorithm, the Gaussian process regression updates the value function through only the computation of two equations, which we then use to construct reinforcement learning algorithms. Our numerical experiments show that the proposed algorithm works as well as those from previous studies.
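For context, a minimal sketch of incremental (online) Gaussian process regression is shown below. This is a generic rank-one update of the inverse Gram matrix via the block-matrix inversion formula, not the paper's two-equation variational-free-energy form; the RBF kernel and hyperparameters are assumptions:

```python
import math

class OnlineGP:
    """Minimal online GP regression sketch (RBF kernel, noisy observations).

    The inverse Gram matrix is updated incrementally with the block-matrix
    inversion formula, so adding one sample costs O(n^2) instead of O(n^3).
    """
    def __init__(self, lengthscale=1.0, noise=1e-2):
        self.ls, self.noise = lengthscale, noise
        self.X, self.y, self.Kinv = [], [], []

    def k(self, a, b):
        return math.exp(-((a - b) ** 2) / (2 * self.ls ** 2))

    def add(self, x, y):
        if not self.X:
            self.Kinv = [[1.0 / (self.k(x, x) + self.noise)]]
        else:
            b = [self.k(x, xi) for xi in self.X]
            Kinv_b = [sum(self.Kinv[i][j] * b[j] for j in range(len(b)))
                      for i in range(len(b))]
            # Schur complement of the grown kernel matrix
            gamma = self.k(x, x) + self.noise - sum(
                bi * ki for bi, ki in zip(b, Kinv_b))
            n = len(self.X)
            new = [[self.Kinv[i][j] + Kinv_b[i] * Kinv_b[j] / gamma
                    for j in range(n)] + [-Kinv_b[i] / gamma]
                   for i in range(n)]
            new.append([-Kinv_b[j] / gamma for j in range(n)] + [1.0 / gamma])
            self.Kinv = new
        self.X.append(x)
        self.y.append(y)

    def predict(self, x):
        kstar = [self.k(x, xi) for xi in self.X]
        alpha = [sum(self.Kinv[i][j] * self.y[j] for j in range(len(self.y)))
                 for i in range(len(self.y))]
        return sum(ks * a for ks, a in zip(kstar, alpha))

gp = OnlineGP()
for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    gp.add(x, math.sin(x))
```

After the five updates, `gp.predict` interpolates the observed sine values closely; the point of the incremental form is that each `add` call avoids refactorizing the whole kernel matrix.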
Citations: 0
Development of an intelligent controller for sports training system based on FPGA
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0260
Yaser M. Abid, N. Kaittan, M. Mahdi, B. I. Bakri, A. Omran, M. Altaee, Sura Khalil Abid
Abstract Training, sports equipment, and facilities are the main aspects of sports advancement. Countries are investing heavily in the training of athletes, especially in table tennis. Athletes require basic equipment for exercises, but most cannot afford the high cost; hence, the need for a low-cost automated system has grown. To enhance the quality of athletes' training, the proposed research focuses on using the enormous developments in artificial intelligence to build an automated training system that can maintain the training duration and intensity whenever necessary. In this research, an intelligent controller has been designed to simulate training patterns of table tennis. The intelligent controller governs the system that sets the intensity, speed, and duration of the launched table tennis balls. The system detects a hand sign previously assigned to a given speed using an image detection method and adjusts the motor speed accordingly using pulse width modulation techniques. Simply showing the athlete's hand sign to the system triggers the artificial-intelligence camera to identify it, sending the ball at the assigned speed. The proposed device showed promising results in detecting hand signs with minimal error across training sessions and intensities. The image detection accuracy collected from the intelligent controller during training was 90.05%. Furthermore, the proposed system has a minimal material cost and can be easily installed and used.
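The speed-control side can be illustrated with a small sketch: a detected hand-sign class is mapped to a PWM duty cycle that sets the launcher motor speed. The class names, duty values, and timer resolution below are assumptions, not taken from the paper's FPGA design:

```python
# Illustrative hand-sign -> PWM duty-cycle mapping for the launcher motor.
SPEED_LEVELS = {"slow": 0.30, "medium": 0.60, "fast": 0.90}

def pwm_high_ticks(sign, period_ticks=1000):
    """Number of timer ticks per PWM period during which the output is high."""
    duty = SPEED_LEVELS.get(sign, 0.0)  # unrecognized sign -> motor off
    return round(duty * period_ticks)
```

On an FPGA the same mapping would be a small lookup table feeding the compare register of a hardware PWM counter.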
Citations: 0
On numerical characterizations of the topological reduction of incomplete information systems based on evidence theory
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0214
Changqing Li, Yanlan Zhang
Abstract Knowledge reduction of information systems is one of the most important parts of rough set theory in real-world applications. Based on the connections between rough set theory and the theory of topology, a kind of topological reduction of incomplete information systems is discussed. In this study, the topological reduction of incomplete information systems is characterized by belief and plausibility functions from evidence theory. First, we show that a topological space induced by a pair of approximation operators in an incomplete information system is pseudo-discrete, which induces a partition. Then, the topological reduction is characterized by the belief and plausibility function values of the sets in the partition. A topological reduction algorithm for computing the topological reducts in incomplete information systems is also proposed based on evidence theory, and its efficiency is examined by an example. Moreover, relationships among the concepts of topological reduct, classical reduct, belief reduct, and plausibility reduct of an incomplete information system are presented.
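The belief and plausibility values used in such characterizations can be sketched for a toy partition: each block B carries mass |B|/|U|, so Bel(X) sums the blocks contained in X and Pl(X) sums the blocks meeting X. The universe and partition below are assumed examples, not the paper's reduction algorithm:

```python
# Toy belief/plausibility computation over a partition of the universe.
def belief_plausibility(universe, partition, subset):
    X = set(subset)
    n = len(universe)
    bel = sum(len(B) for B in partition if set(B) <= X) / n  # blocks inside X
    pl = sum(len(B) for B in partition if set(B) & X) / n    # blocks meeting X
    return bel, pl

U = [1, 2, 3, 4, 5, 6]
P = [[1, 2], [3, 4], [5, 6]]
bel, pl = belief_plausibility(U, P, {1, 2, 3})
```

For X = {1, 2, 3}: the block {1, 2} lies inside X and {3, 4} merely meets it, giving Bel(X) = 2/6 and Pl(X) = 4/6, with Bel ≤ Pl as required.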
Citations: 0
Predicting medicine demand using deep learning techniques: A review
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0297
Bashaer Abdurahman Mousa, Belal Al-Khateeb
Abstract The supply and storage of drugs are critical components of the medical industry and distribution. The shelf life of most medications is predetermined. When medicines are supplied in quantities exceeding actual need, long-term drug storage results. If demand is lower than expected, this affects consumer satisfaction and medicine marketing. Therefore, it is necessary to find a way to predict the actual quantity required for an organization's needs, to avoid material spoilage and storage problems. A mathematical prediction model is required to help management achieve the required availability of medicines for customers and their safe storage. Artificial intelligence applications and predictive modeling have used machine learning (ML) and deep learning algorithms to build prediction models. Such a model allows inventory levels to be optimized, reducing costs and potentially increasing sales. Various measures, such as mean squared error, mean absolute error, root mean squared error, and others, are used to evaluate the prediction model. This study aims to review ML and deep learning approaches to forecasting in order to obtain the highest accuracy when forecasting future demand for pharmaceuticals. Because of the lack of data, many studies could not use complex models for prediction. Even when there is a long history of accessible demand data, these problems persist, because old data may lose its usefulness when the market climate changes.
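The evaluation measures named above are standard and easy to state precisely; a small stdlib-Python sketch, with made-up monthly demand figures:

```python
import math

# Standard forecast-error measures used to evaluate demand-prediction models.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(mse(y_true, y_pred))

actual = [120, 95, 130, 110]     # units dispensed per month (illustrative)
forecast = [115, 100, 128, 112]  # model output (illustrative)
```

For these figures, MSE = 14.5, MAE = 3.5, and RMSE = √14.5 ≈ 3.81; RMSE penalizes large misses more heavily than MAE, which matters when an occasional large shortfall is costlier than several small ones.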
Citations: 0
Computer technology of multisensor data fusion based on FWA–BP network
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0278
Xiaowei Hai
Abstract Due to the diversity and complexity of data information, traditional data fusion methods cannot effectively fuse multidimensional data, which limits the effective application of the data. To achieve accurate and efficient fusion of multidimensional data, this experiment used a back propagation (BP) neural network and the fireworks algorithm (FWA) to establish the FWA–BP multidimensional data processing model, and a case study of PM2.5 concentration prediction was carried out with the model. In the PM2.5 concentration prediction results, the FWA–BP prediction curve and the real curve showed essentially the same trend, and the prediction deviation was less than 10. The average mean absolute error and root mean square error of the FWA–BP network model across different samples were 3.7 and 4.3%, respectively. The correlation coefficient R of the FWA–BP network model was 0.963, higher than that of the other network models. The results showed that the FWA–BP network model continues to optimize while predicting PM2.5 concentration, avoiding premature convergence to a local optimum. At the same time, prediction accuracy improves as the correlation coefficient between real and predicted values rises, meaning the method is well suited to the computer technology of multisensor data fusion.
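The FWA half of the hybrid can be sketched in a few lines: candidate solutions ("fireworks") explode into random sparks around themselves, and the best points survive to the next iteration. This is a simplified version with illustrative parameters (no adaptive amplitudes or Gaussian sparks from the full FWA), shown minimizing a sphere function rather than BP network weights:

```python
import random

def fireworks_minimize(f, dim, bounds=(-5.0, 5.0), n_fireworks=5,
                       sparks_per_fw=10, iters=60, seed=0):
    """Simplified fireworks algorithm: uniform sparks, elitist selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireworks)]
    for it in range(iters):
        amp = (hi - lo) * 0.5 * (1 - it / iters)  # shrinking explosion radius
        sparks = list(pop)  # keep parents so the best never regresses
        for fw in pop:
            for _ in range(sparks_per_fw):
                sparks.append([min(hi, max(lo, x + rng.uniform(-amp, amp)))
                               for x in fw])
        sparks.sort(key=f)
        pop = sparks[:n_fireworks]
    return pop[0]

best = fireworks_minimize(lambda v: sum(x * x for x in v), dim=2)
```

In the FWA–BP setting the objective `f` would instead score a candidate set of BP initial weights by the network's training error, letting the FWA steer the BP network away from poor local optima.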
摘要由于数据信息的多样性和复杂性,传统的数据融合方法无法有效融合多维数据,影响了数据的有效应用。为实现多维数据的准确高效融合,本实验采用反向传播(BP)神经网络和烟花算法(FWA)建立了FWA - BP多维数据处理模型,并利用该模型对PM2.5浓度预测进行了案例研究。在PM2.5浓度预测结果中,FWA-BP预测曲线与实际曲线趋势基本一致,预测偏差小于10。FWA-BP网络模型在不同样本中的平均绝对误差和均方根误差分别为3.7和4.3%。FWA-BP网络模型的相关系数R值为0.963,高于其他网络模型。结果表明,FWA-BP网络模型在预测PM2.5浓度时可以持续优化,避免过早陷入局部最优。同时,随着预测值与实测值之间相关系数的提高,预测精度得到了提高,这意味着该方法在多传感器数据融合的计算机技术中可以得到更好的应用。
{"title":"Computer technology of multisensor data fusion based on FWA–BP network","authors":"Xiaowei Hai","doi":"10.1515/jisys-2022-0278","DOIUrl":"https://doi.org/10.1515/jisys-2022-0278","url":null,"abstract":"Abstract Due to the diversity and complexity of data information, traditional data fusion methods cannot effectively fuse multidimensional data, which affects the effective application of data. To achieve accurate and efficient fusion of multidimensional data, this experiment used back propagation (BP) neural network and fireworks algorithm (FWA) to establish the FWA–BP multidimensional data processing model, and a case study of PM2.5 concentration prediction was carried out by using the model. In the PM2.5 concentration prediction results, the trend between the FWA–BP prediction curve and the real curve was basically consistent, and the prediction deviation was less than 10. The average mean absolute error and root mean square error of FWA–BP network model in different samples were 3.7 and 4.3%, respectively. The correlation coefficient R value of FWA–BP network model was 0.963, which is higher than other network models. The results showed that FWA–BP network model could continuously optimize when predicting PM2.5 concentration, so as to avoid falling into local optimum prematurely. At the same time, the prediction accuracy is better with the improvement in the correlation coefficient between real and predicted value, which means, in computer technology of multisensor data fusion, this method can be applied better.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"13 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85088144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
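The core idea of the FWA–BP model above — using the fireworks algorithm's explosion-style search to find good network weights so that training does not stall prematurely in a local optimum — can be sketched minimally. Everything below is illustrative: the toy regression data, network size, and FWA parameters are assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for PM2.5 features/targets (hypothetical).
X = rng.normal(size=(200, 4))
true_w = rng.normal(size=(4, 1))
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=(200, 1))

N_IN, N_HID = 4, 6
DIM = N_IN * N_HID + N_HID + N_HID + 1  # all weights/biases, flattened

def unpack(vec):
    """Slice a flat parameter vector into a 1-hidden-layer net's weights."""
    i = 0
    W1 = vec[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = vec[i:i + N_HID]; i += N_HID
    W2 = vec[i:i + N_HID].reshape(N_HID, 1); i += N_HID
    b2 = vec[i:i + 1]
    return W1, b1, W2, b2

def mse(vec):
    """Fitness of one candidate weight vector: mean squared error."""
    W1, b1, W2, b2 = unpack(vec)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

def fireworks_search(n_fireworks=5, n_sparks=20, iters=40, amp=1.0):
    """Minimal FWA: better fireworks emit more sparks in a tighter radius."""
    pop = rng.normal(size=(n_fireworks, DIM))
    for _ in range(iters):
        fit = np.array([mse(p) for p in pop])
        worst, best = fit.max(), fit.min()
        sparks = []
        for p, f in zip(pop, fit):
            # Explosion amplitude shrinks for good fireworks (exploitation);
            # spark count grows for good fireworks.
            a = amp * (f - best + 1e-12) / (worst - best + 1e-12)
            k = max(1, int(n_sparks * (worst - f + 1e-12) / (worst - best + 1e-12)))
            for _ in range(k):
                sparks.append(p + a * rng.uniform(-1, 1, DIM))
        cand = np.vstack([pop, np.array(sparks)])
        order = np.argsort([mse(c) for c in cand])
        pop = cand[order[:n_fireworks]]  # keep the best as the next generation
    return pop[0]

best_vec = fireworks_search()
print("MSE after FWA search:", round(mse(best_vec), 4))
```

In a full FWA–BP pipeline, the weight vector returned here would seed ordinary BP gradient training rather than be used directly; the FWA stage only supplies a starting point that is unlikely to be a poor local optimum.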
Application study of ant colony algorithm for network data transmission path scheduling optimization 蚁群算法在网络数据传输路径调度优化中的应用研究
IF 3 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-01-01 DOI: 10.1515/jisys-2022-0277
Peng Xiao
Abstract With the rapid development of the information age, traditional data center network management can no longer keep up with the rapid growth of network data traffic. This study therefore draws on the foraging behavior of biological ant colonies to find optimal paths for network traffic scheduling, introducing pheromone and heuristic functions to improve the algorithm's convergence and stability. To locate lightly loaded paths more accurately, the strategy redefines the heuristic function in terms of the number of large flows on a link and its real-time load; to reduce delay, it defines the optimal-path decision rule in terms of path delay and real-time load. Experiments show that under the ant-colony-based link load balancing strategy, link utilization is 4.6% higher than with ECMP, traffic delay is reduced, and the delay deviation fluctuates within ±2 ms. The proposed scheduling strategy thus addresses the problems of traffic scheduling and effectively improves network throughput and traffic transmission quality.
摘要随着信息时代的飞速发展,传统的数据中心网络管理方式已经不能满足网络数据流量快速膨胀的需求。因此,本研究采用生物蚁群觅食行为寻找网络流量调度的最优路径,并引入信息素和启发式函数来提高算法的收敛性和稳定性。为了更准确地找到轻负载路径,该策略根据链路上的大流数量和实时负载重新定义了启发式函数。同时,为了减少延迟,该策略根据路径延迟和实时负载定义了最优路径确定规则。实验表明,在基于蚁群算法的链路负载均衡策略下,链路利用率比ECMP提高4.6%,同时减少了流量延迟,延迟偏差波动在±2 ms以内。所提出的网络数据传输调度策略能够较好地解决流量调度问题,有效提高网络吞吐量和流量传输质量。
{"title":"Application study of ant colony algorithm for network data transmission path scheduling optimization","authors":"Peng Xiao","doi":"10.1515/jisys-2022-0277","DOIUrl":"https://doi.org/10.1515/jisys-2022-0277","url":null,"abstract":"Abstract With the rapid development of the information age, the traditional data center network management can no longer meet the rapid expansion of network data traffic needs. Therefore, the research uses the biological ant colony foraging behavior to find the optimal path of network traffic scheduling, and introduces pheromone and heuristic functions to improve the convergence and stability of the algorithm. In order to find the light load path more accurately, the strategy redefines the heuristic function according to the number of large streams on the link and the real-time load. At the same time, in order to reduce the delay, the strategy defines the optimal path determination rule according to the path delay and real-time load. The experiments show that under the link load balancing strategy based on ant colony algorithm, the link utilization ratio is 4.6% higher than that of ECMP, while the traffic delay is reduced, and the delay deviation fluctuates within ±2 ms. The proposed network data transmission scheduling strategy can better solve the problems in traffic scheduling, and effectively improve network throughput and traffic transmission quality.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"10 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86173934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
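The scheduling mechanics described above — each ant choosing its next hop with probability weighted by pheromone^α × heuristic^β, pheromone evaporating every round, and better paths receiving larger deposits — can be sketched on a toy topology. The graph, delay values, and parameters below are made up for illustration; the paper's actual heuristic additionally folds in the number of large flows and the real-time load on each link.

```python
import random

# Hypothetical link-delay graph: node -> {neighbour: delay in ms}.
graph = {
    "S": {"A": 2, "B": 5},
    "A": {"C": 2, "B": 1},
    "B": {"D": 2},
    "C": {"D": 1, "T": 4},
    "D": {"T": 1},
    "T": {},
}

# pheromone weight, heuristic weight, evaporation rate, deposit constant
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0
pher = {(u, v): 1.0 for u, nbrs in graph.items() for v in nbrs}

def heuristic(u, v):
    # Favour low-delay links (desirability = 1/cost).
    return 1.0 / graph[u][v]

def build_path(rng):
    """One ant's tour from S to T; returns (path, total delay)."""
    node, path = "S", ["S"]
    while node != "T":
        nbrs = [v for v in graph[node] if v not in path]  # no revisits
        if not nbrs:
            return None, float("inf")  # dead end
        weights = [pher[(node, v)] ** ALPHA * heuristic(node, v) ** BETA
                   for v in nbrs]
        node = rng.choices(nbrs, weights=weights)[0]
        path.append(node)
    return path, sum(graph[u][v] for u, v in zip(path, path[1:]))

rng = random.Random(7)
best_path, best_cost = None, float("inf")
for _ in range(50):                                   # iterations
    tours = [build_path(rng) for _ in range(8)]       # ants per iteration
    for key in pher:                                  # evaporation
        pher[key] *= (1 - RHO)
    for path, cost in tours:
        if path is None:
            continue
        for edge in zip(path, path[1:]):              # deposit ~ path quality
            pher[edge] += Q / cost
        if cost < best_cost:
            best_path, best_cost = path, cost

print("best path:", best_path, "delay:", best_cost)
```

On this topology the colony converges on a 6 ms route (e.g. S→A→B→D→T); swapping the delay heuristic for one built from link load would reproduce the load-balancing behaviour the abstract describes.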