
Latest publications in the Journal of Intelligent Systems

Intelligent financial decision support system based on big data
Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0320
Danna Tong, Guixian Tian
Abstract In the era of big data, data and information have exploded, and every industry is affected. Big data makes intelligent financial analysis of enterprises possible. At present, most enterprises' financial analysis, and the decision-making based on its results, relies mainly on human effort, with poor automation and obvious problems in efficiency and error rates. To help senior management conduct scientific and effective management, this study uses big data web-crawler technology and ETL technology to process data and build an intelligent financial decision support system that integrates big data with an Internet Plus platform. J Group in S Province is taken as an example to study the effect before and after the application of the intelligent financial decision support system. The results show that crawler technology can monitor basic data and industry big data in real time and improve the accuracy of decision-making. Through the intelligent financial decision support system integrating big data, core indexes such as profit, return on net assets, and accounts receivable can be clearly displayed, and the system can trace the causes of financial changes hidden behind the financial data. Through the system, it is found that the asset-liability ratio, current-assets growth rate, operating-income growth rate, and financial expenses of J Group are 55.27%, 10.38%, 20.28%, and 1,974 million RMB, respectively. The growth rate of real sales income of J Group is 0.63%, which is 31.27 percentage points below the industry benchmark of 31.90%. After adopting the intelligent financial decision support system, the number of monthly financial statements processed increases significantly while monthly report analysis time decreases: the Group receives at most 332 financial statements per month, with a processing time of only 2 h. These results indicate that an intelligent financial decision support system integrating big data can effectively improve the financial management level of enterprises, improve the usefulness of financial decision-making, and make a practical contribution to the field of corporate financial decision-making.
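The crawler-plus-ETL pipeline described above can be pictured as three small stages. This is a minimal illustrative sketch only: the field names (`company`, `revenue`, `debt`, `assets`) and the in-memory `warehouse` store are hypothetical, not taken from the paper.

```python
# Minimal extract-transform-load sketch for a financial data pipeline.
# Field names and the dict-based "warehouse" are hypothetical.

def extract(raw_rows):
    # keep only rows the crawler fetched completely
    return [r for r in raw_rows if r.get("revenue") is not None]

def transform(rows):
    # clean types and derive a ratio indicator
    return [{
        "company": r["company"],
        "revenue": float(r["revenue"]),
        "debt_ratio": round(float(r["debt"]) / float(r["assets"]), 4),
    } for r in rows]

def load(rows, store):
    for r in rows:
        store[r["company"]] = r
    return store

raw = [
    {"company": "J Group", "revenue": "9500", "debt": "55.27", "assets": "100"},
    {"company": "Incomplete Row", "revenue": None},
]
warehouse = load(transform(extract(raw)), {})
print(warehouse["J Group"]["debt_ratio"])  # 0.5527
```

In a real deployment each stage would read from and write to external systems (crawler output, a staging area, a warehouse); the structure of the three functions stays the same.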
Citations: 0
A deep neural network model for paternity testing based on 15-loci STR for Iraqi families
Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2023-0041
Donya A. Khalid, Nasser Nafea
Abstract Paternity testing using a deoxyribonucleic acid (DNA) profile is an essential branch of forensic science, and DNA short tandem repeats (STRs) are usually used for this purpose. Nowadays, in third-world countries, the conventional kinship-analysis techniques used in forensic investigations yield inadequate accuracy, especially when dealing with large human STR datasets: profiles are compared manually, so the number of samples is limited by the human effort and time required. By utilizing the automation made possible by AI, forensic investigations can be conducted more efficiently, saving both time and cost. In this article, we propose a new algorithm for predicting paternity from 15-loci STR-DNA datasets using a deep neural network (DNN), in which comparisons among many human profiles can be made without this sample-size limitation. For the purpose of paternity testing, familial data are artificially created from the real data of individual Iraqi people from Al-Najaf province. This helps to overcome the shortage of Iraqi data caused by restrictive policies and the secrecy of familial datasets. About 53,530 records are used to train and test the proposed DNN model. The Keras library for Python is used to implement and test the proposed system, with the confusion matrix and receiver operating characteristic curve used for system evaluation. The system shows an excellent accuracy of 99.6% in paternity tests, the highest accuracy compared to existing works. This system represents a good attempt at paternity testing based on artificial intelligence.
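The pairwise-comparison idea can be illustrated with a toy forward pass: a parent/child pair is encoded as 15 per-locus shared-allele counts and scored by a small network. This is a hedged sketch with random weights, not the paper's trained Keras model; the encoding and layer sizes are assumptions.

```python
import numpy as np

# Toy forward pass: a parent/child pair becomes a 15-dimensional vector of
# per-locus shared-allele counts, scored by a tiny MLP. Weights are random,
# for illustration only.

rng = np.random.default_rng(0)

def encode_pair(parent, child):
    # each profile: 15 loci, each a (allele_a, allele_b) pair
    return np.array([len(set(p) & set(c)) for p, c in zip(parent, child)],
                    dtype=float)

def mlp_score(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    z = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid: pseudo P(pair is related)

W1 = rng.normal(size=(15, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8,));    b2 = 0.0

parent = [(10, 12)] * 15
child = [(12, 14)] * 15                # shares one allele at every locus
x = encode_pair(parent, child)
score = mlp_score(x, W1, b1, W2, b2)
print(x.shape, 0.0 <= score <= 1.0)
```

In the paper the equivalent model is built and trained with Keras; the point of the sketch is only the pair-to-feature-vector encoding that lets one network compare arbitrarily many profile pairs.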
Citations: 0
Design model-free adaptive PID controller based on lazy learning algorithm
Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0279
Hongcheng Zhou
Abstract Nonlinear systems are difficult to control to the desired effect with a traditional proportional integral derivative (PID) or linear controller. First, this study presents an improved lazy learning algorithm based on k-vector nearest neighbors, which considers not only the matching of input and output data but also the consistency of the model. Based on an optimization index with an additional penalty function, the optimal solution of the lazy learner is obtained by the iterative least-squares method. Second, based on the improved lazy learning, an adaptive PID control algorithm is proposed. Finally, the control effect under conditions of complete and incomplete data is compared by simulation experiments.
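For reference, a bare discrete PID loop (the controller the paper makes adaptive) on a simple first-order plant might look like the following. The gains and the plant model are illustrative assumptions; the lazy-learning tuning itself is not reproduced.

```python
# Bare discrete PID loop on a first-order plant (y' = -y + u).
# Gains and plant are illustrative only.

def pid_step(err, state, kp, ki, kd, dt):
    state["i"] += err * dt                 # integral term
    d = (err - state["e_prev"]) / dt       # derivative term
    state["e_prev"] = err
    return kp * err + ki * state["i"] + kd * d

setpoint, y, dt = 1.0, 0.0, 0.1
state = {"i": 0.0, "e_prev": 0.0}
for _ in range(200):                       # simulate 20 s
    u = pid_step(setpoint - y, state, kp=2.0, ki=1.0, kd=0.1, dt=dt)
    y += dt * (-y + u)                     # Euler step of the plant
print(round(y, 3))
```

A model-free adaptive variant would replace the fixed `kp`, `ki`, `kd` with gains re-estimated online from recent input/output data, which is where the lazy-learning nearest-neighbor model enters.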
Citations: 0
Development and research of deep neural network fusion computer vision technology
Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0264
Jiangtao Wang
Abstract Deep learning (DL) has revolutionized advanced digital image processing, enabling significant advances in computer vision (CV). However, older CV techniques, developed before the emergence of DL, still hold value and relevance. Particularly in the realm of more complex, three-dimensional (3D) data such as video and 3D models, CV and multimedia retrieval remain at the forefront of technological advancement. We provide critical insights into the progress made in learning higher-dimensional features through the application of DL, and discuss the advantages of and strategies employed in DL. With the widespread use of 3D sensor data and 3D modeling, analyzing and representing the world in three dimensions has become commonplace. This progress has been facilitated by the development of additional sensors, driven by advances in areas such as 3D gaming and self-driving vehicles, which have enabled researchers to create feature-description models that surpass traditional two-dimensional approaches. This study surveys the current state of advanced digital image processing, highlighting the role of DL in pushing the boundaries of CV and multimedia retrieval in handling complex 3D data.
Citations: 0
A systematic literature review of undiscovered vulnerabilities and tools in smart contract technology
IF 3 Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2023-0038
Oualid Zaazaa, Hanan El Bakkali
Abstract In recent years, smart contract technology has garnered significant attention for its ability to address trust issues that traditional technologies have long struggled with. However, like any evolving technology, smart contracts are not immune to vulnerabilities, and some remain underexplored, often eluding detection by existing vulnerability-assessment tools. In this article, we perform a systematic literature review of the scientific research and papers published between 2016 and 2021. The main objective of this work is to identify which vulnerabilities and smart contract technologies have not been well studied. We also list all the datasets used by previous researchers, which can help researchers build more efficient machine-learning models in the future. In addition, the smart contract analysis tools are compared across various features. Finally, future directions in the field of smart contracts are discussed to help researchers set the direction for future research in this domain.
Citations: 0
Anti-leakage method of network sensitive information data based on homomorphic encryption
IF 3 Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0281
Junlong Shi, Xiaofeng Zhao
Abstract With the development of artificial intelligence, people have begun to pay attention to the protection of sensitive information and data. Therefore, a homomorphic encryption framework based on efficient integer vectors is proposed and applied to deep learning to protect user privacy in a binary convolutional neural network model. The results show that the model can achieve high accuracy: 93.75% on the MNIST dataset and 89.24% on the original dataset. Because of the confidentiality of the data, the training accuracy on the training set is only 86.77%. After the training period is extended, the accuracy converges at about 300 cycles, finally reaching about 86.39%. In addition, after taking the absolute value of the elements in the encryption matrix, the training accuracy of the model is 88.79% and the test accuracy is 85.12%. The improved model is also compared with the traditional model: it reduces storage consumption during model computation and effectively improves computation speed with little impact on accuracy. Specifically, the improved model is 58 times faster than the traditional CNN model, with 1/32 of its storage consumption. Therefore, homomorphic encryption can be applied to information encryption in the context of big data, and the privacy of the neural network can be preserved.
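The additive property such a framework relies on can be shown with a deliberately simple toy: a keyed shift of integer vectors modulo a public M, under which ciphertext sums decrypt to plaintext sums. This is not the paper's integer-vector scheme and offers no real security; it only demonstrates the homomorphic idea of computing on ciphertexts.

```python
import numpy as np

# Toy additively homomorphic scheme on integer vectors: a keyed shift
# modulo a public M. Illustration only, not secure and not the paper's scheme.

M = 2**31 - 1  # public modulus

def encrypt(x, key):
    return (x + key) % M

def decrypt(c, key):
    return (c - key) % M

rng = np.random.default_rng(1)
x1 = np.array([3, 14, 15]); k1 = rng.integers(0, M, size=3)
x2 = np.array([9, 2, 6]);   k2 = rng.integers(0, M, size=3)

# add ciphertexts only, then decrypt the sum with the summed keys
c_sum = (encrypt(x1, k1) + encrypt(x2, k2)) % M
print(decrypt(c_sum, (k1 + k2) % M))  # [12 16 21]
```

Real integer-vector homomorphic encryption additionally hides the key behind a secret matrix and tolerates noise terms, which is what lets a neural network evaluate linear layers directly on encrypted inputs.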
Citations: 0
Analyzing SQL payloads using logistic regression in a big data environment
IF 3 Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2023-0063
O. Shareef, Rehab Flaih Hasan, Ammar Hatem Farhan
Abstract Protecting big data from attacks on large organizations is essential because of how vital such data are to organizations and individuals. Moreover, such data are put at risk when attackers gain unauthorized access to information and use it in illegal ways. One of the most common such attacks is the structured query language injection attack (SQLIA), a vulnerability attack that allows attackers to gain illegal access to a database quickly and easily by manipulating structured query language (SQL) queries, especially in a big data environment. To address these risks, this study builds an approach that acts as a protection layer between the client and database-server layers and reduces the time consumed in classifying the SQL payload sent from the user layer. The proposed method trains a logistic regression model using the Spark ML machine learning (ML) library, which handles big data. An experiment was conducted using the SQLI dataset. The proposed approach achieved an accuracy of 99.04%, a precision of 98.87%, a recall of 99.89%, and an F-score of 99.04%, and the time taken to identify and prevent SQLIA is 0.05 s. Our approach protects the data through the middle layer. Moreover, using the Spark ML library with ML algorithms gives better accuracy and shortens the time required to determine the type of request sent from the user layer.
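A minimal stand-in for the payload classifier can be written in plain NumPy (the paper itself uses Spark ML's logistic regression on big data); the three hand-picked features below are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

# Plain-NumPy logistic regression stand-in for an SQLIA payload classifier.
# Features are simple injection cues chosen for illustration.

def featurize(payload):
    p = payload.lower()
    return np.array([
        float(p.count("'")),           # quote characters
        1.0 if " or " in p else 0.0,   # boolean tautology cue
        1.0 if "--" in p else 0.0,     # SQL comment terminator
    ])

benign = ["select name from users where id = 4", "update cart set qty = 2"]
malicious = ["' or '1'='1' --", "admin' -- ", "1' or 1=1 --"]

X = np.array([featurize(s) for s in benign + malicious])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):                  # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y                          # gradient of the log loss
    w -= 0.5 * X.T @ g / len(y)
    b -= 0.5 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred.tolist())
```

In the big data setting, the same model would be expressed with Spark ML's `LogisticRegression` over a distributed DataFrame of featurized payloads, which is what keeps classification time low at scale.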
Citations: 0
A new method for writer identification based on historical documents
IF 3 Q2 Computer Science Pub Date: 2023-01-01 DOI: 10.1515/jisys-2022-0244
A. Gattal, Chawki Djeddi, Faycel Abbas, I. Siddiqi, Bouderah Brahim
Abstract Identifying the writer of a handwritten document has long been an interesting pattern-classification problem for document examiners, forensic experts, and paleographers. While mature identification systems have been developed for handwriting in contemporary documents, the problem remains challenging for historical manuscripts. The design and development of expert systems that can identify the writer of a questioned manuscript, or retrieve samples belonging to a given writer, can greatly help paleographers in their practice. In this context, the current study exploits the textural information in handwriting to characterize the writers of historical documents. More specifically, we employ oBIF (oriented Basic Image Features) and hinge features and introduce a novel moment-based matching method to compare the feature vectors extracted from writing samples. Classification is based on minimizing a similarity criterion using the proposed moment distance. A comprehensive series of experiments on the International Conference on Document Analysis and Recognition 2017 historical writer identification dataset reported promising results and validated the ideas put forward in this study.
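One way to picture moment-based matching (the paper's exact moment distance is not reproduced here) is to summarize each writer's feature vector by its first statistical moments and assign a query sample to the nearest reference writer. Writer names and feature values below are made up for illustration.

```python
import numpy as np

# Illustrative moment-based matching: each feature vector is reduced to
# (mean, std, skewness) and compared by Euclidean distance.

def moments(v):
    v = np.asarray(v, dtype=float)
    mu = v.mean()
    sd = v.std() + 1e-12          # guard against zero variance
    skew = (((v - mu) / sd) ** 3).mean()
    return np.array([mu, sd, skew])

def nearest_writer(query, references):
    q = moments(query)
    dists = {w: np.linalg.norm(q - moments(v)) for w, v in references.items()}
    return min(dists, key=dists.get)

refs = {
    "writer_A": [0.1, 0.2, 0.1, 0.3, 0.2],
    "writer_B": [0.8, 0.9, 0.7, 0.8, 0.9],
}
print(nearest_writer([0.2, 0.1, 0.3, 0.2, 0.2], refs))  # writer_A
```

The actual system extracts far longer oBIF/hinge histograms per sample, but the matching step has this nearest-reference shape.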
Citations: 1
A BiLSTM-attention-based point-of-interest recommendation algorithm
Q2 Computer Science Pub Date : 2023-01-01 DOI: 10.1515/jisys-2023-0033
Aichuan Li, Fuzhi Liu
Abstract Users’ check-in preferences in social networks exhibit complex time dependences, which leads to inaccurate point-of-interest (POI) recommendations. To address this problem, a location-based POI recommendation model using deep learning for social network big data is proposed. First, the original data are fed into an embedding layer of the model for dense vector representation, yielding the user’s check-in sequence (UCS) and spatiotemporal interval information. Then, the UCS and spatiotemporal interval information are passed to a bidirectional long short-term memory (BiLSTM) model for detailed analysis, where the UCS and location sequence representations are updated using a self-attention mechanism. Finally, candidate POIs are compared with the user’s preferences, and a POI sequence with three consecutive recommended locations is generated. The experimental analysis shows that the model performs best when the Huber loss function is used and the number of training iterations is set to 200. On the Foursquare dataset, Recall@20 and NDCG@20 reach 0.418 and 0.143; on the Gowalla dataset, the corresponding values are 0.387 and 0.148.
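Recall@20 and NDCG@20 reported above are standard top-k ranking metrics. A minimal sketch of how they are computed, using hypothetical POI ids rather than the paper's data:

```python
import math

def recall_at_k(recommended, relevant, k=20):
    """Fraction of the relevant POIs that appear in the top-k recommendations."""
    hits = sum(1 for poi in recommended[:k] if poi in relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(recommended, relevant, k=20):
    """Normalized discounted cumulative gain over the top-k recommendations
    (binary relevance: a POI is either relevant or not)."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, poi in enumerate(recommended[:k]) if poi in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

# Hypothetical ranked recommendations and the user's actually visited POIs.
recommended = ["p3", "p7", "p1", "p9"]
relevant = {"p7", "p9"}
print(recall_at_k(recommended, relevant, k=3))  # 0.5 (only p7 is in the top 3)
```

Averaging these per-user scores over a test set gives the dataset-level figures quoted in the abstract.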
Citations: 0
Waste material classification using performance evaluation of deep learning models
Q2 Computer Science Pub Date : 2023-01-01 DOI: 10.1515/jisys-2023-0064
Israa Badr Al-Mashhadani
Abstract Waste classification is the task of sorting rubbish into valuable categories for efficient waste management. Problems arise both from individual ignorance or inactivity and from more overt issues such as environmental pollution, lack of resources, or malfunctioning systems. Education, established behaviors, improved infrastructure, technology, and legislative incentives that promote effective trash sorting and management are all necessary for a solution to be implemented. For solid waste management and recycling efforts to succeed, waste materials must be sorted appropriately. This study evaluates the effectiveness of several deep learning (DL) models for the challenge of waste material classification, with the focus on finding the best DL technique for solid waste classification. It extensively compares several DL architectures (ResNet50, GoogleNet, InceptionV3, and Xception). Images of various types of trash are amassed and cleaned to form a dataset. Accuracy, precision, recall, and F1 score are among the measures used to assess the performance of the DL models trained and tested on this dataset. ResNet50 showed impressive performance in waste material classification, with 95% accuracy, 95.4% precision, 95% recall, and a 94.8% F1 score, with only two misclassifications in the glass class. InceptionV3 classified all classes correctly, achieving remarkable accuracy, precision, recall, and an F1 score of 100%. Xception’s classification accuracy was also excellent (100%), with a few difficulties in the glass and trash categories. GoogleNet performed admirably as well, with 90.78% precision, 100% recall, and an 89.81% F1 score. This study highlights the significance of DL-based models for categorizing trash. The results open the way for enhanced trash sorting and recycling operations, contributing to an economically and ecologically friendly future.
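The per-class precision, recall, and F1 figures reported for each architecture follow the standard one-vs-rest definitions and can be computed directly from raw predictions. A minimal sketch with hypothetical labels (not the study's data):

```python
def classification_metrics(y_true, y_pred, positive):
    """Precision, recall, and F1 for one class, treated one-vs-rest."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical waste-category labels: one glass image is misclassified as metal.
y_true = ["glass", "glass", "metal", "paper", "glass"]
y_pred = ["glass", "metal", "metal", "paper", "glass"]
print(classification_metrics(y_true, y_pred, "glass"))  # (1.0, 0.666..., 0.8)
```

Macro-averaging these per-class scores across all waste categories gives the single precision/recall/F1 numbers quoted per model in the abstract.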
Citations: 0