
Journal of Intelligent Systems: Latest Publications

Validation of machine learning ridge regression models using Monte Carlo, bootstrap, and variations in cross-validation
IF 3 · Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2022-0224
Robbie T. Nakatsu
Abstract In recent years, there have been several calls by practitioners of machine learning to provide more guidelines on how to use its methods and techniques. For example, the current literature on resampling methods is confusing and sometimes contradictory; worse, there are sometimes no practical guidelines offered at all. To address this shortcoming, a simulation study was conducted that evaluated ridge regression models fitted on five real-world datasets. The study compared the performance of four resampling methods, namely, Monte Carlo resampling, bootstrap, k-fold cross-validation, and repeated k-fold cross-validation. The goal was to find the best-fitting λ (regularization) parameter that would minimize mean squared error, by using nine variations of these resampling methods. For each of the nine resampling variations, 1,000 runs were performed to see how often a good fit, average fit, and poor fit λ value would be chosen. The resampling method that chose good fit values the greatest number of times was deemed the best method. Based on the results of the investigation, three general recommendations are made: (1) repeated k-fold cross-validation is the best method to select as a general-purpose resampling method; (2) k = 10 folds is a good choice in k-fold cross-validation; (3) Monte Carlo and bootstrap are underperformers, so they are not recommended as general-purpose resampling methods. At the same time, no resampling method was found to be uniformly better than the others.
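As a concrete illustration of recommendations (1) and (2), the following minimal scikit-learn sketch selects the ridge λ (exposed as `alpha` in scikit-learn) by repeated 10-fold cross-validation that minimizes mean squared error. The synthetic dataset, candidate grid, and repeat count are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, RepeatedKFold

# Synthetic stand-in for the paper's five real-world datasets.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# Candidate regularization strengths (scikit-learn calls λ "alpha").
param_grid = {"alpha": np.logspace(-3, 3, 25)}

# Recommendations (1) + (2): repeated k-fold CV with k = 10, minimizing MSE.
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
search = GridSearchCV(Ridge(), param_grid, cv=cv,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print("best λ:", search.best_params_["alpha"])
```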
Citations: 0
HWCD: A hybrid approach for image compression using wavelet, encryption using confusion, and decryption using diffusion scheme
IF 3 · Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2022-9056
H. R. Latha, Alagarswamy Ramaprasath
Abstract Image data play an important role in various real-time online and offline applications. The biomedical field has adopted imaging systems to detect, diagnose, and prevent several types of diseases and abnormalities. Biomedical imaging data contain a huge amount of information, which requires huge storage space. Moreover, telemedicine and IoT-based remote health monitoring systems, in which data are transmitted from one place to another, are now widely developed. Transmission of such huge data consumes considerable bandwidth. In addition, during transmission, attackers can attack the communication channel and obtain important and secret information. Hence, biomedical image compression and encryption are considered the solution to these issues. Several techniques have been presented, but achieving the desired performance for a combined module is a challenging task. Hence, in this work, a novel combined approach for image compression and encryption is developed. First, an image compression scheme using the wavelet transform is presented, and then a cryptography scheme is presented using confusion and diffusion schemes. The outcome of the proposed approach is compared with various existing techniques. The experimental analysis shows that the proposed approach achieves better performance in terms of autocorrelation, histogram, information entropy, PSNR, MSE, and SSIM.
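The abstract does not give implementation details, so the following minimal Python sketch illustrates the general pipeline under stated assumptions: a Haar wavelet with hard coefficient thresholding for compression, a key-seeded pixel permutation for confusion, and an XOR keystream with ciphertext chaining for diffusion. The threshold, seed, and placeholder image are illustrative only.

```python
import numpy as np
import pywt

img = np.random.randint(0, 256, (256, 256)).astype(np.float64)  # placeholder image

# --- Compression: discard small wavelet coefficients (assumed Haar, level 2) ---
coeffs = pywt.wavedec2(img, "haar", level=2)
arr, slices = pywt.coeffs_to_array(coeffs)
arr[np.abs(arr) < 10.0] = 0.0                      # hard threshold (assumed value)
rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                    "haar")
pixels = np.clip(rec, 0, 255).astype(np.uint8).ravel()

# --- Confusion: key-seeded pixel permutation ---
rng = np.random.default_rng(seed=0xC0FFEE)         # seed plays the role of the key
perm = rng.permutation(pixels.size)
confused = pixels[perm]

# --- Diffusion: XOR keystream with chaining so one pixel change propagates ---
keystream = rng.integers(0, 256, pixels.size, dtype=np.uint8)
cipher = np.empty_like(confused)
prev = np.uint8(0)
for i in range(confused.size):
    cipher[i] = confused[i] ^ keystream[i] ^ prev
    prev = cipher[i]

# Decryption reverses the steps: undo the chained XOR, then invert the permutation.
```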
Citations: 1
An intelligent algorithm for fast machine translation of long English sentences
IF 3 · Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2022-0257
Hengheng He
Abstract Translation of long English sentences is a complex problem in machine translation. This work briefly introduces the basic framework of an intelligent machine translation algorithm and improves the long short-term memory (LSTM)-based intelligent machine translation algorithm by introducing a long-sentence segmentation module and a reordering module. Simulation experiments were conducted using a public corpus and a local corpus containing self-collected linguistic data. The improved algorithm was compared with machine translation algorithms based on a recurrent neural network and on LSTM. The results suggest that the LSTM-based machine translation algorithm augmented with the long-sentence segmentation and reordering modules effectively segments long sentences, translates long English sentences more accurately, and produces more grammatically correct translations.
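A minimal sketch of the long-sentence segmentation idea follows. The clause-boundary heuristic and token threshold are assumptions, and `translate`/`reorder` are hypothetical stand-ins for the trained LSTM translator and the paper's reordering module.

```python
import re

def segment_long_sentence(sentence, max_tokens=25):
    """Split a long sentence at clause boundaries before translation."""
    if len(sentence.split()) <= max_tokens:
        return [sentence]
    # Split after , ; : when followed by a coordinating/subordinating word.
    parts = re.split(
        r"(?<=[,;:])\s+(?=(?:and|but|which|that|because|while|although)\b)",
        sentence)
    return [p.strip() for p in parts if p.strip()]

def translate_long_sentence(sentence, translate, reorder=lambda segs: segs):
    """translate: stand-in for the LSTM model; reorder: stand-in for the
    reordering module that restores target-language clause order."""
    segments = segment_long_sentence(sentence)
    return " ".join(reorder([translate(s) for s in segments]))

# Toy usage with an identity "translator":
print(translate_long_sentence(
    "The committee approved the proposal, which had been revised twice, "
    "because the budget constraints were severe, and the deadline was near.",
    translate=lambda s: s))
```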
Citations: 0
Multi-sensor remote sensing image alignment based on fast algorithms
IF 3 · Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2022-0289
Tao Shu
Abstract Ground remote sensing image technology has important guiding significance in disaster assessment and emergency rescue deployment. To realize fast automatic registration of multi-sensor remote sensing images, a block registration idea for remote sensing images is introduced, and image reconstruction is performed using the conjugate gradient descent (CGD) method. The scale-invariant feature transform (SIFT) algorithm is improved and optimized by combining it with a function-fitting method, which improves the registration accuracy and efficiency of multi-sensor remote sensing images. The results show that the average peak signal-to-noise ratio of the image processed by the CGD method is 25.428, the average root mean square value is 17.442, and the average image processing time is 6.093 s. These indicators are better than those of the passive filter algorithm and the gradient descent method. The average accuracy of image registration of the improved SIFT registration method is 96.37%, and the average image registration time is 2.14 s. These indicators are significantly better than those of the traditional SIFT algorithm and the speeded-up robust features algorithm. This proves that the improved SIFT registration method can effectively improve the accuracy and operational efficiency of multi-sensor remote sensing image registration. The improved method effectively solves the problems of low accuracy and long time consumption in traditional fast registration of multi-sensor remote sensing images. While maintaining high registration accuracy, it improves registration speed, provides technical support for rapid disaster assessment after major disasters such as earthquakes and floods, and has important value for efficient post-disaster rescue deployment.
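For orientation, here is a baseline SIFT registration sketch in OpenCV: plain SIFT with Lowe's ratio test and a RANSAC homography, not the paper's function-fitting-improved variant or its CGD reconstruction step. The file names are hypothetical.

```python
import cv2
import numpy as np

ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical files
mov = cv2.imread("moving.tif", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_mov, des_mov = sift.detectAndCompute(mov, None)

# Match descriptors and keep matches passing Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_mov, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate a homography with RANSAC and warp the moving image onto the reference.
src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
```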
Citations: 0
Development and research of deep neural network fusion computer vision technology
Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2022-0264
Jiangtao Wang
Abstract Deep learning (DL) has revolutionized advanced digital picture processing, enabling significant advancements in computer vision (CV). However, it is important to note that older CV techniques, developed prior to the emergence of DL, still hold value and relevance. Particularly in the realm of more complex, three-dimensional (3D) data such as video and 3D models, CV and multimedia retrieval remain at the forefront of technological advancements. We provide critical insights into the progress made in developing higher-dimensional qualities through the application of DL, and also discuss the advantages and strategies employed in DL. With the widespread use of 3D sensor data and 3D modeling, the analysis and representation of the world in three dimensions have become commonplace. This progress has been facilitated by the development of additional sensors, driven by advancements in areas such as 3D gaming and self-driving vehicles. These advancements have enabled researchers to create feature description models that surpass traditional two-dimensional approaches. This study reveals the current state of advanced digital picture processing, highlighting the role of DL in pushing the boundaries of CV and multimedia retrieval in handling complex, 3D data.
Citations: 0
Intelligent financial decision support system based on big data
Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2022-0320
Danna Tong, Guixian Tian
Abstract In the era of big data, the volume of data has exploded, and all walks of life are affected by big data. The arrival of big data makes intelligent financial analysis of enterprises possible. At present, most enterprises' financial analysis, and the decision-making based on its results, relies mainly on manual work, with poor automation and obvious problems in efficiency and error rates. To help the senior management of enterprises conduct scientific and effective management, this study uses big data web crawler technology and ETL technology to process data and builds an intelligent financial decision support system that integrates big data with an Internet Plus platform. J Group in S Province is taken as an example to study the effect before and after the application of the system. The results show that crawler technology can monitor basic data and industry big data in real time and improve the accuracy of decision-making. Through the intelligent financial decision support system integrating big data, core indexes such as profit, return on net assets, and accounts receivable can be clearly displayed, and the system can trace the causes of financial changes hidden behind the financial data. Through the system, it is found that the asset-liability ratio, current-assets growth rate, and operating-income growth rate of J Group are 55.27%, 10.38%, and 20.28%, respectively, and its financial expenses are 1,974 million RMB. The growth rate of real sales income of J Group is 0.63%, which is 31.27 percentage points below the industry's excellent value of 31.90%. After adopting the system, the number of monthly financial statements processed increases significantly while the monthly report analysis time decreases: the maximum number of financial statements handled by the Group per month is 332, with a processing time of only 2 h. These results show that an intelligent financial decision support system integrating big data can effectively improve the financial management level of enterprises, improve the usefulness of financial decision-making, and make practical contributions to the field of corporate financial decision-making.
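The abstract does not name the crawler or ETL stack. The sketch below illustrates a generic extract-transform-load step under assumed tooling (requests for extraction, pandas for transformation, SQLite for loading); the endpoint URL and field names are hypothetical.

```python
import sqlite3

import pandas as pd
import requests

# Extract: pull raw financial records from a hypothetical crawler/API endpoint.
raw = requests.get("https://example.com/api/financials?company=J-Group").json()

# Transform: normalize records and derive a core decision indicator.
df = pd.DataFrame(raw["records"])  # assumed field names below
df["asset_liability_ratio"] = df["total_liabilities"] / df["total_assets"]

# Load: persist into a warehouse table that the decision layer queries.
with sqlite3.connect("finance_dss.db") as conn:
    df.to_sql("financial_indicators", conn, if_exists="append", index=False)
```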
Citations: 0
A deep neural network model for paternity testing based on 15-loci STR for Iraqi families
Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2023-0041
Donya A. Khalid, Nasser Nafea
Abstract Paternity testing using a deoxyribonucleic acid (DNA) profile is an essential branch of forensic science, and DNA short tandem repeats (STRs) are usually used for this purpose. Nowadays, in third-world countries, conventional kinship analysis techniques used in forensic investigations yield inadequate accuracy, especially when dealing with large human STR datasets: human profiles are compared manually, so the number of samples is limited by the required human effort and time. By utilizing the automation made possible by AI, forensic investigations can be conducted more efficiently, saving both time and cost. In this article, we propose a new algorithm for predicting paternity based on 15-loci STR-DNA datasets using a deep neural network (DNN), in which comparisons among many human profiles can be performed without the limitation on the number of samples. For the purpose of paternity testing, familial data are artificially created based on real data of individual Iraqi people from Al-Najaf province. This helps to overcome the shortage of Iraqi data due to restrictive policies and the secrecy of familial datasets. About 53,530 samples are used in the proposed DNN model for training and testing. The Python-based Keras library is used to implement and test the proposed system, and the confusion matrix and receiver operating characteristic curve are used for system evaluation. The system shows an excellent accuracy of 99.6% in paternity tests, the highest accuracy compared with existing works. This system represents a successful attempt at paternity testing based on artificial intelligence techniques.
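A minimal Keras sketch of such a binary classifier follows. The feature encoding (two 15-locus genotypes flattened to 60 allele values), layer sizes, and the random placeholder data are assumptions, not the paper's exact design.

```python
import numpy as np
from tensorflow import keras

# Assumed encoding: alleged father's and child's genotypes at 15 STR loci,
# two alleles each -> 60 numeric features per record.
n_features = 60
X = np.random.rand(1000, n_features)      # placeholder for encoded STR profiles
y = np.random.randint(0, 2, size=1000)    # 1 = biological parent, 0 = unrelated

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of paternity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```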
Citations: 0
Design model-free adaptive PID controller based on lazy learning algorithm
Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2022-0279
Hongcheng Zhou
Abstract It is difficult for a nonlinear system to achieve the desired control effect with a traditional proportional-integral-derivative (PID) or linear controller. First, this study presents an improved lazy learning algorithm based on k-vector nearest neighbors, which considers not only the matching of input and output data but also the consistency of the model. Based on an optimization index with an additional penalty function, the optimal solution of the lazy learning model is obtained by the iterative least-squares method. Second, based on the improved lazy learning, an adaptive PID control algorithm is proposed. Finally, the control effects under complete data and incomplete data are compared through simulation experiments.
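The toy sketch below illustrates the lazy-learning flavor of such a controller: past transitions are stored, a local linear model is fitted by least squares on the k nearest neighbors of the current operating point, and the estimated control sensitivity adapts the proportional action. The plant, gains, and adaptation rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def plant(y, u):
    """Toy nonlinear plant (an assumption; the paper's plant is unspecified)."""
    return 0.6 * y + 0.3 * np.tanh(u)

memory = []                      # stored transitions (y_t, u_t, y_{t+1})
kp, ki, kd = 0.8, 0.2, 0.05      # base PID gains
k = 10                           # number of nearest neighbors
y, e_prev, e_int = 0.0, 0.0, 0.0
setpoint = 0.5

for t in range(200):
    e = setpoint - y
    e_int += e
    u = kp * e + ki * e_int + kd * (e - e_prev)   # nominal PID law

    if len(memory) >= k:
        data = np.array(memory)
        # Lazy learning: pick the k stored points closest to the current (y, u).
        d = np.linalg.norm(data[:, :2] - np.array([y, u]), axis=1)
        nn = data[np.argsort(d)[:k]]
        # Local linear model y+ = th0 + th1*y + th2*u fitted by least squares.
        A = np.column_stack([np.ones(len(nn)), nn[:, 0], nn[:, 1]])
        theta, *_ = np.linalg.lstsq(A, nn[:, 2], rcond=None)
        sens = abs(theta[2]) + 1e-6               # estimated sensitivity dy/du
        kp_adapt = min(kp / sens, 5.0)            # clip to keep the demo stable
        u = kp_adapt * e + ki * e_int + kd * (e - e_prev)

    y_next = plant(y, u)
    memory.append((y, u, y_next))
    y, e_prev = y_next, e

print(f"final output {y:.3f} vs setpoint {setpoint}")
```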
Citations: 0
A systematic literature review of undiscovered vulnerabilities and tools in smart contract technology
IF 3 · Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2023-0038
Oualid Zaazaa, Hanan El Bakkali
Abstract In recent years, smart contract technology has garnered significant attention due to its ability to address trust issues that traditional technologies have long struggled with. However, like any evolving technology, smart contracts are not immune to vulnerabilities, and some remain underexplored, often eluding detection by existing vulnerability assessment tools. In this article, we have performed a systematic literature review of the scientific research and papers published between 2016 and 2021. The main objective of this work is to identify which vulnerabilities and smart contract technologies have not been well studied. In addition, we list all the datasets used by previous researchers, which can help in building more efficient machine-learning models in the future. Furthermore, comparisons are drawn among smart contract analysis tools by considering various features. Finally, future directions in the field of smart contracts are discussed that can help researchers set the direction for future research in this domain.
Citations: 0
Analyzing SQL payloads using logistic regression in a big data environment
IF 3 · Q3 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-01-01 · DOI: 10.1515/jisys-2023-0063
O. Shareef, Rehab Flaih Hasan, Ammar Hatem Farhan
Abstract Protecting big data from attacks is essential for large organizations because of how vital such data are to organizations and individuals. Such data can be put at risk when attackers gain unauthorized access to information and use it in illegal ways. One of the most common such attacks is the structured query language injection attack (SQLIA). This attack exploits vulnerabilities that allow attackers to illegally access a database quickly and easily by manipulating structured query language (SQL) queries, especially in a big data environment. To address these risks, this study aims to build an approach that acts as a middle protection layer between the client and database server layers and reduces the time consumed to classify the SQL payload sent from the user layer. The proposed method trains a logistic regression model using the Spark ML machine learning library, which handles big data. An experiment was conducted using the SQLI dataset. Results show that the proposed approach achieved an accuracy of 99.04%, a precision of 98.87%, a recall of 99.89%, and an F-score of 99.04%. The time taken to identify and prevent SQLIA is 0.05 s. Our approach protects the data through the middle layer. Moreover, using the Spark ML library with ML algorithms gives better accuracy and shortens the time required to determine the type of request sent from the user layer.
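A minimal PySpark sketch of such a pipeline is shown below. The tokenization pattern, feature dimensionality, and toy payloads are assumptions, as the abstract does not specify the feature extraction used before logistic regression.

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, IDF, RegexTokenizer
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqli-detector").getOrCreate()

# Hypothetical schema: (payload string, label 1.0 = SQL injection, 0.0 = benign).
train = spark.createDataFrame([
    ("' OR 1=1 --", 1.0),
    ("SELECT name FROM users WHERE id = 42", 0.0),
    ("admin'; DROP TABLE users; --", 1.0),
    ("john.doe@example.com", 0.0),
], ["payload", "label"])

# Tokenize payloads, hash token counts, weight by IDF, then fit the classifier.
tokenizer = RegexTokenizer(inputCol="payload", outputCol="tokens", pattern="\\W+")
tf = HashingTF(inputCol="tokens", outputCol="tf", numFeatures=4096)
idf = IDF(inputCol="tf", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[tokenizer, tf, idf, lr]).fit(train)
model.transform(train).select("payload", "prediction").show(truncate=False)
```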
Citations: 0