
Advances in computational intelligence: Latest Publications

The impact of digital twins on the evolution of intelligent manufacturing and Industry 4.0
Pub Date : 2023-06-07 DOI: 10.1007/s43674-023-00058-y
Mohsen Attaran, Sharmin Attaran, Bilge Gokhan Celik

As the adoption of Industry 4.0 advances and manufacturing processes become increasingly digital, the Digital Twin (DT) will prove invaluable for testing and simulating new parameters and design variants. DT solutions build a 3D digital replica of a physical object, allowing managers to develop better products, detect physical issues sooner, and predict outcomes more accurately. In the past few years, Digital Twins (DTs) have dramatically reduced the cost of developing new manufacturing approaches, improved efficiency, reduced waste, and minimized batch-to-batch variability. This paper highlights the evolution of DTs, reviews their enabling technologies, identifies challenges and opportunities for implementing DTs in Industry 4.0, and examines their range of applications in manufacturing, including smart logistics and supply chain management. The paper also highlights real examples of DT applications in manufacturing.

Citations: 4
A survey on cyber threat intelligence sharing based on Blockchain
Pub Date : 2023-05-23 DOI: 10.1007/s43674-023-00057-z
Ahmed El-Kosairy, Nashwa Abdelbaki, Heba Aslan

In recent years, cyber security attacks have increased massively, creating a need to defend against them. Cyber security threat intelligence (CTI) has recently been introduced to secure systems against such attacks. CTI should be fast and trustworthy, and should protect the sender's identity, so that attacks can be stopped at the right time. Threat intelligence sharing is vitally important, since it is considered an effective way to improve threat understanding, which in turn helps protect assets and block attack vectors. However, there is a tension between the privacy-safeguard needs of threat intelligence sharing, the need to produce complete, proper threat intelligence feeds to share with the community, and other challenges and needs not covered by traditional CTI. This paper studies how Blockchain technology can be incorporated into CTI to solve the current issues and challenges of traditional CTI. We collected the latest contributions that use Blockchain to overcome conventional CTI problems and compared them to raise the reader's awareness of the different methods used. We also note the areas each paper leaves uncovered, offering a wide range of detail on the areas that still need to be investigated. Finally, the prospective challenges of integrating Blockchain and CTI are discussed.
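The integrity property that makes Blockchain attractive for CTI sharing can be illustrated with a minimal hash-linked feed, sketched below with the standard library only. The record layout and field names are illustrative assumptions of this sketch, not any specific CTI standard or the systems surveyed in the paper.

```python
import hashlib
import json

GENESIS = "0" * 64

def _digest(indicator, prev):
    """Deterministic hash over the record payload and the previous record's hash."""
    payload = json.dumps({"indicator": indicator, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_entry(chain, indicator):
    """Append an indicator (e.g. a malicious domain) linked to the previous record."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"indicator": indicator, "prev": prev,
                  "hash": _digest(indicator, prev)})
    return chain

def verify(chain):
    """Recompute every hash from the genesis value; any tampering breaks the chain."""
    prev = GENESIS
    for record in chain:
        if record["prev"] != prev or record["hash"] != _digest(record["indicator"], prev):
            return False
        prev = record["hash"]
    return True
```

Because each record commits to its predecessor's hash, altering any shared indicator invalidates every record after it, which is the tamper-evidence argument for Blockchain-based CTI sharing.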

Citations: 0
Deception as a service: Intrusion and Ransomware Detection System for Cloud Computing (IRDS4C)
Pub Date : 2023-05-20 DOI: 10.1007/s43674-023-00056-0
Ahmed El-Kosairy, Nashwa Abdelbaki

Cloud computing technology is growing fast. It offers end-users flexibility, ease of use, agility, and more at a low cost. This also expands the attack surface, resulting in more attacks, vulnerabilities, and compromises. Traditional security controls are insufficient against new attacks and cybercrime. Technologies such as the intrusion detection system (IDS), intrusion prevention system (IPS), firewalls, Web Application Firewall (WAF), Next-Generation Firewall (NGFW), and endpoint protection are not enough, especially against a new generation of ransomware and hacking techniques. With the slew of cloud computing options, such as software as a service (SaaS), it is challenging to manage and secure cloud technology. A new technique is needed to detect zero-day attacks involving ransomware, targeted attacks, or intruders. This paper presents our new technique for detecting zero-day ransomware attacks and intruders in cloud environments. The proposed technique is a deception system built on honey files and honey tokens.
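The honey-file idea the abstract builds on can be sketched in a few lines: plant decoy files that no legitimate process should touch, then alert when any decoy is modified or deleted, as ransomware encryption would do. This is a minimal standard-library sketch; the decoy names and the hash-based tamper check are assumptions of this illustration, not the authors' IRDS4C implementation.

```python
import hashlib
import os

def plant_honey_files(directory, names=("passwords.txt", "aws_keys.csv")):
    """Drop decoy files in `directory` and record a content hash for each."""
    baseline = {}
    for name in names:
        path = os.path.join(directory, name)
        with open(path, "w") as f:
            f.write(f"decoy:{name}")  # bait content only, no real secrets
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def detect_tampering(baseline):
    """Return decoy paths that were modified or deleted, a strong ransomware signal."""
    alerts = []
    for path, digest in baseline.items():
        try:
            with open(path, "rb") as f:
                current = hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:
            alerts.append(path)
            continue
        if current != digest:
            alerts.append(path)
    return alerts
```

A production system would watch the decoys continuously (e.g. via file-system events) rather than polling hashes, but the detection logic is the same: any access to bait is, by construction, suspicious.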

Citations: 1
Corn cash-futures basis forecasting via neural networks
Pub Date : 2023-04-12 DOI: 10.1007/s43674-023-00054-2
Xiaojie Xu, Yun Zhang

Cash-futures basis forecasting is a vital concern for market participants in the agricultural sector, yet it has rarely been explored due to limitations of data and of traditional econometric methods. The current study explores the usefulness of the nonlinear autoregressive neural network technique for this forecasting problem on a unique, proprietary data set of daily corn cash-futures basis covering nearly five hundred cash markets in the sixteen most important harvest states in the United States over a 5-year period. Through investigations of various model settings across the number of hidden neurons, the number of delays, the data splitting ratio, and the training algorithm, a model with five delays and twenty hidden neurons is chosen, trained with the Levenberg–Marquardt algorithm and a 70%/15%/15% split for training, validation, and testing. This model delivers accurate and stable performance across the cash markets explored, illustrating the usefulness of machine learning for corn cash-futures basis forecasting. In particular, the model attains average relative root mean square errors (RRMSEs) of 9.97%, 8.51%, and 9.64% for the training, validation, and testing phases, respectively, and an average RRMSE of 9.83% for the overall sample across all cash markets. These results might be used as standalone technical forecasts or combined with fundamental forecasts to form perspectives on cash-futures basis trends and to carry out policy analysis. The empirical framework is easy to implement, an essential consideration for many decision makers, and has the potential to be generalized for forecasting the cash-futures basis of other commodities.
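The lagged-input setup behind such a model can be sketched as follows. For simplicity this sketch fits a linear least-squares autoregression as a stand-in for the paper's nonlinear autoregressive network; the five-delay choice mirrors the paper's chosen model, and RRMSE is computed here as RMSE divided by the mean absolute target, one common definition (the paper's exact formula is not given in the abstract).

```python
import numpy as np

def make_lagged(series, delays=5):
    """Turn a series into (X, y): each row of X holds `delays` past values, y the next value."""
    X = np.column_stack([series[i:len(series) - delays + i] for i in range(delays)])
    y = series[delays:]
    return X, y

def fit_ar(series, delays=5):
    """Least-squares autoregression with intercept; returns coefficients and RRMSE in percent."""
    X, y = make_lagged(series, delays)
    X1 = np.column_stack([X, np.ones(len(X))])          # append intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    pred = X1 @ coef
    rrmse = 100.0 * np.sqrt(np.mean((y - pred) ** 2)) / np.mean(np.abs(y))
    return coef, rrmse
```

The nonlinear version replaces the least-squares fit with a small feedforward network (e.g. twenty hidden neurons) trained on the same lag matrix, with the 70%/15%/15% split applied to the rows of `X`.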

Citations: 11
Optimization of deep learning models: benchmark and analysis
Pub Date : 2023-03-30 DOI: 10.1007/s43674-023-00055-1
Rasheed Ahmad, Izzat Alsmadi, Mohammad Al-Ramahi

Model optimization in deep learning (DL) and neural networks is concerned with how and why a model can be successfully trained towards one or more objective functions. The evolutionary learning or training process continuously adjusts the model's dynamic parameters. Many researchers propose a deep learning-based solution by randomly selecting a single classifier model architecture. Such approaches generally overlook the hidden and complex nature of the model's internal workings, producing biased results. Larger and deeper NN models bring many complexities and logistic challenges when built and deployed. To obtain high-quality performance, an optimal model generally depends on appropriate architectural settings, such as the number of hidden layers and the number of neurons in each layer. Selecting and testing combinations of these settings manually is challenging and time-consuming. This paper presents an extensive empirical analysis of various deep learning algorithms trained recursively using permuted settings to establish benchmarks and find an optimal model. The paper analyzed the Stack Overflow dataset to predict the quality of posted questions. The analysis revealed that some famous deep learning algorithms, such as the CNN, are the least effective at solving this problem, compared to the multilayer perceptron (MLP), which provides efficient computation and the best prediction accuracy. The analysis also shows that manipulating the number of neurons alone at each layer does not influence model optimization. These findings support building future models by considering a wide range of architectural settings in search of an optimal solution.
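The manual search over architectural settings that the paper automates amounts to an exhaustive grid over layer and neuron counts. A minimal sketch follows; the `evaluate` callback, standing in for a full train-and-validate run on the dataset, is an assumption of this illustration.

```python
from itertools import product

def grid_search(evaluate, hidden_layers=(1, 2, 3), neurons=(32, 64, 128)):
    """Score every (layers, neurons) combination and return the best as (score, layers, neurons)."""
    best = None
    for layers, units in product(hidden_layers, neurons):
        score = evaluate(layers, units)  # e.g. validation accuracy of a model with this shape
        if best is None or score > best[0]:
            best = (score, layers, units)
    return best
```

For a real benchmark the grid would also cover delay, split ratio, and training algorithm, and each `evaluate` call would train the candidate network on the training split and score it on the validation split.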

Citations: 1
Gorge: graph convolutional networks on heterogeneous multi-relational graphs for polypharmacy side effect prediction
Pub Date : 2023-03-03 DOI: 10.1007/s43674-023-00053-3
Yike Wang, Huifang Ma, Ruoyi Zhang, Zihao Gao

Determining the side effects of multidrug combinations is a very important issue in drug risk studies. However, designing clinical trials to determine side-effect frequencies is often time-consuming and expensive, and previous work has often been limited to using the target proteins of a drug without screening them. Although this alleviates the sparsity of the raw data to some extent, blindly introducing proteins as auxiliary information adds a large amount of noisy information, which in turn makes models less efficient. For this reason, we propose a new method called Gorge (graph convolutional networks on heterogeneous multi-relational graphs for polypharmacy side effect prediction). Specifically, we design two protein auxiliary pathways directly related to drugs and combine them with a multi-relational graph of drug side effects, which both alleviates data sparsity and filters noisy data. We then introduce a query-aware attention mechanism that generates different attention pathways for drug entities based on different drug pairs, determining at a fine granularity how much information is delivered. Finally, we output the exact frequency of drug side effects through a tensor-factorization decoder, in contrast to most existing methods, which can only predict the presence or association of side effects, not their frequency. Gorge achieves excellent performance on real-world datasets (average AUROC of 0.822 and average AUPR of 0.775), outperforming existing methods. Further examination provides literature evidence for highly ranked predictions.
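A tensor-factorization decoder of the kind the abstract mentions scores a (drug, drug, side effect) triple from learned embeddings. The minimal sketch below uses a diagonal per-relation matrix, so the score is symmetric in the drug pair; the toy embeddings and the diagonal parameterization are assumptions of this sketch, not Gorge's learned parameters.

```python
import numpy as np

def predict_frequency(drug_emb, relation_diag, i, j, k):
    """Frequency score for drug pair (i, j) under side effect k: e_i^T diag(r_k) e_j.

    drug_emb: (n_drugs, d) embedding matrix; relation_diag: (n_effects, d) diagonal
    relation parameters. A diagonal relation matrix makes the score symmetric in
    (i, j), matching the unordered nature of a drug pair.
    """
    return float(drug_emb[i] @ np.diag(relation_diag[k]) @ drug_emb[j])
```

In a full model the embeddings would come from the graph convolutional encoder and the decoder output would be trained against observed side-effect frequencies.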

Citations: 0
A transparent machine learning algorithm to manage diabetes: TDMSML
Pub Date : 2023-02-10 DOI: 10.1007/s43674-022-00051-x
Amrit Kumar Verma, Saroj Kr. Biswas, Manomita Chakraborty, Arpita Nath Boruah

Diabetes is nowadays a very common medical problem worldwide, and the disease is becoming more prevalent with modern, hectic lifestyles. Designing an adequate medical expert system to assist physicians in treating the disease on time is therefore critical. Expert systems are required to identify the major cause(s) of the disease, so that precautionary measures can be taken ahead of time. Several medical expert systems have already been proposed, but each has its own shortcomings, such as the use of trial-and-error methods or trivial decision-making procedures. This paper therefore proposes the Transparent Diabetes Management System Using Machine Learning (TDMSML), an expert system that uses decision tree rules to identify the major factor(s) of diabetes. The TDMSML model comprises three phases: rule generation, transparent rule selection, and major factor identification. The rule generation phase generates rules using a decision tree. The transparent rule selection stage selects the transparent rules and then prunes redundant rules to obtain a minimized rule set. The major factor identification stage extracts the major factor(s), with value range(s), from the minimized rule set; these factors, with their ranges, are characterized as major causes of diabetes. The model is validated on the Pima Indian diabetes data set collected from Kaggle.
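The rule-pruning and major-factor steps can be sketched over rules expressed as (feature, threshold) condition lists. The feature names and thresholds below are hypothetical, chosen only to illustrate the idea; they are not the factors the paper actually extracts from the Pima data.

```python
from collections import Counter

def prune_redundant(rules):
    """Drop any rule whose condition set strictly contains another same-label rule's conditions."""
    kept = []
    for i, (conds, label) in enumerate(rules):
        subsumed = any(j != i and lbl == label and set(c) < set(conds)
                       for j, (c, lbl) in enumerate(rules))
        if not subsumed:
            kept.append((conds, label))
    return kept

def major_factors(rules, label="diabetic", top=1):
    """Rank features by how often they appear in the minimized rules for `label`."""
    counts = Counter(feature for conds, lbl in rules if lbl == label
                     for feature, _ in conds)
    return [feature for feature, _ in counts.most_common(top)]
```

A rule like ("glucose", ">140") plus ("age", ">50") is redundant next to the shorter rule ("glucose", ">140") with the same label, so pruning keeps the minimal, more transparent rule set from which the dominant features are counted.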

Citations: 0
LAD in finance: accounting analytics and fraud detection
Pub Date : 2023-01-30 DOI: 10.1007/s43674-023-00052-4
Aditi Kar Gangopadhyay, Tanay Sheth, Sneha Chauhan

The paper explores advancements in accounting analytics using the logical analysis of data (LAD) approach to identify fraudulent firms and transactions. The straightforward approach to fraud detection with an analytic model is to identify possible predictors of fraud associated with known fraudsters and their historical actions. LAD is a machine learning methodology that combines Boolean functions, optimization, and logic in alignment with this traditional approach. The key characteristic of LAD is discovering minimal sets of features necessary for explaining all observations, detecting hidden patterns in the data that distinguish observations describing "positive" outcome events from "negative" outcome events. The combinatorial optimization model described in the paper is a variation on the general theme of set covering, and the paper concludes with an outline of LAD applications for detecting fraudulent firms and financial fraud. The dataset consists of annual data from 777 firms across 14 sectors. The results demonstrate 97.4% accuracy with an F1 score of 0.97. Another dataset on credit card transactions is also used to test the effectiveness of LAD in finance. Given the immense growth in financial fraud cases, these promising results point toward future advancements in analytical audit fieldwork.
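The LAD notion of a positive pattern, a conjunction of binary conditions that covers at least one positive observation and no negative one, can be sketched over binarized features. The fraud-indicator feature names below are hypothetical examples, not the predictors used in the paper.

```python
def covers(pattern, obs):
    """A pattern maps feature name to required binary value; it covers obs if all match."""
    return all(obs.get(feature) == value for feature, value in pattern.items())

def is_positive_pattern(pattern, positives, negatives):
    """LAD positive pattern: covers some positive observation and no negative one."""
    return (any(covers(pattern, obs) for obs in positives)
            and not any(covers(pattern, obs) for obs in negatives))
```

LAD then searches for a small set of such patterns that together cover all positive (fraudulent) observations, which is where the set-covering optimization mentioned in the abstract comes in.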

Citations: 0
User structural information in priority-based ranking for top-N recommendation
Pub Date : 2023-01-06 DOI: 10.1007/s43674-022-00050-y
Mohammad Majid Fayezi, Alireza Hashemi Golpayegani

A recommender system is a set of information retrieval tools and techniques used to recommend items to users based on their selections. To improve recommendation accuracy, the use of additional information (e.g., social information, trust, item tags) alongside user-item ranking data has been an active area of research for the past decade.

In this paper, we present a new method for recommending top-N items that uses structural information and trust among users within the social network, extracts the implicit connections between users, and uses them in the item recommendation process. The proposed method has seven main steps: (i) extracting the items liked by neighbors, (ii) constructing item features for neighbors, (iii) extracting embedded trust features for neighbors, (iv) creating the user-feature matrix, (v) calculating each user's priority, (vi) calculating each item's priority, and finally (vii) recommending the top-N items. We evaluate the proposed method on three recommendation datasets. Comparing our results with several advanced ranking methods, we observe improved accuracy both for all users and for cold-start users. Our method also places more items for cold-start users in the list of recommended items.
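Steps (iv)–(vii) can be illustrated with a toy sketch (invented data and a deliberately simple trust-weighted scoring rule, not the paper's actual priority formulas): aggregate neighbors' liked items into per-item scores weighted by trust, then return the N highest-priority items.

```python
def recommend_top_n(neighbors, liked, trust, n=3):
    """Rank items by trust-weighted popularity among a user's neighbors.

    neighbors: iterable of neighbor ids
    liked:     dict mapping neighbor id -> set of liked item ids
    trust:     dict mapping neighbor id -> trust score in [0, 1]
    """
    scores = {}
    for nb in neighbors:
        for item in liked.get(nb, ()):
            scores[item] = scores.get(item, 0.0) + trust.get(nb, 0.0)
    # Higher score -> higher priority; ties broken by item id for stability.
    ranked = sorted(scores, key=lambda item: (-scores[item], item))
    return ranked[:n]

liked = {"u1": {"a", "b"}, "u2": {"b", "c"}, "u3": {"c"}}
trust = {"u1": 0.9, "u2": 0.5, "u3": 0.4}
print(recommend_top_n(["u1", "u2", "u3"], liked, trust, n=2))  # prints ['b', 'a']
```

For a cold-start user with few ratings of their own, a score built entirely from neighbors' behavior like this still yields a non-empty recommendation list, which is the intuition behind the cold-start improvement reported above.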

Citations: 0
Fish recognition model for fraud prevention using convolutional neural networks
Pub Date : 2022-12-19 DOI: 10.1007/s43674-022-00048-6
Rhayane S. Monteiro, Morgana C. O. Ribeiro, Calebi A. S. Viana, Mário W. L. Moreira, Glácio S. Araúo, Joel J. P. C. Rodrigues

Fraud, misidentification, and adulteration of food, whether unintentional or deliberate, are a worldwide and growing concern. Aquaculture and fisheries are recognized as among the sectors most vulnerable to food fraud. In addition, health-related risks and distrust between consumers and the retail market push this sector to develop effective solutions for fraud control. Species identification is an essential step in exposing commercial fraud. Convolutional neural networks (CNNs) are among the most powerful tools for image recognition and classification tasks. The objective of this study is therefore to propose a fish species recognition model based on CNNs. After implementing and comparing several CNN architectures, we found that the Xception architecture achieved the best performance, with 86% accuracy. A web application mockup was also built. The proposal is easily applied to other aquaculture areas, such as species recognition for lobsters, shrimp, and other seafood.
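The reported 86% accuracy is simply the trace of the confusion matrix divided by the total count; a minimal sketch with made-up counts (illustrative only, not the paper's data):

```python
def accuracy(confusion):
    """Overall accuracy = correctly classified / total, for a square
    confusion matrix (rows: true species, columns: predicted species)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical 3-species confusion matrix, 50 test images per species.
cm = [
    [43, 4, 3],   # true species A
    [3, 43, 4],   # true species B
    [4, 3, 43],   # true species C
]
print(accuracy(cm))  # prints 0.86
```

Per-class precision and recall, useful when species are confused asymmetrically, follow from the same matrix's column and row sums.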

Citations: 4