
Latest publications from AI (Basel, Switzerland)

CAA-PPI: A Computational Feature Design to Predict Protein–Protein Interactions Using Different Encoding Strategies
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-04-28. DOI: 10.3390/ai4020020
Bhawna Mewara, Gunjan Sahni, Soniya Lalwani, Rajesh Kumar
Protein–protein interactions (PPIs) are involved in an extensive variety of biological processes, including cell-to-cell interactions and metabolic and developmental control. PPIs are becoming one of the most important aims of systems biology. PPIs play a fundamental part in predicting the function of a target protein and the druggability of molecules. An abundance of work has been performed to develop methods to computationally predict PPIs, as this supplements laboratory trials and offers a cost-effective way of predicting the most likely set of interactions at the entire proteome scale. This article presents an innovative feature representation method (CAA-PPI) that extracts features from protein sequences using two different encoding strategies, followed by an ensemble learning method. The random forest method was used as a classifier for PPI prediction. CAA-PPI considers the role of the trigram and the bond of a given amino acid with its nearby ones. The proposed PPI model achieved more than 98% prediction accuracy with one encoding scheme and more than 95% prediction accuracy with the other for two diverse PPI datasets, H. pylori and Yeast. Further investigations compared the CAA-PPI approach with existing sequence-based methods and demonstrated the proficiency of the proposed method with both encoding strategies. To further assess its practical prediction competence, a blind test was carried out on datasets from five other species, independent of the training set, and the results confirmed the effectiveness of CAA-PPI with both encoding schemes.
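As a rough illustration of this kind of sequence-based pipeline, the sketch below builds character-trigram count features for each protein in a pair, concatenates them, and trains a random forest with scikit-learn. It is a minimal sketch of the general pattern, not the authors' CAA-PPI encodings; the toy sequences and parameter choices are assumptions.

```python
# Hypothetical sketch of a sequence-based PPI classifier: character trigram
# counts from each protein in a pair are concatenated and classified with a
# random forest. Toy sequences; not the CAA-PPI encodings from the paper.
from scipy.sparse import hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Toy data: pairs of amino-acid sequences with an interaction label (1/0).
pairs = [
    ("MKVLAAGIVQQA", "GGSTLKVVAMKV", 1),
    ("MTEYKLVVVGAG", "AAGIVQQAGGST", 0),
    ("MKVLAAGIVQQA", "MTEYKLVVVGAG", 1),
    ("GGSTLKVVAMKV", "AAGIVQQAGGST", 0),
]
seq_a = [p[0] for p in pairs]
seq_b = [p[1] for p in pairs]
labels = [p[2] for p in pairs]

# Character trigrams capture each residue together with its two neighbours.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3), lowercase=False)
vectorizer.fit(seq_a + seq_b)
X = hstack([vectorizer.transform(seq_a), vectorizer.transform(seq_b)])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, labels)

# Score a new candidate pair with the same feature pipeline.
new_pair = hstack([vectorizer.transform(["MKVLAAGIVQQA"]),
                   vectorizer.transform(["AAGIVQQAGGST"])])
print("interaction probability:", clf.predict_proba(new_pair)[0, 1])
```

In practice the feature matrices would be built from the full training corpus of protein pairs, and accuracy would be reported on a held-out or cross-validated split rather than on the toy pairs shown here.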
Citations: 0
FatNet: High-Resolution Kernels for Classification Using Fully Convolutional Optical Neural Networks
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2023-04-03. DOI: 10.3390/ai4020018
Riad Ibadulla, Thomas M. Chen, Constantino Carlos Reyes-Aldasoro
This paper describes the transformation of a traditional in silico classification network into an optical fully convolutional neural network with high-resolution feature maps and kernels. When the free-space 4f system is used to accelerate the inference speed of neural networks, higher resolutions of feature maps and kernels can be used without a loss in frame rate. We present FatNet for the classification of images, which is more compatible with free-space acceleration than standard convolutional classifiers. It dispenses with the standard split between convolutional feature extraction and dense classifier layers by performing both in one fully convolutional network. This approach takes full advantage of the parallelism of the 4f free-space system and performs fewer conversions between electronics and optics by reducing the number of channels and increasing the resolution, making this network faster in optics than off-the-shelf networks. To demonstrate the capabilities of FatNet, it was trained on the CIFAR100 dataset, both on a GPU and on a simulator of the 4f system. A comparison of the results against ResNet-18 shows 8.2 times fewer convolution operations at the cost of only 6% lower accuracy. This demonstrates that the optical implementation of FatNet yields significantly faster inference than the optical implementation of the original ResNet-18. These are promising results for training deep learning models with high-resolution kernels as computing moves toward the upcoming optics era.
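To make the "no dense layers" idea concrete, here is a minimal PyTorch sketch of a fully convolutional classifier whose head is a 1x1 convolution followed by global average pooling. It only illustrates the general pattern the abstract describes; the layer widths and the CIFAR100-sized output are assumptions, and this is not the published FatNet architecture.

```python
# Hypothetical sketch of a fully convolutional classifier: class scores come
# from convolutions plus global average pooling, with no dense layers.
# Layer widths are assumptions; this is not the published FatNet architecture.
import torch
import torch.nn as nn


class FullyConvClassifier(nn.Module):
    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution replaces the usual fully connected classifier head.
        self.head = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        class_maps = self.head(self.features(x))
        # Global average pooling turns per-location class maps into one
        # score per class, so the network stays convolutional end to end.
        return class_maps.mean(dim=(2, 3))


model = FullyConvClassifier(num_classes=100)  # CIFAR100-sized label space
logits = model(torch.randn(2, 3, 32, 32))     # a batch of two 32x32 RGB images
print(logits.shape)                           # torch.Size([2, 100])
```

Because every layer is a convolution, the same computation maps naturally onto convolution-accelerating hardware such as the 4f optical correlator discussed in the abstract.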
Citations: 1
Can Sequential Images from the Same Object Be Used for Training Machine Learning Models? A Case Study for Detecting Liver Disease by Ultrasound Radiomics.
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2022-09-01. DOI: 10.3390/ai3030043
Laith R Sultan, Theodore W Cary, Maryam Al-Hasani, Mrigendra B Karmacharya, Santosh S Venkatesh, Charles-Antoine Assenmacher, Enrico Radaelli, Chandra M Sehgal
Machine learning for medical imaging not only requires sufficient amounts of data for training and testing but also that the data be independent. It is common to see highly interdependent data whenever there are inherent correlations between observations. This is especially to be expected for sequential imaging data taken from time series. In this study, we evaluate the use of statistical measures to test the independence of sequential ultrasound image data taken from the same case. A total of 1180 B-mode liver ultrasound images with 5903 regions of interest were analyzed. The ultrasound images were taken from two liver disease groups, fibrosis and steatosis, as well as normal cases. Computer-extracted texture features were then used to train a machine learning (ML) model for computer-aided diagnosis. The experiment achieved high two-category diagnostic performance using logistic regression, with an AUC of 0.928, and high multicategory classification performance using a random forest, with an AUC of 0.917. To evaluate image region independence for machine learning, the Jensen–Shannon (JS) divergence was used. The JS distributions showed that images of normal liver were independent from each other, while the images from the two disease pathologies were not. To guarantee the generalizability of machine learning models and to prevent data leakage, multiple frames of image data acquired from the same object should be tested for independence before machine learning. Such tests can be applied to real-world medical image problems to determine whether images from the same subject can be used for training.
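As a rough sketch of the kind of independence check described here, the snippet below compares the distribution of a texture-like feature across frames using the Jensen–Shannon divergence from SciPy: a near-zero divergence between two frames' feature distributions suggests they carry largely the same information, while a larger value suggests more independent content. The synthetic data and histogram binning are assumptions for illustration, not the paper's analysis pipeline.

```python
# Hypothetical sketch of a frame-independence check using the Jensen-Shannon
# divergence between feature distributions. Synthetic data and binning are
# assumptions for illustration; this is not the paper's analysis pipeline.
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
# Pretend these are per-ROI texture feature values measured on three frames.
frame_a = rng.normal(loc=0.0, scale=1.0, size=500)
frame_b = frame_a + rng.normal(loc=0.0, scale=0.1, size=500)  # sequential, correlated
frame_c = rng.normal(loc=0.5, scale=1.2, size=500)            # unrelated frame

bins = np.linspace(-4.0, 4.0, 41)


def feature_histogram(values: np.ndarray) -> np.ndarray:
    """Histogram of feature values; a tiny offset avoids empty bins."""
    counts, _ = np.histogram(values, bins=bins)
    return counts.astype(float) + 1e-12


# jensenshannon returns the JS distance, i.e. the square root of the divergence.
js_ab = jensenshannon(feature_histogram(frame_a), feature_histogram(frame_b)) ** 2
js_ac = jensenshannon(feature_histogram(frame_a), feature_histogram(frame_c)) ** 2
print(f"JS divergence, sequential frames A vs B: {js_ab:.4f}")  # close to zero
print(f"JS divergence, unrelated frames A vs C:  {js_ac:.4f}")  # clearly larger
```

When frames from the same subject turn out to be dependent, a common safeguard is to split train and test sets by subject rather than by individual frame, so that no subject contributes images to both sides.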
Citations: 1