
Machine learning with applications: Latest Articles

The Dark Side of Language Models: Exploring the Potential of LLMs in Multimedia Disinformation Generation and Dissemination
Pub Date : 2024-03-11 DOI: 10.1016/j.mlwa.2024.100545
Dipto Barman, Ziyi Guo, Owen Conlan

Disinformation, the deliberate spread of false or misleading information, poses a significant threat to our society by undermining trust, exacerbating polarization, and manipulating public opinion. With the rapid advancement of artificial intelligence and the growing prominence of large language models (LLMs) such as ChatGPT, new avenues for the dissemination of disinformation are emerging. This review paper explores the potential of LLMs to initiate the generation of multimedia disinformation, encompassing text, images, audio, and video. We begin by examining the capabilities of LLMs, highlighting their potential to create compelling, context-aware content that can be weaponized for malicious purposes. Subsequently, we examine the nature of disinformation and the various mechanisms through which it spreads in the digital landscape. Utilizing these advanced models, malicious actors can automate and scale up disinformation effectively. We describe a theoretical pipeline for creating and disseminating disinformation on social media. Existing interventions to combat disinformation are also reviewed. While these efforts have shown success, we argue that they need to be strengthened to effectively counter the escalating threat posed by LLMs. Digital platforms have, unfortunately, enabled malicious actors to extend the reach of disinformation. The advent of LLMs poses an additional concern, as they can be harnessed to significantly amplify the velocity, variety, and volume of disinformation. Thus, this review proposes augmenting current interventions with AI tools such as LLMs, which can assess information more swiftly and comprehensively than human fact-checkers. This paper illuminates the dark side of LLMs and highlights their potential to be exploited as disinformation dissemination tools.

Citations: 0
An automated machine learning approach for detecting anomalous peak patterns in time series data from a research watershed in the northeastern United States critical zone
Pub Date : 2024-03-07 DOI: 10.1016/j.mlwa.2024.100543
Ijaz Ul Haq, Byung Suk Lee, Donna M. Rizzo, Julia N. Perdrial

This paper presents an automated machine learning framework designed to assist hydrologists in detecting anomalies in time series data generated by sensors in a research watershed in the northeastern United States critical zone. The framework specifically focuses on identifying peak-pattern anomalies, which may arise from sensor malfunctions or natural phenomena. However, the use of classification methods for anomaly detection poses challenges, such as the requirement for labeled data as ground truth and the selection of the most suitable deep learning model for the given task and dataset. To address these challenges, our framework generates labeled datasets by injecting synthetic peak patterns into synthetically generated time series data and incorporates an automated hyperparameter optimization mechanism. This mechanism generates an optimized model instance with the best architectural and training parameters from a pool of five selected models, namely Temporal Convolutional Network (TCN), InceptionTime, MiniRocket, Residual Networks (ResNet), and Long Short-Term Memory (LSTM). The selection is based on the user’s preferences regarding anomaly detection accuracy and computational cost. The framework employs Time-series Generative Adversarial Networks (TimeGAN) as the synthetic dataset generator. The generated model instances are evaluated using a combination of accuracy and computational cost metrics, including training time and memory, during the anomaly detection process. Performance evaluation of the framework was conducted using a dataset from a watershed, demonstrating consistent selection of the most fitting model instance that satisfies the user’s preferences.
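
The labeled-data generation step described above can be illustrated with a minimal sketch: inject synthetic Gaussian-shaped peak anomalies into a synthetic series and record their positions as labels. This is an illustrative stand-in only; the plain NumPy generator and the peak parameters are assumptions, and the paper's TimeGAN generator and model pool (TCN, InceptionTime, MiniRocket, ResNet, LSTM) are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def synthetic_series(n=1000):
    # stand-in for a synthetically generated stream: smooth seasonal signal plus noise
    t = np.arange(n)
    return np.sin(2 * np.pi * t / 100) + 0.1 * rng.normal(size=n)

def inject_peaks(series, n_peaks=5, width=10, height=3.0):
    # add Gaussian-shaped peak anomalies and return a per-sample label vector
    x = series.copy()
    labels = np.zeros_like(x, dtype=int)
    centers = rng.choice(len(x) - width, size=n_peaks, replace=False)
    for c in centers:
        idx = np.arange(c, c + width)
        x[idx] += height * np.exp(-0.5 * ((idx - c - width / 2) / (width / 4)) ** 2)
        labels[idx] = 1
    return x, labels

series, labels = inject_peaks(synthetic_series())
print(series.shape, labels.sum(), "anomalous samples")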

Citations: 0
Best performance with fewest resources: Unveiling the most resource-efficient Convolutional Neural Network for P300 detection with the aid of Explainable AI
Pub Date : 2024-03-05 DOI: 10.1016/j.mlwa.2024.100542
Maohua Liu, Wenchong Shi, Liqiang Zhao, Fred R. Beyette Jr.

Convolutional Neural Networks (CNNs) have shown remarkable prowess in detecting P300, an Event-Related Potential (ERP) crucial in Brain–Computer Interfaces (BCIs). Researchers persistently seek simple and efficient CNNs for P300 detection, exemplified by models like DeepConvNet, EEGNet, and SepConv1D. Noteworthy progress has been made, reducing parameters from millions to hundreds while sustaining state-of-the-art performance. However, achieving further simplification or performance improvement beyond SepConv1D appears challenging due to inherent oversimplification. This study explores landmark CNNs and P300 data with the aid of Explainable AI, proposing a simpler yet better-performing CNN architecture that incorporates (1) precise separable convolution for feature extraction of P300 data, (2) an adaptive activation function tailored to P300 data, and (3) customized large learning-rate schedules for training on P300 data. Termed the Minimalist CNN for P300 detection (P300MCNN), this novel model requires the fewest filters and epochs to date while achieving the best performance in cross-subject P300 detection. P300MCNN not only introduces groundbreaking concepts for CNN architectures in P300 detection but also showcases the importance of Explainable AI in demystifying the “black box” design of CNNs.
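
For orientation only, the sketch below is a small Keras model built around a separable 1D convolution with a decaying learning-rate schedule. The input shape (206 samples by 6 EEG channels), filter count, and schedule values are placeholder assumptions; the paper's adaptive activation function and the exact P300MCNN configuration are not reproduced here.

import tensorflow as tf

# hypothetical epoched EEG input: 206 time samples, 6 channels
model = tf.keras.Sequential([
    tf.keras.Input(shape=(206, 6)),
    tf.keras.layers.SeparableConv1D(filters=4, kernel_size=16, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P300 vs. non-P300
])

# a large initial learning rate that decays during training (placeholder values)
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-2, decay_steps=100, decay_rate=0.9)
model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()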

Citations: 0
ChatGPT: A meta-analysis after 2.5 months
Pub Date : 2024-03-05 DOI: 10.1016/j.mlwa.2024.100541
Christoph Leiter, Ran Zhang, Yanran Chen, Jonas Belouadi, Daniil Larionov, Vivian Fresen, Steffen Eger

ChatGPT, a chatbot developed by OpenAI, has gained widespread popularity and media attention since its release in November 2022. However, little hard evidence is available regarding its perception in various sources. In this paper, we analyze over 300,000 tweets and more than 150 scientific papers to investigate how ChatGPT is perceived and discussed. Our findings show that ChatGPT is generally viewed as high quality, with positive sentiment and emotions of joy dominating social media. Its perception has slightly declined since its debut, however, with joy decreasing and (negative) surprise on the rise, and it is perceived more negatively in languages other than English. In recent scientific papers, ChatGPT is characterized as a great opportunity across various fields, including the medical domain, but also as a threat concerning ethics, and it receives mixed assessments for education. Our comprehensive meta-analysis of ChatGPT’s perception 2.5 months after its release can contribute to shaping the public debate and informing its future development. We make our data available.
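
The tweet-perception analysis rests on sentiment scoring at scale; a minimal sketch of that kind of scoring is shown below, assuming the off-the-shelf VADER analyzer and two made-up example tweets rather than the paper's actual data or tooling.

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweets = [
    "ChatGPT wrote my unit tests in seconds, amazing!",        # made-up examples
    "ChatGPT confidently gave me a wrong citation again.",
]
for text in tweets:
    score = analyzer.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(label, round(score, 2), text)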

Citations: 0
Erratum to “New interpretation of GNRVⓇ knee arthrometer results for ACL injury diagnosis support using machine learning” [Machine Learning with Applications 13 (2023) 100480]
Pub Date : 2024-03-02 DOI: 10.1016/j.mlwa.2024.100540
Jean Mouchotte, Matthieu LeBerre, Théo Cojean, Henri Robert
Citations: 0
Machine learning for sports betting: Should model selection be based on accuracy or calibration?
Pub Date : 2024-02-28 DOI: 10.1016/j.mlwa.2024.100539
Conor Walsh, Alok Joshi

Sports betting’s recent federal legalisation in the USA coincides with the golden age of machine learning. If bettors can leverage data to reliably predict the probability of an outcome, they can recognise when the bookmaker’s odds are in their favour. As sports betting is a multi-billion dollar industry in the USA alone, identifying such opportunities could be extremely lucrative. Many researchers have applied machine learning to the sports outcome prediction problem, generally using accuracy to evaluate the performance of predictive models. We hypothesise that for the sports betting problem, model calibration is more important than accuracy. To test this hypothesis, we train models on NBA data over several seasons and run betting experiments on a single season, using published odds. We show that using calibration, rather than accuracy, as the basis for model selection leads to greater returns, on average (return on investment of +34.69% versus -35.17%) and in the best case (+36.93% versus +5.56%). These findings suggest that for sports betting (or any probabilistic decision-making problem), calibration is a more important metric than accuracy. Sports bettors who wish to increase profits should therefore select their predictive model based on calibration, rather than accuracy.
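
To make the accuracy-versus-calibration distinction concrete, the sketch below scores two candidate classifiers on a synthetic binary task by accuracy and by Brier score (a calibration-oriented metric) and then applies the usual expected-value betting rule. The dataset, models, probability, and odds are placeholder assumptions, not the paper's NBA data or results.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = {"logistic": LogisticRegression(max_iter=1000),
              "forest": RandomForestClassifier(random_state=0)}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    print(name,
          "accuracy =", round(accuracy_score(y_te, p > 0.5), 3),
          "brier =", round(brier_score_loss(y_te, p), 3))  # lower Brier is better calibrated

# expected-value rule: bet only when model probability times decimal odds exceeds 1
p_home_win, decimal_odds = 0.55, 2.10  # placeholder numbers
if p_home_win * decimal_odds > 1:
    print("positive expected value: place the bet")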

Citations: 0
Accurate detection of cell deformability tracking in hydrodynamic flow by coupling unsupervised and supervised learning
Pub Date : 2024-02-28 DOI: 10.1016/j.mlwa.2024.100538
Imen Halima, Mehdi Maleki, Gabriel Frossard, Celine Thomann, Edwin-Joffrey Courtial

Deep learning methods have been used successfully for various medical imaging applications, including cell segmentation and deformability detection, thereby contributing significantly to advancements in medical analysis. Cell deformability is a fundamental criterion that must be measured easily and accurately. One common approach for measuring cell deformability is to use microscopy techniques. Recent work has aimed to develop more advanced and automated methods for measuring cell deformability from microscopic images, but precise cell membrane segmentation remains difficult because of image quality. In this paper, we introduce a novel algorithm for cell segmentation that addresses the challenges of microscopic images. AD-MSC cells were controlled by a microfluidic-based system, and cell images were acquired with a variable-frequency ultra-fast camera connected to a powerful computer. The proposed algorithm combines two main components, image denoising using unsupervised learning and cell segmentation with deformability detection using supervised learning, which together aim to enhance image quality without expensive materials or expert intervention and to segment cell deformability with greater precision. The contribution of this paper is the combination of two neural networks that process the data more easily and without requiring experts. This approach yields faster results and high performance on small microscopy datasets, even with noisy microscopic images. Precision increases to 81% when we combine a DAE with U-Net, compared to 78% when adding a VAE to U-Net and 59% when using U-Net alone.
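
As a rough sketch of the unsupervised denoising stage alone, the model below is a small convolutional autoencoder in Keras whose cleaned output would then feed a supervised segmentation network such as U-Net. The image size, filter counts, and training setup are placeholder assumptions rather than the authors' DAE/U-Net configuration.

import tensorflow as tf

# placeholder: 128x128 grayscale microscopy crops
inputs = tf.keras.layers.Input(shape=(128, 128, 1))
x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling2D(2)(x)
x = tf.keras.layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.UpSampling2D(2)(x)
outputs = tf.keras.layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

denoiser = tf.keras.Model(inputs, outputs)
denoiser.compile(optimizer="adam", loss="mse")
# train on (noisy, clean) image pairs, then feed denoiser.predict(...) into a U-Net
denoiser.summary()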

Citations: 0
Efficient machine learning-assisted failure analysis method for circuit-level defect prediction
Pub Date : 2024-02-24 DOI: 10.1016/j.mlwa.2024.100537
Joydeep Ghosh

Integral to the success of transistor advancements is the accurate use of failure analysis (FA), which aids the fine-tuning and optimization of fabrication processes. However, chip makers face several FA challenges as device sizes, structures, and material complexities scale dramatically. To sustain manufacturability, defect identification can be accelerated at all steps of chip processing and design. On the other hand, as technologies scale below nanometer nodes, devices become more sensitive to unavoidable process-induced variability. Therefore, metallic defects and process-induced variability need to be treated concurrently in the context of chip scaling, and failure diagnostic methods that decouple the two effects should be developed. Indeed, locating a defective component among thousands of circuits in a microchip in the presence of variability is a tedious task. This work shows how SPICE circuit simulations coupled with machine learning-based physical modeling can be used effectively to tackle this problem for a 6T-SRAM bit cell. An automatic bridge-defect recognition system for such a circuit is devised by training a predictive model on simulation data. For the model's feature descriptors, the symmetry of the circuit and a fundamental material property are leveraged: metals (semiconductors) have a positive (negative) temperature coefficient of resistance up to a certain voltage range. This work then demonstrates that a defective circuit, along with the position of its defective component, can be identified with approximately 99.5% accuracy. The proposed solution should greatly help accelerate the production process of integrated circuits.
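
The temperature-coefficient feature mentioned above can be illustrated with a toy sketch: fit resistance-versus-temperature measurements with a line, use the estimated slope as a feature, and train a classifier to flag metallic bridge defects. The simulated coefficients, noise levels, and random-forest choice are assumptions, not the paper's SPICE-based pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
temps = np.linspace(250, 400, 8)  # measurement temperatures in kelvin

def simulate_device(defective):
    # metallic bridge -> positive temperature coefficient; healthy path -> negative
    alpha = rng.uniform(0.002, 0.004) if defective else rng.uniform(-0.004, -0.002)
    r = 1.0 + alpha * (temps - 300) + 0.01 * rng.normal(size=temps.size)
    slope = np.polyfit(temps, r, 1)[0]  # estimated temperature coefficient
    return [slope, r.mean()]

y = rng.integers(0, 2, size=500)
X = np.array([simulate_device(bool(label)) for label in y])
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))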

Citations: 0
Case-Base Neural Network: Survival analysis with time-varying, higher-order interactions
Pub Date : 2024-02-20 DOI: 10.1016/j.mlwa.2024.100535
Jesse Islam, Maxime Turgeon, Robert Sladek, Sahir Bhatnagar

In the context of survival analysis, data-driven neural network-based methods have been developed to model complex covariate effects. While these methods may provide better predictive performance than regression-based approaches, not all can model time-varying interactions and complex baseline hazards. To address this, we propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures. Using a novel sampling scheme and data augmentation to naturally account for censoring, we construct a feed-forward neural network that includes time as an input. CBNNs predict the probability of an event occurring at a given moment to estimate the full hazard function. We compare the performance of CBNNs to regression and neural network-based survival methods in a simulation and three case studies using two time-dependent metrics. First, we examine performance on a simulation involving a complex baseline hazard and time-varying interactions to assess all methods, with CBNN outperforming competitors. Then, we apply all methods to three real data applications, with CBNNs outperforming the competing models in two studies and showing similar performance in the third. Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single event survival outcomes that estimates time-varying effects and a complex baseline hazard by design. An R package is available at https://github.com/Jesse-Islam/cbnn.
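
A minimal sketch of the case-base sampling idea itself (independent of the cbnn package) is given below: simulate censored survival data, pair the observed event moments (the case series) with person-moments drawn uniformly from follow-up time (the base series), and fit a logistic model on time and the covariate with an offset for the sampling ratio. The simulated data and the plain logistic model are illustrative assumptions; the paper replaces the logistic model with a feed-forward neural network.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
event_time = rng.exponential(1.0 / (0.1 * np.exp(0.7 * x)))  # true log-hazard: log(0.1) + 0.7*x
censor_time = rng.uniform(0, 15, size=n)
follow_up = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)

# case series: one person-moment per observed event
case_t, case_x = follow_up[event == 1], x[event == 1]

# base series: person-moments sampled uniformly from the total follow-up time B
B, b = follow_up.sum(), 4 * int(event.sum())
idx = rng.choice(n, size=b, p=follow_up / B)
base_t, base_x = rng.uniform(0, follow_up[idx]), x[idx]

y = np.concatenate([np.ones_like(case_t), np.zeros(b)])
design = sm.add_constant(np.column_stack([np.concatenate([case_t, base_t]),
                                          np.concatenate([case_x, base_x])]))
offset = np.full(len(y), np.log(B / b))  # converts case/base odds into a hazard
fit = sm.GLM(y, design, family=sm.families.Binomial(), offset=offset).fit()
print(fit.params)  # intercept ~ log(0.1), time coefficient ~ 0 (constant hazard), x coefficient ~ 0.7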

Citations: 0
GPT classifications, with application to credit lending
Pub Date : 2024-02-20 DOI: 10.1016/j.mlwa.2024.100534
Golnoosh Babaei, Paolo Giudici

Generative Pre-trained Transformers (GPT) and large language models (LLMs) have made significant advancements in natural language processing in recent years. The practical applications of LLMs are undeniable, rendering moot any debate about their impending influence. The power of LLMs has made them comparable to machine learning models for decision-making problems. In this paper, we focus on binary classification, a common use of ML models, particularly in credit lending applications. We show how a GPT model can perform almost as accurately as a classical logistic machine learning model while requiring far fewer sample observations. In particular, we show how, in the context of credit lending, LLMs can be improved and reach performance similar to classical logistic regression models using only a small set of examples.
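
As a schematic of using an LLM as a few-shot binary classifier for lending decisions, the sketch below builds a prompt from a handful of labeled examples and asks a chat model for a one-word answer. The model name, features, and examples are placeholder assumptions, and the paper's exact prompting protocol is not reproduced; the sketch only assumes the OpenAI Python client's chat-completions interface.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

examples = [  # made-up (income, debt-to-income, prior defaults) -> label
    ((45_000, 0.42, 1), "default"),
    ((90_000, 0.18, 0), "no default"),
    ((30_000, 0.55, 2), "default"),
]
applicant = (62_000, 0.35, 0)

lines = ["Classify each loan applicant as 'default' or 'no default'."]
for (income, dti, priors), label in examples:
    lines.append(f"income={income}, debt_to_income={dti}, prior_defaults={priors} -> {label}")
lines.append(f"income={applicant[0]}, debt_to_income={applicant[1]}, prior_defaults={applicant[2]} ->")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "\n".join(lines)}],
)
print(response.choices[0].message.content.strip())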

Citations: 0