
Ethics and Information Technology — Latest Publications

Engineers on responsibility: feminist approaches to who's responsible for ethical AI
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2024-01-02 · DOI: 10.1007/s10676-023-09739-1
Eleanor Drage, Kerry McInerney, Jude Browne
Citations: 0
AI and the need for justification (to the patient).
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2024-01-01 · Epub: 2024-03-04 · DOI: 10.1007/s10676-024-09754-w
Anantharaman Muralidharan, Julian Savulescu, G Owen Schaefer

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.

Citations: 0
Trustworthiness of voting advice applications in Europe.
IF 3.4 · CAS Tier 2 (Philosophy) · JCR Q1 Ethics · Pub Date: 2024-01-01 · Epub: 2024-08-12 · DOI: 10.1007/s10676-024-09790-6
Elisabeth Stockinger, Jonne Maas, Christofer Talvitie, Virginia Dignum

Voting Advice Applications (VAAs) are interactive tools used to assist in one's choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens' trust and participation in democratic structures. However, there is no established ground truth for one's electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs according to the Ethics Guidelines for Trustworthy AI provided by the European Commission using publicly available information. We found scores to be comparable across VAAs and low in most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of algorithm, and (iv) disclosure of the underlying values and assumptions.

Supplementary information: The online version contains supplementary material available at 10.1007/s10676-024-09790-6.

Citations: 0
Large language models and their big bullshit potential.
IF 3.4 · CAS Tier 2 (Philosophy) · JCR Q1 Ethics · Pub Date: 2024-01-01 · Epub: 2024-10-04 · DOI: 10.1007/s10676-024-09802-5
Sarah A Fisher

Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.

Citations: 0
How to teach responsible AI in Higher Education: challenges and opportunities
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2023-12-13 · DOI: 10.1007/s10676-023-09733-7
Andrea Aler Tubella, Marçal Mora-Cantallops, Juan Carlos Nieves
Citations: 0
Can machine learning make naturalism about health truly naturalistic? A reflection on a data-driven concept of health
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2023-12-12 · DOI: 10.1007/s10676-023-09734-6
A. Guersenzvaig
Citations: 0
Digital twins, big data governance, and sustainable tourism
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2023-11-16 · DOI: 10.1007/s10676-023-09730-w
E. Rahmadian, Daniel Feitosa, Yulia Virantina
Citations: 0
Public health measures and the rise of incidental surveillance: considerations about private informational power and accountability
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2023-11-16 · DOI: 10.1007/s10676-023-09732-8
Bart Kamphorst, Adam Henschke
Citations: 0
Conceptualising and regulating all neural data from consumer-directed devices as medical data: more scope for an unnecessary expansion of medical influence?
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2023-11-15 · DOI: 10.1007/s10676-023-09735-5
Brad Partridge, Susan Dodds
Citations: 0
The Right to Break the Law? Perfect Enforcement of the Law Using Technology Impedes the Development of Legal Systems
IF 3.6 · CAS Tier 2 (Philosophy) · JCR Q1 Social Sciences · Pub Date: 2023-11-15 · DOI: 10.1007/s10676-023-09737-3
Bart Custers
Citations: 0