Title: Defining intelligence: Bridging the gap between human and artificial perspectives
Authors: Gilles E. Gignac, Eva T. Szodorai
DOI: 10.1016/j.intell.2024.101832
Publication date: 2024-04-08
URL: https://www.sciencedirect.com/science/article/pii/S0160289624000266
Citations: 0
Abstract
Achieving a widely accepted definition of human intelligence has been challenging, a situation mirrored by the diverse definitions of artificial intelligence in computer science. By critically examining published definitions, highlighting both consistencies and inconsistencies, this paper proposes a refined nomenclature that harmonizes conceptualizations across the two disciplines. Abstract and operational definitions for human and artificial intelligence are proposed that emphasize maximal capacity for completing novel goals successfully through respective perceptual-cognitive and computational processes. Additionally, support is provided for considering intelligence, both human and artificial, as consistent with a multidimensional model of capabilities. The implications of current practices in artificial intelligence training and testing are also described, as they can be expected to lead to artificial achievement or expertise rather than artificial intelligence. Paralleling psychometrics, ‘AI metrics’ is suggested as a needed computer science discipline that acknowledges the importance of test reliability and validity, as well as standardized measurement procedures, in artificial system evaluations. Drawing parallels with human general intelligence, artificial general intelligence (AGI) is described as a reflection of the shared variance in artificial system performances. We conclude that current evidence more strongly supports the observation of artificial achievement and expertise than of artificial intelligence. However, interdisciplinary collaborations, based on common understandings of the nature of intelligence, as well as sound measurement practices, could facilitate scientific innovations that help bridge the gap between artificial and human-like intelligence.