{"title":"论新冠肺炎大流行期间的智力测量","authors":"C. Chartier","doi":"10.26443/MJM.V19I1.828","DOIUrl":null,"url":null,"abstract":"There are two commonly accepted ways to conceptualize intelligence. One involves competency in certain skills, such as problem-solving. The other, more abstract – dare I say innate – view holds that being good at a specific task is an insufficient condition for intelligence. Historically, the medical and artificial intelligence communities have grappled for position vis-à-vis these philosophies, with each side staking its claim for the more “authentic” definition of intelligence. This dispute has endured, for the most part, unresolved since the advent of artificial intelligence and its first foray into healthcare applications in the early 21st century. What is occurring when data scientists leverage massive quantities of data to replicate complex clinical decision-making, while still failing to teach a machine to correctly think about disease? This simultaneously validates imitative capacity as a metric for intelligence (machines can learn from infinite correct or incorrect diagnoses, farmore than any human physician can absorb throughout an entire career) and preserves the medical profession’s breadth of clini-","PeriodicalId":18292,"journal":{"name":"McGill Journal of Medicine","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the Measure of Intelligence During the COVID-19 Pandemic\",\"authors\":\"C. Chartier\",\"doi\":\"10.26443/MJM.V19I1.828\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There are two commonly accepted ways to conceptualize intelligence. One involves competency in certain skills, such as problem-solving. The other, more abstract – dare I say innate – view holds that being good at a specific task is an insufficient condition for intelligence. Historically, the medical and artificial intelligence communities have grappled for position vis-à-vis these philosophies, with each side staking its claim for the more “authentic” definition of intelligence. This dispute has endured, for the most part, unresolved since the advent of artificial intelligence and its first foray into healthcare applications in the early 21st century. What is occurring when data scientists leverage massive quantities of data to replicate complex clinical decision-making, while still failing to teach a machine to correctly think about disease? 
This simultaneously validates imitative capacity as a metric for intelligence (machines can learn from infinite correct or incorrect diagnoses, farmore than any human physician can absorb throughout an entire career) and preserves the medical profession’s breadth of clini-\",\"PeriodicalId\":18292,\"journal\":{\"name\":\"McGill Journal of Medicine\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-03-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"McGill Journal of Medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.26443/MJM.V19I1.828\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"McGill Journal of Medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.26443/MJM.V19I1.828","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the Measure of Intelligence During the COVID-19 Pandemic
There are two commonly accepted ways to conceptualize intelligence. One involves competency in certain skills, such as problem-solving. The other, more abstract – dare I say innate – view holds that being good at a specific task is an insufficient condition for intelligence. Historically, the medical and artificial intelligence communities have grappled for position vis-à-vis these philosophies, with each side staking its claim to the more “authentic” definition of intelligence. This dispute has endured, largely unresolved, since the advent of artificial intelligence and its first foray into healthcare applications in the early 21st century. What is occurring when data scientists leverage massive quantities of data to replicate complex clinical decision-making, while still failing to teach a machine to correctly think about disease? This simultaneously validates imitative capacity as a metric for intelligence (machines can learn from infinite correct or incorrect diagnoses, far more than any human physician can absorb throughout an entire career) and preserves the medical profession’s breadth of clini-