Some Prognostications: Artificial Intelligence and Accounting

Ron Weber
Australian Accounting Review, 33(2), 110–113. Published 7 June 2023. DOI: 10.1111/auar.12403

The Colin Ferguson Oration is the address given to attendees at the annual Australian Accounting Hall of Fame dinner and presentation evening. It is an invited oration, whereby an eminent modern-day leader addresses the audience on matters at the intersection of business, government and the academe as they relate to the rich history, the current state and/or the future direction of the accounting profession. The oration is named in honour of our colleague Professor Colin Ferguson (1949–2014). Colin was the key figure driving the inception of the Australian Accounting Hall of Fame. In a decorated academic career, he worked tirelessly for many years and with great distinction at the intersection of accounting thought and practice, encompassing auditing, forensic accounting and accounting information systems, so it is only fitting that this oration is named in his honour.

This year's oration was delivered by Ron Weber, Emeritus Professor at Monash University and The University of Queensland who in 2018 was inducted into the Australian Accounting Hall of Fame.

It is crucial that as a profession we continue to bring together academe, practitioners and standard setters to explore relevant challenges and issues in our field. This year's oration addresses a topical issue: the likely role that artificial intelligence (AI) will play as we consider the future of accounting. We are absolutely thrilled that Ron's oration is published in the Australian Accounting Review (AAR), a journal that has long occupied a unique and valued position in our professional landscape. We sincerely thank the editors of the journal.

At the outset, I'd like to indicate that I'm going to take a tack that might surprise you. Specifically, I'm not going to try to ‘wow’ you with AI (artificial intelligence) innovations that potentially will turn the accounting field on its head. Hyperbole tends to leave me cold, especially when it clouds deep issues that need to be addressed. And hyperbole about the latest information technology becomes dated quickly and sometimes appears quite funny in hindsight. Instead, I want to examine the likely impact of AI on accounting from a more philosophical perspective.

Let me lay a foundation for what will follow with two anecdotes

Here is the first anecdote. When I was studying for my PhD at the University of Minnesota in the mid-1970s, I had some involvement with several academics and students who were trying to figure out how humans understood language. Their goal was to build software that would understand natural language input to a computer through either voice or text by emulating how humans understood natural language. Some of you will remember that the mid-1970s were the days before personal computers, the World Wide Web and graphical user interfaces. Working with computers was still difficult! Yes, at the time, we were living in the dark ages!

Here is the second anecdote. In 1982, I spent a six-month sabbatical leave at New York University (NYU). There I met a colleague who was trying to build a computer program to play the Chinese game of Go. I had never heard of Go until I went to NYU. It is the oldest board game in existence (over 2500 years old), and in several ways it is apparently a more complex and difficult game than chess. Anyway, the reason my colleague at NYU was interested in Go was that he already had extensive experience in building chess-playing programs. He was a graduate of the AI laboratory at Carnegie-Mellon University led by a famous scholar – Nobel Laureate in Economics, Herbert Simon. In Simon's laboratory at Carnegie, my colleague had worked on chess-playing programs that were written based on the ways that grandmasters play chess. He hoped he would get additional insights about human intelligence by working with grandmasters of Go.

What Happened Subsequently?

We now have natural-language understanding software (e.g., Siri) that is fairly good at ‘understanding’ spoken natural language. The way the software works, however, has only a few similarities with the ways that humans understand natural language (at least to the best of our knowledge). Rather, the software depends on the breathtaking speeds with which modern computers now operate, the availability of high-speed communications networks, and the high-speed, enormous-capacity storage devices that now exist. For instance, when you ask Siri to do something, a sound file gets transmitted via the internet to Apple's computers, and the sounds are matched against a huge database of sounds and their corresponding words. Siri then uses pattern recognition with an enormous database of phrases, questions and answers to determine what most likely is being said and its meaning.
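
To make this matching-against-a-database idea a little more concrete, here is a deliberately toy sketch in Python. It is my own illustration, not Apple's actual pipeline: the phrase database, the intents and the similarity measure are all invented for the example.

```python
from difflib import SequenceMatcher

# A toy 'database' of known phrases and canned intents. A real assistant
# matches acoustic features against vastly larger corpora; this sketch only
# illustrates the idea of matching input against stored examples.
PHRASE_DB = {
    "what is the time": "intent: report current time",
    "set an alarm for seven": "intent: create alarm at 07:00",
    "play some music": "intent: start music playback",
}

def best_match(utterance: str) -> tuple[str, float]:
    """Return the stored phrase most similar to the utterance, with its score."""
    scored = [
        (phrase, SequenceMatcher(None, utterance.lower(), phrase).ratio())
        for phrase in PHRASE_DB
    ]
    return max(scored, key=lambda pair: pair[1])

phrase, score = best_match("What's the time?")
print(phrase, "->", PHRASE_DB[phrase], f"(similarity={score:.2f})")
```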

A similar situation exists with chess-playing programs. They don't work like human chess players. Instead, they use brute-force methods to determine their moves. They access a huge database of historical grandmaster games, winning endgames, strategic moves and so on, and they use sophisticated algorithms to examine millions of positions per second and to evaluate their next move optimally. Today, many different chess-playing programs exist that will beat the best human chess players every time.
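
Purely as an illustration of that brute-force style, here is a minimal game-tree search in Python. The toy ‘positions’ and scores below are invented; a real engine searches millions of genuine board positions per second with far more sophisticated evaluation functions.

```python
# Negamax: a compact form of the minimax search that chess engines build on.
# Scores are always from the perspective of the side to move, so each level
# of the tree negates the value returned by the level below it.

def negamax(position, depth):
    """Best achievable score for the side to move, searching `depth` plies."""
    if depth == 0 or not position["moves"]:
        return position["score"]
    return max(-negamax(child, depth - 1) for child in position["moves"])

def best_move(position, depth):
    """Pick the child position whose reply-value is best for us."""
    return max(position["moves"], key=lambda child: -negamax(child, depth - 1))

# A toy tree: two candidate moves, each leading to replies with static scores.
tree = {
    "score": 0,
    "moves": [
        {"score": 0, "moves": [{"score": -3, "moves": []}, {"score": 1, "moves": []}]},
        {"score": 0, "moves": [{"score": 2, "moves": []}, {"score": 5, "moves": []}]},
    ],
}
print(best_move(tree, depth=2))  # prints the second move (value 2 for us)
```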

Do the impressive capabilities of speech-recognition software and chess-playing software show that they possess human-like intelligence? The answer is ‘no’. And the situation with speech-recognition software and chess-playing software typifies AI work in many other domains.

Will this situation change? At some time in the future, are we likely to see AI programs that mirror human intelligence in, for instance, the accounting domain? My view is that the answer is ‘no’, and here I want to turn to some philosophy to explain my reasons.

If computer programs are to have any chance of mirroring human intelligence, we first need to solve a deep, fundamental problem that philosophers and cognitive scientists call the ‘mind–body problem’. Basically, the mind–body problem addresses the questions of what constitutes the human mind, how the human mind and consciousness arise, and how human consciousness relates to the human body.

Almost 30 years ago, an Australian philosopher named David Chalmers called the mind–body problem the ‘hard’ problem in philosophy (Chalmers 1996). The fact that his name for the problem is still in vogue reflects how far we still have to go before we have any sense of whether the mind–body problem can ever be solved.

While a solution to the hard problem of human consciousness and intelligence remains elusive, nonetheless some philosophers have given us a theory of the way in which they believe human consciousness and intelligence have come about. I want to use their theory (Bunge 1979; Mahner 2015) to explain why I doubt AI will ever mirror human intelligence, but I also want to stress that the theory I am using is not accepted universally.

Clearly, human consciousness and intelligence didn't always exist! Specifically, the theory I'm using postulates that they arose progressively over the eons through a particular evolutionary process called ‘assemblage’. This process involves things in the world beginning to interact with other things and these interactions leading to the emergence of new, more complex things. These new things have a critical feature – namely, they have new properties not possessed by their components – their so-called emergent properties. These novel properties are somehow related to the properties of their components, but the critical issue is they are properties that are not possessed by any of their components (Bunge 2003).

Let me illustrate the notion of emergent properties through a simple example. Consider a work team that has a number of employees who interact with one another to perform certain tasks. The cohesiveness of the work team is an emergent property of the team. Somehow cohesiveness is related to properties of the individuals who make up the team, but it is not a property of the individual members of the team – we don't say a person is ‘cohesive’.
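
If it helps to see that idea in mechanical form, here is a small Python sketch. The scoring rule is an assumption invented for the illustration; the only point is that ‘cohesiveness’ is computed from the relations between members, and no individual member possesses it.

```python
from itertools import combinations

class Team:
    """A toy model of a work team whose cohesiveness is an emergent property."""

    def __init__(self, members, interaction_strength):
        self.members = members                    # the individual components
        self.interaction = interaction_strength   # relations *between* components

    def cohesiveness(self):
        """Mean pairwise interaction strength: a property of the team as a
        whole, not attributable to any single member."""
        pairs = list(combinations(self.members, 2))
        return sum(self.interaction[frozenset(p)] for p in pairs) / len(pairs)

team = Team(
    members=["ana", "ben", "chi"],
    interaction_strength={
        frozenset({"ana", "ben"}): 0.9,
        frozenset({"ana", "chi"}): 0.4,
        frozenset({"ben", "chi"}): 0.7,
    },
)
print(team.cohesiveness())  # ~0.67, a value no member 'has' on their own
```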

Think about humans, therefore, as an extraordinarily complex level structure of things (in essence, the things are systems) that have assembled over time. Billions of years ago, the evolutionary processes that led to the emergence of humans began with particular atoms (primarily hydrogen, oxygen and nitrogen with a little carbon). These atoms eventually assembled into molecules. Some of these molecules eventually assembled into organelles. And then we see the formation of cells, tissues, organs and organisms as the assembly process that underpins evolution unfolded over time. Finally, we have a human made up of about 100 trillion cells, with each cell in turn made up of 100 trillion atoms. All the components of a human (atoms, cells, tissues and so on) are things (systems) with emergent properties.

The philosophers who developed this theory argue that only after this evolutionary process was quite advanced did consciousness and intelligence, at least as we know it, start to appear. They contend higher-level systems had to evolve in the life form that eventually became a human before we had the types of emergent properties that they believe are needed to produce human consciousness and intelligence.

What does this mean for the chances of machines ever emulating human consciousness and intelligence? If the philosophers who developed the theory I've described are right, the answer is that the chances are not good (see also Mahner 2015).

Think about the numbers! Remember, the human body has roughly 100 trillion cells, each of which is composed of roughly 100 trillion atoms. Many of these atoms and cells are connected to other atoms and cells. Of course, not everything is connected to everything else. Nonetheless, the possible number of connections and the number that most likely exist are mind-boggling. What are the emergent properties that have to exist among the different components of a life form if consciousness and intelligence are to eventually appear?

To make matters even more complex, after higher-level systems have evolved, we know that they sometimes exert an influence on their lower-level components – the components that initially assembled to form the higher-level system – such that the properties of the lower-level system change. For instance, consider someone who becomes a head of department or a dean in a university. They acquire new properties such as (a) the authority to make certain decisions, and (b) the unbelievable frustrations arising from being a head or dean in a university. And they can lose certain properties – for instance, if you have been a head or a dean, you will know that the property you often lose is the will to live!!

If we are trying to mirror human intelligence, here is the catch. First, we are a long way from knowing (and perhaps we may never know) all the connections that exist between the huge number of components of the human body – the atoms, the cells, the tissues and so on. Second, even where we know that some connections exist, we don't always know their exact nature and thus how to replicate them. Third, how the emergent properties of higher-level systems in the human body relate to the properties of lower-level components is often unclear.

Here, then, is the important moral to my story so far. Focusing on whether computers can and eventually will have the capabilities to mirror human consciousness and human intelligence is, in my opinion, the wrong focus. I doubt this will ever occur. Humans are the outcome of an evolutionary process that has occurred over billions of years. After a couple of thousand years of philosophers trying to understand human consciousness and intelligence and more recently cognitive neuroscientists tackling the same task, we have barely scratched the surface.

We also have to consider the properties that continue to differentiate humans from machines – empathy, sympathy, love, self-sacrifice – and how they affect human consciousness and intelligence. Where do these properties come from? Can you envisage a machine with these properties? Can you conceive of a situation where you and a computer might fall in love with each other?

Does the moral of my story mean that as humans (as accountants) we do not have to be concerned about artificial intelligence because the likelihood of computers being able to mirror human consciousness and intelligence, at least for the foreseeable future, is very low? The answer is a resounding, an emphatic, ‘No!’. A certain type of consciousness and intelligence – let's just simply call it machine intelligence – will continue to evolve rapidly as computers become more powerful and our knowledge of how to use them increases exponentially. It is this form of artificial intelligence that has to be our focus.

The reason is that we need to understand the nature of and significant implications of a concept that philosophers interested in general system theory call equifinality – very simply, the idea that we can sometimes achieve the same (or almost the same) outcomes in the world using different processes (e.g., Gresov and Drazin 1997). Language-recognition software and chess-playing software are good examples of equifinality in practice. We don't have quite the same outcomes with the software as we do with humans. But in one case, language-recognition software, the outcome is good enough for many purposes. And in the other case, chess-playing software, we have a superior outcome (at least if winning the game is our objective criterion).
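
A trivial computational illustration of equifinality, of my own construction: two processes that could hardly differ more – exhaustive step-by-step iteration and a one-step closed-form formula – nonetheless produce exactly the same outcome.

```python
# Equifinality in miniature: different processes, identical outcome.

def sum_by_iteration(n: int) -> int:
    total = 0
    for i in range(1, n + 1):   # laborious, step-by-step process
        total += i
    return total

def sum_by_formula(n: int) -> int:
    return n * (n + 1) // 2     # Gauss's closed form: one arithmetic step

n = 1_000_000
assert sum_by_iteration(n) == sum_by_formula(n)
print("Same outcome, different processes:", sum_by_formula(n))
```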

The challenges we face because of equifinality are becoming increasingly salient. For instance, for those of us who are academics, we now have concerns about student use of so-called generative AI programs such as ChatGPT. The fact that a student's response to an assignment has been produced by a generative AI program can be extraordinarily difficult to detect – again, equifinality at work.

It's so hard to predict how equifinality will manifest. It's often hard for humans to ‘think’ like computers! For instance, we have difficulty comprehending how computers perform tasks in a few seconds that would take humans large amounts of time to complete. In this regard, we are at the dawn of quantum computing – currently, a field of research that promises the development of a new kind of computer that can perform certain kinds of calculations in a few seconds that would otherwise take today's supercomputers decades or millennia to complete. In a world of quantum computers, what forms of equifinality and machine intelligence will arise?

Where to from here? As accountants, what should we do in a world where machine intelligence will continue to develop rapidly? I wish I had privileged insights, but sadly I don't. For what they are worth, however, I'd like to conclude my oration with just a few thoughts that might provide some matters for reflection.

First, as accountants, we should focus on identifying those tasks where humans are likely to have a long-term comparative advantage over computers. I suspect these kinds of tasks will be those that require very human attributes – for instance, an ability to interact with others with warmth and empathy, an ability to read body language, a sense of the ephemeral and spiritual, and an ability to develop rapport and trust. We should continue to develop our capabilities in relation to these tasks.

Second, we need to think very hard about those accounting tasks where machine intelligence will have a comparative advantage over humans. We already have some pointers to the tasks that will be affected – specifically, those that are amenable to machine-learning, pattern-matching and classification techniques. But developments in generative AI and quantum computing should motivate us to think more broadly. Where equifinality is likely to arise, we should exit systematically and gracefully from the tasks that will be affected.

Third, we can look for opportunities to work synergistically with machine intelligence. As accountants, ultimately, we are seeking ways to provide information about economic phenomena. With better tools, we are progressively expanding our views about what economic phenomena can and should be our focus. In this regard, I am mindful of Bill Edge's (2022) excellent oration last year where he spoke about developments in sustainability reporting and the opportunities provided to accountants. With powerful tools such as networks of environmental sensors, pattern-recognition and machine-learning software, generative AI tools and creative thinking, we can expand the scope of the work we do as accountants.

Here is my closing comment. I feel some sense of irony and remorse about the topic of my oration. My focus has been artificial intelligence and its possible implications for the accounting profession. But tonight, we are commemorating someone, Professor Colin Ferguson, who had an extraordinary amount of very real human intelligence, personal and professional. There was nothing artificial about it! I hope Col will forgive me.

Thank you!
