{"title":"Ecological interface enabling human-embodied cognition in mobile robot teleoperation","authors":"T. Sawaragi, Y. Horiguchi","doi":"10.1145/350752.350761","DOIUrl":"https://doi.org/10.1145/350752.350761","url":null,"abstract":"advanced, we should consider what are the ideal human–computer relationships for their interactions. This means that the human and computer subsystems should be structured and designed to work in mutually cooperating ways, and the quality of system decision and control depends greatly on the quality of information generation on its interfaces.","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"87 1","pages":"20-32"},"PeriodicalIF":0.0,"publicationDate":"2000-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81046568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Curriculum descant: stories and plays about the ethical and social implications of artificial intelligence","authors":"Richard G. Epstein, Deepak Kumar","doi":"10.1145/350752.350758","DOIUrl":"https://doi.org/10.1145/350752.350758","url":null,"abstract":"A central issue in any discussion of the ethical and social implications of artificial intelligence (AI) is the appropriate role of intelligent systems in the world that we are creating. Can intelligent systems potentially threaten the vitality of human con-sciousness? Can intelligent systems \" steal \" vital capabilities and skills from humanity? Over the past several years I have been writing stories and plays that address the ethical and social implications of AI. These stories and plays are available through my AI Stories web-site (www.cs.wcupa.edu/~epstein/stoplay-html). I hope that professors who teach artificial intelligence, computer ethics, or the social implications of computing will use these stories and plays in their courses. The AI Stories Web project began as a story about the future that I wrote for my book, The Case of the Killer Robot (Epstein 1997). The Killer Robot is a fictitious scenario that uses various written media (e.g., newspaper stories and magazine interviews) to tell the story of how a programming error led to the death of a robot operator. One of our reviewers liked the future story and said that he would like to see more stories about the future. Consequently, I embarked on a new pro-ject—to create a portrait of the future (circa 2028) using a variety of print media (e.g., newspaper articles, book reviews, television infomercial transcripts, magazine interviews, commencement addresses). The purpose of this effort was to provide professors with materials that they could use to teach and discuss the ethical and social implications of computer technology, especially artificial intelligence and virtual reality (VR). I call this collection of stories Sunday, May 14, 2028. Stories that specifically relate to AI and VR are available in the AI Stories Web. I will briefly introduce these stories and two plays that are available at the aforementioned website. The 37 stories in the AI Stories Web are organized according to the domain of human experience that is affected by the technology being discussed. One story that gets to the heart of the matter is \" The Great Brain Robbery. \" This story discusses the impact of computer technology (especially, artificial intelligence) in a broad social context. The story is told through an interview with Professor Lowe-Tignoff (who also appeared in the Killer Robot book). He discusses his belief that intelligent systems (again, he is speaking from the perspective of 2028) are stealing human capabilities in various domains, including …","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"199 1","pages":"17-19"},"PeriodicalIF":0.0,"publicationDate":"2000-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75963010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Links: what is an intelligent tutoring system?","authors":"Reva Freedman, Syed S. Ali, S. McRoy","doi":"10.1145/350752.350756","DOIUrl":"https://doi.org/10.1145/350752.350756","url":null,"abstract":"Professor Freedman's research focuses on reactive planning and theories of discourse and dialog processing with the goal of building better intelligent tutoring systems. T he term \" intelligent tutoring system \" (ITS) refers to any computer program that can be used in learning and that contains intelligence—this breadth has no doubt helped make ITS research the large and varied field that it is. ITS research is an out-growth of the earlier computer-aided instruction (CAI) model, which usually refers to a frame-based system with hard-coded links, that is, hypertext with an instructional purpose. The traditional ITS model has four components: the domain model, the student model, the teaching model, and a learning environment or user interface. ITS projects can vary significantly by the relative level of intelligence of the components. For example, a project focusing on intelligence in the domain model may generate solutions to complex and novel problems so that students can always have new problems on which to practice, but it might only have simple methods for teaching those problems. Or a system might concentrate on multiple or novel ways to teach a particular topic and therefore find a less sophisticated representation of that content sufficient. When multiple components contain intelligence, homogeneous or heterogeneous representations can be used. ITSs can also be classified by their underlying algorithm. One well-known category is the model-tracing tutor, which tracks students' progress and keeps them within a specified tolerance of an acceptable solution path. A theme underlying much of ITS research is domain independence, that is, the degree to which knowledge encoded in the teaching model can be reused in different domains. Although to the external observer domain independence seems like an essential characteristic of intelligence, many experts believe that some of the essential pedagogical knowledge in every domain is fundamentally domain dependent. For example, some analogies used in teaching physics, and even in teaching specific topics in physics, have no equivalents in other domains. Task independence, or the degree to which the knowledge in the system can be used to support a variety of tasks on the part of the student, has not yet been addressed by most systems. Journals The International Journal of Artificial Intelligence in Education (cbl.leeds.ac. uk/ijaied/), the official journal of the International AIED Society, is the preeminent journal in the field; it is published both in print and on the Web. Other journals that publish significant ITS research …","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"9 1","pages":"15-16"},"PeriodicalIF":0.0,"publicationDate":"2000-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79763295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Backtracking: robots that fly, part I","authors":"Chris Welty, L. Hoebel","doi":"10.1145/337897.338003","DOIUrl":"https://doi.org/10.1145/337897.338003","url":null,"abstract":"","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"46 1","pages":"64"},"PeriodicalIF":0.0,"publicationDate":"2000-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84726908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Curriculum descant: teaching “New AI”","authors":"R. Pfeifer, Deepak Kumar","doi":"10.1145/337897.337989","DOIUrl":"https://doi.org/10.1145/337897.337989","url":null,"abstract":"ing General Principles of Intelligent Behavior In the classical view of artificial intelligence, the general principles dealt mostly with symbol processing and computational architecture. In more recent approaches, in which embodiment plays an important role, the principles that have been suggested are more strongly related to the interaction with the real world as it is mediated by the body of the agent. One principle asserts that we must not look at the agent in isolation but must define its ecological niche, its tasks, and the types of interactions of the agent with its environment. Another principle, inexpensive design, states that these interactions can be exploited in the design of an agent. A beautiful illustration of this principle is Ian Horsewill’s robot Polly. In the early 1990s Polly gave tours of the MIT AI Lab. Its camera was slightly tilted downwards so that more distant objects were higher up on the y-axis in the image—an inexpensive way of visually detecting the nearest obstacles. The principle of sensory-motor coordination was inspired by John Dewey, who, as early as 1896, had pointed out the importance of sensory-motor coordination for perception. This principle implies that through sensorymotor coordination, through coordinated interaction with the environment, an agent can structure its own sensory input. In this way, correlated sensory stimulation can be generated in different sensory channels—an important prerequisite for perceptual learning and concept development. Another principle has its origins in the work of Rodney Brooks, who introduced into AI research the idea of embodiment and the subsumption architecture. According to the principle of parallel, loosely coupled processes, intelligence emerges from a large number of parallel processes that are only loosely coupled and are mostly coordinated through interaction with the environment. An example is an insect walking: coordination of the individual legs is achieved not only through neural connections but also the environment. Because of the body’s stiffness and its weight, if one leg is lifted, the force on all the legs changes instantaneously, a fact that is exploited by the leg coordination system in the insect. Understanding","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"120 1","pages":"17-19"},"PeriodicalIF":0.0,"publicationDate":"2000-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77294245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interface agents as surrogate users","authors":"R. Amant","doi":"10.1145/337897.337998","DOIUrl":"https://doi.org/10.1145/337897.337998","url":null,"abstract":"Interactive applications extend human abilities along an enormous number of dimensions. What can we learn from agents that use these same software tools? Artificial intelligence and human-computer interaction have close historical ties, going back to Newell and Simon's work on human problem solving [3], and farther. We see the influence of AI on HCI, for example, in the notion of the user as a rational problem-solving agent and task analysis concepts that match the goals and actions of planning representations. Conversely, user interface issues have given AI developers challenging problems in realistic environments, leading to results in automatic interface adaptation, multi-modal interaction, interface generation, and agent interaction, among a wide range of other areas [2]. The relationship is natural. Both fields are concerned with facilitating the interaction of agents with their environments-humans in software environments, artificial agents in a variety of problem-solving domains. In a sense, agent developers and user interface designers see opposite sides of the same problem. As AI developers, we build better and better agents, driven by the complexity of an environment or problem domain we are given. As user interface designers, in contrast, we canÕt simply build better human beings. Fortunately, the environment of the user interface is not fixed; we can tailor it to the capabilities and limitations of its human users. Though the means differ, the goal in both cases is effective interaction between the agent and the environment. Research and development toward intelligent interface agents can contribute to this goal in many ways. This article examines two approaches. The first is a modeling approach, in which we treat interface agents as surrogate users. Building engineering models of a user, or programmable user models [5], lets us predict some aspects of the usability of an interface through analysis or simulation, rather than testing with real users, a more expensive and time-consuming process. In the second approach, which has a more traditional agents flavor, we treat the user interface as a tool-using environment for an autonomous agent. The tools provided by a general-purpose software environment significantly extend the capabilities of a software agent, ideally to approach the competence we would ordinarily expect of human users.","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"16 1","pages":"28-38"},"PeriodicalIF":0.0,"publicationDate":"2000-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90236314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Links: Java resource for artificial intelligence","authors":"Syed S. Ali, S. McRoy","doi":"10.1145/337897.337987","DOIUrl":"https://doi.org/10.1145/337897.337987","url":null,"abstract":"What is Java? Java is a new, object-oriented programming language developed by Sun Microsystems. In this article we will be motivating the use of Java for building software for artificial intelligence (AI). Additionally, we will point out some existing AI resources that have been written in Java. Although Java is a general-purpose programming language and not, by itself, an ideal language for building AI software (it is a bit too low-level, like C++), it offers many benefits to AI applications and application designers. For example, it provides platform-independent support for rapid development of graphical user interfaces, as well as for building programs that are network aware. Java also provides ideal wrapper services, allowing you to write AI programs that work in a variety of situations, with minimal recoding. Java resembles C++ but is much simpler; like nondestructive Lisp, it does not have explicit manipulation of pointers (it uses object references). The complexity and utility of Java lies in the libraries provided by the Java 2.0 platform. The Java 2.0 platform features the following characteristics relevant to AI: ✦ Runs independently of machine and operating system. ✦ Runs quickly (and is getting faster with new versions of the Java 2.0 platform). ✦ Is available from a number of sources, including free ones. ✦ Includes a relatively small run-time environment. ✦ Provides a sophisticated library of GUI-building components called Swing. ✦ Supports multithreaded programming. ✦ Is Internet aware (that is, it provides intrinsic support for network functions). Why use Java for AI? Machine-independence, size, speed, and costeffectiveness are clear advantages of Java. However, these benefits are not free; learning how to effectively program with Java is a significant task, even for experienced programmers. Building appealing and usable GUI frontends to software (AI or otherwise) is necessary. The Swing library is especially useful for AI programming, because it allows AI programmers to add and test GUI front-ends quickly. For example, the library includes facilities for adding a variety of GUI components, including toolbars, menus, and dialog boxes. More complex GUI components include trees and tables. All these components are implemented as objects and thus can be created, changed, and extended easily. Support for multithreading is also important for building AI programs because a complex task can be broken into subtasks that run in separate threads. Multiprocessing in Java is accomplished using threads; they allow a Java program to create subprocesses that run separately and to communicate with these processes as easily as one might read from or write to a file. Java includes thread synchronization that is based on semaphores and is easy to use.","PeriodicalId":8272,"journal":{"name":"Appl. 
Intell.","volume":"17 1","pages":"15-16"},"PeriodicalIF":0.0,"publicationDate":"2000-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75270495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolving a checkers player without relying on human experience","authors":"D. Fogel","doi":"10.1145/337897.337996","DOIUrl":"https://doi.org/10.1145/337897.337996","url":null,"abstract":"","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"6 1","pages":"20-27"},"PeriodicalIF":0.0,"publicationDate":"2000-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89265354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Curriculum Descant: Interdisciplinary artificial intelligence","authors":"Deepak Kumar, Richard Wyatt","doi":"10.1145/333175.333178","DOIUrl":"https://doi.org/10.1145/333175.333178","url":null,"abstract":"s a course offered within computer science programs, artificial intelligence should be an interdisciplinary course. Stated more carefully, an undergraduate artificial intelligence course for a computer science department, correctly designed, should be able to be taken by any student with good analytic skills but lacking programming skills. Making a well-designed artificial intelligence course interdisciplinary is not itself a goal of the preferred course design but rather a consequence of it. Many computer science students are primarily and sometimes exclusively interested in programming and related technical matters. Their focus is implementation. Most computer science instructors, myself included, talk, sometimes a good deal, about the idea that we aim primarily to teach students problem solving , but in fact we mostly end up focusing on implementation, too. (Perhaps a \" proper \" computer science degree should, after all, à la Dijkstra, ban actual programming for the first two years or so.) We as instructors contribute to this unfortunate state of affairs by, sometimes unwittingly, overdesigning our class projects. In our attempts to make sure that the students get the top-level design \" right, \" we give it to them up front, often giving detailed descriptions of the suite of functions and so on that must be implemented. The task that falls to the student is often little more than to implement our design. It is more difficult to correct the situation than those who have not taught might imagine. Such is the case much of the time in typical computer science courses , mine included. In an artificial intelligence course, problem solving fares even worse because the problems tackled by artificial intelligence are so much more difficult. The problems tackled by artificial intelligence are not only complex, they also require a good deal of background theory in order to be properly grasped. The amount of background varies, but it is always considerable. Computer science programs are not the ideal training grounds for artificial intelligence. There are of course exceptions, but in general, computer science students lack, for example, an understanding of philosophical issues, which bears on KR, or a detailed understanding of natural languages, which bears on NLP. But most of all, they are not strong mathematically: many struggle through calculus, statistics, logic, and discrete math. As a result, the theoretical content and mathematical sophistication of discussions in artificial intelligence courses are all too often quite weak or, at any rate, weaker …","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"39 1","pages":"11-12"},"PeriodicalIF":0.0,"publicationDate":"2000-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88470627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"War stories: harnessing organizational memories to support task performance","authors":"Christopher Johnson, L. Birnbaum, R. Bareiss, T. Hinrichs","doi":"10.1145/333175.333180","DOIUrl":"https://doi.org/10.1145/333175.333180","url":null,"abstract":"","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"24 1","pages":"16-31"},"PeriodicalIF":0.0,"publicationDate":"2000-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83621330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}