AI update. D. R. Hobaugh. 2001, pp. 6–13. doi:10.1145/504313.504317

Alan Turing's "imitation game," defined in his classic 1950 paper "Computing Machinery and Intelligence," proposed a method of testing for intelligence based on a dialogue over teletype machines. The Turing Test, as it has come to be called, posed the question: could a computer fool humans into thinking that they were talking to one of their own? If so, then the computer, Turing declared, had intelligence. Although the Turing Test is still considered a radical "definition" of artificial intelligence 50 years after its introduction, it turns out that we need this test. You see, there are currently artificially intelligent creatures among us, and they want to sell us stuff! Let me explain. As you may know, "chat rooms" are virtual places, accessed by programs, where people can type messages to each other in real time. Many people meet in these chat rooms to discuss the important events of the day: for example, when the sequel to The Matrix will be released or, more importantly, who will star in Matrix III. Some programmers have built "chat robots" that enter these chat rooms disguised as humans and either simply spew out advertisements, or wait patiently and then spew out advertisements. The problem is that one can't tell from a handle (i.e., a screen name) that a user is in fact not human. A typical chat room conversation might look like:

neo23: so, have you seen the trailer yet?
neo45: no, but I bet it's gonna b kewl :)
neo67: Visit www.X.com for $50 off of a pair of swim trunks, while supplies last.

Now, the normal user wouldn't be able to tell that neo67 was in fact an adbot until it was too late: it had already advertised. (Some may still not be convinced that this user is not human, but I assure you, no one would say such things in a chat room.) Detecting adbots might not seem a pressing problem, until you are bombarded with 1 million advertisements at once. This does slow down communication about the Matrix, and other topics I suspect. How can we detect these insidious adbots before they do their dirty deeds? Enter the CAPTCHA Bongo Project, a project of the School of Computer Science at Carnegie Mellon University. Their …
AI update. D. Blank. 2001, p. 8. doi:10.1145/383824.383827

New BlueEyes

As part of research concerning facial expression of emotion, IBM continues a series of studies under Project BlueEyes that attempts to address four major issues:

1. Do emotions occur naturally in Human-Computer Interaction (HCI)? If so, how often, and which emotions?
2. Using the image of a person, can people assess emotions reliably?
3. What information do people use to assess emotions?
4. What HCI stimuli cause what emotion, and what is the user's experience of the emotion?

IBM's first two studies have provided evidence on the first two issues. They have found evidence that some affective states (like anxiety and happiness) do occur in HCI and that people can use visual information to assess these states. Of course, those familiar with certain operating systems know that emotions can pop up in HCI every once in a while. But I assume that IBM is talking about visual clues more subtle than users pounding on monitors with their fists. In any event, people can visually detect emotions. IBM hopes that if people can perform this assessment reliably, so could a computer. To test this hope, IBM has built Pong, a blue-eyed (of course) robo-head. Currently, Pong is a plastic and metal face that sits on a table and watches you with two ping-pong-ball eyes. Once it sees you, it smiles or frowns based on its interpretation of your mood. John Dvorak, computer pundit and AI hypemaster (see page 9), described interacting with Pong as "fascinating and creepy." IBM is apparently completing further studies. For more information, see www.almaden.ibm.com/cs/blueeyes/.
Letter from the chair. J. Marks. 2001, p. 5. doi:10.1145/383824.383825

If you missed this June’s NASW National Conference, which was quite appropriately titled “Leading Change: Transforming Lives,” please know that it was amazingly enriching and enlightening! The plenary speeches, keynote addresses, preconference workshops, posters, and symposium presentations helped us to think about and prepare to make changes in our practice and personal lives. There was plenty of opportunity over the four days to reflect on how we as social workers can “be the change we want to see in the world,” as Mahatma Gandhi said.
Real science. Chris Welty. 2001, p. 48. doi:10.1145/504313.504329

Hayes discussed some of the essential points characterizing the training that philosophers and mathematicians receive as part of their education. Philosophers, said Hayes, are trained to argue not about conclusions, but about arguments. Mathematicians are trained to find shorter proofs. While the talk was mainly tongue-in-cheek, as such things go, what made it humorous was precisely how true it was. This set me to thinking about something that Hayes' talk seemed to leave wide open: what jokes can we make about the training of computer scientists, and those in AI in particular? I spend far too much time thinking about jokes, I suspect, but this thinking quickly led me to an obstacle. A person who studies philosophy is called a "philosopher," a person who studies mathematics is called a "mathematician," and a person who studies computer science is called a "computer scientist." What do we call a person who studies artificial intelligence? Using the grammatical rules that appear to govern the three examples here, we get "artificial intelligencer," "artificial intelligencian," or "artificial intelligentist." At AAAI-2000 in Orlando, I recall seeing promotional material for the conference that read, "Hey AI scientist!" I don't think AI can proceed until we finally decide what to call ourselves. "AI scientist" evokes images of manqué scientists like "political scientist" or "social scientist." This, of course, is a problem with the name of our parent field as well, and not an easy one to solve. Rather than attempt to solve it here for the benefit of the four people who read this column, I will simply leave it open as an important path for future research in our field, and probably a major government funding program. Returning then to the initial problem, how would we characterize the basic nature of an artificial intelligencian's education? As computerists, we begin by inheriting a slight inferiority complex with respect to the other sciences, since we are often considered to be less than a "true" science; there is, after all, no Nobel Prize in computer science. As a result, one common element of our training is denying that we did any programming. Some take this training as an offensive weapon as well, and accuse others of having done no more than write a program. …
Letter from the chair. J. Bradshaw. 2001. doi:10.1145/376451.376457

At long last, the completely redesigned and enhanced SIGART site is up and running. And are we glad! This morning the SIGART Board got the following email message from Chris Welty, our intelligence editor, who with his students first got the site up and running several years ago and has been maintaining it ever since: "Wow, are we lucky. There was a lightning strike and power outage yesterday, and the old sigart.acm.org, an 8-to-10-year-old Sparc II, did not survive the incident as it had so many times before. It has served us well (moment of silence). I believe we are close to operational on the new Web site, and not a moment too soon." Indeed, Eric Wilson, our SIGART Information Director, had recently transferred all the information off the old machine and onto the ACM server that we'll be operating with from now on. In addition to Eric Wilson, Chris Welty, and the members of the SIGART Board, we owe thanks to Bill Smith and Katrina Brehob of DiamondBullet, who designed and implemented the site. Among other features, it includes a very nice way for you to submit announcements for your AI-related events or resources. Once you've entered information about your announcement, it will be automatically sent to Amruth Kumar, who is serving as moderator. Now that the site is up, the focus for the next few months will be to increase the content and features available to members. We welcome your feedback and comments, and hope you will visit often!
Point of view: Lisp as an alternative to Java. E. Gat. December 2000, pp. 21–24. doi:10.1145/355137.355142

In a recent study, Prechelt (1999) compared the relative performance of Java and C++ in execution time and memory usage. Unlike many benchmark studies, Prechelt compared multiple implementations of the same task by multiple programmers in order to control for the effects of differences in programmer skill. Prechelt concluded that "as of JDK 1.2, Java programs are typically much slower than programs written in C or C++. They also consume much more memory." We repeated Prechelt's study using Lisp as the implementation language. Our results show that Lisp's performance is comparable to or better than C++'s in execution speed; it also has significantly lower variability, which translates into reduced project risk. Furthermore, development time is significantly lower and less variable than with either C++ or Java. Memory consumption is comparable to Java's. Lisp thus presents a viable alternative to Java for dynamic applications where performance is important.

Experiment. Our data set consists of 16 programs written by 14 programmers. (Two programmers submitted more than one program, as was the case in the original study.) Twelve of the programs were written in Common Lisp (Steele 1990), and the other four were in Scheme (ACM 1991). All of the subjects were volunteers recruited from an Internet newsgroup. To the extent possible, we duplicated the circumstances of the original study: we used the same problem statement (slightly edited but essentially unchanged), the same program input files, and the same kind of machine for the benchmark tests, a SPARC Ultra 1. The only difference was that the original machine had 192 MB of RAM and ours had only 64 MB; however, none of the programs used all the available RAM, so the results should not have changed. Common Lisp benchmarks were run using Allegro CL 4.3; Scheme benchmarks were run using MzScheme (Flatt 2000). All the programs were compiled to native code.

Figure 1: Experimental results. The vertical lines, from left to right, indicate the 10th percentile, the median, and the 90th percentile. The hollow box encloses the 25th to 50th percentile. The thick grey line is the width of two standard deviations, centered on the mean.
Curriculum descant: How much programming? What kind? Deepak Kumar. December 2000, pp. 15–16. doi:10.1145/355137.355140

Designing programming assignments for an artificial intelligence (AI) course presents several challenges. How much programming should there be in an AI course? What kinds of programming assignments should one design? What programming languages or platforms should one use? Are the students sufficiently prepared? For anyone looking for answers in this column, here is the punch line: it depends. It depends on the kind of course you are planning, where it fits in your curriculum, what students expect of your course, and what the rest of your department perceives the course to be. I will attempt to highlight some of the major concerns here, which will hopefully bring some awareness to these important pedagogical issues the next time you plan your course. The amount of programming included in the course depends on the level and the department where it is offered. An introductory AI course may have no programming component at all. At the other end of the spectrum, it can be offered as a course with a heavy programming component. If the course is offered outside a computer science program, it is unlikely to have any computer programming. However, even in a course offered in a computer science program, the amount of programming required of students varies. In most cases, you might encounter anywhere from two to eight assignments in an AI course, not all of which might involve programming. This leads to the next question: what kinds of programming assignments should you design? In thinking about the kinds of programming assignments, you have several choices, some of which depend on your own pedagogical objectives. The dividing line here lies between a choice of implementing "tools" versus implementing "applications." For some instructors it is important to expose their students to the specialized algorithms embedded inside most AI tools: for example, learning and implementing pattern matching and unification, modeling a back-propagation neural network, or implementing a natural language parser of a specific kind. Some AI instructors like to use the programming exercises as a vehicle for teaching complex programming techniques, and exercises like those mentioned earlier serve that purpose well. Algorithms embedded in tools tend to be quite complex and are a good way of improving students' programming skills. In exercises that involve implementing complete applications, the amount of programming can also vary. Sometimes, as in implementing game-playing programs, implementation involves a fair amount of programming. In …
{"title":"Backtracking: and the winner is...","authors":"Chris Welty","doi":"10.1145/355137.355145","DOIUrl":"https://doi.org/10.1145/355137.355145","url":null,"abstract":"","PeriodicalId":8272,"journal":{"name":"Appl. Intell.","volume":"161 1","pages":"57"},"PeriodicalIF":0.0,"publicationDate":"2000-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84470724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conference review: the 2000 SIGART/AAAI doctoral consortium. M. Bienkowski. December 2000, pp. 39–47. doi:10.1145/355137.355143

The fifth annual SIGART/AAAI Doctoral Consortium was held in August 2000 during the 17th National Conference on Artificial Intelligence, sponsored by the American Association for Artificial Intelligence (AAAI). At the consortium, doctoral students in artificial intelligence (AI) presented their proposed research and received feedback from a panel of researchers and other students. This provided the students with exposure to outside perspectives on their work at a critical time in their research and allowed them to explore their career objectives. Free-ranging discussion sessions were also held, covering topics such as the relative benefits of academic versus industry careers, proposal writing, balancing research and teaching, and resisting pressure to leave school without finishing their doctorates. The students also participated in the student poster session, held during the AAAI-2000/IAAI-2000 Technical Paper Poster Session, and attended social events with the panelists. Altogether, the intensive two-day event and continuing contact during the AAAI conference afforded great opportunities for networking and getting to know peers. Twelve students, four women and eight men, presented their work (gender was not considered by the review committee). Nine attend universities in the United States, one in Taiwan, and two in Canada. Their research represents a variety of subfields of AI, ranging from machine learning techniques to knowledge representation. In keeping with the move to integrate AI basic research and applications (highlighted by the merging of AAAI and IAAI), two students presented research focused on applications.

Panelists. Six distinguished panelists participated in the consortium. Feedback from the students showed that they found the panelists' comments and discussion to be valuable and constructive. The panelists were Marie …

Reviewing. The 12 participants were chosen from 14 submissions. Students were selected who had settled on their thesis focus but who still had significant research remaining. They were selected on the basis of the clarity and completeness of their submission, their advisor's letter, and other evidence of promise such as published papers and technical reports. Although unusually low in number, the submissions were of very high quality. The review committee consisted of …
Links: information retrieval. Syed S. Ali and S. McRoy. December 2000, pp. 17–19. doi:10.1145/355137.355141

An information retrieval (IR) system informs the user about the existence and whereabouts of documents or data relating to a query made by the user. Traditional methods for automated information retrieval are largely based on searching and indexing techniques performed by people (such as librarians). Figure 1 illustrates the operation of a generic IR system. In Figure 1, the user enters a query (in this example, a Boolean query that asks the IR system to find documents that contain the phrase "information retrieval" as well as the word "resources"). The user query may be processed (for example, to convert the plural "resources" to the singular "resource") and matched against a database of documents that have been preprocessed in order to speed matching. The database can be a local document collection or a collection of networked documents, such as those on the World Wide Web (WWW). The output of the IR system is typically a ranked list of documents. Some IR systems may provide an option for user feedback, such as asking users to give their opinions on the quality of the matches, and can use this feedback to improve the quality of the search. Increased capabilities of computer hardware and software have created a vast body of machine-readable resources. Typically there is no lack of available information; more often, users, seeking needles in haystacks, are overwhelmed by the quantity of irrelevant information. Often this is caused by a poor query (too vague or too generic; for example, try searching for "computer science"). Even with a well-formulated, specific query (such as the one in Figure 1), results can be poor (for example, Google.com returned as one match a document titled "Distributed Information Search and Retrieval for Astronomical Resource Discovery and Data Mining"). The popularity of the Web has spurred enormous growth in the number and types of available resources. Many networked information retrieval (NIR) tools can be used to search the Web and provide information on demand to unsophisticated end users. Search engines are a simple example; typically they make use of a program (called a spider) that traverses the Web and creates databases of the keywords in each Web page (allowing fast, local retrieval of these resources). IR systems, such as search engines, are most useful when the user makes a precise query and has a clear idea what …