In this paper, we propose a new Hardware Transactional Memory (HTM) system for a shared-memory multiprocessor in which the processors are connected by a single common bus. One key feature of our system is a modified snoop cache protocol that reduces the overhead of transactional memory consistency control. By publishing all of the data modified in a transaction at once when the transaction commits, our system avoids the commit overhead that would otherwise arise from publishing (or writing back to main memory) each data item in the transaction sequentially. Another feature is the virtualization of a cache layer in the memory hierarchy. When a cache must replace a line that contains speculatively modified data, our system dynamically remaps the address of the line to another location in main memory and backs up the evicted data to a lower-level cache or to main memory. The backed-up data remains under transactional memory consistency control through our snoop cache protocol. By virtually enlarging the cache capacity in this manner, our system can support unbounded transactions whose size and duration are not limited by hardware resources.
{"title":"A Lazy-Updating Snoop Cache Protocol for Transactional Memory","authors":"Sekai Ichii, Atsushi Nunome, Hiroaki Hirata, Kiyoshi Shibayama","doi":"10.1109/IIAI-AAI.2014.134","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.134","url":null,"abstract":"In this paper, we propose a new Hardware Transactional Memory (HTM) system for a shared-memory multiprocessor in which elementary processors are connected by a single common bus. One of the key features of our system is a modified snoop cache protocol to reduce overheads on the transactional memory consistency control. By publishing all of modified data in a transaction at once when the transaction commits, our system avoids the overhead on the commit, which would arise from a sequential publication (or write-back to main memory) of each data item in the transaction otherwise. Another feature is a virtualization of a cache layer in the memory hierarchy. When a cache must replace a line which contains speculatively modified data, our system dynamically reallocates the address of the line to another location in main memory, and back up the evicted data to a lower layer cache or main memory. The backed-up data is still under the control of the transactional memory consistency through our snoop cache protocol. By enlarging a cache capacity virtually in this manner, our system can support unbounded transactions which are not limited by the hardware resources in the size and the duration.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127725103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.128
"An Analysis of the Relationship between a Write Access Reduction Method for NVM/DRAM Hybrid Memory with Programming Language Runtime Support and Execution Policies of Garbage Collection"
Gaku Nakagawa, S. Oikawa
There are several research projects on new-generation non-volatile memory (NVM), such as STT-MRAM, PCM, and ReRAM. Non-volatile main memory makes it possible to integrate secondary storage into main memory, which reduces I/O to slow block devices. At present, however, it is impossible to construct large-capacity main memory from a single NVM technology; DRAM must be combined with NVM, or one NVM with another, to construct a unified non-volatile main memory. Previous research discussed NVM/DRAM hybrid main memory architectures that combine PCM and DRAM. In our previous work, we proposed a method for managing NVM/DRAM hybrid main memory with programming language runtime support. Language runtimes, such as Java runtimes, have more detailed information about write accesses to data than the operating system has, so runtime support is useful for managing NVM/DRAM hybrid memory. In the proposed method, the runtime migrates objects between NVM and DRAM based on the characteristics of their write accesses, and it executes the migration during garbage collection. The performance of the proposed method therefore depends on the frequency of garbage collection. In this paper, we discuss and experimentally evaluate how the frequency of garbage collection affects the performance of the proposed method. The results show that the improved method cuts 91 percent of write accesses and 50 percent of DRAM usage.
{"title":"An Analysis of the Relationship between a Write Access Reduction Method for NVM/DRAM Hybrid Memory with Programming Language Runtime Support and Execution Policies of Garbage Collection","authors":"Gaku Nakagawa, S. Oikawa","doi":"10.1109/IIAI-AAI.2014.128","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.128","url":null,"abstract":"There are several research projects about new generation non-volatile memory (NVM), such as STT-MRAM, PCM and ReRAM. Non-volatile main memory makes it possible to integrate secondary storages in main memory. The integration enables to reduce I/O to slow block devices. It is, however, impossible to construct large capacity main memory with a single NVM in this point. It is required to combine DRAM and NVM or combine NVM and another NVM to construct unified non-volatile main memory. The previous researches discussed NVM/DRAM hybrid main memory architecture, which combine PCM and DRAM. In our previous work, we proposed a method to manage NVM/DRAM hybrid main memory with programming language runtimes supports. Language runtimes, such as Java runtimes, have more detailed informaion about write acceess to data than operating system has. The language runtime supports are useful to manage NVM/DRAM hybrid memory therefore. In the proposed method, the runtime migrates objects between NVM and DRAM based on the characteristics of write access. The language runtime executes the migration processes during garbage collection processes. The performance of the proposed method rely on the frequency of garbage collection. In this paper, we will discuss and do an experiment about how the frequency of garbage collection effects the performance of the proposed method. The results of the experiment shows that the improved method cut 91 percent of write access. The results also show that the improved method cut 50 percent of the usage of DRAM.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133241029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.97
"Compromising Strategy Using Weighted Counting in Multi-times Negotiations"
Masanori Ikrashi, K. Fujita
Bilateral multi-issue closed negotiation is an important class of real-life negotiation. Negotiation problems usually have constraints, such as a complex and unknown opponent utility in real time, or time discounting. Under such constraints, effective automated negotiation agents can estimate a model of their opponent from the opponent's proposals and the negotiation scenario. Recently, attention has focused on interleaving negotiation strategies with learning from past negotiation sessions: by analyzing previous sessions, agents can estimate their opponent's utility function from the exchanged bids. In this paper, we propose an automated agent that estimates its opponent's strategy based on past negotiation sessions. Our agent estimates the values of the opponent's issues using weighting functions over negotiation time, and from these estimated values it computes the opponent's utility. In addition, we apply the proposed estimation method to a compromising strategy, which serves as the basic strategy of our agent. In our experiments, we compared seven weighting functions to determine the most effective one, and we demonstrated that our proposed agent achieves better outcomes and searches the Pareto frontier more effectively than existing ANAC2013 agents. We also compared our proposed agent with the basic compromising strategy.
{"title":"Compromising Strategy Using Weighted Counting in Multi-times Negotiations","authors":"Masanori Ikrashi, K. Fujita","doi":"10.1109/IIAI-AAI.2014.97","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.97","url":null,"abstract":"Bilateral multi-issue closed negotiation is an important class of real-life negotiations. Usually, negotiation problems have constraints, such as a complex and unknown opponent's utility in real time or time discounting. In the class of negotiation with constraints, effective automated negotiation agents can estimate their opponent's model depending on the proposals of their opponents and the negotiation scenarios. Recently, the attention of this study has focused on interleaving learning with negotiation strategies from past negotiation sessions. By analyzing such previous sessions, agents can estimate their opponent's utility function based on exchanging bids. In this paper, we propose an automated agent that estimates its opponent's strategies based on past negotiation sessions. Our agent decides the estimated values of its opponent using effective weighted functions based on the negotiation time. By using the estimated values of each issue, our agent can calculate its opponent's utility. In addition, we employ the estimated method proposed in this paper to the compromise strategy, which is the agent of the basic strategy of our proposed agent. In our experiments, we compared seven different weighted functions to determine the most effective one. In addition, we demonstrated that our proposed agent has better outcomes and a greater search technique for the Pareto frontier than existing ANAC2013 agents. We also compared our proposed agent and the basic compromising strategy.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130318457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.62
"A Study of Continuous Usage Behavior for Online Stock Trading - Domestic Brokers of Securities Industry as Examples"
Chung-Hung Tsai, Dauw-Song Zhu, S. Wang, Lixia Jian
Considering trading costs and efficiency, more and more investors choose online stock trading, which is fast, convenient, and offers preferential fees. The cost of acquiring a new customer is five times that of retaining an existing one. This study therefore explores investors' continuous usage behavior of online securities trading systems as a reference for the securities industry in promoting and improving its business. The research model combines DeLone and McLean's Information System Success Model (D&M IS Success Model) with the Technology Acceptance Model (TAM), and the variable "Habit" was derived from a focus group to complete the model. The study collected 340 valid samples from users of opened and active domestic securities e-trading accounts, and applied structural equation modeling to analyze the data and test the hypotheses. Based on the empirical analysis, this study has four conclusions: 1. In the overall model, among the exogenous variables, "Habit" has the strongest effect on "Continuous Usage Behavior" of online stock trading, followed by "System Quality", which reveals that "Habit" greatly influences "Continuous Usage Behavior". 2. Among the three quality constructs, "System Quality" is the most important factor affecting "Continuous Usage Behavior"; it is also the only significant factor affecting "Perceived Usefulness" and "User Satisfaction", which shows that "System Quality" is the major requirement of continuing users. 4. We demonstrate not only that "User Satisfaction" positively affects "Using Intention" and in turn "Continuous Usage Behavior", but also, most importantly, that "Habit" positively affects "Using Intention" and in turn "Continuous Usage Behavior".
{"title":"A Study of Continuous Usage Behavior for Online Stock Trading - Domestic Brokers of Securities Industry as Examples","authors":"Chung-Hung Tsai, Dauw-Song Zhu, S. Wang, Lixia Jian","doi":"10.1109/IIAI-AAI.2014.62","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.62","url":null,"abstract":"Under the considerations of trading costs and efficiency, more and more investors choose the way of online stock trading which is fast, convenient, and has preferential fee. The cost of establishing a new customer is five times more than maintaining an old one. As a result, to explore the investors choose continuous usage behavior of online security trading system as reference for security industry to promote and improve relevant business in the future. The model of this study is developed by combining DeLone and McLean's Information System Success Model (D&M IS Success Model) and Technology Acceptance Model (TAM). The variable of \"Habit\" was collected from \"Focus Group\" to develop research model. This research consists of 340 valid samples who using domestic opened and operative ones of security e-trade accounts. Then we apply \"Structural Equation Modeling\" to analyze the data and test hypothesis. Depending on the analysis of real evidences, this study have four conclusions: 1. In overall model, according to exogenous variables, \"Habit\" is the most effective to \"Continuous Usage Behavior\" of online stock trading, and followed by \"System Quality\". It reveals that \"Habit\" have greatly influence on \"Continuous Usage Behavior\". 2. In three quality contents, \"System Quality\" is the most important factor affecting \"Continuous Usage Behavior\", it is also the only significant factor affecting \"Perceived usefulness\" and \"User Satisfaction\", it shows \"System Quality\" is the major requirement of continued users. 4. We demonstrate that not only \"User Satisfaction\" positively affects \"Using Intention\" and further affects \"Continuous Usage Behavior\" in this study, but also the most important discovery is \"Habit\" positive affects \"Using Intention\" and further affects \"Continuous Usage Behavior\".","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133716676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.45
"Towards Activity Recognition of Learners by Kinect"
T. Kamizono, H. Abe, K. Baba, S. Takano, K. Murakami
Understanding the states of learners during a lecture is useful for improving the quality of the lecture. Kinect, a video camera with an infrared sensor, has been widely studied and has proved useful for some kinds of activity recognition. However, learners in a lecture usually make only small movements. This paper evaluates Kinect for activity recognition of learners. The authors considered four activities for detecting the state of a learner and collected data for these activities with a Kinect. They applied the k-nearest neighbor algorithm to the collected data and obtained an activity recognition accuracy of 0.936. The result shows that Kinect is also applicable to activity recognition of learners in a lecture.
{"title":"Towards Activity Recognition of Learners by Kinect","authors":"T. Kamizono, H. Abe, K. Baba, S. Takano, K. Murakami","doi":"10.1109/IIAI-AAI.2014.45","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.45","url":null,"abstract":"Understanding the states of learners at a lecture is useful for improving the quality of the lecture. A video camera with an infrared sensor Kinect has been widely studied and proved to be useful for some kinds of activity recognition. However, learners in a lecture usually do not act with large moving. This paper evaluates Kinect for use of activity recognition of learners. The authors considered four activities for detecting states of a learner, and collected the data with the activities by a Kinect. They applied K-nearest neighbor algorithm to the collected data and obtained the accuracy 0.936 of the activity recognition. The result shows that Kinect is applicable also to the activity recognition of learners in a lecture.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115251515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.76
"Instructional Design of a Workshop 'How a Computer Works' Aimed at Improving Intuitive Comprehension and Motivation"
Susumu Yamazaki, Takashi Satoh, T. Jiromaru, Nobuyuki Tachi, M. Iwano
After teaching and observing students for several years, we hypothesize that learning programming is difficult for students who cannot concretely imagine how a computer works, that is, the process by which the CPU accesses memory and I/O via the bus according to the coded program. In this paper, we discuss why we believe it is important for programming education to help students understand how a computer works. We have developed a workshop to help students understand this more intuitively. We surveyed the students to assess their perceptions of the workshop, and we discuss its further development and progress toward use in a future full-scale course.
{"title":"Instructional Design of a Workshop \"How a Computer Works\" Aimed at Improving Intuitive Comprehension and Motivation","authors":"Susumu Yamazaki, Takashi Satoh, T. Jiromaru, Nobuyuki Tachi, M. Iwano","doi":"10.1109/IIAI-AAI.2014.76","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.76","url":null,"abstract":"After teaching and observing students for several years, we hypothesize that learning programming is difficult for students who cannot imagine concretely how a computer works, or the process by which the CPU accesses memory and I/O via the bus according to coded programs. In this paper, we discuss why we believe it is important for programing education to help students understand how a computer works. We have developed a workshop to help students understand this more intuitively. We surveyed the students to assess their perceptions of the workshop, and we discuss its further development and progress toward use in a future full-scale course.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115635101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.145
"Integrated Usage of Heterogeneous Databases for Novice Users"
Ayano Terakawa, T. Hochin, Hiroki Nomiya
This paper proposes a system that enables users to use heterogeneous databases in an integrated manner without any conversion or dedicated servers. In order to treat a variety of sources in a unified manner, the system accesses databases on the user's computer through Java Database Connectivity (JDBC), and joins and/or projects them as required. A syntax for identifying the kind of database or file is introduced. The system holds data in Java ArrayLists. It is experimentally shown that there is no practical problem in the equijoin of three tables, each with 100,000 rows, drawn from heterogeneous databases.
{"title":"Integrated Usage of Heterogeneous Databases for Novice Users","authors":"Ayano Terakawa, T. Hochin, Hiroki Nomiya","doi":"10.1109/IIAI-AAI.2014.145","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.145","url":null,"abstract":"This paper proposes the system enabling users to use heterogeneous databases in an integrated manner without any conversion and servers. In order to treat a variety of sources in a unified manner, this system is realized by using Java Database Connectivity (JDBC) in accessing databases on the user's computer. It also joins and/or projects them as required. The syntax identifying a kind of database or file is introduced. The system maintains data by using Array Lists in Java. It is experimentally shown that there is no practical problem in the equijoin of three tables having 100,000 rows in heterogeneous databases.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130874563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.42
"A Dynamic Query Optimization on a Sparql Endpoint by Approximate Inference Processing"
Yuji Yamagata, Naoki Fukuta
When retrieving Linked Open Data using SPARQL, it is important to construct an efficient query that takes its execution cost into account, especially when the query uses the inference capability of the endpoint. A query can consume an enormous amount of an endpoint's computing resources, since it is sometimes difficult to understand and predict what computations will occur on the endpoint. To prevent the execution of such time-consuming queries, approximating the original query can reduce the load on endpoints. In this paper, we present a preliminary idea and concept for building endpoints with a mechanism that automatically avoids an unwanted amount of inference computation by predicting its computational cost and transforming such a query into a speed-optimized one. Our preliminary experiment shows a potential benefit of the query-rewriting approach for speeding up query execution. We also present a preliminary prototype system that classifies whether a query execution is time-consuming or not by using machine learning techniques on the endpoint side.
{"title":"A Dynamic Query Optimization on a Sparql Endpoint by Approximate Inference Processing","authors":"Yuji Yamagata, Naoki Fukuta","doi":"10.1109/IIAI-AAI.2014.42","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.42","url":null,"abstract":"On a retrieval of Linked Open Data using SPARQL, it is important to construct an efficient query that considers its execution cost, especially when the query utilizes inference capability on the endpoint. A query often causes enormous consumption of endpoints' computing resources since it is sometimes difficult to understand and predict what computations will occur on the endpoints. Preventing such an execution of time-consuming queries, approximating the original query could reduce loads of endpoints. In this paper, we present a preliminary idea and its concept on building endpoints having a mechanism to automatically avoid unwanted amount of inference computation by predicting its computational costs and allowing it to transform such a query into speed optimized query. Our preliminary experiment shows a potential benefit on speed optimizations of query executions by applying query rewriting approach. We also present a preliminary prototype system that classifies whether a query execution is time-consuming or not by using machine learning techniques at the endpoint-side.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125847495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.107
"The Attractiveness of Facebook in Secondary Students in the Kingdom of Tonga and its Potential"
Hans Tobias Sopu, Y. Chisaki, T. Usagawa
Facebook® is a popular social network with approximately 1 billion registered users as of 2013. Developing countries are starting to take advantage of the free platform and capabilities that Facebook provides, and the island Kingdom of Tonga in the South Pacific is among them; its secondary school students are socializing through Facebook with curiosity and interest. In this paper, we report an online survey of how attractive Facebook is to high school students and teachers. As part of the future work, we also describe a Facebook module developed for the Moodle platform (version 2.4.3) to enhance online collaboration in Moodle for students in the Kingdom of Tonga.
{"title":"The Attractiveness of Facebook in Secondary Students in the Kingdom of Tonga and its Potential","authors":"Hans Tobias Sopu, Y. Chisaki, T. Usagawa","doi":"10.1109/IIAI-AAI.2014.107","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.107","url":null,"abstract":"Facebook® is a popular social network. There are approximately 1 billion registered users as of 2013. On the other hand, developing countries are starting to take advan- tage of the free platform and capabilities provided through facebook. The Island Kingdom of Tonga located in the South Pacific is also taking advantage of this social network. The secondary students of the Kingdom of Tonga are also curiously and interestingly socializing through facebook. In this paper, we observed through an online survey how much facebook is attracted in high school students and teachers. We also explained a facebook module for Moodle platform version 2.4.3 that was developed to enhance online collaboration in Moodle for students in the Kingdom of Tonga as part of the future works which is also mentioned.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125859304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-12-01. DOI: 10.1109/IIAI-AAI.2014.182
"Fundamental Study for Verbalization of Embodied Expertise based on Pattern Recognition"
H. Hojo, Nozomi Nomachi, Yutaro Tomoto, Tsuyoshi Nakamura, M. Kanoh, Koji Yamada
Embodied expertise, which expresses the skills of experts, is a kind of tacit knowledge that is difficult to transfer from one person to another by writing it down or verbalizing it. The aim of our study is to translate embodied expertise into explicit knowledge in the form of onomatopoeias. We call these "embodied expertise onomatopoeias"; they can enable people to understand the skills intuitively and easily. Acquiring embodied expertise onomatopoeias is treated as a pattern recognition problem. Our study focused on the skills of Pen Shodo, Japanese calligraphy using a pen, to translate this tacit knowledge into onomatopoeias, and investigated the possibility of constructing a training system for these skills.
{"title":"Fundamental Study for Verbalization of Embodied Expertise based on Pattern Recognition","authors":"H. Hojo, Nozomi Nomachi, Yutaro Tomoto, Tsuyoshi Nakamura, M. Kanoh, Koji Yamada","doi":"10.1109/IIAI-AAI.2014.182","DOIUrl":"https://doi.org/10.1109/IIAI-AAI.2014.182","url":null,"abstract":"Embodied expertise, which expresses skills of experts, is a kind of tacit knowledge that is difficult to transfer from one person to another by writing it down or verbalizing it. The aim of our study is to translate embodied expertise into explicit knowledge, i.e. onomatopoeias. We call the onomatopoeias \"embodied expertise onomatopoeias\", which can enable people to understand the skills intuitively and easily. Acquiring embodied expertise onomatopoeias is considered as a problem of pattern recognition. Our study focused on the skills of Japanese penmanship, Pen Shodo, which is Japanese calligraphy using a pen, to translate tacit knowledge into onomatopoeias and investigated the possibility of constructing a training system for these skills.","PeriodicalId":432222,"journal":{"name":"2014 IIAI 3rd International Conference on Advanced Applied Informatics","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125324175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}