Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00113
Title: Verification of Loss Cut Effect in Scenario-tree-type Multi-period Probability Planning Model
Authors: Kento Ohshima, T. Hasuike
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: In this study, we add a new loss-cut constraint to the scenario-tree-type multi-period stochastic programming model used in conventional portfolio theory when a portfolio is held over multiple periods, and examine the effect of loss-cutting. Specifically, we compare the return, risk, and Sharpe ratio before and after adding the loss-cut constraint and examine how it affects the objective function value. Assuming that stock prices follow a geometric Brownian motion, we build a scenario tree from the simulated paths. The portfolio holding period is three periods, and the scenario tree has four branches in each period. We then assign an occurrence probability to each terminal node of the tree, assumed to follow a uniform distribution: uniform random numbers are generated, summed over all nodes, and each node's value is divided by that sum so that the probabilities sum to one. Using these simulation results, we implement the scenario-tree-type multi-period stochastic programming model and obtain the objective function value. We then define and implement the loss-cut constraint, recompute the objective function value, and verify how the return, risk, and Sharpe ratio change before and after the constraint is added. The experimental results show that both the return and the risk may increase or decrease depending on the loss-cut price. They also show that the Sharpe ratio improves for certain loss-cut prices, which verifies the effectiveness of the proposed method.
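The scenario-construction step described in this abstract can be sketched in a few lines. All parameters below (drift, volatility, initial price) are illustrative assumptions, not values from the paper; only the tree shape (four branches, three periods) and the uniform-draw probability normalization come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GBM parameters: drift, volatility, initial price, time step.
mu, sigma, s0, dt = 0.05, 0.2, 100.0, 1.0
branches, periods = 4, 3

# One GBM price step: S_{t+1} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
def gbm_step(s_t):
    z = rng.standard_normal(s_t.shape)
    return s_t * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

# A 4-branch, 3-period tree has 4**3 = 64 terminal nodes (scenarios).
n_leaves = branches ** periods

# Terminal prices: apply one GBM step per period along each scenario path.
prices = np.full(n_leaves, s0)
for _ in range(periods):
    prices = gbm_step(prices)

# Node probabilities as in the abstract: draw uniforms, then divide each
# draw by the total so the probabilities sum to one.
u = rng.uniform(size=n_leaves)
probs = u / u.sum()

expected_terminal_price = float(np.dot(probs, prices))
```

A loss-cut constraint would then be imposed on top of this tree inside the optimization model, which is beyond this sketch.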
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00176
Title: On a Preliminary Implementation of Enemy Agent on a City Development Simulation Game
Authors: S. Sakata, Naoki Fukuta
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: This paper describes our ongoing implementation of an enemy agent in a city development game in which players freely build a city. It also explains how to reproduce competitive or cooperative city development, which is important for simulating real-world city development in a game.
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00057
Title: Topic-model based Estimation of Passive Twitter-User's Interests from Followed Users' Tweets
Authors: Tessai Hayama, Qi Zhang
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: User modeling based on the content of social networking services has been developed to recommend information matching each user's preferences. Most previous studies have analyzed active users' tweets to estimate their interests. However, there is also a substantial number of passive users who do not tweet and only gather information, and little research has addressed interest estimation for them because there are few clues to their interests. In this study, we developed a method that estimates a passive Twitter user's interests from the tweets of the users they follow, by applying an interest-topic extraction method designed for active users. In our evaluation, we confirmed the effectiveness of the proposed method by comparing it with simple topic extraction methods on data with interest-topic annotations from 12 users.
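The core idea — pooling the tweets of followed users and reading off a topic distribution for the passive user — can be sketched with an off-the-shelf topic model. This is an assumption-laden illustration: scikit-learn's LDA stands in for the paper's unspecified topic model, and the corpus is invented.

```python
# Sketch: infer a passive user's topic mixture from followed users' tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented toy corpus standing in for tweets of the followed accounts.
followed_tweets = [
    "neural network training on new gpu hardware",
    "deep learning model beats image benchmark",
    "marathon training plan for beginners",
    "best running shoes for long distance",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(followed_tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Pool all followed users' tweets into one pseudo-document representing
# the passive user, then read off its topic distribution.
pooled = vec.transform([" ".join(followed_tweets)])
topic_dist = lda.transform(pooled)[0]
```

The resulting `topic_dist` is a probability vector over topics; the paper's method presumably maps such a vector to interest topics, which this sketch does not attempt.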
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00070
Title: A Dynamic Embedding Method for Passenger Flow Estimation
Authors: W. Chung, Yen-Nan Ho, Yu-Hsuan Wu, Jheng-Long Wu
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: Many studies have used embedding methods to represent traffic flow information as high-dimensional embeddings. Recently, advances in transfer learning have enhanced the performance of downstream learning tasks: information such as location, timestamp, and distance is used to train a static embedding in a feature space, which can then be transferred to a downstream task to improve performance. However, many factors affect traffic flow prediction, so more diverse traffic information needs to be considered in the pre-trained embedding model. If embeddings can be obtained dynamically to generate useful features representing a mass rapid transit (MRT) station, those features can enhance the passenger flow prediction performance of a downstream task. This paper therefore proposes a dynamic pre-trained embedding model based on Bidirectional Encoder Representations from Transformers (BERT) that represents station status and learns from traffic information in its geographical relations, addressing the problem that a fixed pre-trained embedding cannot generate diverse features across different times and stations. The pre-trained model considers time and distance simultaneously, and its weights are transferred to the downstream passenger flow estimation model to generate dynamic station embeddings. The performance of MRT station passenger flow estimation using dynamic station embeddings is significantly improved.
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00165
Title: An Examination of Learning 3D Modeling Software and Creating 3D Data Using Online Video in a 3D Printer Workshop for High School Students
Authors: Shoko Usui, Yoko Noborimoto, Yosuke Watanabe, H. Furukawa
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: In the educational use of 3D printers, a key issue is how to support students in learning 3D modeling software. However, students have few opportunities to practice operating such software, especially in primary and secondary education in Japan. To gain insight into how to support students in learning to operate 3D modeling software, we used online video and examined whether students could create 3D data similar to their design sketches. Specifically, we measured how long it took to create 3D data in a 3D printer workshop for five high school students. We found that, using online video, all of the students could create 3D data with almost the same shape as their design sketches. The students took 2 to 5 hours to create their first 3D data, and all of them completed the second and subsequent 3D data in much less time than the first.
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00126
Title: Risk Management of Silent Cyber Risks in Consideration of Emerging Risks
Authors: Ryuya Mishina, S. Tanimoto, Hideki Goromaru, Hiroyuki Sato, Atsushi Kanai
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: In recent years, new cyber attacks such as targeted attacks have caused extensive damage. With the continuing development of the IoT society, various devices are now connected to the network and used for diverse purposes. The Internet of Things has the potential to link cyber risks to actual property damage, because risks in cyberspace are connected to physical space. With this increase in unknown cyber risks, demand for cyber insurance is growing. One of the most serious emerging risks is silent cyber risk, and it is likely to increase in the future. However, security measures against silent cyber risks are currently insufficient. In this study, we conducted risk management of silent cyber risk for organizations, with the objective of contributing to the development of risk management methods for new cyber risks that are expected to increase in the future. Specifically, we modeled silent cyber risk by focusing on state transitions to different risks. We newly defined two types of silent cyber risk, Alteration risk and Combination risk, and conducted a risk assessment. The assessment identified 23 risk factors, and our analysis found that all of them were classified as risk transference. We clarified that the most effective countermeasure for Alteration risk is insurance, while for Combination risk it is measures that reduce the impact of the risk factors themselves. Our evaluation showed that silent cyber risk could be reduced by about 50%, demonstrating the effectiveness of the proposed countermeasures.
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00120
Title: An Analysis on Pharmaceutical Patent Applications and Grants in India: Mega-pharma shifts its strategies toward India
Authors: Yaeko Mitsumori
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: The Indian pharmaceutical industry ranks third globally in terms of volume. Under the World Trade Organization's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), which entered into force in 1995, India revised its patent law in 2005 and reintroduced product patents. Mega-pharma companies that had left India in the 1970s re-entered between the late 1990s and early 2000s, engaging in R&D activities and filing patent applications with the Indian Patent Office. As a result, the number of patent applications in India rose sharply in the mid-2000s; however, it has been declining since then. This study analyzed the transition in mega-pharma companies' business strategies toward India using both the Indian Patent Office's database and Clarivate's Derwent World Patents Index (DWPI).
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00079
Title: Deepfake Detection Using Machine Learning Algorithms
Authors: M. Rana, B. Murali, A. Sung
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: Deepfake, a new video manipulation technique, has drawn much attention recently. Among its unlawful or nefarious applications, Deepfake has been used to spread misinformation, foment political discord, smear opponents, and even blackmail. As the technology becomes more sophisticated and the apps for creating Deepfakes ever more available, detection has become a challenging task, and researchers have accordingly proposed various deep learning (DL) methods for it. Although the DL-based approach can achieve good results, this paper presents the results of our study indicating that traditional machine learning (ML) techniques alone can obtain superior performance in detecting Deepfakes. The ML-based approach follows the standard steps of feature development and feature selection, followed by training, tuning, and testing an ML classifier. Its advantage is better understandability and interpretability of the model, with reduced computational cost. We present results on several Deepfake datasets, obtained relatively fast and with performance comparable or superior to state-of-the-art DL-based methods: 99.84% accuracy on FaceForensics++, 99.38% on DFDC, 99.66% on VDFD, and 99.43% on Celeb-DF. Our results suggest that an effective system for detecting Deepfakes can be built using traditional ML methods.
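The pipeline this abstract names — feature development and selection, then train/tune/test a classical classifier — reduces to a short skeleton. Everything below is synthetic and illustrative: the features, labels, and the choice of random forest are stand-ins, not the paper's actual features or classifier.

```python
# Illustrative ML-detection skeleton on synthetic data (not the paper's code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 20))             # stand-in frame-level features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in real/fake labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature selection: keep the 5 features most associated with the label.
sel = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)

# Train and evaluate a classical classifier on the selected features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(sel.transform(X_tr), y_tr)
acc = accuracy_score(y_te, clf.predict(sel.transform(X_te)))
```

In a real setting, hyperparameter tuning (e.g. grid search with cross-validation) would sit between feature selection and the final fit.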
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00154
Title: A Purchasing Prediction Model Considering the Time Consumers Spend on a Site and Consumers Characteristics (Second Report)
Authors: Yuto Fukui, Tomoaki Tabata, Takaaki Hosoda
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: With the proliferation of the Internet, retailers obtain large amounts of big data, such as access logs and customer attributes, from their daily online interactions with customers. Using these data, retailers can understand the characteristics of the customers who visit their sites and tailor their marketing strategies accordingly. Specifically, by building a purchase prediction model, which predicts which visitors will make a purchase and which will not, it is possible to understand what factors influence customer purchases. Traditionally, such models have been built from data such as POS records and customer attributes, focusing only on the resulting purchases. Because those models do not account for the process by which a customer makes a purchase, they cannot capture what the customer was considering when making it. In e-commerce, by contrast, data on the purchase process can be obtained, such as how much time was spent on which product and which products were browsed before purchase. Using such features in the model allows a more precise understanding of the factors influencing purchases. The purpose of this study is to construct a purchase prediction model that incorporates variables representing the time customers spend on the site, the time spent browsing products, and the bias in time spent across the products browsed, and to obtain each feature's contribution to the prediction results to help formulate marketing strategies.
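A minimal sketch of the kind of model this abstract describes, on synthetic data: session-level dwell-time features feed a logistic regression, whose coefficients give the per-feature contributions the study is after. The feature definitions and label-generating process below are invented assumptions, not the paper's.

```python
# Purchase-prediction sketch with synthetic session data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
time_on_site = rng.exponential(10, n)                 # total session time
browse_time = time_on_site * rng.uniform(0.2, 0.9, n) # time on product pages
dwell_bias = rng.uniform(0.1, 1.0, n)  # concentration of time on few products
X = np.column_stack([time_on_site, browse_time, dwell_bias])

# Synthetic label: longer, more focused sessions purchase more often.
p = 1 / (1 + np.exp(-(0.1 * browse_time + 2 * dwell_bias - 3)))
y = (rng.uniform(size=n) < p).astype(int)

model = LogisticRegression().fit(X, y)
coefs = model.coef_[0]  # per-feature contribution to the purchase log-odds
```

The paper measures feature contributions; with a linear model those are the coefficients, while a tree-based model would need something like permutation importance or SHAP values instead.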
Pub Date: 2021-07-01 | DOI: 10.1109/iiai-aai53430.2021.00162
Title: Universal Simulation System by Learning from Historical Data of Agricultural Pest Occurrence
Authors: Noriko Horibe, Yuuto Kai, Koji Yamauchi, M. Komatsu, Takuya Matsunaga, Keisuke Noguchi, S. Aoqui
Venue: 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI)

Abstract: In agricultural management, harmful pest occurrences are a serious obstacle to stable farmer income. Speedy and appropriate pest control is necessary to minimize pest damage. However, such control is difficult to realize because many traditional methods essentially require experts or high-cost systems. In this research, we propose a universal simulation system as one solution to this problem. Because the system is intended to apply to various kinds of pests, it is important to develop a technology that can realize such systems rapidly and at low cost. We propose a method to generate pest models, one of the most important components of pest occurrence simulation systems. Weather information and past pest occurrence data are processed by the machine learning algorithm C4.5 to find hypotheses representing the relationship between them. Each pest model is automatically generated from these hypotheses, and the model is refined by comparing its behavior with real cultivation experiments.
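The learning step in this abstract — inducing occurrence rules from weather and past pest data with C4.5 — can be approximated as follows. Assumptions: the data are synthetic, the toy outbreak rule is invented, and scikit-learn's entropy-criterion decision tree is a stand-in for C4.5 (which scikit-learn does not implement exactly).

```python
# C4.5-style rule induction sketch on synthetic weather/pest data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(5, 35, n)        # daily mean temperature (deg C)
humidity = rng.uniform(20, 100, n)  # relative humidity (%)
X = np.column_stack([temp, humidity])

# Hypothetical toy rule: outbreaks occur on warm, humid days.
y = ((temp > 20) & (humidity > 70)).astype(int)

# Entropy criterion mirrors C4.5's information-gain splitting.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3,
                              random_state=0).fit(X, y)
train_acc = tree.score(X, y)
```

The learned tree's paths (e.g. "temp > 20 and humidity > 70 → outbreak") are the kind of hypotheses the abstract says are turned into a pest model and then refined against real cultivation experiments.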