User-friendly Switches and Secure Non-contact Switches for Universal Design
S. Serikawa. DOI: 10.12792/icisip2021.001

A switch is a typical input interface, and switches come in many types according to purpose and preference. This talk introduces a film-type touch-panel switch that can be bent and whose switch elements can be added or removed. Because the switch is made of a thin film, it can be formed into various shapes and easily painted. Furthermore, even if part of the touch switch is replaced with a push button or a toggle switch, it still operates correctly. This makes it easy to create an arbitrary switch that matches the preferences required by the user. At the same time, demand for non-contact switches is increasing, but a non-contact switch sometimes malfunctions when a person merely approaches it. I therefore propose a new signal code for non-contact operation, entered by a waving action. Using this code, multiple operations can be performed easily and without malfunction.
Monitoring a Herd of Pastured Cattle Using Mobile Sensor Networks
Geunho Lee, Kouki Ogata, Kotatsu Okabe, R. Aizawa, Seiya Sakaguchi. DOI: 10.12792/icisip2021.031

In this paper, we tackle the problem of monitoring a herd of pastured cattle by employing sensors that self-organize their network while adapting to topological changes. Our challenge is to exploit locally communicative interactions based on relative received signal strength under minimal conditions, such as locality, state memory, and implicit coordination. The solution approach achieves network redundancy by selecting specific neighboring sensors with high connectivity, which in turn determines favorite relations within the cattle herd. The solution is verified through extensive simulations featuring self-organization, topological adaptation, and self-healing. Moreover, the effectiveness of the self-organization is demonstrated on a ranch in Miyazaki using five cows fitted with sensor tags. Our approach is effective for the secure, adaptive self-organization of mobile sensor networks in real-world applications.
Person Anomaly Detection based on Autoencoder with Obrid-Sensor
Taiki Sunakawa, Y. Horikawa, A. Matsubara, S. Nishifuji, Shota Nakashima. DOI: 10.12792/icisip2021.020

This paper explores a novel fall-detection method, aimed at elderly people, that can be trained easily using an autoencoder. The classifier achieves an accuracy of 98.7%, 2.1 points higher than the conventional method. In this method, the Obrid-Sensor acquires brightness information, and that information is used to detect whether a person is in a falling state while protecting privacy. The conventional method, by contrast, uses a classifier built with a Support Vector Machine, which requires training data for the falling state as well as the standing state. The proposed method requires 78% less training data than the conventional method and uses only standing-state data for training.
Estimation of Hand Posture with Straight Line Detection for a Hand Pose Rally System
Ayumu Meiji, A. Suganuma. DOI: 10.12792/icisip2021.034

There is an event called a "stamp rally" in which participants collect stamps at checkpoints. The event requires rally tools, and a participant who loses his/her tools will find it difficult to continue the rally. We are developing a hand pose rally system, which is a kind of gesture interface. Our system identifies an individual by the posture of the participant's hand as captured by a USB camera. By bending and extending the five fingers, 32 types of hand posture can be expressed, and the system estimates which of the 32 postures the participant has presented. We have been developing a posture estimation method, but its accuracy was poor for some hand postures. In this paper, we focus on the posture estimation part of the hand pose rally system and consider a method to improve the estimation accuracy.
Accurate Heart Rate Measuring from Face Video Using Efficient Pixel Selection and Tracking
Mikiya Koike, Satoru Fujita. DOI: 10.12792/icisip2021.003

As the coronavirus (COVID-19) spreads around the world, we are increasingly conscious of our health on a daily basis. This paper focuses on heart rate monitoring via remote measurement as a vital indicator of health status. Remote photoplethysmography (rPPG) is a well-known remote monitoring technique for calculating heart rate from face videos. Because rPPG analyzes small changes in color and motion, physical factors (e.g., breathing and posture adjustments) and environmental factors (e.g., illumination and shade) make it difficult to measure heart rate precisely. To resolve these challenges, this paper proposes a system that effectively combines the following methods: 1) the Lucas-Kanade method to dynamically track each skin pixel, 2) selection of pixels that are not affected by environmental fluctuations in light and shade, 3) refinement of the heart rate signal from noisy to precise data to improve accuracy, and 4) the Fast Fourier Transform (FFT) to estimate the main frequency of the signal and determine the heart rate. Experimental results showed a mean absolute error (MAE) of 3.4 bpm over 72 face videos.
Recognition of Lane Markings in Factories and Self-position Estimation Method Using AR Markers
Kento Hisanaga, Shiyuan Yang, S. Serikawa. DOI: 10.12792/icisip2021.036

In recent years, many unmanned transfer robots have been introduced in factories and warehouses. Functions such as self-position estimation are indispensable if automated guided vehicles are to operate freely. In this research, we propose a robot control method that estimates self-position within a factory using lane markings, and we verify the measurement accuracy of the system in real time. A camera image is used to read AR markers and lane markings, calculate the distance between the camera and the lane markings, and estimate the self-position. Lane markings and AR markers are photographed horizontally with the camera, and the distance is calculated from the position and tilt of the lane markings in the image. When an AR marker is detected, the camera is calibrated to calculate the distance and angle, and the self-position is estimated by comparison with the actual coordinates. In the distance measurement experiment, the distance was calculated while a 30 mm thick piece of paper standing in for a lane marking was gradually brought closer to the stationary camera; the calculation was run twice, with the camera oriented horizontally and diagonally with respect to the lane marking. In the self-position estimation experiment using AR markers, we built a model resembling a factory passage, placed cameras at multiple points, and measured the error against theoretical values. To express the self-position, we assigned x- and y-axes to the model in the real coordinate system and represented positions in two dimensions. In both experiments, to verify accuracy, 100 consecutive data points were acquired at each point and their variability was investigated.
Optimal Micromanipulator Design for Gathering Magnetic Beads
Shunsuke Tanaka, Ryosuke Ichikawa, Y. Yodo, M. Akiyama. DOI: 10.12792/icisip2021.026

In this study, we evaluated the optimal coil geometry for collecting magnetic beads. In previous research, we fabricated a square-shaped coil and confirmed that magnetic beads were collected at the center and at the corners. This result was not satisfactory, because we did not intend for beads to be collected at the corners of the coil. Therefore, to evaluate the optimal layout, we fabricated and evaluated round, square, and triangular coils. As a result, we confirmed that the round coil collected magnetic beads at the center more efficiently than the other coils.