In recent years, the introduction of business and service robots has progressed in Japan, and the number of unmanned transfer robots is increasing. We therefore focus on obstacle detection, an indispensable function for automated guided vehicles. Conventional methods for detecting obstacles include ultrasonic sensors, PSD sensors, and 2D-LIDAR, but these have disadvantages such as a narrow measurement range, susceptibility to disturbances, and a short mechanical life. To improve on these disadvantages, our laboratory has, in previous research, been developing an obstacle detection system using a line laser and a camera. In that method, a line laser irradiates the floor surface in front of the robot with a horizontal line, and obstacles are detected from changes in the line. The method improves on the conventional approaches and enables long-life, disturbance-resistant detection of obstacles over a wide area. However, it can detect only obstacles that lie on the laser line, and only the part of each obstacle that the line crosses. We therefore propose an obstacle detection system using a projector and a camera, with the aim of overcoming the disadvantages of both the conventional methods and the previous research. In this system, a projector irradiates the floor surface in front of the robot with multiple vertical and horizontal lines, and obstacles are detected from characteristic changes in those lines. We conducted three experiments to verify the advantages of the proposed system over the previous studies. The experiments confirmed that the proposed system improves on the disadvantages of both the conventional methods and the previous study.
{"title":"Proposal of a Method to Detect Obstacle Using Projector and Camera","authors":"Hayato Mizuno, Shiyuan Yang, S. Serikawa","doi":"10.12792/icisip2021.037","DOIUrl":"https://doi.org/10.12792/icisip2021.037","url":null,"abstract":"In recent years, the introduction of business and service robots has progressed in Japan, and the number of unmanned transfer robots is increasing. Therefore, I will focus on obstacle detection, which is an indispensable function for automated guided vehicles. Conventional methods for detecting obstacles include ultrasonic sensors, PSD sensors, and 2D-LIDAR, but they have disadvantages such as a narrow measurement range, being easily affected by disturbances, and having a long mechanical life. Therefore, as a previous research, our laboratory has been developing an obstacle detection system using a line laser and a camera in order to improve these disadvantages. In this method, a line laser is used to irradiate the floor surface in front of the robot with a horizontal line, and obstacles are detected from changes in the horizontal line. This method improves the disadvantages of the conventional method and enables the detection of obstacles with a long life that is not easily affected by disturbances over a wide area. However, there are disadvantages such as being able to detect only obstacles on the straight line of the laser and being able to detect only part of the obstacles. Therefore, I propose an obstacle detection system using a projector and a camera for the purpose of improving the disadvantages of the conventional method and the disadvantages of previous research. This method is a system in which a projector irradiates the floor surface in front of the robot with multiple vertical and horizontal lines and detects obstacles from the characteristic changes in the lines. In this study, we conducted three experiments to verify the superiority of this study when compared with the previous studies. As a result of the experiment, it was confirmed that this study improved the disadvantages of the conventional method and the previous study.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123586180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new-concept vertical take-off and landing (VTOL) canard aircraft, based on a modified tricopter with a tilt tail rotor, was trial-produced for aerial observation and research. Continuous transition from vertical to horizontal flight is achieved by tilting the tail rotor, supported by the canard wing. During horizontal flight, the lift of the wing supports the weight of the aircraft, which reduces power consumption and extends the flight range.
{"title":"Trial Production of Modified Tricopter Based Vertical Take-off Landing Canard Aircraft with Tilt Tail Rotor","authors":"K. Hayama, Tomohiro Kudou, H. Irie","doi":"10.12792/icisip2021.035","DOIUrl":"https://doi.org/10.12792/icisip2021.035","url":null,"abstract":"The trial production of a new concept vertical take-off and landing (VTOL) canard aircraft based on the modified tricopter with tilt tail rotor was carried out for aerial, observation and research. Continuous transition from vertical to horizontal flight can be done by tilting the tail rotor supported with canard wing. The lift of wing during horizontal flight supported the weight of the aircraft, and its causes the reduction of power consumption and extend the flight area.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126975833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the past few years, convolutional neural networks have been considered the mainstream networks for processing images. The Transformer, a brand-new deep neural network based mainly on the self-attention mechanism, was first proposed in 2017 and has achieved remarkable results in the field of natural language processing. Compared with traditional convolutional and recurrent networks, the model is superior in quality, offers stronger parallelism, and requires less training time. Because of these advantages, more and more researchers are exploring how the Transformer can be applied to computer vision. This article aims to provide a comprehensive overview of the application of the Transformer in computer vision. We first introduce the self-attention mechanism, an essential component of the Transformer, covering single-head attention, multi-head attention, and positional encoding, and we also introduce the Reformer, an improved variant of the Transformer. We then present applications of the Transformer in computer vision, including image classification, object detection, and image processing. Finally, we discuss future research directions for the Transformer in computer vision, hoping that this article will stimulate further interest in the Transformer.
{"title":"A Survey on: Application of Transformer in Computer Vision","authors":"Zhenghua Zhang, Zhangjie Gong, Qingqing Hong","doi":"10.12792/icisip2021.006","DOIUrl":"https://doi.org/10.12792/icisip2021.006","url":null,"abstract":"In the past few years, convolutional neural networks have been considered the mainstream network for processing images. Transformer first proposed a brand new deep neural network in 2017, based mainly on the self-attention mechanism, and has achieved amazing results in the field of natural language processing. Compared with traditional convolutional networks and recurrent networks, the model is superior in quality, has stronger parallelism, and requires less training time. Because of these powerful advantages, more and more related workers are expanding how Transformer is applied to computer vision. This article aims to provide a comprehensive overview of the application of Transformer in computer vision. We first introduce the self-attention mechanism, because it is an important component of Transformer, namely single-headed attention mechanism, multi-headed attention mechanism, position coding, etc. And introduces the reformer model after the transformer is improved. We then introduced some applications of Transformer in computer vision, image classification, object detection, and image processing. At the end of this article, we studied the future research direction and development of Transformer in computer vision, hoping that this article can arouse further interest in Transformer.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121288034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we propose a simulating wave system (SWS) that provides a comfortable sleep environment to release stress. The system creates a floating sensation, as if the user were floating on the water surface, by automatically swinging the seat back and forth and left and right. First, this paper describes the mechanical structure and control method of the developed prototype SWS. Since we assess the comfort level of the SWS with both subjective and objective evaluations, we describe these criteria in detail. For the objective evaluation, four criteria are adopted: LF/HF, ellipse area, SD1/SD2, and SD1. LF/HF, calculated from the Fourier transform of the subject's heart rate variability (HRV), and SD1/SD2, the ratio of the minor axis to the major axis of a Lorenz plot, are indices of sympathetic nerve activity. The ellipse area of the Lorenz plot and the minor-axis SD1 are used as indices of parasympathetic nerve activity. For the subjective evaluation, a visual analog scale (VAS) is adopted, a method of rating sensation on a horizontal line ranging from 0 to 100 percent. This paper reports the research results of the proposed SWS on the above items.
{"title":"Development of Simulating Wave System (SWS) for a Hospitable Sleeping Environment with Artificial Vibration","authors":"Ami Furuzono, K. Moriya, K. Koshi, Keiji Matsumoto","doi":"10.12792/icisip2021.014","DOIUrl":"https://doi.org/10.12792/icisip2021.014","url":null,"abstract":"In this study, we propose a simulating wave system (SWS) that provides a comfortable sleep environment to release stress. This system realizes floating-feeling as if it feels like floating on the water surface by automatically swinging the seat back and forth, left and right. First, this paper describes the mechanical structure and control method on the developed prototype SWS. Since we assess the comfort levels of SWS with subjective and objective evaluations, we describe these criteria in detail. For objective evaluation, four types of criteria, such as LF/HF, ellipse area, SD1/SD2, and SD1, are adopted. LF/HF calculated from the Fourier transformed subject’s heart rate variability (HRV) and SD1/SD2 meaning the ratio of the major axis to the minor axis on a Lorenz plot are the values indicating sympathetic nerve activity. Ellipse areas on the Lorenz plot and minor axis SD1 are used as the value of parasympathetic nerve activity. For the subjective evaluation, a visual analog scale(VAS) is adopted that is a method of assessing sensation on a horizontal straight line ranging from 0 to 100 percent. This paper reports the research results of the proposed SWS on the above items.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122389550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In today's society there is a great deal of desk work, and in schools and workplaces people need to perform various tasks efficiently and with concentration. The work environment has a significant impact on the efficiency of desk work. This study investigates the effects of background music on work and on cerebral blood flow during mental tasks. We used NIRS to measure changes in the cerebral blood flow of collaborators who performed calculations and memorized English words while listening to background music. In particular, we hypothesized and tested that the presence or absence of vocals in the background music would affect cerebral blood flow. We adopted a block design as the experimental method for measuring cerebral blood flow. Descriptive statistics, namely the maximum, minimum, mean, and variance, were calculated from the cerebral blood flow measurements, and box plots represented their distributions. The experimental results showed that the variance of cerebral blood flow across collaborators was greater during computational tasks than at rest. The group exposed to verbal information had higher overall cerebral blood flow than the group that was not. In addition, cerebral blood flow was lower during the task than during rest.
{"title":"Effects of Verbal Information in Background Music on Mental Task and its Relation to Cerebral Blood Flow","authors":"N. Shirahama, Takahiro Higashi, Satoshi Watanabe","doi":"10.12792/icisip2021.013","DOIUrl":"https://doi.org/10.12792/icisip2021.013","url":null,"abstract":"In today's society, there is a lot of desk work, and in schools and workplaces, people need to be able to perform various tasks efficiently and with concentration. The work environment has a significant impact on the efficiency of desk work. This study investigates the effects of background music on work and cerebral blood flow during mental tasks. In this study, we used NIRS to measure and verify changes in cerebral blood flow in collaborators who calculated and memorized English words while listening to background music. In particular, we hypothesized and tested that the presence or absence of vocals in the background music would affect cerebral blood flow. We adopted a block design as our experimental method for measuring cerebral blood flow. The descriptive statistics values, maximum, minimum, mean, and variance, were calculated from the measurement results of cerebral blood flow, and a box plot represented the size distribution. The experimental results showed that the variance of cerebral blood flow between collaborators was more significant during computational tasks than at rest. The group that contained verbal information had higher overall cerebral blood flow than the group that did not. Besides, cerebral blood flow was lower during the task than during rest.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128827421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nowadays, image processing technology is applied in all walks of life, and good results have been achieved in the field of agriculture. Image segmentation is the foundation and key of image processing. To survey the status of image segmentation technology in the agricultural field, this article systematically reviews the mainstream image segmentation methods. It first introduces segmentation methods based on thresholding, clustering, edges, graph theory, and superpixels, then introduces segmentation methods based on deep learning, and finally looks ahead to future research trends.
{"title":"A Survey on Crop Image Segmentation Methods","authors":"Hong Qingqing, Yan Tianbao, Lihan Bin","doi":"10.12792/icisip2021.008","DOIUrl":"https://doi.org/10.12792/icisip2021.008","url":null,"abstract":"Nowadays, image processing technology has been applied to all walks of life, and good results have been achieved in the field of agriculture. Image segmentation is the foundation and key of image processing. In order to understand the application status of image segmentation technology in the agricultural field, this article systematically sorts out some mainstream image segmentation methods. First, it introduces segmentation methods based on threshold, clustering, edge, graph theory and superpixel segmentation, and then introduces Segmentation method based on deep learning, and prospects for future research trends. and look forward future trends.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129143899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
If you come into contact with others while dust or animal hair is on your clothes, there is a high possibility that you will trigger the other person's allergies or fail to establish good communication. It is therefore necessary to remove dust and other debris from your clothes before going out. However, this work is done manually, which is time-consuming and tedious. To solve this problem, we previously developed a robot that automatically removes dust from clothes on hangers with a brush. The robot holds and pinches the clothes between two brushes and removes the dust by moving the brushes from top to bottom, covering the entire garment by repeatedly extending the brush-equipped arm and brushing again. However, that robot could only handle one garment at a time. In this paper, we develop a robot that can continuously care for multiple clothes. First, the robot rotates the hanger rack. Next, it uses the Kinect camera to watch the area in front of the brush and stops the rotation of the hanger rack when a garment arrives. Third, it removes the dust from the clothes in the same way as our previous robot. By repeating these operations, dust can be removed from multiple garments in succession. The operation time of this robot is about two minutes. All the user has to do is put the clothes on the hangers and press the start button, and the robot removes the dust from the clothes. While the robot is running, the user can spend the time on other things; for example, valuable morning time can be used for oneself instead of for dusting clothes.
{"title":"Proposal of Automatic Dust Catcher Robot of Clothes Using Kinect","authors":"Natsuki Nakamoto, Yuhki Kitazono","doi":"10.12792/icisip2021.011","DOIUrl":"https://doi.org/10.12792/icisip2021.011","url":null,"abstract":"If you come into contact with others with dust or animal hair on your clothes, there is a high possibility that you will make the other person develop an allergy or not be able to establish good communication. Thus, it is necessary to remove dusts and more from your clothes before going out. However, this work is done manually, which is time-consuming and time-consuming. To solve this problem, last time we developed a robot that automatically removes dust from clothes on hangers with a brush. The robot holds and pinches the clothes between two brushes and removes the dust by moving the brushes from top to bottom. It also removes dust from the entire clothes by repeatedly extending the arm with the brush attached and removing the dust. However, this robot could only operate for one garment at a time. In this paper, we develop a robot that can continuously care for multiple clothes. First, this robot rotates the hanger rack. Next, use the Kinect camera to look at the front of the brush and stop the rotation of the hanger rack when the clothes come. Third, remove the dust from the clothes in the same way as the previous robot we created. By repeating the above operation, multiple clothes can be removed dust in succession. The operation time of this robot is about two minutes. All the user has to do is put the clothes on the hanger, press the start button on the robot, and the robot will remove the dust from the clothes. While the robot is running, the user can spend his time doing other things. For example, by using this robot, you can make better use of your valuable morning time for yourself instead of using it to dust your clothes.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124523825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of image processing hardware using FPGAs requires various peripherals such as cameras, memory, and displays. Commercially available FPGA boards carry various peripherals, but these cannot be used without developing and implementing dedicated interface circuits. In addition, since the on-board peripherals differ from board to board, new interface circuits must be developed whenever a different FPGA board is employed. Therefore, we are developing a general-purpose verification environment that can be ported to commercial FPGA boards, including those with CPUs, without requiring peripherals on the FPGA board. The feature of the proposed verification environment is that it provides virtual peripherals on a PC. In addition, the proposed environment can directly mount hardware modules that are automatically converted from software programs by high-level synthesis (HLS). As a result, the design of interface circuits for peripheral devices can be omitted. In this paper, to realize the above verification environment, we developed the software to be executed on the PC and on the CPU of the FPGA board, respectively. The communication between the PC and the FPGA was initially implemented over a serial link; in this paper, Linux is installed on the FPGA board's CPU, and TCP/IP communication is implemented between the PC and the FPGA. Using this software, we investigated whether images as large as 4K can be verified for image processing hardware created by HLS.
{"title":"Development of a Simple Verification Environment Using FPGA for image processing Hardware Created by High-Level-Synthesis Using TCP/IP","authors":"Atsushi Shojima, A. Yamawaki","doi":"10.12792/icisip2021.038","DOIUrl":"https://doi.org/10.12792/icisip2021.038","url":null,"abstract":"The development of image processing hardware using FPGA requires various peripherals such as cameras, memory, and displays. Commercially available FPGA boards have various peripherals, but they cannot be used without developing and implementing their own interface circuits. In addition, since the on-board peripherals are different for each FPGA board, new interface circuits must be developed every time when employing different FPGA boards. Therefore, we are developing a general-purpose verification environment that can be imported into commercial FPGA boards, including CPUs, without the need for peripherals on the FPGA board. The feature of the proposed verification environment is that it provides virtual peripherals on a PC. In addition, the proposed verification environment can directly mount hardware modules that are automatically converted from software programs by High-Level Synthesis (HLS). As a result, the design of interface circuits with peripheral devices can be omitted. In this paper, to realize the above verification environment, we developed the software to be executed on the PC and the CPU on the FPGA board, respectively. The communication between the PC and FPGA was initially implemented using serial communication, but in this paper, Linux is installed on the FPGA board’s CPU, and TCP/IP communication is implemented between the PC and FPGA. Using these software, we investigated whether it is possible to verify images such as 4K for the image processing hardware created by HLS.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122712349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper attempts to estimate the level of concentration by focusing on temporal changes in cerebral blood flow (CBF) during the Stroop color-word test (SCWT), which is sometimes used in psychiatric research to induce prefrontal cerebral blood flow changes that reflect cognitive functions. The CBF is measured by a near-infrared spectroscopy (NIRS) instrument and processed by time synchronous averaging (TSA). The distinction of concentration levels from the TSA waveforms is also discussed.
{"title":"Temporal Changes of Cerebral Blood Flow for Stroop Color-Word Test: A Near-Infrared Spectroscopy Study","authors":"K. Koshi, Ryushi Morita, K. Moriya, Keiji Matsumoto, Hirohito Shintani","doi":"10.12792/icisip2021.015","DOIUrl":"https://doi.org/10.12792/icisip2021.015","url":null,"abstract":"This paper attempts to estimate a level of concentration focusing on temporal changes of cerebral blood flow (CBF) for Stroop color-word test (SCWT) which is sometimes used in psychiatric research to induce prefrontal cerebral blood flow changes reflecting cognitive functions. The CBF is measured by near-infrared spectroscopy (NIRS) instrument and processed by time synchronous averaging (TSA). Then, the distinction of the concentration from the TSA waveforms is also discussed.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"320 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132067806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, more and more people have been keeping pets. Many zoonotic diseases exist, and it is possible to catch them from pets; one route of infection is through excrement. We have therefore developed a device that automatically folds pet sheets. The system consists of two parts: a camera part that detects excrement and a part that disposes of it. When the system recognizes through image processing that the pet has defecated, it automatically folds the pet sheet along its creases and opens the lid of the trash can for disposal. When the pet gets on the device to defecate, the system saves the image taken just before as a background; when the pet gets off after defecating, the system recognizes the excrement and folds first the left and right sides of the pet sheet, then the top and bottom. The sheet is then lifted to the front of the trash can, whose lid is opened by a DC motor. Finally, the pet sheet is dropped into the trash can, and the operation is completed by returning the arm and the lid to their initial positions. In our experiments, the success rate was 100% for both the recognition and the disposal of excrement.
{"title":"Fully Automatic Pet Sheet Disposal System Using Image Processing","authors":"Airi Taniguchi, Yuhki Kitazono","doi":"10.12792/icisip2021.010","DOIUrl":"https://doi.org/10.12792/icisip2021.010","url":null,"abstract":"In recent years, more and more people have been keeping pets. Many zoonotic diseases now exist, and it is possible to become infected with zoonotic diseases from pets. One of the routes of infection is through excrement. Therefore, we have developed a device that automatically folds pet sheets. This system consists of two parts: a camera part that detects excrement and a part that processes the excrement. When the system recognizes that the pet has defecated through image processing, it automatically folds the pet sheet along the creases and opens the lid of the trash can for disposal. When the pet gets on the device to defecate, the system saves the image taken just before as a background, and when the pet gets off the device after defecating, the system recognizes the excrement and folds the left and right sides of the pet sheet, then the top and bottom. Then, the pet sheet is lifted to the front of the trash can and the lid of the trash can is opened using a DC motor. Finally, the pet sheet is thrown into the trash can, and the operation is completed by returning the arm and the trash can lid to their initial positions. The success rate of the experiment was 100% for the recognition of excrement and 100% for the disposal of excrement.","PeriodicalId":431446,"journal":{"name":"The Proceedings of The 8th International Conference on Intelligent Systems and Image Processing 2021","volume":"54 13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123373399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}