As quality of life improves, life expectancy generally increases, and the number of elderly people living alone is growing. The safety of elderly people living alone has recently attracted increasing public attention. Because they live alone, they may not be found promptly when an accident occurs indoors or outdoors, which delays rescue. This article proposes a method that uses an acceleration (speed-up) module to realize real-time face detection on the Raspberry Pi, optimizes the processing of PIR sensor signals, and builds a logic system based on the camera and PIR signals that records and analyzes the daily life of an elderly person living alone and warns their family members.
{"title":"Smart house system for safety of elderly living alone based on camera and PIR sensor","authors":"Yichen Wang, Yutian Wu, Shuwei Zhang, Harutoshi Ogai, Katsumi Hirai, Shigeyuki Tateno","doi":"10.1007/s10015-023-00932-5","DOIUrl":"10.1007/s10015-023-00932-5","url":null,"abstract":"<div><p>With the improvement of human life quality, life expectancy generally increases. As a result, more and more elderly people living alone appear. Recently, the safety problems of the elderly living alone have attracted more and more attention from the public. Due to living alone, the elderly cannot be found at the first time when an accident occurs indoors or out, and the rescue time is delayed. This article proposes a way to use the speed up module to realize real-time face detection on the Raspberry Pi and optimize the processing of PIR sensor signals and write a logic system based on the camera and PIR signals to record and analyze the life of the elderly living alone and warning system to their family members.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"43 - 54"},"PeriodicalIF":0.8,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10015-023-00932-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139442000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-08. DOI: 10.1007/s10015-023-00928-1
Toru Ohira
“Chases and Escapes” is a classical mathematical problem. Recently, we proposed a simple extension, called “Group Chase and Escape,” where one group chases another. This extension bridges the traditional problem with the current interest in studying collective motion among animals, insects, and cars. In this presentation, I will introduce our fundamental model and explore its intricate emergent behaviors. In our model, each chaser approaches the nearest escapee, while each escapee moves away from its closest chaser. Interestingly, despite the absence of communication within each group, we observe the formation of aggregate patterns. Furthermore, the effectiveness of capture varies as we adjust the ratio of chasers to escapees, which can be attributed to a group effect. I will delve into how these behaviors manifest in relation to various parameters, such as densities. Moreover, we have explored different expansions of this basic model. First, we introduced fluctuations, where players now make errors in their step directions with a certain probability. We found that a moderate level of fluctuations improves the efficiency of catching. Second, we incorporated a delay in the chasers’ reactions to catch their targets. This distance-dependent reaction delay can lead to highly complex behaviors. Additionally, I will provide an overview of other groups’ extensions of the model and the latest developments in this field.
{"title":"Collective behaviors emerging from chases and escapes","authors":"Toru Ohira","doi":"10.1007/s10015-023-00928-1","DOIUrl":"10.1007/s10015-023-00928-1","url":null,"abstract":"<div><p>“Chases and Escapes” is a classical mathematical problem. Recently, we proposed a simple extension, called “Group Chase and Escape,” where one group chases another. This extension bridges the traditional problem with the current interest in studying collective motion among animals, insects, and cars. In this presentation, I will introduce our fundamental model and explore its intricate emergent behaviors. In our model, each chaser approaches the nearest escapee, while each escapee moves away from its closest chaser. Interestingly, despite the absence of communication within each group, we observe the formation of aggregate patterns. Furthermore, the effectiveness of capture varies as we adjust the ratio of chasers to escapees, which can be attributed to a group effect. I will delve into how these behaviors manifest in relation to various parameters, such as densities. Moreover, we have explored different expansions of this basic model. First, we introduced fluctuations, where players now make errors in their step directions with a certain probability. We found that a moderate level of fluctuations improves the efficiency of catching. Second, we incorporated a delay in the chasers’ reactions to catch their targets. This distance-dependent reaction delay can lead to highly complex behaviors. Additionally, I will provide an overview of other groups’ extensions of the model and the latest developments in this field.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"1 - 11"},"PeriodicalIF":0.8,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139447291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-06. DOI: 10.1007/s10015-023-00923-6
Elmehdi Aniq, Mohamed Chakraoui, Naoual Mouhni
Ki-67 is a non-histone nuclear protein located in the nuclear cortex and is one of the essential biomarkers used to assess the proliferative status of cancer cells. Because of the variability in color, morphology, and intensity of the cell nuclei, Ki-67 is sensitive to chemotherapy and radiation therapy. The proliferation index is usually calculated visually by professional pathologists, who assess the total percentage of positive (labeled) cells. This semi-quantitative counting can be a source of inter- and intra-observer variability and is time-consuming. These factors open up a new field of scientific and technological research and development, and artificial intelligence is attracting attention as a way to solve these problems. Our solution is based on deep learning to calculate the percentage of cells labeled by the Ki-67 protein. The pathologist provides the tumor area at ×40 magnification, and the model segments the different cell types: positive, negative, or TIL (tumor-infiltrating lymphocytes). The percentage is then calculated after counting cells with classical image processing techniques. To validate the model, we compared its results on additional test datasets and against the pathologists' diagnoses. Despite its residual error, our KiNet model outperforms the best-performing models to date in terms of average error.
{"title":"Artificial intelligence in pathological anatomy: digitization of the calculation of the proliferation index (Ki-67) in breast carcinoma","authors":"Elmehdi Aniq, Mohamed Chakraoui, Naoual Mouhni","doi":"10.1007/s10015-023-00923-6","DOIUrl":"10.1007/s10015-023-00923-6","url":null,"abstract":"<div><p>Ki-67 is a non-histone nuclear protein located in the nuclear cortex and is one of the essential biomarkers used to provide the proliferative status of cancer cells. Because of the variability in color, morphology and intensity of the cell nuclei, Ki-67 is sensitive to chemotherapy and radiation therapy. The proliferation index is usually calculated visually by professional pathologists who assess the total percentage of positive (labeled) cells. This semi-quantitative counting can be the source of some inter- and intra-observer variability and is time-consuming. These factors open up a new field of scientific and technological research and development. Artificial intelligence is attracting attention to solve these problems. Our solution is based on deep learning to calculate the percentage of cells labeled by the ki-67 protein. The tumor area with <span>(times)</span>40 magnification is given by the pathologist to segment different types of positive, negative or TIL (tumor infiltrating lymphocytes) cells. The calculation of the percentage comes after cells counting using classical image processing techniques. To give the model our satisfaction, we made a comparison with other datasets of the test and we compared it with the diagnosis of pathologists. Despite the error of our model, KiNet outperforms the best performing models to date in terms of average error measurement.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"177 - 186"},"PeriodicalIF":0.8,"publicationDate":"2024-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139380865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While conventional biped robots are arithmetically controlled by a CPU and driven by servo motors, humans locomote by contracting muscles that receive electrical signals from the spinal cord. For real-time control without numerical calculations, we proposed a method in which analog electronic circuits mimic neural circuits and output electrical signals. Gait control of a musculoskeletal robot requires this circuit together with muscle-mimicking actuators. In this paper, we extracted the muscle displacements and generated forces involved in human walking and running with inverse dynamics simulation. The generated forces and electromyograms were compared, and the main moving muscles were selected. The neural signals input to the muscles were derived by dividing the displacement graph into six sections and classifying the muscle groups by focusing on the maximum contraction. We also compared the generated forces, displacements, and neural signals with physiological findings and discussed the similarity between the living body and the musculoskeletal model.
{"title":"Extraction of actuator forces and displacements involved in human walking and running and estimation of time-series neural signals by inverse dynamics simulation","authors":"Motokuni Ishibashi, Kenji Takeda, Kentaro Yamazaki, Takumi Ishihama, Tatsumi Goto, Shuxin Lyu, Minami Kaneko, Fumio Uchikoba","doi":"10.1007/s10015-023-00921-8","DOIUrl":"10.1007/s10015-023-00921-8","url":null,"abstract":"<div><p>While conventional biped robots are arithmetically controlled by CPU and driven by servo motors, humans locomote by contraction of muscles that receive electrical signals from the spinal cord. For real-time control without numerical calculations, we proposed a method that analog electronic circuits mimic neural circuits and output electrical signals. Gait control of a musculoskeletal robot requires this circuit and muscle-mimicking actuators. In this paper, we extracted the muscle displacements and generated forces involved in human walking and running with inverse dynamic simulation. The generated force and electromyogram were compared, and the main moving muscles were selected. The neural signals input to the muscles were derived by dividing the displacement graph into 6 sections and classifying the muscle groups by focusing on the maximum contraction. Also, we compared the generated forces, displacements, and the neural signals with physiological findings and discussed the similarity between the living body and the musculoskeletal model.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"81 - 93"},"PeriodicalIF":0.8,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139383292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-22. DOI: 10.1007/s10015-023-00927-2
Honoka Tani, Akira Yamawaki
We are developing pencil-drawing-style image conversion software suitable for high-level synthesis (HLS), a technology that automatically converts software into hardware. The pencil-drawing-style image conversion consists of two processes: the former generates images expressing edge strengths and their directions, and the latter convolves a line segment corresponding to each edge strength along its direction. As a hardware-oriented software description, the intermediate data passed between the two processes are optimized. In addition, the two processes are overlapped through a FIFO buffer that passes the intermediate data. The resulting image, however, is still grayscale. To support color images, this paper inserts a process that composites the original color image with the grayscale pencil-drawing-style image without disturbing the pipelined data path behavior. As a result, the HLS tool is expected to generate a hardware module with an ideal pipelined data path producing one output datum per clock cycle. The experimental results show that the colorization hardware causes no significant degradation in circuit size, run time, or power efficiency compared with the grayscale pencil-drawing hardware. Compared with software execution, our color-capable hardware achieves 4.2 times the performance and 130 times the power efficiency.
{"title":"Light-weight color image conversion like pencil drawing for high-level synthesized hardware","authors":"Honoka Tani, Akira Yamawaki","doi":"10.1007/s10015-023-00927-2","DOIUrl":"10.1007/s10015-023-00927-2","url":null,"abstract":"<div><p>We are developing pencil-drawing-style image conversion software suitable for high-level synthesis, HLS, technology that automatically converts software into hardware. The pencil-drawing-style image conversion consists of the former and latter processes. The former generates the images expressing edge strengths and their directions. The latter process convolves the line segment corresponding to the edge strength with its direction. As hardware-oriented software description, the medium data across the former and latter processes are optimized. In addition, the former and latter processes are overlapped between the FIFO buffer passing the medium data. The obtained image is still a gray-scaled image. To make it support the color image, this paper inserts a process compositing the original color image with the grayed pencil-drawing-style image to not intervene in the pipelined data path behavior. As a result, an HLS tool used is expected to generate a hardware module with the ideal pipelined data path by one output data/one clock. The experimental results show that the colorization hardware had no significant performance degradation issues for circuit size, run time, or power efficiency compared to the pencil drawing hardware with grayscale. Compared with the software execution, our hardware supporting color image can achieve 4.2 times the performance improvement and 130 times power efficiency.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"29 - 36"},"PeriodicalIF":0.8,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138945048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The purpose of this research is to develop robots that automatically sample mud on tidal flats. Beaches erode as sand is carried offshore by waves, winds, and other forces, and these phenomena have not yet been fully clarified. A mathematical model has therefore been proposed to analyze them, and its parameters must be identified by collecting bottom sediments. At present, bottom sediments are collected by hand, but tidal flat surfaces are muddy and hard to walk on. In this paper, a robot for collecting bottom sediments on tidal flats is proposed. While traveling on muddy terrain, the robot has to avoid obstacles such as waste and driftwood, so the SSD (single shot multibox detector) was used to detect objects by image recognition. Fundamental experiments performed in our laboratory showed that the developed robot could carry out the fundamental desired tasks.
{"title":"Robots traveling on muddy terrain for sampling bottom sediment in tidal flats","authors":"Masatoshi Hatano, Manami Senzaki, Hidetoshi Kawasaki, Chiaki Takasu, Masaki Yamazaki, Yukiyoshi Hoshigami","doi":"10.1007/s10015-023-00920-9","DOIUrl":"10.1007/s10015-023-00920-9","url":null,"abstract":"<div><p>The purpose of this research is to develop robots that perform mud sampling on tidal flats automatically. Erosions that occur on beaches and sands go away to offshore caused by waves, winds and so on. In addition, the phenomena have not been clarified. Thus, a mathematical model has been proposed to analyze the phenomena. Then, parameters in the model are required to be identified by collecting bottom sediments. Now, the collections of bottom sediments are achieved with manpower. However, surfaces of tidal flats are of mud and hard to walk on. In this paper, a robot for collecting bottom sediments on tidal flats is proposed. During traveling on muddy terrains, the robot has to avoid obstacles, i.e., wastes, driftwoods and so on. Then, the SSD (single shot multibox detector) was used to detect objects with image recognition. Fundamental experiments were performed in our laboratory and it was shown that the developed robot could perform the fundamental desired tasks.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"155 - 160"},"PeriodicalIF":0.8,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138960203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-19. DOI: 10.1007/s10015-023-00919-2
Kento Wakamatsu, Satoshi Ono
Class imbalance, i.e., an unequal number of samples across categories, adversely affects machine learning models, including deep neural networks. In semantic segmentation, extracting small regions belonging to minority categories relative to the entire image poses the same problem as class imbalance. Such difficulties exist in various applications of semantic segmentation, including medical images. This paper proposes a semantic segmentation method that considers global features and appropriately detects small categories. The proposed method adopts the TransUNet architecture and the Unified Focal Loss (UFL) function; the former allows global image features to be considered, and the latter mitigates the harmful effects of class imbalance. Experimental results on real-world applications showed that the proposed method successfully extracts small regions of minority classes without increasing false positives for other classes.
{"title":"TransUNet with unified focal loss for class-imbalanced semantic segmentation","authors":"Kento Wakamatsu, Satoshi Ono","doi":"10.1007/s10015-023-00919-2","DOIUrl":"10.1007/s10015-023-00919-2","url":null,"abstract":"<div><p>Class imbalanceness, i.e., the inequality of the number of samples between categories, adversely affects machine learning models, including deep neural networks. In semantic segmentation, extracting a small area of minor categories with respect to the entire image includes the same problem as class imbalanceness. Such difficulties exist in various applications of semantic segmentation, including medical images. This paper proposes a semantic segmentation method that considers global features and appropriately detects small categories. The proposed method adopts TransUNet architecture and Unified Focal Loss (UFL) function; the former allows considering global image features, and the latter mitigates the harmful effects of class imbalanceness. Experimental results with real-world applications showed that the proposed method successfully extracts small regions of minor classes without increasing false positives of other classes.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"101 - 106"},"PeriodicalIF":0.8,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139172406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Here, an optimized neural network controller (NC) was developed with the cuckoo search (CS) method. CS was inspired by the brood-parasitic behavior of the cuckoo, which lays eggs resembling those of host birds in the hosts' nests and lets the hosts raise them. CS is an evolutionary computation algorithm that mimics this ecological behavior to optimize a controller. Previous studies have demonstrated good evolutionary processes for NCs when the value of the scaling index varies in steps over a scheduled period. Therefore, the proposed CS scheduling plan adjusts the scaling index as a linear function, a nonlinear function, or a staircase function. Computer simulations demonstrated that an NC optimized with the scheduled CS method had superior control performance compared to the original CS method. The best results were obtained when the schedule plan was set to a linear or nonlinear function rather than a staircase plan.
{"title":"Performance evaluation of schedule plan for cuckoo search applied to the neural network controller of a rotary crane","authors":"Rui Kinjo, Kunihiko Nakazono, Naoki Oshiro, Hiroshi Kinjo","doi":"10.1007/s10015-023-00918-3","DOIUrl":"10.1007/s10015-023-00918-3","url":null,"abstract":"<div><p>Here, an optimized neural network controller (NC) was developed with the cuckoo search (CS) method. This was inspired by the mending behavior of the cuckoo bird, which lays eggs similar to those of their putative parents in their nests and allows the putative parents to raise them. CS is an evolutionary computation algorithm that mimics the ecological behavior of organisms to optimize a controller. Previous studies have demonstrated good evolutionary processes for NCs when the value of the scaling index varies in steps during a scheduled period. Therefore, the proposed CS scheduling plan adjusts the scaling index as a linear function, nonlinear function, or stairs. Computer simulations demonstrated that an NC optimized with the scheduled CS method had superior control performance compared to the original CS method. The best results were obtained when the schedule plan was set to a linear or nonlinear function rather than a stair plan.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"129 - 135"},"PeriodicalIF":0.8,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139212030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The self-organizing map (SOM) is an artificial neural network widely applied to data mining and feature visualization of high-dimensional datasets. Recently, SOMs have been actively used for market research, political decision-making, and social analysis based on huge volumes of live text data. Like deep learning, however, the SOM requires a large number of parameters and iterative calculations, so specialized SOM accelerators are strongly required. In this paper, we propose a new scalable FPGA-based SOM accelerator in which all neurons of the SOM are mapped onto internal memory, or BRAM (Block RAM), of the FPGA to maintain the high parallelism of the SOM itself. We implement the proposed SOM accelerator on an Alveo U50 (Xilinx, Ltd.) and evaluate its performance: the accelerator shows high scalability and runs 102.0 times faster than software processing on an Intel Core i7, which is expected to be sufficient for real-time data mining and feature visualization.
{"title":"A highly scalable Self-organizing Map accelerator on FPGA and its performance evaluation","authors":"Yusuke Yamagiwa, Yuki Kawahara, Kenji Kanazawa, Moritoshi Yasunaga","doi":"10.1007/s10015-023-00916-5","DOIUrl":"10.1007/s10015-023-00916-5","url":null,"abstract":"<div><p>Self-organizing Map (SOM) is one of the artificial neural networks and well applied to datamining or feature visualization of high-dimensional datasets. Recently, SOMs are actively used for market research, political decision-making, and social analysis using a huge number of live text-data. The SOM, however, needs a large number of parameters and iterative calculations like Deep Learning, so that specialized accelerators for SOM are strongly required. In this paper, we newly propose a scalable SOM accelerator based on FPGA, in which all neurons in the SOM are mapped onto an internal memory, or BRAM (Block-RAM) in FPGA to maintain high parallelism in the SOM itself. We implement the proposed SOM accelerator on an Alveo U50 (Xilinx, Ltd.) and evaluate its performance: the accelerator shows high scalability and runs 102.0 times faster than software processing with Intel Core i7, which is expected to be enough for the real-time datamining and feature visualization.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"94 - 100"},"PeriodicalIF":0.8,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139249972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-18. DOI: 10.1007/s10015-023-00917-4
Kikuhito Kawasue, Khin Dagon Win, Kumiko Yoshida, Geunho Lee
In pig production, the number of pigs raised on each farm is increasing while the number of workers involved in pig production is decreasing, so reducing the labor burden is desired. At the same time, it is also important to improve pig grading and profitability. Weight is a major criterion for pig grading: pigs that are too heavy or too light decrease profits, and pigs need to be shipped at the appropriate weight. However, since each pig weighs more than 100 kg, weighing every pig is very labor-intensive. On large farms, more than 50 pigs are kept in a single piggery and shipped together at the same time, after determining the day on which they have reached the proper shipping weight. To improve profitability, it is important to control the growth of the pigs in a piggery so that they grow uniformly and to determine the appropriate shipping date. In this study, a prototype system was developed to automatically measure the daily weight distribution. If the weight distribution in the piggery is known, appropriate shipping dates can be determined. This paper reports the results of a validation experiment using the developed system.
{"title":"Pig sorting system with three exits that incorporates an RGB-D sensor for constant use during fattening","authors":"Kikuhito Kawasue, Khin Dagon Win, Kumiko Yoshida, Geunho Lee","doi":"10.1007/s10015-023-00917-4","DOIUrl":"10.1007/s10015-023-00917-4","url":null,"abstract":"<div><p>In pig production, the number of pigs raised on each farm is increasing, but the population of workers involved in pig production is decreasing, so lighter labor is expected. On the other hand, it is also important to improve pig grading and profitability. Weight is a major criterion for pig grading. Too heavy or too light will decrease profits, and pigs need to be shipped at the appropriate weight. However, since each pig weighs more than 100 kg, weighing each pig is very labor-intensive. In large farms, more than 50 pigs are kept in a single piggery, and they are shipped together at the same time, after determining the day when they have reached the proper weight for shipment. In order to improve profitability, it is important to control the growth of pigs in a piggery so that they grow uniformly and to determine the appropriate shipping date. In this study, a prototype system was developed to automatically measure daily weight distribution. If the weight distribution in the piggery is known, appropriate shipping dates can be determined. This paper reports the results of a valid experiment using the developed system.</p></div>","PeriodicalId":46050,"journal":{"name":"Artificial Life and Robotics","volume":"29 1","pages":"37 - 42"},"PeriodicalIF":0.8,"publicationDate":"2023-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139261269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}