Yuequan Yang, Wei Li, Zhiqiang Cao, Jiatong Bao, Fudong Li
{"title":"基于双重关注和倒残差的轻量级机器人抓取检测网络","authors":"Yuequan Yang, Wei Li, Zhiqiang Cao, Jiatong Bao, Fudong Li","doi":"10.1177/01423312241247346","DOIUrl":null,"url":null,"abstract":"Grasping detection is one of the crucial capabilities for robot systems. Deep learning has achieved remarkable outcomes in robot grasping tasks; however, many deep neural networks were at the expense of high computation cost with memory requirements, which hindered their deployment on computing-constrained devices. To solve this problem, this paper proposes an end-to-end lightweight network with dual attention and inverted residual strategies (LiDAIR), which adopts a generative pixel-level prediction to achieve grasp detection. The LiDAIR is composed of the convolution modules (Conv), the inverted residual convolution module (IRCM), the convolutional block attention connection module (CBACM), and the transposed convolution modules (TConv). The Convs are utilized in downsampling processes to extract the input image features. Then, the IRCM is proposed as a bridge between the downsampling and upsampling phases. In the upsampling phase, the CBACM is designed to focus on the valuable regions from spatial and channel dimensions, where the skip connection is employed to attain multi-level feature fusion. Afterwards, the TConvs are used to restore image resolution. The LiDAIR is lightweight with 704K parameters and enjoys a good tradeoff among lightweight structure, accuracy, and speed. 
It was evaluated on both the Cornell data set and the Jacquard data set within 10 ms inference time, and the detection accuracy on both the data sets were 97.7% and 92.7%, respectively.","PeriodicalId":49426,"journal":{"name":"Transactions of the Institute of Measurement and Control","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Lightweight robotic grasping detection network based on dual attention and inverted residual\",\"authors\":\"Yuequan Yang, Wei Li, Zhiqiang Cao, Jiatong Bao, Fudong Li\",\"doi\":\"10.1177/01423312241247346\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Grasping detection is one of the crucial capabilities for robot systems. Deep learning has achieved remarkable outcomes in robot grasping tasks; however, many deep neural networks were at the expense of high computation cost with memory requirements, which hindered their deployment on computing-constrained devices. To solve this problem, this paper proposes an end-to-end lightweight network with dual attention and inverted residual strategies (LiDAIR), which adopts a generative pixel-level prediction to achieve grasp detection. The LiDAIR is composed of the convolution modules (Conv), the inverted residual convolution module (IRCM), the convolutional block attention connection module (CBACM), and the transposed convolution modules (TConv). The Convs are utilized in downsampling processes to extract the input image features. Then, the IRCM is proposed as a bridge between the downsampling and upsampling phases. In the upsampling phase, the CBACM is designed to focus on the valuable regions from spatial and channel dimensions, where the skip connection is employed to attain multi-level feature fusion. Afterwards, the TConvs are used to restore image resolution. 
The LiDAIR is lightweight with 704K parameters and enjoys a good tradeoff among lightweight structure, accuracy, and speed. It was evaluated on both the Cornell data set and the Jacquard data set within 10 ms inference time, and the detection accuracy on both the data sets were 97.7% and 92.7%, respectively.\",\"PeriodicalId\":49426,\"journal\":{\"name\":\"Transactions of the Institute of Measurement and Control\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-04-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transactions of the Institute of Measurement and Control\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1177/01423312241247346\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of the Institute of Measurement and Control","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1177/01423312241247346","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Lightweight robotic grasping detection network based on dual attention and inverted residual
Grasping detection is one of the crucial capabilities of robot systems. Deep learning has achieved remarkable results in robot grasping tasks; however, many deep neural networks incur high computational cost and memory requirements, which hinders their deployment on computing-constrained devices. To address this problem, this paper proposes an end-to-end lightweight network with dual attention and inverted residual strategies (LiDAIR), which adopts generative pixel-level prediction to achieve grasp detection. LiDAIR is composed of convolution modules (Conv), an inverted residual convolution module (IRCM), a convolutional block attention connection module (CBACM), and transposed convolution modules (TConv). The Convs are used during downsampling to extract features from the input image. The IRCM then serves as a bridge between the downsampling and upsampling phases. In the upsampling phase, the CBACM is designed to focus on valuable regions along the spatial and channel dimensions, and skip connections are employed to achieve multi-level feature fusion. Finally, the TConvs restore the image resolution. LiDAIR is lightweight, with 704K parameters, and offers a good trade-off among model size, accuracy, and speed. It was evaluated on the Cornell and Jacquard data sets with inference times under 10 ms, achieving detection accuracies of 97.7% and 92.7%, respectively.
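The CBACM's dual attention follows the general CBAM pattern: channel attention followed by spatial attention, each driven by average- and max-pooling of the feature map. The sketch below is a minimal NumPy illustration of that structure, not the authors' implementation; in particular, the learned shared MLP and the 7×7 convolution used in standard CBAM are replaced here by plain element-wise sums so the block runs without trained weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam_dual_attention(x):
    """Apply CBAM-style channel-then-spatial attention to a feature map.

    x: array of shape (C, H, W). The learned MLP and 7x7 convolution of
    standard CBAM are replaced by element-wise sums for illustration.
    """
    # Channel attention: pool each channel over the spatial dimensions,
    # combine the average- and max-pooled descriptors, squash to (0, 1).
    avg_c = x.mean(axis=(1, 2))          # (C,)
    max_c = x.max(axis=(1, 2))           # (C,)
    att_c = sigmoid(avg_c + max_c)       # (C,) channel weights
    x = x * att_c[:, None, None]         # reweight channels

    # Spatial attention: pool over the channel dimension to get two
    # (H, W) maps, combine them, squash to (0, 1).
    avg_s = x.mean(axis=0)               # (H, W)
    max_s = x.max(axis=0)                # (H, W)
    att_s = sigmoid(avg_s + max_s)       # (H, W) spatial weights
    return x * att_s[None, :, :]         # reweight spatial locations

feat = np.random.rand(8, 16, 16)         # small positive feature map
out = cbam_dual_attention(feat)
```

In LiDAIR, the attended features would additionally be fused with the corresponding downsampling-stage features through the skip connection before the transposed convolutions restore resolution.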
Journal introduction:
Transactions of the Institute of Measurement and Control is a fully peer-reviewed international journal. The journal covers all areas of application in instrumentation and control. Its scope encompasses cutting-edge research and development, education, and industrial applications.