{"title":"Average Sparse Attention for Dense Video Captioning From Multiperspective Edge-Computing Cameras","authors":"Ling-Hsuan Huang;Ching-Hu Lu","doi":"10.1109/JSYST.2024.3456864","DOIUrl":null,"url":null,"abstract":"In recent years, the artificial intelligence of things (AIoT) has accelerated the development of edge computing. Since existing edge computing for dense video captioning has only explored single-camera decision-making, we propose a lightweight image stitching model that uses a proposed inverted pruned residual model to realize multicamera decision-making to generate more accurate captions. Existing dense video captioning uses an intensive attention mechanism, which readily results in the loss of important information. Thus, our study proposes an average sparse attention mechanism such that the resultant dense video-captioning model is better able to focus on important information and improve the quality of its generated captions. The experiments show that the lightweight video stitching model can reduce model parameters by 13.40% and increase frames per second by 28.96% on an edge platform when compared to the latest studies. Furthermore, a dense video caption network with the average sparse attention mechanism yielded improvements of 22.97% for BLEU3, 35.04% for BLEU4, and 7.51% for METEOR.","PeriodicalId":55017,"journal":{"name":"IEEE Systems Journal","volume":"18 4","pages":"1939-1950"},"PeriodicalIF":4.0000,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Systems Journal","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10703164/","RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
In recent years, the artificial intelligence of things (AIoT) has accelerated the development of edge computing. Since existing edge computing for dense video captioning has only explored single-camera decision-making, we propose a lightweight image stitching model that uses a proposed inverted pruned residual model to realize multicamera decision-making to generate more accurate captions. Existing dense video captioning uses an intensive attention mechanism, which readily results in the loss of important information. Thus, our study proposes an average sparse attention mechanism such that the resultant dense video-captioning model is better able to focus on important information and improve the quality of its generated captions. The experiments show that the lightweight video stitching model can reduce model parameters by 13.40% and increase frames per second by 28.96% on an edge platform when compared to the latest studies. Furthermore, a dense video caption network with the average sparse attention mechanism yielded improvements of 22.97% for BLEU3, 35.04% for BLEU4, and 7.51% for METEOR.
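The abstract does not spell out how the proposed average sparse attention is formulated, but one common reading of an "average-based" sparsification is to zero out attention weights that fall below each query's mean weight and renormalize the rest, so the model keeps only above-average (important) positions. The NumPy sketch below is purely an illustrative assumption of that idea, not the authors' implementation; the function name `average_sparse_attention` and all shapes are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def average_sparse_attention(Q, K, V):
    """Scaled dot-product attention in which, per query, weights
    below that row's average are zeroed out and the survivors are
    renormalized (one plausible 'average sparse' scheme)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n_q, n_k) similarities
    weights = softmax(scores, axis=-1)            # dense attention weights
    mean = weights.mean(axis=-1, keepdims=True)   # per-row average weight
    sparse = np.where(weights >= mean, weights, 0.0)      # drop below-average keys
    sparse = sparse / sparse.sum(axis=-1, keepdims=True)  # renormalize each row
    return sparse @ V                             # (n_q, d) attended values
```

Because the row maximum is always at least the row mean, every query keeps at least one key, so the renormalization is well defined.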
Journal overview:
This publication provides a systems-level, focused forum for application-oriented manuscripts that address complex systems and systems-of-systems of national and global significance. It aims to encourage and facilitate cooperation and interaction among IEEE Societies with systems-level and systems-engineering interests, and to attract non-IEEE contributors and readers from around the globe. The IEEE Systems Council's role is to address issues in new ways that are not solvable within the domains of existing IEEE or other societies or global organizations. These problems do not fit within traditional hierarchical boundaries. For example, disaster response, such as that triggered by Hurricane Katrina, tsunamis, or volcanic eruptions, is not solvable by pure engineering solutions. We need to think about changing and enlarging the paradigm to include systems issues.