{"title":"分布式视觉分析的边缘数据存储:海报","authors":"Yang Deng, A. Ravindran, T. Han","doi":"10.1145/3132211.3132463","DOIUrl":null,"url":null,"abstract":"Autonomous machine vision is a powerful tool to address challenges in multiple domains including national security (for example, video surveillance), health care (for example, patient monitoring), and transportation (for example, autonomous vehicles). Distributed vision, where multiple cameras observe a specific geographic area 24/7, enables smart understanding of events in a physical environment with minimal human intervention. We observe that the cloud paradigm alone does not offer a pathway to real-time distributed vision processing. With potentially thousands of cameras, hundreds of gigabytes data per second needs to be transferred to the cloud, saturating the bandwidth of the network. More importantly, vision applications are inherently latency-critical with a high demand for real-time scene analysis (for example, feature extraction and object tracking). To meet latency requirements, computation - including both processing of raw video streams to identify objects, and analytics on this data, needs to be brought to the edge of the network. While object recognition may be done locally at the end node (next to the camera), vision analytics requires access to data generated across different nodes. For example, a subject of interest may need to be tracked across multiple cameras to identify the nature of activities. This creates a need for a low latency distributed data store communicating over a dynamic communication network (most often wireless), to be implemented at the edge. Moreover, the data store must be able to address the limited storage at the end nodes (typically gigabytes). Additionally, privacy and security are prime concerns in the design of such a distributed edge storage.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Edge datastore for distributed vision analytics: poster\",\"authors\":\"Yang Deng, A. Ravindran, T. Han\",\"doi\":\"10.1145/3132211.3132463\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Autonomous machine vision is a powerful tool to address challenges in multiple domains including national security (for example, video surveillance), health care (for example, patient monitoring), and transportation (for example, autonomous vehicles). Distributed vision, where multiple cameras observe a specific geographic area 24/7, enables smart understanding of events in a physical environment with minimal human intervention. We observe that the cloud paradigm alone does not offer a pathway to real-time distributed vision processing. With potentially thousands of cameras, hundreds of gigabytes data per second needs to be transferred to the cloud, saturating the bandwidth of the network. More importantly, vision applications are inherently latency-critical with a high demand for real-time scene analysis (for example, feature extraction and object tracking). To meet latency requirements, computation - including both processing of raw video streams to identify objects, and analytics on this data, needs to be brought to the edge of the network. 
While object recognition may be done locally at the end node (next to the camera), vision analytics requires access to data generated across different nodes. For example, a subject of interest may need to be tracked across multiple cameras to identify the nature of activities. This creates a need for a low latency distributed data store communicating over a dynamic communication network (most often wireless), to be implemented at the edge. Moreover, the data store must be able to address the limited storage at the end nodes (typically gigabytes). Additionally, privacy and security are prime concerns in the design of such a distributed edge storage.\",\"PeriodicalId\":389022,\"journal\":{\"name\":\"Proceedings of the Second ACM/IEEE Symposium on Edge Computing\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Second ACM/IEEE Symposium on Edge Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3132211.3132463\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3132211.3132463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Edge datastore for distributed vision analytics: poster
Autonomous machine vision is a powerful tool to address challenges in multiple domains, including national security (for example, video surveillance), health care (for example, patient monitoring), and transportation (for example, autonomous vehicles). Distributed vision, where multiple cameras observe a specific geographic area 24/7, enables smart understanding of events in a physical environment with minimal human intervention. We observe that the cloud paradigm alone does not offer a pathway to real-time distributed vision processing. With potentially thousands of cameras, hundreds of gigabytes of data per second would need to be transferred to the cloud, saturating the network's bandwidth. More importantly, vision applications are inherently latency-critical, with a high demand for real-time scene analysis (for example, feature extraction and object tracking). To meet latency requirements, computation, including both the processing of raw video streams to identify objects and analytics on the resulting data, needs to be brought to the edge of the network. While object recognition may be done locally at the end node (next to the camera), vision analytics requires access to data generated across different nodes. For example, a subject of interest may need to be tracked across multiple cameras to identify the nature of its activities. This creates the need for a low-latency distributed data store, implemented at the edge and communicating over a dynamic (most often wireless) network. Moreover, the data store must cope with the limited storage available at the end nodes (typically gigabytes). Additionally, privacy and security are prime concerns in the design of such a distributed edge store.
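The data flow implied by the abstract (local object recognition at each camera node, bounded local storage, and cross-node queries such as tracking one subject across several cameras) can be illustrated with a small, purely hypothetical sketch. The names here (Detection, EdgeNode, track_subject) and the in-process query are assumptions made for illustration only; they are not the poster's actual datastore design, which would operate over a wireless network between nodes.

```python
# Hypothetical sketch, not the authors' system: a toy per-camera edge node with a
# bounded local store of detection records, plus a naive cross-node query that
# tracks one subject across cameras.
from collections import deque
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    subject_id: str   # identity assigned by local object recognition
    camera_id: str    # end node (camera) that produced the detection
    timestamp: float  # seconds since epoch


class EdgeNode:
    """One end node: keeps only the most recent detections (limited storage)."""

    def __init__(self, camera_id: str, capacity: int = 10_000):
        self.camera_id = camera_id
        self._store = deque(maxlen=capacity)  # oldest records are evicted automatically

    def put(self, detection: Detection) -> None:
        self._store.append(detection)

    def query(self, subject_id: str) -> List[Detection]:
        return [d for d in self._store if d.subject_id == subject_id]


def track_subject(nodes: List[EdgeNode], subject_id: str) -> List[Detection]:
    """Cross-node analytics: gather a subject's detections from every camera and
    order them in time; a real edge datastore would do this over the network."""
    hits = [d for node in nodes for d in node.query(subject_id)]
    return sorted(hits, key=lambda d: d.timestamp)


if __name__ == "__main__":
    cam1, cam2 = EdgeNode("cam-1"), EdgeNode("cam-2")
    cam1.put(Detection("subject-42", "cam-1", 100.0))
    cam2.put(Detection("subject-42", "cam-2", 130.0))
    for d in track_subject([cam1, cam2], "subject-42"):
        print(f"{d.timestamp}: {d.subject_id} seen at {d.camera_id}")
```

The bounded deque stands in for the limited end-node storage mentioned in the abstract; how eviction, replication, and secure cross-node access would actually be handled is exactly the design space the poster targets.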