
2017 International Conference on Systems, Signals and Image Processing (IWSSIP): Latest Publications

An analysis of the applicability of the TFD IP option for QoS assurance of multiple video streams in a congested network
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965597
R. Chodorek, A. Chodorek
The Traffic Flow Description (TFD) option is an experimental option of the IP protocol, designed by the authors, which provides signaling for QoS purposes. The option is used as a carrier of knowledge about forthcoming traffic. If planning horizons are short enough, this knowledge can be used for dynamic bandwidth allocation. In this paper, an analysis of QoS assurance using the TFD option is presented. The analysis was made for the case of QoS protection of multiple video streams. Results show that dynamic bandwidth allocation using the TFD option gives better QoS-related parameters than the typical approach to QoS assurance based on the RSVP protocol.
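As a toy illustration of how a short-horizon traffic forecast can drive dynamic bandwidth allocation, the sketch below splits link capacity among flows in proportion to their declared forthcoming traffic. The TFDForecast fields and the proportional-scaling rule are assumptions made for illustration; they are not the authors' protocol or allocation algorithm.

```python
# Minimal sketch of forecast-driven bandwidth allocation (illustrative only;
# the TFDForecast structure and the proportional rule are assumptions).
from dataclasses import dataclass

@dataclass
class TFDForecast:
    flow_id: str
    expected_bits_next_interval: float  # declared forthcoming traffic

def allocate_bandwidth(forecasts, link_capacity_bps, interval_s=1.0):
    """Split link capacity among flows in proportion to their declared demand."""
    demands = {f.flow_id: f.expected_bits_next_interval / interval_s for f in forecasts}
    total = sum(demands.values())
    if total <= link_capacity_bps:
        return demands                       # every flow gets what it announced
    scale = link_capacity_bps / total        # congested: scale down proportionally
    return {fid: d * scale for fid, d in demands.items()}

if __name__ == "__main__":
    forecasts = [TFDForecast("video-1", 6e6), TFDForecast("video-2", 4e6),
                 TFDForecast("video-3", 5e6)]
    print(allocate_bandwidth(forecasts, link_capacity_bps=10e6))
```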
Citations: 2
Hierarchical co-segmentation of 3D point clouds for indoor scene
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965590
Yan-Ting Lin
Segmentation of point clouds has been studied under a variety of scenarios. However, the segmentation of scanned point clouds of a cluttered indoor scene remains significantly challenging due to noisy and incomplete data, as well as scene complexity. Based on the observation that objects in an indoor scene vary largely in scale but are typically supported by planes, we propose a co-segmentation approach. This technique utilizes the mutual agency between point clouds captured at different times, after the objects' poses have changed due to human actions. Hence, we hierarchically segment the scenes from different times into patches and generate tree structures to store their relations. By iteratively clustering patches and co-analyzing them based on the relations between patches, we modify the tree structures and generate our results. To test the robustness of our method, we evaluate it on imperfectly scanned point clouds from a child's room, a bedroom, and two office scenes.
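The sketch below only illustrates the generic "split the cloud into patches and keep them in a tree" idea mentioned in the abstract, using a simple recursive split along the axis of largest spread; it is not the paper's co-segmentation algorithm, and PatchNode/build_patch_tree are hypothetical names.

```python
# Illustrative patch hierarchy over a point cloud (not the paper's method).
import numpy as np

class PatchNode:
    def __init__(self, points, children=None):
        self.points = points            # (N, 3) array of 3D points
        self.children = children or []  # sub-patches

def build_patch_tree(points, min_points=50, depth=0, max_depth=4):
    node = PatchNode(points)
    if len(points) <= min_points or depth >= max_depth:
        return node
    axis = int(np.argmax(points.var(axis=0)))   # split along the widest axis
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) and len(right):
        node.children = [build_patch_tree(left, min_points, depth + 1, max_depth),
                         build_patch_tree(right, min_points, depth + 1, max_depth)]
    return node

def iter_leaves(node):
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from iter_leaves(child)

if __name__ == "__main__":
    cloud = np.random.rand(2000, 3)             # stand-in for a scanned scene
    root = build_patch_tree(cloud)
    print("leaf patches:", sum(1 for _ in iter_leaves(root)))
```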
Citations: 2
CrowdSync: User generated videos synchronization using crowdsourcing
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965599
R. M. C. Segundo, M. N. Amorim, Celso A. S. Santos
User Generated Videos are contents created by heterogeneous users around an event. Each user films the event from his own point of view and within his own limitations. In this scenario, it is impossible to guarantee that all the videos will be stable, focused on the same point of the event, or have other characteristics that make the automatic video synchronization process possible. Focusing on this scenario, we propose the use of crowdsourcing techniques for video synchronization (CrowdSync). The crowd is not affected by heterogeneous videos the way automatic processes are, so it can be used to process the videos and find the synchronization points. In order to make this process possible, a structure is described that can manage both the crowd and the video synchronization: the Dynamic Alignment List (DAL). We carried out two experiments to verify that the crowd can perform the proposed approach: a crowd simulator and a small task-based experiment.
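A minimal sketch of how crowd answers about pairwise synchronization points could be collected and aggregated is given below. The DynamicAlignmentList API and the median-vote aggregation are illustrative assumptions; the paper's DAL is described only at a high level in this abstract.

```python
# Illustrative crowd-answer aggregation for pairwise video offsets.
from collections import defaultdict
from statistics import median

class DynamicAlignmentList:
    def __init__(self):
        # offsets[(a, b)] holds crowd answers: "video b starts x seconds after a"
        self.offsets = defaultdict(list)

    def add_answer(self, video_a, video_b, offset_seconds):
        self.offsets[(video_a, video_b)].append(offset_seconds)

    def consensus(self):
        """Median of the crowd answers for every video pair (robust to outliers)."""
        return {pair: median(vals) for pair, vals in self.offsets.items()}

if __name__ == "__main__":
    dal = DynamicAlignmentList()
    for off in (2.1, 1.9, 2.0, 5.7):           # one outlier worker
        dal.add_answer("vid_A", "vid_B", off)
    print(dal.consensus())                      # {('vid_A', 'vid_B'): 2.05}
```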
Citations: 3
Non-contact signal detection and processing techniques for cardio-respiratory thoracic activity
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965579
D. Shahu, Ismail Baxhaku, Alban Rakipi
The possibility of monitoring cardio-respiratory activity in a non-invasive way, by measuring thoracic displacement using cost-effective microwave Doppler technology, has been studied. Several laboratory tests were performed in order to demonstrate the feasibility of the proposed electromagnetic measurement method. Time-domain signal processing techniques have been used to evaluate the main frequency and rate variability of the heartbeat and respiration signals. They have shown a very good correlation coefficient with the electrocardiogram and spirometer results, used as reference instruments. On the other hand, a simple electromagnetic model was developed in order to analyze the scattering problem, using appropriate analytical and numerical techniques.
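The abstract does not detail the processing chain beyond "time-domain techniques", so the sketch below shows one common way to obtain the two rates from a displacement signal: band-pass filtering followed by peak counting, demonstrated on a synthetic signal. All signal parameters and band edges are assumptions for illustration.

```python
# Illustrative rate estimation from a synthetic thoracic-displacement signal.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                        # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
breathing = 1.0 * np.sin(2 * np.pi * 0.25 * t)    # ~15 breaths per minute
heartbeat = 0.1 * np.sin(2 * np.pi * 1.2 * t)     # ~72 beats per minute
displacement = breathing + heartbeat + 0.02 * np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rate_per_minute(x, fs, min_period_s):
    """Count peaks separated by at least min_period_s and convert to events/min."""
    peaks, _ = find_peaks(x, distance=int(min_period_s * fs))
    return 60.0 * len(peaks) / (len(x) / fs)

resp = bandpass(displacement, 0.1, 0.6, fs)       # respiration band (assumed)
card = bandpass(displacement, 0.8, 2.5, fs)       # cardiac band (assumed)
print("respiration rate ~ %.1f per minute" % rate_per_minute(resp, fs, 1.5))
print("heart rate       ~ %.1f per minute" % rate_per_minute(card, fs, 0.4))
```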
Citations: 0
An effective consistency correction and blending method for camera-array-based microscopy imaging
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965602
J. Bao, Jingtao Fan, Xiaowei Hu, Jinnan Wang, Lei Wang
Camera-array-based microscopy imaging is an effective scheme to satisfy the requirements of wide field of view, high spatial resolution and real-time imaging simultaneously. However, with the increasing number of cameras and expansion of the field of view, the nonlinear camera response, vignetting of the camera lenses, non-uniformity of the illumination system, and the low overlapping ratio all lower the quality of microscopic image stitching and blending. In this paper, we propose an image consistency correction and blending method for 0.17-gigapixel microscopic images from a 5 × 7 camera array. First, we establish an image consistency correction model. Then, we obtain the response functions and compensation factors. Next, we restore the captured images based on the above model. Finally, we adopt an improved alpha-blending method to stitch and blend the images of multiple fields of view. Experimental results show that our proposed method effectively eliminates the inconsistency and seams among stitched images.
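As a baseline illustration of the blending step, the sketch below performs standard feathered alpha blending across a horizontal overlap between two adjacent tiles, with a linear weight ramp. It is not the paper's improved alpha-blending method or its consistency correction model.

```python
# Standard feathered alpha blending of two horizontally overlapping tiles.
import numpy as np

def blend_horizontal(left, right, overlap):
    """left, right: (H, W, C) float images sharing `overlap` columns."""
    h, w, c = left.shape
    out = np.zeros((h, 2 * w - overlap, c), dtype=left.dtype)
    out[:, :w - overlap] = left[:, :w - overlap]
    out[:, w:] = right[:, overlap:]
    # weights fall linearly from 1 to 0 for the left tile across the overlap
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    out[:, w - overlap:w] = alpha * left[:, w - overlap:] + (1 - alpha) * right[:, :overlap]
    return out

if __name__ == "__main__":
    a = np.full((64, 128, 3), 0.4)
    b = np.full((64, 128, 3), 0.6)
    print(blend_horizontal(a, b, overlap=32).shape)   # (64, 224, 3)
```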
Citations: 3
Evaluation of background noise for significance level identification
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965614
J. Poměnková, E. Klejmova, T. Malach
The paper deals with the identification of the significance level for testing the time-frequency transform of data. The usual procedure of time-frequency significance testing is based on knowledge of the background spectrum. Very often, we have certain expectations about the character of the background noise (white noise, red noise, etc.). Our paper deals with the case when the character of the noise is unknown and may not be Gaussian, despite our assumptions. Thus, we propose how to identify our own critical values for testing time-frequency transform significance with respect to the character of the data. We compare our findings with the critical quantile of the χ² distribution with two degrees of freedom.
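One way to obtain data-driven critical values, sketched below under the assumption of an AR(1) background, is to run a Monte Carlo over the normalized periodogram of surrogate noise, read off the empirical quantile, and compare it with the classical χ²(2)/2 white-noise threshold. The surrogate model, bin choice and parameters are illustrative assumptions, not the paper's procedure.

```python
# Monte Carlo critical value for spectral power under AR(1) surrogates
# versus the classical chi-square(2)/2 white-noise threshold.
import numpy as np
from scipy.signal import lfilter
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, n_surrogates, alpha, phi = 512, 2000, 0.95, 0.6
k = 10                                   # low-frequency bin under test (assumed)

def normalized_power(x):
    """Periodogram divided by n*variance; each bin has mean ~1 for white noise."""
    return np.abs(np.fft.rfft(x - x.mean()))**2 / (len(x) * x.var())

powers = []
for _ in range(n_surrogates):
    # AR(1) surrogate: x[t] = phi * x[t-1] + e[t]
    x = lfilter([1.0], [1.0, -phi], rng.standard_normal(n))
    powers.append(normalized_power(x)[k])

empirical_crit = np.quantile(powers, alpha)
white_noise_crit = chi2.ppf(alpha, df=2) / 2   # classical per-bin threshold
print(f"empirical {alpha:.0%} critical value at bin {k}: {empirical_crit:.2f}")
print(f"white-noise chi-square(2)/2 threshold: {white_noise_crit:.2f}")
```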
Citations: 0
Encoding mode selection in HEVC with the use of noise reduction
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965589
O. Stankiewicz, K. Wegner, D. Karwowski, J. Stankowski, K. Klimaszewski, T. Grajek
This paper concerns optimization of encoding in HEVC. A novel method is proposed in which the encoding modes, e.g. coding block structure, prediction types and motion vectors, are selected based on a noise-reduced version of the input sequence, while the content, e.g. transform coefficients, is coded based on the unaltered input sequence. Although the proposed scheme involves encoding two versions of the input sequence, the proposed realization ensures that the complexity is only negligibly larger than that of a single encoder. The proposal has been implemented and assessed. The experimental results show that it provides up to 1.5% bitrate reduction while preserving the same video quality.
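The principle of deciding on denoised data while coding the original can be shown on a toy intra-prediction example: the sketch below picks between two simplified prediction "modes" using a Gaussian-denoised copy of a block, then computes the residual from the unaltered block. This is a didactic stand-in, not HEVC or the authors' implementation.

```python
# Toy illustration: mode decision on a denoised block, residual from the original.
import numpy as np
from scipy.ndimage import gaussian_filter

def predict(block, left_col, mode):
    if mode == "DC":
        return np.full_like(block, left_col.mean())
    return np.repeat(left_col[:, None], block.shape[1], axis=1)  # horizontal

def choose_mode(block, left_col):
    """Pick the mode with the smallest sum of absolute prediction error."""
    costs = {m: np.abs(block - predict(block, left_col, m)).sum()
             for m in ("DC", "horizontal")}
    return min(costs, key=costs.get)

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(50, 200, 16), (16, 1)).T      # rows of constant value
original = clean + rng.normal(0, 10, clean.shape)          # noisy camera block
left_col = original[:, 0]

denoised = gaussian_filter(original, sigma=1.5)
mode = choose_mode(denoised, gaussian_filter(left_col, sigma=1.5))  # decide on denoised
residual = original - predict(original, left_col, mode)             # code from original
print("selected mode:", mode, "| residual energy:", round((residual**2).sum()))
```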
Citations: 46
On buffer overflow duration in WSN with a vacation-type power saving mechanism
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965620
W. Kempa
A queue-based model of a WSN node with a power saving mechanism described by the single vacation policy is considered. Whenever the queue of packets directed to the node becomes empty, the radio transmitter/receiver is switched off for a random, generally distributed time period. During the vacation, processing is blocked and incoming packets are buffered. Modelling the transient behavior of the node by an M/G/1/N-type system with single server vacations, the CDF (cumulative distribution function) of the buffer overflow duration, conditioned on the initial buffer state, is investigated. Applying an analytical approach based on the idea of the embedded Markov chain, integral equations and linear algebra, a compact-form representation for the CDF of the first buffer overflow duration is found. Hence, the formula for the CDF of the next such periods is derived. Moreover, probability distributions of the number of packet losses in successive buffer overflow periods are found. A numerical example is attached as well.
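The paper's results are analytical; as a rough cross-check of the modelled quantity, the sketch below simulates an M/G/1/N queue with a single server vacation (with exponential service and vacation times as one admissible choice of the general distributions) and reports the empirical distribution of full-buffer period durations. The rates, buffer size and the reading of an overflow period as a full-buffer interval are assumptions for illustration.

```python
# Event-driven simulation of M/G/1/N with a single vacation per emptying.
import math
import random

def simulate(lam=1.2, mu=1.0, theta=0.5, N=8, horizon=50000.0, seed=1):
    """Return observed durations of full-buffer periods.
    lam: arrival rate, mu: service rate, theta: vacation end rate (all assumed)."""
    rng = random.Random(seed)
    exp = lambda rate: rng.expovariate(rate)
    t, q = 0.0, 0                          # current time, packets in the system
    next_arrival = exp(lam)
    service_end = math.inf
    vacation_end = math.inf
    overflow_start, durations = None, []

    while t < horizon:
        t = min(next_arrival, service_end, vacation_end)
        if t == next_arrival:              # packet arrival
            next_arrival = t + exp(lam)
            if q < N:
                q += 1
                if q == N:
                    overflow_start = t     # buffer just became full
                if service_end == math.inf and vacation_end == math.inf:
                    service_end = t + exp(mu)   # idle (awake) server starts at once
            # else: buffer full, the packet is lost
        elif t == service_end:             # service completion
            q -= 1
            if overflow_start is not None: # buffer is no longer full
                durations.append(t - overflow_start)
                overflow_start = None
            if q > 0:
                service_end = t + exp(mu)
            else:                          # queue emptied: take a single vacation
                service_end = math.inf
                vacation_end = t + exp(theta)
        else:                              # vacation ends
            vacation_end = math.inf
            if q > 0:
                service_end = t + exp(mu)
    return durations

if __name__ == "__main__":
    d = sorted(simulate())
    print("observed overflow periods:", len(d))
    for p in (0.5, 0.9, 0.99):
        print(f"empirical CDF reaches {p} at duration {d[int(p * len(d)) - 1]:.3f}")
```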
Citations: 6
Improving matching performance of the keypoints in images of 3D scenes by using depth information
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965571
K. Matusiak, P. Skulimowski, P. Strumiłło
Keypoint detection is a basic step in many computer vision algorithms aimed at recognition of objects, automatic navigation, medicine and other application fields. Successful implementation of higher-level image analysis tasks, however, is conditioned on reliable detection of characteristic local image regions termed keypoints. A large number of keypoint detection algorithms have been proposed and verified. The main part of this work is devoted to the description of an original keypoint detection algorithm that incorporates depth information computed from stereovision cameras or other depth sensing devices. It is shown that filtering out keypoints that are context dependent, e.g. located on object boundaries, can improve the matching performance of the keypoints, which is the basis for object recognition tasks. This improvement is shown quantitatively by comparing the proposed algorithm to the widely accepted SIFT keypoint detector. Our study is motivated by the development of a system aimed at aiding the visually impaired in space perception and object identification.
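A minimal sketch of the depth-based filtering idea is given below: detect SIFT keypoints on the intensity image and discard those lying on strong depth discontinuities, which usually coincide with object boundaries. The synthetic image/depth pair and the gradient threshold are assumptions, and the sketch is not the paper's algorithm (it assumes opencv-python >= 4.4 for cv2.SIFT_create).

```python
# SIFT keypoints filtered by the local depth gradient magnitude.
import cv2
import numpy as np

def filter_keypoints_by_depth(keypoints, depth, grad_threshold=0.5):
    """Keep keypoints whose local depth gradient magnitude is small."""
    gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    grad = np.hypot(gx, gy)
    kept = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if 0 <= y < grad.shape[0] and 0 <= x < grad.shape[1] and grad[y, x] < grad_threshold:
            kept.append(kp)
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = (rng.random((240, 320)) * 255).astype(np.uint8)   # stand-in texture
    depth = np.ones((240, 320), np.float32)
    depth[:, 160:] = 3.0                       # a depth step, i.e. an object boundary
    sift = cv2.SIFT_create()
    kps = sift.detect(image, None)
    kept = filter_keypoints_by_depth(kps, depth)
    print(f"{len(kps)} keypoints detected, {len(kept)} kept after depth filtering")
```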
Citations: 1
Fast cloud image segmentation with superpixel analysis based convolutional networks
Pub Date: 2017-05-01 | DOI: 10.1109/IWSSIP.2017.7965591
Lifang Wu, Jiaoyu He, Meng Jian, Jianan Zhang, Yunzhen Zou
Due to various noise sources, cloud image segmentation is a big challenge for atmosphere prediction. A CNN is capable of learning discriminative features from complex data, but this may be quite time-consuming in pixel-level segmentation problems. In this paper we propose a superpixel-analysis-based CNN (SP-CNN) for highly efficient cloud image segmentation. SP-CNN employs superpixels obtained by image over-segmentation as basic entities to preserve local consistency. SP-CNN takes the image patches centered at representative pixels in every superpixel as input, and all superpixels are classified as cloud or non-cloud by voting of the representative pixels. This greatly reduces the computational burden of CNN learning. In order to avoid the ambiguity near superpixel boundaries, SP-CNN selects the representative pixels uniformly from the eroded superpixels. Experimental analysis demonstrates that SP-CNN achieves both effectiveness and efficiency in cloud segmentation.
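The superpixel-erosion-voting structure described above can be sketched as follows, with a simple brightness threshold standing in for the CNN patch classifier. SLIC superpixels, the erosion depth, the patch size and the stand-in classifier are illustrative assumptions (the SLIC call assumes scikit-image >= 0.19 for the channel_axis argument).

```python
# Superpixel voting pipeline with a brightness-threshold stand-in classifier.
import numpy as np
from skimage.segmentation import slic
from scipy.ndimage import binary_erosion

def classify_patch(patch):
    """Stand-in for the CNN: bright patches are treated as 'cloud'."""
    return 1 if patch.mean() > 0.5 else 0

def segment_clouds(image, n_segments=200, patch_size=15, n_reps=5, seed=0):
    rng = np.random.default_rng(seed)
    labels = slic(image, n_segments=n_segments, channel_axis=None, start_label=0)
    half = patch_size // 2
    padded = np.pad(image, half, mode="reflect")
    cloud_mask = np.zeros_like(labels)
    for sp in np.unique(labels):
        core = binary_erosion(labels == sp, iterations=2)      # drop boundary pixels
        ys, xs = np.nonzero(core if core.any() else (labels == sp))
        idx = rng.choice(len(ys), size=min(n_reps, len(ys)), replace=False)
        votes = [classify_patch(padded[ys[i]:ys[i] + patch_size,
                                       xs[i]:xs[i] + patch_size]) for i in idx]
        cloud_mask[labels == sp] = int(sum(votes) > len(votes) / 2)
    return cloud_mask

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sky = rng.random((128, 128)) * 0.3
    sky[32:96, 32:96] += 0.6                  # a bright "cloud" region
    mask = segment_clouds(np.clip(sky, 0, 1))
    print("cloud pixel fraction:", round(float(mask.mean()), 3))
```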
Citations: 3