
Latest publications from the 2015 Signal Processing and Intelligent Systems Conference (SPIS)

An improved DV-Hop localization algorithm in wireless sensor networks
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422331
Mohaddeseh Peyvandi, A. Pouyan
Localization is a fundamental and challenging issue in wireless sensor networks (WSNs). Many approaches have been proposed to address inaccurate node localization. Among range-free algorithms, DV-Hop (Distance Vector-Hop) is a well-known localization algorithm that uses hop-distance estimation to locate sensor nodes; because it relies on hop-distance estimation, its positioning accuracy is limited. In this paper, an improved DV-Hop algorithm based on hop-size correction and localization optimization is put forward. First, an effective hop size is calculated for the whole network based on the difference between the actual and estimated distances among reference nodes; second, a correction value is added to the hop counts between unknown nodes and reference nodes, and the received signal strength indicator (RSSI) value is used to correct the single-hop distance. Finally, the Levenberg-Marquardt algorithm is applied to estimate an optimized position for each sensor. In the evaluation step, various factors that affect the localization accuracy of DV-Hop are investigated. Simulation results show that the proposed algorithm significantly improves on the basic DV-Hop and several existing improved algorithms.
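For illustration, the Python sketch below walks through the generic DV-Hop pipeline that this abstract builds on: compute an effective hop size from anchor-to-anchor distances and hop counts, turn hop counts into distance estimates, and refine the unknown node's position with the Levenberg-Marquardt method (here via scipy). The toy anchor layout and hop counts are assumptions, and the paper's hop-count and RSSI corrections are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed toy network: anchor (reference node) positions and hop counts.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
anchor_hops = np.array([[0, 2, 2],          # hop counts between anchors
                        [2, 0, 3],
                        [2, 3, 0]])
hops_to_unknown = np.array([2, 2, 3])       # hops from each anchor to the unknown node

# Effective hop size: total true anchor-to-anchor distance over total hop count.
true_dist = np.linalg.norm(anchors[:, None] - anchors[None, :], axis=-1)
hop_size = true_dist.sum() / anchor_hops.sum()

# Hop counts converted to estimated anchor-to-unknown distances.
est_dist = hop_size * hops_to_unknown

# Levenberg-Marquardt refinement of the unknown node's coordinates.
residuals = lambda p: np.linalg.norm(anchors - p, axis=1) - est_dist
estimate = least_squares(residuals, anchors.mean(axis=0), method="lm")
print(estimate.x)
```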
{"title":"An improved DV-Hop localization algorithm in wireless sensor networks","authors":"Mohaddeseh Peyvandi, A. Pouyan","doi":"10.1109/SPIS.2015.7422331","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422331","url":null,"abstract":"Localization as a fundamental issue has a challenge in wireless sensor networks (WSNs). Many approaches have been proposed to solve the inaccurate node localization. Among range-free algorithms, DV-Hop (Distance Vector-hop) is a well-known localization algorithm that utilizes of hop-distance estimation to locate sensor nodes. This has lead positioning accuracy is limited. In this paper, an improved DV-HOP algorithm based on the hop-size correction and localization optimization is put forward. Firstly, based on difference of actual and estimated distance between reference nodes an effective hop-size is calculated for whole network; secondly, a correction value is added to the hops between unknown nodes and reference nodes while received signal strength indicator (RSSI) value is used to correct the distance of single hop. Finally, the Levenberg-Marquardt algorithm is applied to estimate an optimize position for each sensors. In evaluation step, various factors that affect the localization accuracy of the DV-Hop are investigated. Simulation results show that the proposed algorithm has been significantly improved compared to the basic DV-Hop and some existing improved algorithms.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115448426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
Toward a robust and secure echo steganography method based on parameters hopping
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422329
Hamzeh Ghasemzadeh, M. Kayvanrad
Echo hiding methods have good perceptual quality and are robust to intentional and unintentional modifications. Unfortunately, these methods are not quite transparent and are not suitable for steganography applications. This point became more obvious after a recent steganalysis investigation in which both the parameters and the hidden message were extracted accurately. This work tries to alleviate the problem by introducing variable parameters into echo hiding methods. The system is tested in both active and passive warden scenarios. Comparing the conventional and proposed methods shows that, for an embedding strength of 0.2, the proposed method decreases detection of the echo method by 24.2% and increases its robustness to echo attacks by 16.17%.
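As a rough illustration of parameter hopping in echo hiding, the sketch below embeds one bit per segment as a faint echo whose delay pair is drawn from a key-seeded generator, and reads the bit back from the real cepstrum. The segment length, delay range, and cepstral detector are assumptions rather than the authors' scheme; only the embedding strength of 0.2 matches the reported experiments.

```python
import numpy as np

rng = np.random.default_rng(42)              # stands in for a shared secret key
seg = 2048
cover = rng.standard_normal(4 * seg)         # stand-in for a speech/audio signal
bits = [1, 0, 1, 1]
alpha = 0.2                                  # embedding strength, as in the reported experiments

stego, delays = cover.copy(), []
for i, b in enumerate(bits):
    d0 = int(rng.integers(50, 80))           # key-dependent ("hopped") delay pair for this segment
    delays.append((d0, d0 + 30))
    d = d0 if b else d0 + 30
    s = slice(i * seg, (i + 1) * seg)
    chunk = cover[s]
    echo = np.concatenate([np.zeros(d), chunk[:-d]])
    stego[s] = chunk + alpha * echo          # add a faint delayed copy of the segment

def detect(chunk, d_one, d_zero):
    """Decide the bit from real-cepstrum peaks at the two candidate delays."""
    ceps = np.fft.ifft(np.log(np.abs(np.fft.fft(chunk)) + 1e-12)).real
    return 1 if ceps[d_one] > ceps[d_zero] else 0

decoded = [detect(stego[i * seg:(i + 1) * seg], *delays[i]) for i in range(len(bits))]
print(bits, decoded)                         # decoded bits match the embedded ones
```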
{"title":"Toward a robust and secure echo steganography method based on parameters hopping","authors":"Hamzeh Ghasemzadeh, M. Kayvanrad","doi":"10.1109/SPIS.2015.7422329","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422329","url":null,"abstract":"Echo hiding methods have good perceptual quality and they are robust to intentional and unintentional modifications. Unfortunately these methods are not quite transparent and are not suitable for steganography applications. Specifically, this point became more obvious after a recent steganalysis investigation where both parameters and the hidden message were extracted accurately. This work tries to alleviate this problem by introducing variable parameters into echo hiding methods. The system is tested in both active and passive warden scenarios. Comparing results of conventional and the proposed method shows that for embedding strength of 0.2, the proposed method decreases detection of echo method by 24.2% and increases its robustness to echo attacks by 16.17%.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116962995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 15
Brain extraction: A region based histogram analysis strategy
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422305
H. Khastavaneh, H. Ebrahimpour-Komleh
Brain extraction is the task of removing non-brain tissues from brain magnetic resonance images. It is a preprocessing step in many applications related to brain image analysis. Accurate extraction of brain tissue is a laborious task, so automatic extraction is needed in many applications. In this study we propose an automatic region-based brain extraction method. In this method the histogram of each region is analyzed independently, and the parameters of each tissue type are estimated using the expectation-maximization algorithm. The estimated parameters of each tissue type, including its mean and variance, are used to determine the tissues of interest; in this study the tissues of interest are gray matter and white matter. Finally, a connected-component analysis selects the largest connected components of the tissues of interest as the brain mask. The proposed method is tested on the BrainWeb dataset. The Jaccard similarity index (J), Dice similarity coefficient (DSC), sensitivity (Sen), and specificity (Spec) are used to measure the performance of the proposed method. The results are compared to three popular brain extraction methods, namely the hybrid watershed algorithm (HWA), the brain extraction tool (BET), and the brain surface extractor (BSE). The proposed method outperforms these popular methods.
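A minimal sketch of the region-based idea, assuming a synthetic image, a fixed 2x2 region grid, and a two-component mixture per region: EM (via a Gaussian mixture) estimates per-component means and variances in each region, the brighter component stands in for the tissue of interest, and the largest connected component becomes the mask.

```python
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
img = rng.normal(30, 5, (128, 128))                 # dark background intensities
img[32:96, 32:96] = rng.normal(120, 10, (64, 64))   # bright "brain-like" blob

mask = np.zeros(img.shape, dtype=bool)
for r in range(0, 128, 64):                          # analyse each region independently
    for c in range(0, 128, 64):
        patch = img[r:r + 64, c:c + 64].reshape(-1, 1)
        gm = GaussianMixture(n_components=2, random_state=0).fit(patch)   # EM per region
        labels = gm.predict(patch).reshape(64, 64)
        tissue = int(np.argmax(gm.means_.ravel()))   # brighter component = tissue of interest here
        mask[r:r + 64, c:c + 64] = labels == tissue

# Keep only the largest connected component as the final mask.
comp, n = ndimage.label(mask)
sizes = ndimage.sum(mask, comp, index=np.arange(1, n + 1))
brain_mask = comp == (1 + int(np.argmax(sizes)))
print(int(brain_mask.sum()), "pixels kept")
```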
{"title":"Brain extraction: A region based histogram analysis strategy","authors":"H. Khastavaneh, H. Ebrahimpour-Komleh","doi":"10.1109/SPIS.2015.7422305","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422305","url":null,"abstract":"Brain extraction is the task of removing non-brain tissues from brain magnetic resonance images. Brain extraction is a preprocessing step in many applications related to the brain image analysis. Accurate extraction of brain tissue is a laborious task. So, automatic extraction of it is a need in many applications. In this study we propose an automatic region based brain extraction method. In this method histogram of each region is independently analyzed and parameters relating to each tissue type is estimated by employing expectation-maximization algorithm. The estimated parameters of each tissue type including its mean and variance are used to determine tissues of interests. In this study tissues of interest are gray matter and white mater. Eventually a connected component analysis leads to select largest connected components of tissues of interest as brain mask. The proposed method is tested on BrainWeb dataset. Jaccard similarity index (J), Dice similarity coefficient (DSC), Sensitivity (Sen), and Specificity (Spec) are used to measure performance of the proposed method. The results are compared to three popular brain extraction methods namely hybrid watershed algorithm (HWA), brain extraction tools (BET), and brain surface extractor (BSE). The proposed method outperforms mentioned popular methods.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124139088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Traffic sign recognition using an extended bag-of-features model with spatial histogram
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422338
Mahsa Mirabdollahi Shams, H. Kaveh, R. Safabakhsh
Traffic sign recognition (TSR) is a major challenge for intelligent transport systems. In this paper, we present a multiclass traffic sign recognition system based on the Bag-of-Words (BOW) model. Despite the huge success of the BOW method, ignoring spatial information is a weakness of this model and affects classification accuracy. We propose a spatial histogram for traffic signs that preserves the required spatial information. In addition, we use an extended codebook construction method to extract key features from all sign categories efficiently, and achieve a recognition rate of 88.02% across 62 sign types with a short execution time.
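The key extension over plain BOW can be sketched as follows: histogram the visual words inside each spatial cell and concatenate the cells, so the final descriptor encodes where features occur as well as which ones. The codebook size, the 2x2 grid, and the random word map are illustrative assumptions, not the paper's descriptors or codebook construction method.

```python
import numpy as np

def spatial_bow_histogram(word_map, n_words, grid=(2, 2)):
    """word_map: 2-D array of visual-word indices assigned to image patches."""
    h, w = word_map.shape
    gh, gw = grid
    hists = []
    for i in range(gh):
        for j in range(gw):
            cell = word_map[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw]
            hist = np.bincount(cell.ravel(), minlength=n_words).astype(float)
            hists.append(hist / max(hist.sum(), 1))      # per-cell normalised BOW histogram
    return np.concatenate(hists)                         # keeps "where" as well as "what"

rng = np.random.default_rng(1)
word_map = rng.integers(0, 50, size=(16, 16))             # assumed 16x16 patch grid, 50-word codebook
print(spatial_bow_histogram(word_map, n_words=50).shape)  # (2*2*50,) = (200,)
```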
{"title":"Traffic sign recognition using an extended bag-of-features model with spatial histogram","authors":"Mahsa Mirabdollahi Shams, H. Kaveh, R. Safabakhsh","doi":"10.1109/SPIS.2015.7422338","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422338","url":null,"abstract":"Traffic sign recognition (TSR) is a major challenging task for intelligent transport systems. In this paper, we present a multiclass traffic sign recognition system based on the Bag-of-Word (BOW) model. Despite huge success of BOW method, ignoring the spatial information is a weakness of this model and affects accuracy of classification. We have proposed a Spatial Histogram for traffic signs that preserves the required spatial information. In addition, we used an extended codebook construction method to extract key features from all of sign categories efficiently and achieved a recognition rate of %88.02 through 62 sign types with a short execution time.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130127803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
A new method for traffic density estimation based on topic model
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422323
Razie Kaviani, P. Ahmadi, I. Gholampour
Traffic density estimation plays an integral role in intelligent transportation systems (ITS), providing important information for signal control and effective traffic management. In this paper, we present a new framework for traffic density estimation based on a topic model, which is an unsupervised model. This framework uses a set of visual features, without any need for individual vehicle detection and tracking, and automatically discovers motion patterns in traffic scenes using the topic model. The likelihood value assigned to each video clip then enables us to estimate its traffic density. Results on a standard dataset show the high classification performance of the proposed approach and its robustness to typical environmental and illumination conditions.
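As an illustration of the clip-level likelihood idea, the sketch below represents each clip as a bag of visual-word counts, fits a topic model (LDA here, one common choice) to typical clips, and scores new clips by their approximate log-likelihood under the learned motion patterns. The random count matrices are assumptions, and the final mapping from likelihood to a density class is left out.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_words = 100
train = rng.poisson(3.0, size=(200, n_words))    # visual-word counts of typical traffic clips
test = rng.poisson(3.0, size=(5, n_words))
test[0] = rng.poisson(12.0, size=n_words)        # one clip with very different motion statistics

lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(train)

# Approximate per-clip log-likelihood under the learned motion-pattern topics.
loglik = np.array([lda.score(test[i:i + 1]) for i in range(len(test))])
print(loglik.round(1))                           # clips that deviate from the model score much lower
```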
{"title":"A new method for traffic density estimation based on topic model","authors":"Razie Kaviani, P. Ahmadi, I. Gholampour","doi":"10.1109/SPIS.2015.7422323","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422323","url":null,"abstract":"Traffic density estimation plays an integral role in intelligent transportation systems (ITS), using which provides important information for signal control and effective traffic management. In this paper, we present a new framework for traffic density estimation based on topic model, which is an unsupervised model. This framework uses a set of visual features without any need to individual vehicle detection and tracking, and discovers the motion patterns automatically in traffic scenes by using topic model. Then, likelihood value allocated to each video clip enables us to estimate its traffic density. Results on a standard dataset show high classification performance of our proposed approach and robustness to typical environmental and illumination conditions.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"15 15-16","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132879304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 7
Dynamic prediction scheduling for virtual machine placement via ant colony optimization
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422321
Milad Seddigh, H. Taheri, Saeed Sharifian
Virtual machine (VM) scheduling with load balancing in cloud computing aims to allocate VMs to suitable physical machines (PMs) and balance resource usage among all of the PMs. Correct scheduling of cloud hosts requires efficient scheduling strategies that appropriately allocate VMs to physical resources. In this regard, dynamically forecasting the resource usage of each PM can improve VM scheduling. This paper combines ant colony optimization (ACO) and VM dynamic forecast scheduling (VM_DFS) into virtual machine dynamic prediction scheduling via ant colony optimization (VMDPS-ACO) to solve the VM scheduling problem. In this algorithm, the historical memory consumption of each PM is analyzed to forecast the future memory consumption of the VMs on that PM and to allocate VMs efficiently on the cloud infrastructure. We evaluated the proposed algorithm in Matlab and compared its performance with VM_DFS. The VM_DFS algorithm uses a first-fit decreasing (FFD) scheme with different orderings (i.e., queuing the list of VMs in increasing, decreasing, or random order) to schedule VMs and assign them to suitable PMs. We evaluated the proposed algorithm in both homogeneous and heterogeneous modes. The results indicate that VMDPS-ACO produces lower resource wastage than VM_DFS in both homogeneous and heterogeneous modes, and better load balancing among PMs.
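The sketch below shows an ACO-style placement loop in the spirit of the description above: ants assign VMs, each with a forecast memory demand, to PMs; pheromone accumulates on VM-to-PM pairings that appear in good solutions; and a solution is scored by the leftover capacity of powered-on PMs. The problem sizes, ACO parameters, and wastage metric are assumptions, not the VMDPS-ACO implementation, and the memory forecasting step itself is not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
vm_mem = rng.uniform(1, 4, 12)                   # forecast memory demand of each VM (GB)
pm_cap = np.full(4, 16.0)                        # memory capacity of each PM (GB)
tau = np.ones((len(vm_mem), len(pm_cap)))        # pheromone on (VM, PM) pairings
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.1, 10, 30

def wastage(assign):
    used = np.array([vm_mem[assign == j].sum() for j in range(len(pm_cap))])
    if np.any(used > pm_cap):
        return np.inf                            # infeasible placement
    return (pm_cap - used)[used > 0].sum()       # leftover capacity on powered-on PMs

best, best_cost = None, np.inf
for _ in range(n_iters):
    for _ in range(n_ants):
        free = pm_cap.copy()
        assign = np.empty(len(vm_mem), dtype=int)
        for v in np.argsort(-vm_mem):            # place memory-hungry VMs first (FFD-style order)
            eta = np.where(free >= vm_mem[v], 1.0 / (free - vm_mem[v] + 0.1), 1e-9)
            prob = (tau[v] ** alpha) * (eta ** beta)
            assign[v] = rng.choice(len(pm_cap), p=prob / prob.sum())
            free[assign[v]] -= vm_mem[v]
        cost = wastage(assign)
        if cost < best_cost:
            best, best_cost = assign, cost
    tau *= (1 - rho)                             # pheromone evaporation
    if best is not None:
        tau[np.arange(len(vm_mem)), best] += 1.0 / (1.0 + best_cost)

print(best, round(float(best_cost), 2))
```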
{"title":"Dynamic prediction scheduling for virtual machine placement via ant colony optimization","authors":"Milad Seddigh, H. Taheri, Saeed Sharifian","doi":"10.1109/SPIS.2015.7422321","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422321","url":null,"abstract":"Virtual machine (VM) scheduling with load balancing in cloud computing aims to allocate VMs to suitable physical machines (PM) and balance the resource usage among all of the PMs. Correct scheduling of cloud hosts is necessary to develop efficient scheduling strategies to appropriately allocate VMs to physical resources. In this regard the use of dynamic forecast of resource usage in each PM can improve the VM scheduling problem. This paper combines ant colony optimization (ACO) and VM dynamic forecast scheduling (VM_DFS), called virtual machine dynamic prediction scheduling via ant colony optimization (VMDPS-ACO), to solve the VM scheduling problem. In this algorithm through analysis of historical memory consumption in each PM, future memory consumption forecast of VMs on that PM and the efficient allocation of VMs on the cloud infrastructure is performed. We experimented the proposed algorithm using Matlab. The performance of the proposed algorithm is compared with VM_DFS. VM_DFS algorithm exploits first fit decreasing (FFD) scheme using corresponding types (i.e. queuing the list of VMs increasingly, decreasingly or randomly) to schedule VMs and assign them to suitable PMs. We experimented the proposed algorithm in both homogeneous and heterogeneous mode. The results indicate, VMDPS-ACO produces lower resource wastage than VM_DFS in both homogenous and heterogeneous modes and better load balancing among PMs.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126596798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 28
Design an intelligent curve to reduce the accident rate
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422337
Negin Massoudian, M. Eshghi
Since unsafe speed and drifting to the left on road curves with limited visibility may cause accidents, this paper proposes a solution to detect and prevent such accidents in order to achieve a safe trip. This requires the geometric properties of the road as well as detection of the speed and location of the car. To do this, the geometric properties of mountainous roads are first extracted; then, using inductive loop detector sensors and the instantaneous speed and location data of vehicles on the road, the probability of an accident is predicted. In the next step, accidents are prevented by warning drivers and reducing vehicle speed using automatic physical barriers along the route. Finally, the costs of making the roads intelligent are investigated. The results show that the designed intelligent system achieves a remarkable reduction in the probability of accidents.
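The abstract does not spell out its decision rule, so the sketch below shows one plausible check under stated assumptions: compare a vehicle's instantaneous speed from the loop detectors against the curve's safe speed derived from its geometry via the standard horizontal-curve relation v^2 = g R (e + f), and trigger the warning or barrier when it is exceeded. The radius, superelevation, and side-friction values are illustrative.

```python
import math

def safe_speed_kmh(radius_m, superelevation=0.06, side_friction=0.15, g=9.81):
    """Standard horizontal-curve relation v^2 = g * R * (e + f), returned in km/h."""
    return 3.6 * math.sqrt(g * radius_m * (superelevation + side_friction))

curve_radius = 80.0                       # metres, from the extracted road geometry (assumed)
for speed_kmh in (45.0, 70.0):            # instantaneous speeds from the loop detectors (assumed)
    if speed_kmh > safe_speed_kmh(curve_radius):
        print(f"{speed_kmh} km/h: unsafe, warn driver and deploy barrier")
    else:
        print(f"{speed_kmh} km/h: within the safe speed for this curve")
```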
{"title":"Design an intelligent curve to reduce the accident rate","authors":"Negin Massoudian, M. Eshghi","doi":"10.1109/SPIS.2015.7422337","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422337","url":null,"abstract":"Since unsafe speed and deviation to the left side in road curve with limited Visibility, may cause accidents to happen, a solution to detect and prevent such accidents has been proposed in this paper in order to achieve a safe trip. It is necessary to have the geometric properties of the road, and detect the speed and location of the car. To do this, initially the geometric properties of mountainous roads are extracted and then using inductive loop detectors sensors, and instant speed and location data of vehicles on the road, probability of accident will be predicted. In next step, by warning drivers and reducing the vehicle's speed using automatic physical barriers used along the route, accidents will be prevented. Finally the costs of making the roads intelligent will be investigated. The results show that the designed intelligent system has succeeded to achieve remarkable reduction in probable.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134107250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
User-friendly visual secret sharing based on random grids
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422312
S. Paknahad, S. A. Hosseini, Mahdi R. Alaghband
A user-friendly visual secret sharing scheme without pixel expansion is presented based on random grids. Since noise-like shares are not user-friendly, a method for producing meaningful shares is proposed in order to simplify mass data management. First, the distribution of black and white pixels in the shared images and the stacked image is analyzed; then a probability allocation is proposed that can control the quality of the produced shared images and the stacked image. In former methods there was a quality tradeoff between the meaningful shares and the stacked image, but the proposed method increases the flexibility of this tradeoff. Moreover, the proposed visual secret sharing scheme mitigates the former inability to adjust the visual quality. The suggested method is evaluated and compared to other schemes.
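For background, the sketch below shows the classic (2, 2) random-grid construction that such user-friendly schemes build on: one share is pure noise, the other copies or flips it pixel by pixel according to the secret, and stacking (a logical OR of black pixels) reveals the secret. The probability-allocation step that makes the shares meaningful is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
secret = np.zeros((64, 64), dtype=bool)          # False = white, True = black
secret[16:48, 16:48] = True                      # a simple black square as the secret

share1 = rng.integers(0, 2, secret.shape).astype(bool)   # purely random grid
share2 = np.where(secret, ~share1, share1)                # flip the pixel wherever the secret is black

stacked = share1 | share2                        # stacking transparencies = OR of black pixels
print(stacked[secret].mean())                    # 1.0: the secret area turns fully black
print(stacked[~secret].mean())                   # ~0.5: the background stays noise-like
```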
{"title":"User-friendly visual secret sharing based on random grids","authors":"S. Paknahad, S. A. Hosseini, Mahdi R. Alaghband","doi":"10.1109/SPIS.2015.7422312","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422312","url":null,"abstract":"A user-friendly visual secret sharing scheme without pixel expansion is presented based on random grids. Since noise like shares are not user-friendly, a meaningful shares producing method would be proposed in order to simplify mass data management. Firstly, black and white pixels distribution in shared images and stack image will be analyzed, then a probability allocation will be proposed which has ability to control the quality of produced shared images and stack image. In former methods there was a quality tradeoff between meaningful shares and stack image, but the proposed method increases tradeoff flexibility. Moreover the inability to adjust the visual quality reduced by the proposed visual secret sharing scheme. The suggested method will be checked and compared to other schemes.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115253169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
A novel compressed sensing DOA estimation using difference set codes
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422330
Iman Taghavi, M. Sabahi, F. Parvaresh, M. Mivehchy
In this paper, we address the problem of direction-of-arrival (DOA) estimation using a novel spatial sampling scheme based on difference set (DS) codes, called DS-spatial sampling. It is shown that the proposed DS-spatial sampling scheme can be modeled by a deterministic dictionary with minimum coherence. We also develop a low-complexity compressed sensing (CS) model for DOA estimation. The proposed methods can reduce the number of array elements as well as the number of receivers. Compared with the conventional DOA estimation algorithm, the proposed sampling and processing method achieves significantly higher resolution.
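A minimal sketch of grid-based compressed-sensing DOA estimation with a sparsely sampled array: the measurement is modelled as y = As, where A is a dictionary of steering vectors on an angle grid and s is sparse, and s is recovered here with a small orthogonal matching pursuit loop. The element positions merely stand in for a difference-set layout, and the grid, noise level, and recovery method are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sensors = np.array([0, 1, 2, 4, 7])              # element positions (half-wavelength units), sparse layout
grid = np.deg2rad(np.arange(-90, 91, 1.0))       # DOA search grid
A = np.exp(1j * np.pi * np.outer(sensors, np.sin(grid)))   # steering-vector dictionary

true_doas = np.deg2rad([-20.0, 35.0])
y = np.exp(1j * np.pi * np.outer(sensors, np.sin(true_doas))).sum(axis=1)
y += 0.05 * (rng.standard_normal(len(sensors)) + 1j * rng.standard_normal(len(sensors)))

# Tiny orthogonal matching pursuit: greedily pick the atoms that best explain the residual.
support, resid = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

print(np.rad2deg(grid[sorted(support)]))         # should recover angles near -20 and 35 degrees
```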
{"title":"A novel compressed sensing DOA estimation using difference set codes","authors":"Iman Taghavi, M. Sabahi, F. Parvaresh, M. Mivehchy","doi":"10.1109/SPIS.2015.7422330","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422330","url":null,"abstract":"In this paper, we address the problem of direction-of-arrival (DOA) estimation using a novel spatial sampling scheme based on difference set (DS) codes, called DS-spatial sampling. It is shown that the proposed DS-spatial sampling scheme can be modeled by a deterministic dictionary with minimum coherence. We also develop a low complexity compressed sensing (CS) model for DOA estimation. The proposed methods can reduce the number of array elements as well as the number of receivers. Compared with the conventional DOA estimation algorithm, the proposed sampling and processing method can achieve significantly higher resolution.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116000948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
License plate recognition based on edge histogram analysis and classifier ensemble
Pub Date : 2015-12-01 DOI: 10.1109/SPIS.2015.7422310
M. Nejati, A. Majidi, Morteza Jalalat
In this paper, a new approach for Iranian vehicle license plate recognition (LPR) is proposed. The proposed algorithm contains four main steps: plate localization, segmentation, recognition, and fusion of multiple recognition results. License plate localization begins with preprocessing for down-sampling, denoising, and histogram equalization. Then, a histogram of vertical edges is used to detect candidate lines expected to contain the license plate. In this step, we design a filter to reduce the number of false line candidates. The candidate plates are then extracted using the vertical projection histogram of edges and the aspect-ratio characteristic. The binary image of these candidates, obtained by locally adaptive thresholding, is passed to the segmentation and recognition modules. Our recognition method uses a classifier ensemble with a mixture-of-experts architecture. Using feedback from the recognition results of the candidate plates, the true candidate is detected. To improve recognition accuracy and robustness, we apply the proposed LPR to three consecutive frames of a vehicle captured with different exposure times and then combine their recognition outputs. Experimental results in practical day and night situations show that the proposed approach achieves 95.39% accuracy in plate localization and 92.45% overall accuracy after recognition.
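The first localization step can be illustrated as follows: plate regions are rich in vertical edges, so image rows containing a plate produce a clear peak in the row-wise histogram of vertical edge strength. The synthetic image and the fixed threshold below are assumptions; the candidate filter and the later segmentation and recognition stages are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(120, 2, (240, 320))              # smooth background
img[150:170, 80:240] = np.where(rng.random((20, 160)) < 0.4, 20, 230)   # high-contrast "plate" band

vertical_edges = np.abs(np.diff(img, axis=1))     # horizontal gradient = vertical edge strength
row_hist = vertical_edges.sum(axis=1)             # edge-strength histogram over image rows

threshold = row_hist.mean() + 2 * row_hist.std()
candidate_rows = np.flatnonzero(row_hist > threshold)
print(candidate_rows.min(), candidate_rows.max()) # ~150..169, the band containing the plate
```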
{"title":"License plate recognition based on edge histogram analysis and classifier ensemble","authors":"M. Nejati, A. Majidi, Morteza Jalalat","doi":"10.1109/SPIS.2015.7422310","DOIUrl":"https://doi.org/10.1109/SPIS.2015.7422310","url":null,"abstract":"In this paper, a new approach for Iranian vehicle license plate recognition (LPR) is proposed. The proposed algorithm contains four main steps including plate localization, segmentation, recognition, and fusion of multiple recognition results. The license plate localization is begun with some preprocessing for down-sampling, denoising and histogram equalization. Then, histogram of vertical edges is used for detection of candidate lines expected to contain the license plate. In this step, we design a filter in order to reduce the number of false line candidates. The candidate plates are then extracted using vertical projection histogram of edges and aspect ratio characteristic. The binary image of these candidates obtained by locally adaptive thresholding is transmitted to the segmentation and recognition modules. Our recognition method is accomplished using a classifier ensemble with mixture of experts architecture. Using a feedback from the recognition result of candidate plates, the true candidate is detected. To improve the recognition accuracy and robustness, we apply the proposed LPR on three consecutive frames of a vehicle captured by different exposure times and then combine their recognition outputs. The experimental results in practical situations of day and night show that the proposed approach leads to 95.39% accuracy in plate localization and 92.45% overall accuracy after recognition.","PeriodicalId":424434,"journal":{"name":"2015 Signal Processing and Intelligent Systems Conference (SPIS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122250098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 16