A subspace-based Manifold separation technique for array calibration
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886006
Minqiu Chen, Xi Chen, X. Mao
In this paper, we modify the classic manifold separation technique (MST), aiming to reduce its dependence on a high signal-to-noise ratio (SNR) measurement environment. An analysis of the array response demonstrates that maintaining a correct phase relationship between the data received at different calibration angles is indispensable for the application of MST. We therefore slightly change the structure of the traditional calibration system so that a phase reference for the measurements can be obtained. Moreover, unlike the classic MST, where only a single-snapshot measurement is utilized for calibration, the proposed method exploits multi-snapshot information through a subspace decomposition technique. Simulation results verify the superiority of the proposed subspace-based calibration method in 1-D and 2-D scenarios.
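The multi-snapshot subspace step can be pictured as follows. This is a minimal NumPy sketch under our own assumptions about shapes and naming, not the paper's exact procedure:

```python
import numpy as np

def signal_subspace(snapshots, num_sources=1):
    """Extract the signal subspace from multi-snapshot calibration data.

    snapshots: complex array of shape (num_elements, num_snapshots)
    recorded at one calibration angle. Averaging over snapshots in the
    sample covariance suppresses noise, which is why a subspace-based
    variant can tolerate lower SNR than a single-snapshot measurement.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
    return eigvecs[:, -num_sources:]      # dominant (signal) eigenvectors
```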
{"title":"A subspace-based Manifold separation technique for array calibration","authors":"Minqiu Chen, Xi Chen, X. Mao","doi":"10.1109/ISSPIT.2016.7886006","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886006","url":null,"abstract":"In this paper, we modify the classic manifold separation technique (MST), aiming to reduce its dependence on high signal-to-noise ratio (SNR) measuring environment. According to the analysis of the array response, it is demonstrated that to maintain a correct phase relationship between the received data at different calibration angles is indispensable for the application of MST. Thus, we slightly change the structure of the traditional calibration system, so that a phase reference for the measurements can be obtained. Besides, unlike the classic MST, where only a single snapshot measurement is utilized for calibration, multi-snapshot information is exploited in the novel method by using the subspace decomposition technique. Simulation results verify the superiorities of the proposed subspace-based calibration method in 1-D and 2-D scenarios.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116786264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data mining using Probabilistic Grammars
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886057
Aljoharah Algwaiz, S. Rajasekaran, R. Ammar
Efficient and accurate data mining has become vital as data collection and storage technologies advance. Researchers have proposed various valuable machine learning algorithms for data mining; however, few have utilized formal methods. This paper proposes a data mining approach using Probabilistic Context Free Grammars (PCFGs). In this work we employ PCFGs to mine large heterogeneous datasets. The data mining problem of interest is classification. To start with, a probabilistic grammar is inferred from datasets for which the classifications are known. The learnt model can then be used to classify any unknown data. Specifically, for each unknown data point, the model can be used to calculate the probability that the point belongs to each of the possible classes. A simple resolution strategy is to associate the point with the class of maximum probability. To demonstrate the applicability of our approach we consider the problem of identifying splice junctions. Using a PCFG, an input DNA sequence is classified as donor, acceptor, or neither.
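The maximum-probability decision rule can be sketched in a few lines of Python; the scoring callables below are hypothetical stand-ins for the inside probabilities of trained PCFGs:

```python
def classify(sequence, class_models):
    """Return the class whose model assigns `sequence` the highest
    probability. class_models maps a label ('donor', 'acceptor',
    'neither') to a callable returning P(sequence | class)."""
    return max(class_models, key=lambda label: class_models[label](sequence))

# Toy usage with made-up scores (GT/AG are the canonical splice motifs):
models = {
    "donor":    lambda s: 0.8 if s.startswith("GT") else 0.1,
    "acceptor": lambda s: 0.8 if s.endswith("AG") else 0.1,
    "neither":  lambda s: 0.3,
}
print(classify("GTAAGT", models))  # -> donor
```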
{"title":"Data mining using Probabilistic Grammars","authors":"Aljoharah Algwaiz, S. Rajasekaran, R. Ammar","doi":"10.1109/ISSPIT.2016.7886057","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886057","url":null,"abstract":"Efficient and accurate data mining has become vital as technology advancements in data collection and storage soar. Researchers have proposed various valuable machine learning algorithms for data mining. However, not many have utilized formal methods. This paper proposes a data mining approach using Probabilistic Context Free Grammars (PCFGs). In this work we have employed PCFGs to mine from large heterogeneous datasets. The data mining problem of our interest is classification. To start with a probabilistic grammar is inferred from datasets for which classifications are known. The learnt model can then be used to classify any unknown data. Specifically, for each unknown data point, the model can be used to calculate probabilities that this point belongs to the various possible classes. A simple resolution strategy could be to associate the point with the class that corresponds to the maximum probability. To demonstrate the applicability of our approach we consider the problem of identifying splice junctions. Using a PCFG, an input DNA sequence is classified as donor, acceptor, or neither.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121032275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Categorizing hardware failure in large scale cloud computing environment
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886058
Moataz H. Khalil, W. Sheta, Adel Said Elmaghraby
Cloud computing environments are growing in complexity, creating new challenges for improved resilience and availability. Cloud computing research can benefit from machine learning and data mining by using data from actual operational cloud systems. One aspect that needs in-depth analysis is the failure characteristics of cloud environments: failure is the main contributor to reduced resiliency of applications and services in cloud computing. This work presents a categorization method to identify whether machines removed from the system were removed due to failure or due to maintenance. Our experiments target large-scale cloud computing environments, and the experimental data consists of 25 million submitted tasks on 12,500 servers over a 29-day period. The categorization parameters are CPU and memory utilization. This work also develops a support vector machine (SVM) model for learning and prediction of machine failure. The developed model achieved 99.04% accuracy. Precision and recall curves demonstrate that the model remains consistent as the data size varies, with a maximum deviation from the theoretical data of only 0.008%.
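As a hedged illustration of the classification setup, the sketch below trains an SVM on two-feature utilization vectors with scikit-learn; the data here is a synthetic placeholder, not the actual cluster trace:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data: each row is one removed machine described by
# [cpu_utilization, memory_utilization], labeled 1 if removed due to
# failure and 0 if removed for maintenance (toy labeling rule).
rng = np.random.default_rng(0)
X = rng.random((1000, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```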
{"title":"Categorizing hardware failure in large scale cloud computing environment","authors":"Moataz H. Khalil, W. Sheta, Adel Said Elmaghraby","doi":"10.1109/ISSPIT.2016.7886058","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886058","url":null,"abstract":"Cloud computing environments are growing in complexity creating more challenges for improved resilience and availability. Cloud computing research can benefit from machine learning and data mining by using data from actual operational cloud systems. One aspect that needs in-depth analysis is the failure characteristics of cloud environments. Failure is the main contributor to reduced resiliency of applications and services in cloud computing. This work presents a categorizing method to identify machines removed from the system based on failure or due to maintenance. Our experiments are targeting large scale cloud computing environments and experimental data consists of 25 million submitted tasks on 12500 severs over a 29 day period. The parameters of categorizing are CPU and memory utilization. Also, this work developed a support vector machine (SVM) model for learning and prediction of machine failure. The devolved model achieved 99.04 % accuracy. Precision and Recall curves demonstrate that the model is consistent with varying data size. The model has very good consistency with max difference from theoretical data by only 0.008%.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"327 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122636267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combination of multiple detectors for credit card fraud detection
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886023
A. Salazar, G. Safont, Alberto Rodríguez, L. Vergara
This paper presents a signal processing framework for the problem of automatic credit card fraud detection. This is a critical problem affecting large financial companies that has grown with the rapid expansion of information and communication technologies. The framework establishes relationships between signal processing and pattern recognition issues around a detection problem with a very low ratio of fraudulent to legitimate transactions. Solutions are proposed using fusion of scores related to the familiar likelihood ratio statistic. Moreover, the classical detection problem, analyzed via receiver operating characteristic (ROC) curves, is mapped to real-world business requirements based on key performance indicators. A practical case combining real and surrogate data is studied, including a comparison of the proposed methods with standard methods.
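A minimal sketch of likelihood-ratio score fusion, assuming independent detector scores; the threshold and the example values are illustrative, not taken from the paper:

```python
import numpy as np

def fuse_and_decide(detector_llrs, threshold):
    """Fuse per-detector log-likelihood ratios by summation (optimal
    under an independence assumption) and flag the transaction as
    fraudulent when the fused statistic exceeds the threshold. In a
    highly unbalanced setting the threshold is tuned to a business
    operating point (e.g. a false-positive budget on the ROC curve)
    rather than to the raw class prior."""
    return float(np.sum(detector_llrs)) > threshold

# One transaction scored by three detectors (illustrative values):
print(fuse_and_decide([1.2, -0.3, 0.9], threshold=1.0))  # -> True
```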
{"title":"Combination of multiple detectors for credit card fraud detection","authors":"A. Salazar, G. Safont, Alberto Rodríguez, L. Vergara","doi":"10.1109/ISSPIT.2016.7886023","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886023","url":null,"abstract":"This paper presents a signal processing framework for the problem of automatic credit card fraud detection. This is a critical problem affecting large financial companies that has increased due to the rapid expansion of information and communication technologies. The framework establishes relationships between signal processing and pattern recognition issues around a detection problem with a very low ratio between fraudulent and legitimate transactions. Solutions are proposed using fusion of scores which are related to the familiar likelihood ratio statistic. Moreover, the classical detection problem analyzed by receiving operating characteristic curves is mapped to real-world business requirements based on key performance indicators. A strong practical case which combines real and surrogate data is approached, including comparison of the proposed methods with standard methods.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123702714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frequency-domain characterization of Singular Spectrum Analysis eigenvectors
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886003
M. Leles, A. S. V. Cardoso, Mariana G. Moreira, H. N. Guimarães, C. M. Silva, A. Pitsillides
Singular Spectrum Analysis (SSA) is a nonparametric approach used to decompose a time series into meaningful components related to trends, oscillations and noise. SSA can be seen as a spectral decomposition in which each term is related to an eigenvector derived from the trajectory matrix. In this context the eigenvectors can be viewed as eigenfilters. The frequency-domain interpretation of SSA is a relatively recent subject: although the analytic solution for the frequency response of the eigenfilters is already known, the periodogram is often applied for their frequency characterization. This paper presents a comparison of these two methods, applied to the frequency characterization of eigenfilters for time-series component identification. To perform this evaluation, several tests were carried out on both synthetic and real-data time series. In every situation the analytic frequency-response method provided better results than the periodogram, in terms of the frequency estimates as well as their dispersion and sensitivity to variations in the SSA algorithm parameter.
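The eigenfilter view is easy to reproduce: the left singular vectors of the trajectory matrix are FIR coefficient vectors whose frequency response can be evaluated analytically. A short sketch, assuming NumPy/SciPy and an illustrative window length:

```python
import numpy as np
from scipy.signal import freqz

def ssa_eigenfilters(x, window_len):
    """Build the SSA trajectory (Hankel) matrix of series x and return
    its left singular vectors, interpreted as FIR eigenfilters."""
    K = len(x) - window_len + 1
    X = np.column_stack([x[i:i + window_len] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U  # each column is one eigenfilter

# Analytic frequency response of the dominant eigenfilter via freqz:
x = np.sin(2 * np.pi * 0.1 * np.arange(200)) + 0.1 * np.random.randn(200)
U = ssa_eigenfilters(x, window_len=40)
w, h = freqz(U[:, 0], worN=512)            # w in rad/sample
peak = w[np.argmax(np.abs(h))] / (2 * np.pi)
print(f"dominant normalized frequency: {peak:.3f} (true value 0.1)")
```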
{"title":"Frequency-domain characterization of Singular Spectrum Analysis eigenvectors","authors":"M. Leles, A. S. V. Cardoso, Mariana G. Moreira, H. N. Guimarães, C. M. Silva, A. Pitsillides","doi":"10.1109/ISSPIT.2016.7886003","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886003","url":null,"abstract":"Singular Spectrum Analysis (SSA) is a nonparametric approach used to decompose a time series into meaningful components, related to trends, oscillations and noise. SSA can be seen as a spectral decomposition, where each term is related to an eigenvector derived from the trajectory matrix. In this context the eigenvectors can be viewed as eigenfilters. The frequency domain interpretation of SSA is a relatively recent subject. Although the analytic solution for the frequency-response of eigenfilters is already known, the periodogram is often applied for their frequency characterization. This paper presents a comparison of these methods, applied to eigenfilters' frequency characterization for time series components identification. To perform this evaluation, several tests were carried out, in both a synthetic and real data time series. In every situations the eigenfilters analytic frequency response method provided better results compared to the periodogram in terms of frequency estimates as well as their dispersion and sensitivity to variations in the SSA algorithm parameter.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134253654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Respiratory rate monitoring by maximum likelihood video processing
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886029
D. Alinovi, G. Ferrari, F. Pisani, R. Raheli
A novel video-processing method for remote estimation of the respiratory rate (RR) is proposed. Relying on the fact that breathing involves quasi-periodic movements, the technique employs a generalized model of pixel-wise periodicity and applies a maximum likelihood (ML) criterion. The system first selects suitable regions of interest (ROIs) mainly affected by respiratory movements. The selected ROIs are then jointly analyzed to estimate the fundamental frequency, which is directly related to the patient's RR. A large-motion detection algorithm is also applied in order to exclude from RR estimation any ROIs possibly affected by unrelated large movements. The RRs estimated by the proposed system are compared with those extracted by a pneumograph and by a previously proposed video processing algorithm. The results, albeit preliminary, show good agreement with the pneumograph and a clear improvement over the previous algorithm.
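A simplified stand-in for the joint ML frequency estimate: for a sinusoid in white Gaussian noise the ML frequency estimate maximizes the periodogram, so summing periodograms across ROIs approximates a joint criterion. The shapes and grid below are assumptions, not the paper's model:

```python
import numpy as np

def estimate_rr(roi_signals, fs, f_grid):
    """Joint fundamental-frequency estimate over several ROI signals.

    roi_signals: array (num_rois, num_frames) of per-ROI intensity
    traces; fs: video frame rate in Hz; f_grid: candidate breathing
    frequencies in Hz. Returns the RR in breaths per minute.
    """
    t = np.arange(roi_signals.shape[1]) / fs
    cost = np.zeros(len(f_grid))
    for k, f in enumerate(f_grid):
        e = np.exp(-2j * np.pi * f * t)
        cost[k] = np.sum(np.abs(roi_signals @ e) ** 2)  # summed periodograms
    return 60.0 * f_grid[np.argmax(cost)]
```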
{"title":"Respiratory rate monitoring by maximum likelihood video processing","authors":"D. Alinovi, G. Ferrari, F. Pisani, R. Raheli","doi":"10.1109/ISSPIT.2016.7886029","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886029","url":null,"abstract":"A novel video processing-based method for remote estimation of the respiratory rate (RR) is proposed. Relying on the fact that breathing involves quasi-periodic movements, this technique employs a generalized model of pixel-wise periodicity and applies a maximum likelihood (ML) criterion. The system first selects suitable regions of interest (ROI) mainly affected by respiratory movements. The obtained ROI are jointly analyzed for the estimation of the fundamental frequency, which is strictly related to the RR of the patient. A large motion detection algorithm is also applied, in order to exclude, from RR estimation, ROI possibly affected by unrelated large movements. The RRs estimated by the proposed system are compared with those extracted by a pneumograph and a previously proposed video processing algorithm. The results, albeit preliminary, show a good agreement with the pneumograph and a clear improvement over the previously proposed algorithm.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125591983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectrum Sensing enhancement using Principal Component Analysis
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886046
A. Nasser, A. Mansour, K. Yao, H. Abdallah, M. Chaitou, H. Charara
In this paper, Principal Component Analysis (PCA) is introduced in the context of Cognitive Radio to enhance Spectrum Sensing performance. The PCA step increases the SNR of the Primary User's signal and consequently improves detection. We apply PCA as a combining scheme for a multi-antenna Cognitive Radio system. Analytic results demonstrate the effectiveness of this technique by deriving the SNR obtained after applying PCA, which can be considered a pre-processing step for a classical Spectrum Sensing algorithm. The effect of PCA is examined with well-known Spectrum Sensing detectors, and the performance of the proposed technique is corroborated through extensive simulations.
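A minimal sketch of PCA combining followed by a classical energy detector, assuming NumPy; the function name and the decision interface are illustrative:

```python
import numpy as np

def pca_energy_detector(Y, threshold):
    """PCA pre-processing for multi-antenna spectrum sensing.

    Y: complex array (num_antennas, num_samples) of received data.
    The data is projected onto the dominant eigenvector of the sample
    covariance (the PCA combining step), then a classical energy
    detector is applied to the combined stream.
    """
    R = Y @ Y.conj().T / Y.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)
    w = eigvecs[:, -1]               # principal component direction
    combined = w.conj() @ Y          # single combined stream
    energy = np.mean(np.abs(combined) ** 2)
    return energy > threshold, energy
```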
{"title":"Spectrum Sensing enhancement using Principal Component Analysis","authors":"A. Nasser, A. Mansour, K. Yao, H. Abdallah, M. Chaitou, H. Charara","doi":"10.1109/ISSPIT.2016.7886046","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886046","url":null,"abstract":"In this paper, Principal Component Analysis (PCA) techniques are introduced in the context of Cognitive Radio to enhance the Spectrum Sensing performance. PCA step increases the SNR of the Primary User's signal and, consequently, enhances the Spectrum Sensing performance. We applied PCA as a combination scheme of a multi-antenna Cognitive Radio system. Analytic results will be presented to show the effectiveness of this technique by deriving the new SNR obtained after applying PCA, which can be considered a pre-processing step for a classical Spectrum Sensing algorithm. The effect of PCA is examined with well known detectors in Spectrum Sensing, where the proposed technique shows its efficiency. The performance of the proposed technique is corroborated through many simulations.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130068248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An FPGA design for the Two-Band Fast Discrete Hartley Transform
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886052
Lambros Pyrgas, P. Kitsos, A. Skodras
The discrete Hartley transform finds numerous applications in signal and image processing. An efficient Field Programmable Gate Array (FPGA) implementation of the 64-point Two-Band Fast Discrete Hartley Transform is proposed in this communication. The architecture requires 57 clock cycles to compute the 64-point transform and reaches a rate of up to 103.82 million samples per second at a 92 MHz clock frequency. The architecture has been implemented in VHDL and realized on an Altera Cyclone IV FPGA.
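For reference, the transform itself is easy to state in software. The Python model below computes the direct 64-point DHT and would serve only as a golden model for verifying a hardware design; it is not the paper's two-band architecture:

```python
import numpy as np

def dht(x):
    """Direct discrete Hartley transform:
    H[k] = sum_n x[n] * cas(2*pi*n*k/N), with cas(t) = cos(t) + sin(t)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    cas = np.cos(2 * np.pi * k * n / N) + np.sin(2 * np.pi * k * n / N)
    return cas @ x

x = np.random.randn(64)
H = dht(x)
# The DHT is its own inverse up to a factor of N:
assert np.allclose(dht(H) / 64, x)
```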
{"title":"An FPGA design for the Two-Band Fast Discrete Hartley Transform","authors":"Lambros Pyrgas, P. Kitsos, A. Skodras","doi":"10.1109/ISSPIT.2016.7886052","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886052","url":null,"abstract":"The discrete Hartley transform finds numerous applications in signal and image processing. An efficient Field Programmable Gate Array implementation for the 64-point Two-Band Fast Discrete Hartley Transform is proposed in this communication. The architecture requires 57 clock cycles to compute the 64-point Two-Band Fast Discrete Hartley Transform and reaches a rate of up to 103.82 million samples per second at a 92 MHz clock frequency. The architecture has been implemented using VHDL and realized on a Cyclone IV FPGA of Altera.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114876962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial Neural Networks in WSNs design: Mobility prediction for barrier coverage
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886036
Zhilbert Tafa
Barrier coverage provides intrusion detection for various national security applications. If the network is randomly deployed at moderate density, a complete end-to-end barrier line might not be formed; to fill the gaps and assure intrusion detection, additional nodes have to be introduced. The network should therefore be designed to strike a good cost/benefit balance between the number of initially deployed static nodes and the added mobile nodes. This research introduces, for the first time, artificial neural networks (ANNs) for predicting the number of additionally supplied static nodes or simultaneously deployed mobile nodes needed to complete barrier coverage after the network's initial installation. The results show a high degree of predictability, with an R-factor of over 0.99 on the test data. Beyond its primary results, the importance of the research also lies in the fact that the approach can be extended to the prediction of k-barrier coverage, the mobility range, and other network design objectives.
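To make the regression setup concrete, here is a toy scikit-learn sketch with an invented feature set and gap model; it illustrates only the shape of the prediction task, not the paper's ANN or simulation data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical features: [initial static node count, sensing radius];
# target: number of extra nodes needed to close the barrier gaps.
rng = np.random.default_rng(1)
X = rng.uniform([50, 5], [500, 30], size=(2000, 2))
y = np.maximum(0, 600 - X[:, 0] * X[:, 1] / 10)  # toy gap model

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X[:1500], y[:1500])
print("R^2 on held-out data:", ann.score(X[1500:], y[1500:]))
```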
{"title":"Artificial Neural Networks in WSNs design: Mobility prediction for barrier coverage","authors":"Zhilbert Tafa","doi":"10.1109/ISSPIT.2016.7886036","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886036","url":null,"abstract":"Barrier coverage provides intrusion detection for various national security applications. If the network is randomly deployed, in moderately dense networks, the full end-to-end barrier line might not be provided. To fill the breaks and to assure the intrusion detection, additional nodes have to be introduced. The network should be designed in a way that enables the good (cost/benefit) balance between the number of initially deployed static nodes and the (added) mobile nodes. This research, for the first time introduces the artificial neural networks (ANNs) in predicting the number of the additionally supplied static nodes or simultaneously deployed mobile nodes for barrier coverage setup after the network's initial installation. The results show a high degree of predictability, with the R-factor of over 0.99 regarding the test data. Besides its primary results, the importance of the research relies also in fact that the approach can be extended to the prediction of k-barrier coverage, the mobility range, and to the other network design objectives.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128430122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asymptotically optimal search of unknown anomalies
Pub Date: 2016-12-01 | DOI: 10.1109/ISSPIT.2016.7886012
Bar Hemo, Kobi Cohen, Qing Zhao
The problem of detecting an anomalous process among multiple processes is considered. We consider a composite hypothesis case, in which the measurements drawn when observing a process follow a common distribution parameterized by an unknown parameter (vector). The unknown parameter belongs to one of two disjoint parameter spaces, depending on whether the process is normal or abnormal. The objective is to design a sequential search strategy that minimizes the expected detection time subject to an error probability constraint. We develop a deterministic search policy to solve the problem and prove its asymptotic optimality (as the error probability approaches zero) when the parameter under the null hypothesis is known. We further provide an explicit upper bound on the error probability in the finite-sample regime.
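A toy sequential test conveys the flavor of the search problem; this round-robin accumulation of log-likelihood ratios is a simplified stand-in for the paper's deterministic policy, not a reproduction of it:

```python
import numpy as np

def sequential_search(observe, llr, num_processes, threshold, max_steps=10**6):
    """Probe processes in round-robin order, accumulate each process's
    log-likelihood ratio of 'abnormal' vs 'normal', and declare the
    first process whose statistic crosses the threshold.

    observe(i) draws one sample from process i;
    llr(y) = log f_abnormal(y) - log f_normal(y).
    """
    stats = np.zeros(num_processes)
    for step in range(max_steps):
        i = step % num_processes
        stats[i] += llr(observe(i))
        if stats[i] > threshold:
            return i, step + 1   # declared process, samples consumed
    return None, max_steps       # no decision within the budget
```

Raising the threshold lowers the error probability at the cost of a longer expected detection time, which is exactly the trade-off the paper's error-constrained formulation makes precise.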
{"title":"Asymptotically optimal search of unknown anomalies","authors":"Bar Hemo, Kobi Cohen, Qing Zhao","doi":"10.1109/ISSPIT.2016.7886012","DOIUrl":"https://doi.org/10.1109/ISSPIT.2016.7886012","url":null,"abstract":"The problem of detecting an anomalous process over multiple processes is considered. We consider a composite hypothesis case, in which the measurements drawn when observing a process follow a common distribution parameterized by an unknown parameter (vector). The unknown parameter belongs to one of two disjoint parameter spaces, depending on whether the process is normal or abnormal. The objective is a sequential search strategy that minimizes the expected detection time subject to an error probability constraint. We develop a deterministic search policy to solve the problem and prove its asymptotic optimality (as the error probability approaches zero) when the parameter under the null hypothesis is known. We further provide an explicit upper bound on the error probability for the finite sample regime.","PeriodicalId":371691,"journal":{"name":"2016 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134472264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}