Pub Date: 2019-03-01 | DOI: 10.1109/CISS.2019.8692835
A. Nair, T. Tran, A. Reiter, M. Bell
Plane wave ultrasound imaging is an ideal approach to achieving maximum real-time frame rates. However, multiple plane wave insonifications at different angles are often combined to improve image quality, reducing the throughput of the system. We are exploring deep learning-based ultrasound image formation methods as an alternative to this beamforming process, extracting critical information directly from the raw radio-frequency channel data of a single plane wave insonification, prior to the application of receive time delays. In this paper, we investigate a Generative Adversarial Network (GAN) architecture for this task. The network was trained with over 50,000 Field II simulations, each containing a single cyst in tissue insonified by a single plane wave. The GAN is trained to produce two outputs: a Deep Neural Network (DNN) B-mode image trained to match a Delay-and-Sum (DAS) beamformed B-mode image, and a DNN segmentation trained to match the true segmentation of the cyst from the surrounding tissue. We systematically investigate the benefits of feature sharing and discriminative loss during GAN training. Our best performing architecture (with both feature sharing and discriminative loss) obtained a PSNR of 29.38 dB on the simulated test set and 14.86 dB on a tissue-mimicking phantom. The DSC scores were 0.908 and 0.79 for the simulated and phantom data, respectively. The successful translation of the feature representations learned by the GAN to phantom data demonstrates the promise that deep learning holds as an alternative to the traditional ultrasound information extraction pipeline.
Title: A Generative Adversarial Neural Network for Beamforming Ultrasound Images (Invited Presentation)
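The two reported figures of merit, PSNR and the Dice similarity coefficient (DSC), are standard metrics and can be reproduced as follows. This is a minimal sketch with an illustrative synthetic cyst mask, not the paper's data or network outputs:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    # Peak signal-to-noise ratio in dB between two same-shaped images.
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def dice(seg_a, seg_b):
    # Dice similarity coefficient between two binary masks.
    intersection = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * intersection / (seg_a.sum() + seg_b.sum())

# Toy example: a circular "cyst" mask and a slightly shifted estimate of it.
yy, xx = np.mgrid[0:64, 0:64]
truth = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2
guess = (xx - 34) ** 2 + (yy - 32) ** 2 < 10 ** 2
score = dice(truth, guess)
```

Note that PSNR depends on the assumed peak value (here 1.0 for normalized images), so reported dB figures are only comparable under the same normalization.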
Pub Date: 2019-03-01 | DOI: 10.1109/CISS.2019.8693022
Shuyu Liu, Yingze Hou, J. Spall
Evaluating the statistical error in the estimate produced by a stochastic approximation (SA) algorithm is useful for computing confidence regions and determining stopping times. Robbins-Monro (RM) type stochastic gradient descent is a widely used SA method, and knowledge of the probability distribution of the SA process is useful for error analysis. Currently, however, only the asymptotic distribution has been studied in this setting, while distribution functions in the finite-sample regime have not been clearly characterized. We developed a method to estimate the finite-sample distribution based on a surrogate process: we describe the stochastic gradient descent (SGD) process as an Euler-Maruyama (EM) scheme for an RM-type stochastic differential equation (SDE). Weak convergence theory for EM schemes validates the surrogate property in the sense of convergence in distribution. For the first time, we show that the solution of the Fokker-Planck (FP) equation for the surrogate SDE appropriately characterizes the evolution of the distribution function of the SGD process.
Title: Distribution Estimation for Stochastic Approximation in Finite Samples Using A Surrogate Stochastic Differential Equation Method
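The SGD-as-EM-scheme correspondence can be illustrated on a toy quadratic loss of our own choosing (not the paper's construction): SGD on f(x) = a x^2 / 2 with additive gradient noise is exactly the Euler-Maruyama discretization, with step size eta, of an Ornstein-Uhlenbeck SDE whose stationary Fokker-Planck solution has variance eta * sigma^2 / (2a).

```python
import numpy as np

# Toy illustration: SGD iterates
#   x_{k+1} = x_k - eta * (a * x_k + sigma * xi_k),  xi_k ~ N(0, 1),
# coincide with the Euler-Maruyama scheme (step eta) for the OU process
#   dX_t = -a X_t dt + sigma * sqrt(eta) dW_t,
# whose stationary variance (from the Fokker-Planck equation) is
#   eta * sigma**2 / (2 * a).
rng = np.random.default_rng(0)
a, sigma, eta = 1.0, 0.5, 0.01
n_steps, n_paths = 2000, 20000

x = np.zeros(n_paths)
for _ in range(n_steps):
    noisy_grad = a * x + sigma * rng.standard_normal(n_paths)
    x = x - eta * noisy_grad          # one SGD / Euler-Maruyama step

empirical_var = x.var()
predicted_var = eta * sigma ** 2 / (2 * a)
```

With eta = 0.01 the mixing time is about 1/(eta * a) = 100 steps, so 2000 steps is well into stationarity and the empirical variance across paths should sit close to the Fokker-Planck prediction.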
Pub Date: 2019-03-01 | DOI: 10.1109/CISS.2019.8692820
Marius Arvinte, Marcos Tavares, D. Samardzija
A machine learning solution leveraging geolocation side information is proposed for enhancing beam management in 5G NR millimeter wave (mmWave) wireless systems. An important building block of our solution is the support vector machine (SVM), used to model the mapping between the user equipments’ (UEs) geolocations and their serving beams/cells in a multiuser, multi-cell environment. Building upon these SVM models, we introduce a multiuser scheduling algorithm that uses local beam assignment information from the cells adjacent to the users to reduce the amount of real-time channel state information (CSI) feedback required. Simulations carried out using a realistic antenna array radiation pattern, as well as data from a ray tracing channel model in a dense urban mmWave deployment, show that the proposed multiuser scheduler performs remarkably well while its algorithmic complexity is kept low. We further quantify the improvements that our SVM-based beam management methods enable by comparison against the conventional exhaustive beam sweeping approach typically employed by 5G NR mmWave implementations. In this case, we show that our proposal enables the network to achieve a 50% reduction in initial access latency at a fixed signaling overhead, or a 34% reduction in signaling overhead at a fixed latency requirement.
Title: Beam Management in 5G NR using Geolocation Side Information
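The core geolocation-to-beam mapping can be mocked up with scikit-learn's multiclass SVM. Everything here is a synthetic stand-in of our own (a single cell at the origin serving 8 angular sectors; the geometry, sample counts, and SVM hyperparameters are assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in: each UE's serving beam is the angular sector containing
# its position, and an SVM learns that mapping from (x, y) geolocations.
rng = np.random.default_rng(1)
n_beams, n_ue = 8, 2000
xy = rng.uniform(-100.0, 100.0, size=(n_ue, 2))   # UE geolocations in meters
angle = np.arctan2(xy[:, 1], xy[:, 0])
beam = ((angle + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams

clf = SVC(kernel="rbf", C=10.0, gamma=1e-3)       # one-vs-one multiclass SVM
clf.fit(xy[:1500], beam[:1500])
accuracy = clf.score(xy[1500:], beam[1500:])
```

In a real deployment the labels would come from measured beam assignments rather than clean geometry, so the attainable accuracy depends on shadowing and reflections that this toy omits.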
Pub Date: 2019-03-01 | DOI: 10.1109/CISS.2019.8692789
Thibaud Tonnellier, Adam Cavatassi, W. Gross
Polar codes natively lack the flexibility that is desired for practical applications: Arikan’s polar code definition can only achieve code lengths that are powers of two. Rate-matching techniques, known as puncturing and shortening, have been applied to polar codes to grant a flexible block length. By considering polarizing kernels of other dimensions, multi-kernel polar codes improve natural block length flexibility. With the recent advent of the 3GPP 5th generation New Radio specification, there now exists an industry standard for length-flexible polar codes. This paper outlines various state-of-the-art flexible polar coding schemes, including puncturing, shortening, and multi-kernel construction, and evaluates their efficacy with respect to the newly designed 3GPP standard. Simulations and an in-depth analysis are presented.
Title: Length-Compatible Polar Codes: A Survey (Invited Paper)
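The two rate-matching operations surveyed here can be sketched on a length-8 Arikan code. This is a minimal illustration (the input bits are arbitrary, and frozen-set selection, which any real construction also needs, is omitted):

```python
import numpy as np

def polar_transform(u):
    # x = u * F^{(x) n} over GF(2), with F = [[1, 0], [1, 1]] (Arikan kernel).
    x = u.copy()
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

# Mother code of length N = 8, rate-matched to a target length M = 6.
u = np.array([1, 0, 1, 1, 0, 1, 1, 1], dtype=np.uint8)  # illustrative input
N, M = 8, 6

# Puncturing: transmit only the last M coded bits; the decoder treats the
# N - M dropped bits as unknown (zero LLR).
punctured = polar_transform(u)[N - M:]

# Shortening: force the last N - M input bits to zero. Because the transform
# matrix is lower triangular, the last N - M coded bits then equal zero and
# can be dropped as values known to the decoder (infinite LLR).
u_short = u.copy()
u_short[M:] = 0
shortened = polar_transform(u_short)[:M]
```

The practical difference surveyed in the paper follows from these mechanics: punctured bits arrive as erasures, while shortened bits are perfectly known, which is why the two schemes favor different operating rates.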
Pub Date: 2019-03-01 | DOI: 10.1109/CISS.2019.8693032
Ziyu Liu, J. Spall
The particle filter is a popular algorithm for solving state-space problems, owing to its ease of implementation. Many previous studies have examined the asymptotic behavior of particle filters. In our previous work, we divided the error of the particle filter into two parts and, using Lindeberg’s central limit theorem, showed that one of them is asymptotically normal. However, the covariance matrix of the limiting distribution is hard to estimate. This paper provides a computable estimator for that covariance matrix.
Title: Error Estimation for the Particle Filter
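For context, a bootstrap particle filter and the weighted sample covariance it yields, the quantity whose estimation error this paper studies, look as follows. The scalar linear-Gaussian model and all parameters are our own toy choices, not the paper's:

```python
import numpy as np

# Toy model: x_k = 0.9 x_{k-1} + q*w_k,  y_k = x_k + r*v_k, w, v ~ N(0, 1).
rng = np.random.default_rng(2)
n_particles, n_steps = 5000, 50
q, r = 0.1, 0.5                       # process / measurement noise std

x_true = 0.0
particles = rng.standard_normal(n_particles)
for _ in range(n_steps):
    x_true = 0.9 * x_true + q * rng.standard_normal()
    y = x_true + r * rng.standard_normal()
    # Propagate, weight by the measurement likelihood, then resample.
    particles = 0.9 * particles + q * rng.standard_normal(n_particles)
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)
    w /= w.sum()
    mean = np.sum(w * particles)
    # Weighted sample covariance of the particle approximation: a computable
    # proxy for the posterior spread (scalar state, so a 1x1 "matrix").
    cov = np.sum(w * (particles - mean) ** 2)
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]
```

The weighted sample covariance is itself a Monte Carlo estimate, which is exactly why a separate, computable estimator for the error covariance of the filter is useful.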
Pub Date: 2019-03-01 | DOI: 10.1109/CISS.2019.8693047
Sota Akaishi, R. Uda
Cross-site scripting (XSS) is one of the major attacks on the web. It enables session hijacking with HTTP cookies, information collection with fake HTML input forms, and phishing with dummy sites. As a countermeasure against XSS attacks, machine learning has attracted much attention. Existing studies have used SVM, Random Forest, and SCW for detection of the attack. However, in those studies, the data sets were too small or unbalanced, and the preprocessing used to vectorize strings caused misclassification; the highest classification accuracy achieved was 98%. In this paper, we therefore improve the vectorization preprocessing by using word2vec to capture the frequency of appearance and co-occurrence of the words in XSS attack scripts. Moreover, we use a large data set to decrease the deviation of the data. Furthermore, we evaluate the classification results with two procedures: an inappropriate procedure that some researchers tend to select by mistake, and an appropriate procedure that can be applied to an attack detection filter in a real environment.
Title: Classification of XSS Attacks by Machine Learning with Frequency of Appearance and Co-occurrence
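The paper's vectorization uses word2vec; as a dependency-free sketch of the same two ingredients, frequency of appearance and co-occurrence, one can tokenize a script and count windowed token pairs. The tokenizer and the sample string below are our own illustration:

```python
import re
from collections import Counter

def tokenize(script):
    # Split a script into word-like tokens and the punctuation that matters
    # for XSS payloads (tags, quotes, parentheses).
    return re.findall(r"[a-z_]+|[<>/=()'\";]", script.lower())

def cooccurrence(tokens, window=2):
    # Count unordered token pairs appearing within `window` positions.
    pairs = Counter()
    for i, tok in enumerate(tokens):
        for other in tokens[i + 1:i + 1 + window]:
            pairs[tuple(sorted((tok, other)))] += 1
    return pairs

sample = "<script>alert('xss')</script>"
tokens = tokenize(sample)
freq = Counter(tokens)          # frequency of appearance
pairs = cooccurrence(tokens)    # co-occurrence counts
```

word2vec effectively compresses such co-occurrence statistics into dense vectors; the counts above are the raw signal it learns from.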
Pub Date: 2019-02-18 | DOI: 10.1109/CISS.2019.8692936
Nazanin Takbiri, D. Goeckel, A. Houmansadr, H. Pishro-Nik
Various modern and highly popular applications make use of user data traces in order to offer specific services, often for the purpose of improving the user’s experience while using such applications. However, even when user data is privatized by employing privacy-preserving mechanisms (PPM), users’ privacy may still be compromised by an external party who leverages statistical matching methods to match users’ traces with their previous activities. In this paper, we obtain the theoretical bounds on user privacy for situations in which user traces are matchable to sequences of prior behavior, despite anonymization of data time series. We provide both achievability and converse results for the case where the data trace of each user consists of independent and identically distributed (i.i.d.) random samples drawn from a multinomial distribution, as well as the case that the users’ data points are dependent over time and the data trace of each user is governed by a Markov chain model.
Title: Asymptotic Limits of Privacy in Bayesian Time Series Matching
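The i.i.d. multinomial case admits a small numerical illustration of the statistical-matching adversary (the sizes, Dirichlet profile prior, and maximum-likelihood matching rule below are our own toy choices, not the paper's analysis):

```python
import numpy as np

# Each user emits an i.i.d. trace from a known multinomial profile;
# anonymization permutes the user labels; the adversary matches each
# anonymized trace to the profile maximizing its likelihood.
rng = np.random.default_rng(3)
n_users, alphabet, trace_len = 20, 5, 200
profiles = rng.dirichlet(np.ones(alphabet), size=n_users)  # known priors

perm = rng.permutation(n_users)                            # anonymization map
traces = np.array([
    rng.multinomial(trace_len, profiles[perm[u]]) for u in range(n_users)
])

log_p = np.log(profiles + 1e-12)
scores = traces @ log_p.T        # log-likelihood of each trace per profile
guess = scores.argmax(axis=1)
match_rate = np.mean(guess == perm)
```

With traces this long the adversary de-anonymizes almost everyone, which is the regime the paper's achievability/converse bounds delineate: privacy hinges on how trace length scales with the number of users.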
Pub Date: 2019-01-30 | DOI: 10.1109/CISS.2019.8692792
Ananth Narayan Samudrala, M. Amini, S. Kar, Rick S. Blum
Accurate network topology information is critical for the secure operation of smart power distribution systems. Line outages can change the operational topology of a distribution network, so topology identification by detecting outages is an important task for avoiding mismatch between the topology the operator believes is present and the actual topology. Power distribution systems are operated as radial trees and have recently begun integrating sensors that monitor the network in real time. In this paper, an optimal sensor placement solution is proposed that enables outage detection through statistical tests based on sensor measurements. Using two types of sensors, node sensors and line sensors, we propose a novel formulation of optimal sensor placement as a cost optimization problem with binary decision variables, i.e., whether or not to place a sensor at each bus/line. The advantages of the proposed placement strategy for outage detection are that it incorporates multiple types of sensors, is independent of load forecast statistics, and is cost effective. Numerical results illustrating the placement solution are presented.
Title: Optimal Sensor Placement for Topology Identification in Smart Power Grids
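The flavor of the binary placement problem can be shown on a tiny radial feeder solved by brute force (the feeder, costs, and the simple "every line observed" constraint are our own stand-ins; the paper's formulation uses statistical detection criteria and would call an integer-programming solver):

```python
from itertools import product

# Toy radial feeder: buses 0..4, lines as (bus, bus) pairs. A line sensor
# observes its own line; a node sensor observes all lines incident to its bus.
lines = [(0, 1), (1, 2), (1, 3), (3, 4)]
node_cost, line_cost = 3.0, 1.0

def coverage(node_sel, line_sel):
    covered = set()
    for i, placed in enumerate(line_sel):
        if placed:
            covered.add(lines[i])
    for bus, placed in enumerate(node_sel):
        if placed:
            covered.update(l for l in lines if bus in l)
    return covered

# Exhaustive search over binary placement decisions (2^5 * 2^4 combinations).
best_cost, best = float("inf"), None
for node_sel in product([0, 1], repeat=5):
    for line_sel in product([0, 1], repeat=4):
        if len(coverage(node_sel, line_sel)) == len(lines):
            cost = node_cost * sum(node_sel) + line_cost * sum(line_sel)
            if cost < best_cost:
                best_cost, best = cost, (node_sel, line_sel)
```

Here one node sensor at the hub bus plus one line sensor ties four individual line sensors at total cost 4, illustrating why mixing sensor types can pay off.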
Pub Date: 2019-01-17 | DOI: 10.1109/CISS.2019.8692899
Sebastian Cammerer, Xiaojie Wang, Yingyan Ma, S. Brink
We consider spatially coupled low-density parity-check (SC-LDPC) codes within a non-orthogonal interleave division multiple access (IDMA) scheme to avoid cumbersome degree profile matching of the LDPC code components to the iterative multi-user detector (MUD). Besides excellent decoding thresholds, the approach benefits from the possibility of using rather simple and regular underlying block LDPC codes owing to the universal behavior of the resulting coupled code with respect to the channel front-end, i.e., the iterative MUD. Furthermore, an additional outer repetition code makes the scheme flexible to cope with a varying number of users and user rates, as the SC-LDPC itself can be kept constant for a wide range of different user loads. The decoding thresholds are obtained via density evolution (DE) and verified by bit error rate (BER) simulations. To keep decoding complexity and latency small, we introduce a joint iterative windowed detector/decoder imposing carefully adjusted sub-block interleavers. Finally, we show that the proposed coding scheme also works for Rayleigh channels using the same code with tolerable performance loss compared to the additive white Gaussian noise (AWGN) channel.
Title: Spatially Coupled LDPC Codes and the Multiple Access Channel
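The verification loop behind such results is a BER-over-AWGN simulation. This sketch is not the paper's SC-LDPC/IDMA chain; it only shows the harness, with the outer repetition code mentioned in the abstract standing in for the full coding scheme (rate, block length, and Eb/N0 are our own choices):

```python
import numpy as np

# BPSK over AWGN with a rate-1/3 repetition code and soft combining.
rng = np.random.default_rng(4)
n_bits, rep, ebn0_db = 100_000, 3, 4.0

bits = rng.integers(0, 2, n_bits)
coded = np.repeat(bits, rep)
symbols = 1 - 2 * coded                         # BPSK: 0 -> +1, 1 -> -1
ebn0 = 10 ** (ebn0_db / 10)
noise_std = np.sqrt(rep / (2 * ebn0))           # Es/N0 = (Eb/N0) * rate
rx = symbols + noise_std * rng.standard_normal(symbols.shape)

llr = rx.reshape(-1, rep).sum(axis=1)           # combine repeated observations
decided = (llr < 0).astype(int)
ber = np.mean(decided != bits)
```

Soft combining of the three noisy copies recovers the full Eb/N0, so at 4 dB the measured BER should land near the textbook BPSK value of about 1.3e-2.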
Pub Date: 2019-01-16 | DOI: 10.1109/CISS.2019.8692816
S. Horii, T. Suko
In this paper, we deal with the problem of estimating the intervention effect in statistical causal analysis using the structural equation model and the causal diagram. The intervention effect is the causal effect on the response variable Y when the causal variable X is fixed to a certain value by an external operation, and it is defined, based on the causal diagram, as a function of the probability distributions in the diagram. Generally, however, these probability distributions are unknown and must be estimated from data. In other words, the steps for estimating the intervention effect using the causal diagram are: 1. estimate the causal diagram from the data; 2. estimate the probability distributions in the causal diagram from the data; 3. calculate the intervention effect. However, if the problem of estimating the intervention effect is formulated in the statistical decision theory framework, estimation by this procedure is not necessarily optimal. In this study, we formulate the problem of estimating the intervention effect for two cases, one where the causal diagram is known and one where it is unknown, in the framework of statistical decision theory, and derive the optimal decision method under the Bayesian criterion. We show the effectiveness of the proposed method through numerical simulations.
Title: A Note on the Estimation Method of Intervention Effects based on Statistical Decision Theory
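When the causal diagram is known, the intervention effect in step 3 is a deterministic function of the diagram's distributions. A textbook back-door example on the diagram Z -> X, Z -> Y, X -> Y (our own numbers, not the paper's decision-theoretic estimator) shows how p(y | do(x)) differs from the observational p(y | x):

```python
import numpy as np

# Known discrete distributions on the diagram Z -> X, Z -> Y, X -> Y.
p_z = np.array([0.3, 0.7])                     # p(Z)
p_x_given_z = np.array([[0.9, 0.1],            # p(X | Z=z): rows z, cols x
                        [0.2, 0.8]])
p_y_given_xz = np.array([[[0.8, 0.2],          # p(Y | X=x, Z=z),
                          [0.6, 0.4]],         # indexed [z][x][y]
                         [[0.5, 0.5],
                          [0.1, 0.9]]])

x = 1
# Back-door adjustment: p(y | do(x)) = sum_z p(z) p(y | x, z).
p_y_do_x = sum(p_z[z] * p_y_given_xz[z][x] for z in range(2))

# Observational conditional for comparison: weights z by p(z | x) instead.
p_zx = p_z[:, None] * p_x_given_z              # joint p(z, x)
p_z_given_x = p_zx[:, x] / p_zx[:, x].sum()
p_y_cond_x = sum(p_z_given_x[z] * p_y_given_xz[z][x] for z in range(2))
```

The gap between the two distributions is the confounding bias that naively conditioning on X would incur, which is exactly what makes the choice of estimation procedure in steps 1-3 consequential.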