This paper presents a new technique that uses a voltage-mode circuit to adjust the hysteresis window according to variations in load current, thereby reducing the voltage and current ripples. Moreover, a compact current-sensing circuit provides an accurate sensing signal for fast hysteresis window adjustment. In addition, a zero-current detection circuit is proposed to eliminate the reverse current at light loads. As a result, this technique keeps the voltage ripple below 8.08 mVpp and the current ripple below 93.98 mApp at a load current of 500 mA. Circuit simulation is performed using 0.18 μm CMOS process parameters.
S. Park, Ju Sang Lee, Sang-Dae Yu, "A novel adaptive hysteresis DC-DC buck converter for portable devices," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1809-45
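The core mechanism, bang-bang switching within a hysteresis band, can be illustrated with a toy simulation (not the authors' circuit; the reference current, band width, and slopes below are made-up values): the peak-to-peak current ripple tracks the width of the hysteresis window, which is why adapting the window to the load reduces ripple.

```python
def hysteretic_ripple(i_ref, window, slope_up, slope_down, dt=1e-7, steps=200_000):
    """Simulate bang-bang control of an inductor current; return peak-to-peak ripple.

    The switch turns off when the current exceeds i_ref + window/2 and back on
    when it falls below i_ref - window/2, so the ripple follows the band width.
    """
    i, on = 0.0, True
    lo, hi = i_ref - window / 2, i_ref + window / 2
    seen = []
    for k in range(steps):
        i += (slope_up if on else -slope_down) * dt   # inductor current slews
        if on and i >= hi:
            on = False
        elif not on and i <= lo:
            on = True
        if k > steps // 2:            # record only after the start-up transient
            seen.append(i)
    return max(seen) - min(seen)
```

Halving the `window` argument roughly halves the simulated ripple, mirroring the adaptive-window idea.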
Naphat Keawpiba, L. Preechaveerakul, S. Vanichayobon
A bitmap-based index is an effective and efficient indexing method for answering selective queries in a read-only environment. It improves query execution time by applying low-cost Boolean operators directly on the index, before accessing raw data. A drawback of the bitmap index is that its size increases with the cardinality of the indexed attributes, which in turn lengthens query execution time, since more bitmap vectors must be scanned to answer a query. In this paper, we propose a new encoding bitmap index, called the HyBiX bitmap index. The HyBiX bitmap index was experimentally compared with existing encoding bitmap indexes in terms of space requirement, query execution time, and space-time trade-off for equality and range queries. The experiments show that the HyBiX bitmap index can reduce space requirements for high-cardinality attributes while achieving satisfactory execution times for both equality and range queries. In terms of the space-time trade-off, the HyBiX bitmap index achieves the second-best results for equality queries and the best results for range queries.
Naphat Keawpiba, L. Preechaveerakul, S. Vanichayobon, "HyBiX: A novel encoding bitmap index for space- and time-efficient query processing," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1807-277
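The core idea of answering queries with Boolean operations over bit-vectors can be sketched with a plain equality-encoded bitmap index (a simple baseline for illustration, not the HyBiX encoding itself):

```python
class BitmapIndex:
    """Equality-encoded bitmap index: one bit-vector per distinct value."""

    def __init__(self, values):
        self.n = len(values)
        self.vectors = {}                 # value -> int used as a bit-vector
        for row, v in enumerate(values):
            self.vectors[v] = self.vectors.get(v, 0) | (1 << row)

    def equality(self, v):
        """Rows where attribute == v, answered from the index alone."""
        return self._rows(self.vectors.get(v, 0))

    def range(self, lo, hi):
        """Rows where lo <= attribute <= hi: OR together the value vectors."""
        acc = 0
        for v, vec in self.vectors.items():
            if lo <= v <= hi:
                acc |= vec
        return self._rows(acc)

    def _rows(self, bits):
        return [r for r in range(self.n) if bits >> r & 1]
```

Note how index size grows with attribute cardinality (one vector per distinct value), which is exactly the cost that encodings such as HyBiX aim to reduce.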
Currently, large-scale solar farms are being rapidly integrated into electrical grids all over the world. However, photovoltaic (PV) output power is highly intermittent in nature and can also be correlated across solar farms located at different places. Moreover, increasing PV penetration leads to larger solar forecast errors, whose impact on power system stability should be estimated. The effects of these quantities on small-signal stability are difficult to quantify using deterministic techniques but can be conveniently estimated using probabilistic methods. For this purpose, the authors have developed a probabilistic analysis method based on a combined cumulant and Gram–Charlier expansion technique. The proposed method yields the probability density function and cumulative distribution function of the real part of the critical eigenvalue, from which the stability of low-frequency oscillatory dynamics can be inferred. It gives accurate results in less computation time than conventional techniques. The test system is a large modified IEEE 16-machine, 68-bus system, a benchmark for studying low-frequency oscillatory dynamics in power systems. The results show that PV power fluctuation has the potential to cause oscillatory instability. Furthermore, the system is more prone to small-signal instability when the PV farms are correlated and when large PV forecast errors exist.
Samundra Gurung, S. Naetiladdanon, A. Sangswang, "Probabilistic small-signal stability analysis of power system with solar farm integration," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1804-228
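The Gram–Charlier A-series that underlies the method can be sketched as follows (a generic illustration with assumed cumulant inputs, not the authors' code): the PDF of the critical eigenvalue's real part is approximated as a normal density plus Hermite-polynomial corrections built from the third and fourth cumulants.

```python
import math

def gram_charlier_pdf(x, mean, var, k3, k4):
    """Gram-Charlier A-series PDF approximation from cumulants k1..k4.

    mean and var are the first two cumulants; k3 and k4 are the third and
    fourth. With k3 = k4 = 0 this reduces to the normal density.
    """
    s = math.sqrt(var)
    z = (x - mean) / s
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    he3 = z**3 - 3 * z                 # probabilists' Hermite polynomials
    he4 = z**4 - 6 * z * z + 3
    g3 = k3 / s**3                     # standardized skewness
    g4 = k4 / s**4                     # standardized excess kurtosis
    return phi / s * (1 + g3 / 6 * he3 + g4 / 24 * he4)
```

In the probabilistic stability setting, the cumulants of the eigenvalue's real part are obtained analytically from the input uncertainty, and the CDF built from this PDF gives the probability that the real part stays negative (i.e. the mode remains stable).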
M. Dursun, Seral Özşen, S. Günes, B. Akdemir, Ş. Yosunkaya
The sleep electroencephalogram (EEG) signal is an important clinical tool for the automatic sleep staging process. Sleep EEG is affected by artifacts and other biological signal sources, such as the electrooculogram (EOG) and electromyogram (EMG), and this contamination reduces its clinical utility. Therefore, eliminating EOG artifacts from the sleep EEG signal is a major challenge for automatic sleep staging. We have studied the effects of EOG signals on sleep EEG and removed them from the EEG signals using a regression method. The EEG and EOG recordings of seven subjects were obtained from the Sleep Research Laboratory of the Meram Medicine Faculty of Necmettin Erbakan University. A dataset consisting of 58 h and 6941 epochs was used in the research. Then, to see the consequences of this process, we classified pure sleep EEG and artifact-eliminated EEG signals with artificial neural networks (ANNs). The results showed that eliminating EOG artifacts raised the classification accuracy for each subject by 1%-1.5%. Although this increase was obtained for a single parameter, it can be regarded as an important improvement if the whole system is considered. Moreover, different artifact elimination strategies combined with different classification methods for other sleep EEG artifacts may yield even larger accuracy differences between original and purified signals.
M. Dursun, Seral Özşen, S. Günes, B. Akdemir, Ş. Yosunkaya, "Automated elimination of EOG artifacts in sleep EEG using regression method," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1809-180
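The regression step the paper relies on can be sketched in a few lines (a generic single-channel illustration with synthetic data, not the authors' exact procedure): the EOG-to-EEG propagation coefficient is estimated by least squares, and the scaled EOG is subtracted from the contaminated EEG.

```python
import numpy as np

def remove_eog(eeg, eog):
    """Regress the EOG channel out of a contaminated EEG channel.

    Returns the cleaned EEG and the estimated propagation coefficient b,
    where the model is eeg = brain_activity + b * eog.
    """
    b = np.dot(eog, eeg) / np.dot(eog, eog)   # least-squares coefficient
    return eeg - b * eog, b
```

By construction the cleaned signal is orthogonal to the EOG reference, which is the sense in which the ocular component has been "removed."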
Data classification is the process of organizing data into relevant categories, so that the data can be understood and used more efficiently. Numerous studies have addressed the problem of data classification in the literature. However, recently introduced metaheuristics make it worthwhile to revisit this classical problem and investigate the efficiency of new techniques. Teaching-learning-based optimization (TLBO) is a recent metaheuristic that has been reported to be very effective for combinatorial optimization problems. In this study, we propose a novel hybrid TLBO algorithm with extreme learning machines (ELM) for the solution of data classification problems. The proposed algorithm (TLBO-ELM) is tested on a set of UCI benchmark datasets. The performance of TLBO-ELM is observed to be competitive with state-of-the-art algorithms for both binary and multiclass data classification problems.
E. Sevinç, Tansel Dökeroglu, "A novel hybrid teaching-learning-based optimization algorithm for the classification of data by using extreme learning machines," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1802-40
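The ELM half of the hybrid can be sketched as follows (the TLBO search over ELM hyperparameters is omitted; the network size and data here are illustrative): hidden-layer weights are random and fixed, so training reduces to a single least-squares solve for the output weights.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train an extreme learning machine: random hidden layer, solved output."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is learned, each fitness evaluation inside a metaheuristic like TLBO is cheap, which is what makes the hybrid search practical.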
Social circles, groups, lists, etc. are functionalities that allow users of online social network (OSN) platforms to organize their social media contacts manually. However, this facility has not been widely appreciated by users, because organizing contacts that are only contacted periodically is tedious. In view of the numerous benefits of this functionality, it is worth investigating measures that enhance its efficacy by allowing customized groups of users (social circles, groups, lists, etc.) to be created automatically. The relevant field of study, i.e. creating coarse-grained descriptions from data, consists of two families of techniques: community discovery and clustering. These approaches are infeasible for automating social circle creation, as they perform poorly on social networks. A reason for this failure could be the lack of knowledge of the global structure of the social network, or the sparsity of data from social networking websites. As individuals do in real life, OSN users continually attempt to broaden their circles of contacts in order to fulfill various social demands. This suggests that 'homophily' exists among OSN users and can prove useful in the task of social circle detection. Based on this intuition, the present study focuses on understanding 'homophily' and its role in the process of social circle formation. Extensive experiments are performed on egocentric networks (the ego is the user, the alters are friends) extracted from prominent OSNs such as Facebook, Twitter, and Google+. The results of these experiments are used to propose a unified framework: feature extraction for social circles discovery (FESC). FESC detects social circles by jointly modeling ego-net topology and the attributes of alters. The performance of FESC is compared with standard benchmark frameworks using metrics such as edit distance, modularity, and running time to highlight its efficacy.
P. Nerurkar, M. Chandane, S. Bhirud, "Understanding attribute and social circle correlation in social networks," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1806-91
Selçuk Coskun, I. Pehlivan, Akif Akgül, Bi̇lal Gürevi̇n
Random number generators (RNGs) are the basis of encryption techniques. As technology develops, the application areas of cryptology multiply, so the need for RNGs is also increasing rapidly. RNGs can be divided into two categories: pseudorandom number generators (PRNGs) and true random number generators (TRNGs). TRNGs are systems that generate random numbers from unpredictable and uncontrollable entropy sources. In TRNG designs, comparators, flip-flops, Schmitt triggers, and ADCs are generally used to convert the analog signals of the entropy source into digital data. In this study, a new, flexible, computer-controlled platform for finding the most appropriate system parameters in ADC-based TRNG designs is designed and realized. As a sample application of this new platform, six TRNGs are designed that use three different outputs of the Zhongtang continuous-time chaotic system as entropy sources. The random number sequences generated by the six TRNGs are subjected to the NIST 800-22 suite, an internationally recognized statistical test standard, and pass all of its tests. With the help of the new platform, high-quality ADC-based TRNGs can be developed quickly and without the need for special expertise. The platform is designed to determine which entropy sources and parameters are better by comparing them before a complex embedded TRNG is designed. In addition, the platform can be used for educational purposes to explain how an ADC-based TRNG works, so it can also serve as an experiment set in engineering education.
Selçuk Coskun, I. Pehlivan, Akif Akgül, Bi̇lal Gürevi̇n, "A new computer-controlled platform for ADC-based true random number generator and its applications," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1806-167
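For illustration, the first test of the NIST SP 800-22 suite (the frequency, or monobit, test) can be sketched as follows; the full suite applies fifteen such statistical tests, so this is only a representative fragment, not the platform's test harness:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test on a sequence of 0/1 bits.

    The usual pass criterion is p >= 0.01: the proportion of ones is then
    statistically consistent with a truly random sequence.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)       # map 1 -> +1, 0 -> -1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))
```

A balanced bit stream passes with a high p-value, while a heavily biased one fails immediately; the remaining 800-22 tests probe subtler defects such as runs and spectral structure.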
In this paper, we propose a new accuracy assessment model, based on background motion, that can accurately measure the performance of a video stabilization algorithm. Undesired residual motion present in the video can be measured quantitatively as the pixel-by-pixel background motion displacement between two consecutive background frames. First, the foregrounds are removed from the stabilized video, and the two-dimensional flow vectors of each pixel are found between two consecutive background frames. Then the Euclidean distance between these two flow vectors is calculated for each pixel, which is regarded as the displacement of that pixel. The displacements of a frame are averaged to obtain its mean displacement error, and finally the mean displacement errors of all frames are averaged. Our experimental results show the effectiveness of the proposed method.
Md. Alamgir Hossain, Tien-Dung Nguyen, E. Huh, "A novel accuracy assessment model for video stabilization approaches based on background motion," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1810-68
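The metric itself is straightforward to sketch (a minimal reimplementation from the description above, assuming the per-pixel flow field between each pair of consecutive background frames has already been computed, e.g. by an optical-flow routine):

```python
import numpy as np

def mean_displacement_error(flow):
    """MDE for one frame pair.

    flow: (H, W, 2) array of per-pixel (dx, dy) background flow vectors; the
    Euclidean norm of each vector is that pixel's residual displacement.
    """
    return float(np.mean(np.linalg.norm(flow, axis=2)))

def video_mde(flows):
    """Average the per-frame MDEs over all consecutive-frame flow fields."""
    return sum(mean_displacement_error(f) for f in flows) / len(flows)
```

A perfectly stabilized video has zero background flow everywhere, so lower scores indicate better stabilization.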
A k-connected wireless sensor network remains connected if any k-1 arbitrary nodes stop working. The aim of movement-assisted k-connectivity restoration is to preserve the k-connectivity of a network by moving nodes to the necessary positions after node failures. This paper proposes an algorithm named TAPU for k-connectivity restoration that guarantees the optimal movement cost. Our algorithm improves the time and space complexities of the previous approach (MCCR) in both the best and worst cases. In the proposed algorithm, nodes are classified into safe and unsafe groups: failures of safe nodes do not change the k value of the network, while failures of unsafe nodes reduce it. After an unsafe node's failure, the shortest-path tree of the failed node is generated. Each node moves to its parent's location in the tree, starting from a safe node whose moving cost to the root is minimal. TAPU has been implemented in simulation and in a testbed environment including Kobuki robots and Iris nodes. The measurements show that TAPU finds the optimum movement up to 79.5% faster with 50% lower memory usage than MCCR, and with up to 59% lower cost than greedy algorithms.
V. Akram, O. Dagdeviren, "TAPU: Test and pick up-based k-connectivity restoration algorithm for wireless sensor networks," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1801-49
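The property being restored can be checked directly from its definition (a brute-force sketch suitable for small networks, not the TAPU algorithm itself): a network is k-connected when it stays connected after the failure of any set of fewer than k nodes.

```python
from itertools import combinations

def _connected(nodes, edges):
    """Depth-first search connectivity check on the surviving subgraph."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {u: set() for u in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:   # keep only edges between survivors
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == nodes

def is_k_connected(nodes, edges, k):
    """True if the network survives the failure of any k-1 (or fewer) nodes."""
    nodes = set(nodes)
    if len(nodes) <= k:                 # a k-connected graph needs > k nodes
        return False
    return all(_connected(nodes - set(gone), edges)
               for r in range(k) for gone in combinations(nodes, r))
```

The exhaustive check is exponential in k, which is exactly why restoration algorithms classify nodes as safe or unsafe instead of re-verifying connectivity from scratch.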
This paper introduces a new speech enhancement algorithm based on adaptive thresholding of the intrinsic mode functions (IMFs) of noisy signal frames extracted by empirical mode decomposition (EMD). The adaptive threshold values are estimated using a gamma statistical model of the Teager-energy-operated IMFs of the noisy speech and of the estimated noise, based on symmetric Kullback–Leibler divergence. The enhanced speech signal is obtained by applying a semisoft thresholding function to the IMF coefficients of the noisy speech. The method is tested on the NOIZEUS speech database and compared with wavelet-shrinkage and EMD-shrinkage methods in terms of segmental SNR improvement (SegSNR), weighted spectral slope (WSS), and perceptual evaluation of speech quality (PESQ). Experimental results show that the proposed method provides higher SegSNR improvement in dB, lower WSS distance, and higher PESQ scores than the wavelet-shrinkage and EMD-shrinkage methods. The proposed method also outperforms traditional threshold-based speech enhancement approaches at SNR levels from high to low.
Özkan Arslan, E. Z. Engin, "Speech enhancement using adaptive thresholding based on gamma distribution of Teager energy operated intrinsic mode functions," Turkish Journal of Electrical Engineering and Computer Sciences, April 2019. doi:10.3906/ELK-1804-18
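The semisoft thresholding rule mentioned above can be sketched as follows (a standard firm-threshold formulation with illustrative fixed thresholds; in the paper the thresholds are adapted per frame via the gamma model): coefficients below the lower threshold are zeroed, those above the upper threshold are kept, and those in between are shrunk linearly, blending hard and soft thresholding.

```python
def semisoft(x, t1, t2):
    """Semisoft (firm) threshold of coefficient x with 0 < t1 < t2.

    |x| <= t1 -> 0 (treated as noise); |x| >= t2 -> kept unchanged;
    in between -> shrunk linearly toward zero.
    """
    ax = abs(x)
    if ax <= t1:
        return 0.0
    if ax >= t2:
        return x
    return (1.0 if x > 0 else -1.0) * t2 * (ax - t1) / (t2 - t1)
```

Applied coefficient-by-coefficient to each Teager-energy-weighted IMF, this suppresses noise-dominated components while avoiding the constant bias that pure soft thresholding introduces in large coefficients.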