In this paper we propose a probability density function (PDF) for a signal contaminated by non-stationary sea and rain clutter and noise. Using the proposed PDF, we estimate the unknown parameters and apply the Hilbert transform to obtain an analytic signal, passing its imaginary part through a designed Hilbert filter. In the cell under test, we compare the adaptive threshold outputs computed from the real and imaginary parts of the signal and automatically decide whether the anti-rain-clutter filter should be applied. The algorithm adaptively reduces the destructive effects of rain clutter only in those cells contaminated by rain reflections. The proposed algorithm was tested on real radar signatures and implemented in a marine radar. Results show that the method performs well in inhomogeneous clutter and across different sea states.
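The analytic-signal step above can be sketched in a few lines. This is a minimal illustration of obtaining the real and imaginary parts via the Hilbert transform, not the paper's full detector; the test signal and filter-free decomposition are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import hilbert

def analytic_parts(x):
    """Return the real and imaginary parts of the analytic signal of x.

    scipy.signal.hilbert returns z = x + j*H{x}, where H{x} is the
    Hilbert transform; the paper passes the imaginary component through
    a designed Hilbert filter before adaptive thresholding.
    """
    z = hilbert(x)
    return np.real(z), np.imag(z)

# Toy periodic return: for cos, the Hilbert transform is sin, so the
# imaginary part of the analytic signal should reproduce sin exactly.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
re, im = analytic_parts(np.cos(2 * np.pi * 8 * t))
```

In the paper's scheme, thresholds computed from `re` and `im` in the cell under test drive the decision of whether to engage the anti-rain-clutter filter.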
{"title":"A New Model for Rain Clutter Cancellation in Marine Radars","authors":"M. Alaee, R. Amiri, A. S. Moghadam, M. Sepahvand","doi":"10.1109/AMS.2010.66","DOIUrl":"https://doi.org/10.1109/AMS.2010.66","url":null,"abstract":"in this paper we propose a probability density function for signal contaminated by sea and rain non-stationary clutter and noise. Afterward, using the proposed pdf we estimated unknown parameters and using Hilbert transform and obtain an analytical signal to pass its imaginary part through a designed Hilbert filter. In the cell under test, we compare adaptive threshold output from the calculated real and imaginary parts of signal and make an automatic decision to whether anti rain clutter filter must be used or not. This algorithm will decrease adaptively destructive effects of rain clutter just in such cells that are contaminated with rain reflections. Our proposed algorithm is tested on real RADAR signature and implemented in a sea marine RADAR. Results show that the method has good performance in inhomogeneous clutter and different sea states.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127767746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Among steganographic techniques, and particularly in the conventional least significant bit (LSB) insertion method, a central challenge is how to embed secret bits in a cover medium (typically an 8-bit grayscale image) so that they remain invisible to the human visual system and resistant to attacks such as the chi-squared attack. The goal is to maintain robustness together with the highest acceptable imperceptibility. Although using more pixels to estimate each target pixel's capacity yields greater imperceptibility, the probability of detection by the chi-squared test grows as the amount of embedded secret bits increases. This paper proposes a method that utilizes more surrounding pixels (unlike the BPCS, PVD, and MBNS methods, which use only 3 or 4 immediate neighbors of each pixel). For each target pixel, a more precise number of embeddable bits, its capacity, is determined and filled with the corresponding secret bits. Finally, the method is shown to be robust against the chi-squared attack. The new method is called MSP, which stands for "more surrounding pixels".
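The idea of capacity-adaptive LSB embedding can be sketched as follows. The MSP capacity rule is not given in the abstract, so the local standard deviation of a 5x5 surround stands in as a purely illustrative proxy (busier neighbourhoods hide more bits); the function names and the 4-bit cap are assumptions, not the authors' method.

```python
import numpy as np

def capacity(window):
    """Toy capacity rule: noisier 5x5 neighbourhoods tolerate more
    embedded bits. Illustrative only; MSP's actual rule differs."""
    return int(min(4, max(1, np.std(window) // 16 + 1)))

def embed(cover, bits):
    """Embed a bit string into a copy of an 8-bit grayscale image,
    writing `capacity` low-order bits per pixel."""
    stego, pos = cover.copy(), 0
    h, w = cover.shape
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            if pos >= len(bits):
                return stego
            k = capacity(cover[y - 2:y + 3, x - 2:x + 3])
            chunk = bits[pos:pos + k].ljust(k, "0")  # pad last chunk
            stego[y, x] = (int(stego[y, x]) & ~((1 << k) - 1)) | int(chunk, 2)
            pos += k
    return stego

cover = np.arange(64, dtype=np.uint8).reshape(8, 8) * 3
stego = embed(cover, "10110011")
```

Because at most 4 low-order bits are touched, the upper nibble of every pixel is preserved, which is the source of the method's imperceptibility.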
{"title":"Enhanced Least Significant Bit Scheme Robust against Chi-Squared Attack","authors":"Masoud Afrakhteh, S. Ibrahim","doi":"10.1109/AMS.2010.64","DOIUrl":"https://doi.org/10.1109/AMS.2010.64","url":null,"abstract":"Among the steganographic techniques and particularly in conventional least significant bit (LSB) insertion method, there is a challenging issue and that is how to embed desired secret bits in a cover medium (a typical 8-bit gray-scale image) in a way not to be seen by human vision system as well as the fact that it is expected not to be attacked by some attacks like chi-squared attack, etc. The point is how to maintain robustness and the highest acceptable imperceptibility. Although by using more pixels for estimating each target pixel’s capacity, bigger imperceptibility is achieved, there is another problem that the higher probability of being attacked by Chi-squared index is expected as the desired amount of secret bits increases. This paper proposes a method that utilizes more surrounding pixels (unlike BPCS, PVD and MBNS methods which use 3 or 4 immediate neighbors of each pixel). In this regard, for each certain target pixel, a more precise number of bits known as capacity, is found to be filled by respective secret bits. Finally, it is proved that the method is robust against Chi-squared attack. The new method is called MSP and it stands for more surrounding pixels.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126912629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we propose a technique for encrypting high-volume data streams sent over communication links across various networks. High-volume data streams arise in applications such as audio-video conferencing, or more specifically Internet Protocol Television (IPTV), and use of the File Transfer Protocol (FTP). When data for these applications is transferred from one peer to another, the main emphasis is usually on transferring it as fast as possible; as a result, security is sometimes overlooked. This paper puts forward a technique that may answer the security problem of transferring data of this sort.
{"title":"A Dynamic Encryption Algorithm for Multicast/Broadcast Streaming Applications","authors":"T. Rahman","doi":"10.1109/AMS.2010.119","DOIUrl":"https://doi.org/10.1109/AMS.2010.119","url":null,"abstract":"In this paper we are proposing a technique for encrypting ‘high volume data streams’ that are sent over communication links throughout various networks. High volume data streams refer to applications such as audio-video conferencing, or more specifically Internet Protocol Television (IPTV), and usage of the File Transfer Protocol or FTP. In most cases when data related to these applications is transferred from one peer to another, the main emphasis always tends to be to transfer these forms of data as fast as possible. As a result of this, the security issue is at times overlooked. This paper is putting forward a technique that might be an answer to the security problem of transferring data of this sort.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127679302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The sky has always been a crucial element in modeling the background of an outdoor scene. The position of the sun during the day changes the sky colour, and the sky colour in turn affects the colour of objects exposed to the lighting, such as the orangish-red clouds seen at sunset. This study therefore emphasizes how to produce illuminated 3D objects based on the interaction between sunlight and the sky. A two-part program was developed. The first part produces the correct sky colour for a given sun position using Perez's function; the sky colour is plotted on a sky dome, which in turn becomes a closed environment for the clouds. The interaction occurs in the second part, where energy is transferred within the dome environment with the sky colour as the main illumination source, producing a colour-bleeding effect via the radiosity approach. The results are applicable to daylight modeling of buildings, showing the lighting effects of the sun and the sky.
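Perez's function referenced above models the relative luminance of a sky element from its zenith angle and its angular distance to the sun. A minimal sketch follows; the coefficient values are a commonly cited clear-sky parameter set used purely for illustration, and the paper's exact coefficients and normalization are not specified here.

```python
import math

def perez_relative_luminance(cos_theta, gamma, a, b, c, d, e):
    """Perez all-weather relative luminance of a sky element.

    cos_theta: cosine of the element's zenith angle
    gamma:     angle (radians) between the element and the sun
    a..e:      sky-condition coefficients
    """
    return ((1 + a * math.exp(b / max(cos_theta, 1e-3)))
            * (1 + c * math.exp(d * gamma) + e * math.cos(gamma) ** 2))

# Illustrative clear-sky coefficients (assumed, not from the paper).
A, B, C, D, E = -1.0, -0.32, 10.0, -3.0, 0.45

# A patch near the sun should be far brighter than one far from it,
# which is what produces the circumsolar glow on the sky dome.
near_sun = perez_relative_luminance(0.8, 0.05, A, B, C, D, E)
far_sky = perez_relative_luminance(0.8, 2.0, A, B, C, D, E)
```

Evaluating this per dome vertex gives the sky-dome colours that then act as the emitting sources in the radiosity pass.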
{"title":"Interaction between Sunlight and the Sky Colour with 3D Objects in the Outdoor Virtual Environment","authors":"S. Halawani, M. Sunar","doi":"10.1109/AMS.2010.96","DOIUrl":"https://doi.org/10.1109/AMS.2010.96","url":null,"abstract":"The sky has always been the crucial element in modeling the background of an outdoor scene. The position of the sun during the day gives a different impact on the sky colour. The sky colour indirectly affects the colour of the objects which were exposed to the lighting, such as the orangish red colour of the clouds seen during sunsets. Consequently, this study will emphasize on how to produce illuminated 3D objects based upon the effects of interaction between the sunlight and sky. A two-part program was developed for this study. The first part of the program concentrates on producing the correct sky colour depending on the position of the sun using Perez’s function. The sky colour will be plotted on the sky dome which in turn will become a closed environment for the clouds. The interaction will occur in the second part of the program where the energy transfer in the dome environment with color of the sky as the main source illumination, resulting in the colour bleeding effect when using the radiosity approach. The result from this study is applicable to daylight modeling of building by showing the lighting effects from the sun and the sky.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126658830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The hub location problem appears in a variety of applications, including airline systems, cargo delivery systems, and telecommunication network design. Analyzing hub location applications separately shows that each area has its own characteristics. In this paper, we study the single allocation hub covering problem under capacity constraints (CSAHCLP: Capacitated Single Allocation Hub Covering Location Problem) over complete hub networks and propose a mixed-integer programming formulation to this end. The aim of the model is to locate the hubs and allocate non-hub nodes to them so that the travel cost between any origin-destination pair stays within a given cost bound while hub capacity constraints are respected. Unlike [1], our formulation incorporates a covering radius. The paper thus proposes a new mixed-integer programming formulation and adapts the imperialist competitive algorithm, a solution method not previously applied to this problem, to solve the hub covering location problem.
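For orientation, a generic capacitated single-allocation hub covering formulation can be written as below. This is an illustrative textbook-style sketch under assumed notation ($x_{ik}=1$ if node $i$ is allocated to hub $k$, $x_{kk}=1$ if $k$ is a hub, $f_k$ hub fixed cost, $O_i$ node demand, $\Gamma_k$ hub capacity, $\alpha$ inter-hub discount, $\beta$ the covering bound), not necessarily the authors' exact model; the quadratic covering constraint is usually linearized in practice.

```latex
\begin{align}
\min\ & \sum_{k} f_k\, x_{kk} \\
\text{s.t.}\ & \sum_{k} x_{ik} = 1 && \forall i \\
& x_{ik} \le x_{kk} && \forall i, k \\
& \left(c_{ik} + \alpha\, c_{km} + c_{jm}\right) x_{ik}\, x_{jm} \le \beta && \forall i, j, k, m \\
& \sum_{i} O_i\, x_{ik} \le \Gamma_k\, x_{kk} && \forall k \\
& x_{ik} \in \{0, 1\} && \forall i, k
\end{align}
```

The covering constraint is what distinguishes this variant from cost-minimizing hub location: allocations are feasible only if every routed path fits within the radius $\beta$.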
{"title":"Hub Covering Location Problem under Capacity Constraints","authors":"R. Ghodsi, Mehrdad Mohammadi, H. Rostami","doi":"10.1109/AMS.2010.132","DOIUrl":"https://doi.org/10.1109/AMS.2010.132","url":null,"abstract":"The hub location problem appears in a variety of applications including airline systems, cargo delivery systems, and telecommunication network design. When we analyze hub location applications separately, we observe that each area has its own characteristics. In this paper, we study the single allocation hub covering problem under capacity constraints (or CSAHCLP - Capacitate Single Allocation Hub Covering Location Problem) over complete hub networks and propose a mixed-integer programming formulation to this end. The aim of our model is to find the location of hubs and allocate non-hub nodes to the located hub nodes so much that the travel cost between any hub-node pair is within a given cost bound and hubs are considered under capacity constraint. Unlike [1] we prepare new formulation with covering radius. In general this paper attempts to propose a new mixed-integer programming formulation and adapt the imperialist competitive algorithm to solve the hub covering location problem. Also unlike previous studies, we adapt new solution algorithm (Imperialist competitive algorithm) for solving our problem that has not used yet.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114486054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The multilayer perceptron (MLP) network has been recognized as a powerful tool for many applications, including classification. The choice of activation function plays an essential role in MLP performance. This paper presents a comparison of two commonly used MLP activation functions, the sigmoid and the hyperbolic tangent, for weather classification. Meteorological data such as solar radiation, ambient temperature, current, surface temperature, voltage, wind direction, and wind speed are acquired from a photovoltaic (PV) system and fed to the MLP network to classify the weather condition into four types: rain, cloudy, dry day, and storm. The Levenberg-Marquardt algorithm is used for training, since it is among the fastest training methods and converges reliably to a minimum error. Experimental results show that the hyperbolic tangent activation function is more efficient than the sigmoid: the MLP using it achieves higher classification accuracy with fewer hidden nodes.
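The two activation functions compared above are closely related, which makes the comparison well posed. The snippet below shows the standard definitions and the identity linking them; the zero-centred output of tanh is the usual explanation for its faster convergence in hidden layers.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: maps R to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Hyperbolic tangent: maps R to (-1, 1), zero-centred."""
    return np.tanh(x)

# tanh is a scaled and shifted sigmoid: tanh(x) = 2*sigmoid(2x) - 1,
# so the two differ in output range and gradient scale, not in shape.
x = np.linspace(-4.0, 4.0, 9)
```

Because the shapes coincide up to scaling, accuracy differences in the paper's experiments come from training dynamics (gradient magnitude, zero-centring) rather than representational power.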
{"title":"Performance Comparison of Different Multilayer Perceptron Network Activation Functions in Automated Weather Classification","authors":"I. Isa, S. Omar, Z. Saad, M. K. Osman","doi":"10.1109/AMS.2010.27","DOIUrl":"https://doi.org/10.1109/AMS.2010.27","url":null,"abstract":"Multilayer perceptron network (MLP) has been recognized as a powerful tool for many applications including classification. Selection of the activation functions in the multilayer perceptron (MLP) network plays an essential role on the network performance. This paper presents a comparison study of two commonly used MLP activation function, sigmoid and hyperbolic tangent for weather classification. Meteorological data such as solar radiation, ambient temperature, current, surface temperature, voltage, wind direction and wind speed data are acquired from a photovoltaic (PV) system. Then, the meteorological data are input to the MLP network to classify the weather condition. In this study, weather conditions are classified into four types, rain, cloudy, dry day and storm. Levenberg-Marquardt algorithm is used to train the MLP network since it is the fastest training and ensure the best converges towards a minimum error. Experimental results show that hyperbolic tangent activation function is more efficient compared to sigmoid activation function. The MLP network using hyperbolic tangent function has achieved higher classification accuracy with less number of hidden nodes compared to sigmoid activation function.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122424024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V. Govindaswamy, Matthew Caudill, Jeff Wilson, D. Brower, G. Balasekaran
Sorting of data sets is a thoroughly researched field; well-known algorithms include Bubble, Insertion, Selection, Shell, Quick, Merge, and Heap sort. In this paper, we present a novel sorting algorithm, named Clump Sort, that takes advantage of ordered segments already present in medical data sets. It sorts the medical data considerably faster than all the other sorts except on totally non-clumped data, where Heap sort does only slightly better. However, Clump sort has the advantage of being stable: the original order of equal elements is preserved, whereas Heap sort does not guarantee that equal elements appear in their original order after sorting. Because it accesses elements in order, Clump Sort also has considerably better data-cache performance on both clumped and non-clumped data, outperforming Heap Sort on a modern desktop PC. Sorting equal elements in the correct order is essential for sorting medical data.
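The core idea of exploiting pre-existing ordered segments can be sketched as a natural merge over maximal non-decreasing runs. This is an illustrative reconstruction, not the authors' exact Clump Sort; it does, however, share the two properties the abstract emphasizes, stability and sequential access to the data.

```python
from heapq import merge  # stable k-way merge of already-sorted inputs

def find_runs(data, key=lambda r: r):
    """Split data into maximal non-decreasing runs ('clumps'),
    scanning the input strictly in order."""
    runs, start = [], 0
    for i in range(1, len(data)):
        if key(data[i]) < key(data[i - 1]):
            runs.append(data[start:i])
            start = i
    runs.append(data[start:])
    return runs

def clump_style_sort(data, key=lambda r: r):
    """Stable sort that exploits pre-sorted segments: detect the runs,
    then merge them. Equal elements keep their original relative order
    because ties are broken in favour of earlier runs."""
    if not data:
        return []
    return list(merge(*find_runs(data, key), key=key))

# Records keyed on the first field; 'b' and 'f' share key 1 and must
# keep their original order for the sort to be stable.
pairs = [(3, "a"), (1, "b"), (2, "c"), (2, "d"), (5, "e"), (1, "f")]
ordered = clump_style_sort(pairs, key=lambda r: r[0])
```

On already-clumped data this degenerates to a handful of long runs and a nearly linear merge, which is the source of the speedup the paper reports.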
{"title":"Clump Sort: A Stable Alternative to Heap Sort for Sorting Medical Data","authors":"V. Govindaswamy, Matthew Caudill, Jeff Wilson, D. Brower, G. Balasekaran","doi":"10.1109/AMS.2010.53","DOIUrl":"https://doi.org/10.1109/AMS.2010.53","url":null,"abstract":"Sorting data sets are a thoroughly researched field. Several sorting algorithms have been introduced and these include Bubble, Insertion, Selection, Shell, Quick, Merge and Heap. In this paper, we present a novel sorting algorithm,named Clump Sort, to take advantage of ordered segments already present in medical data sets. It succeeds in sorting the medical data considerably better than all the sorts except when using totally non-clumped data. In this test using totally nonclumped data, Heap sort does only slightly better than Clump sort. However, Clump sort has the advantage of being a stable sort as the original order of equal elements is preserved whereas in Heap sort, it is not since it does not guarantee that equal elements will appear in their original order after sorting. As such, Clump Sort will have considerably better data cache performance with both clumped and non-clumped data, outperforming Heap Sort on a modern desktop PC, because it accesses the elements in order. Sorting equal elements in the correct order is essential for sorting medical data.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122640506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Z. Md-Yusof, C. Tan, A. Wong, Z. Ibrahim, M. Hani, M. N. Marsono
A compressed memory system is a high-performance memory system capable of reducing memory size and improving memory performance. This project focuses on the memory management of a compressed memory with a second-level memory and compares different compressed-memory allocation algorithms. Two dynamic memory allocation algorithms, lazy-fit and link-fit, are analyzed to determine their impact on memory compression and memory access time. A SystemC design approach is used: the compressed memory system is first modeled in C and then refined to a SystemC register-transfer-level abstraction. Simulation results show that the lazy-fit system allocates faster than the one based on the link-fit algorithm.
{"title":"Link-Fit and Lazy-Fit Algorithms in Compressed Memory System","authors":"Z. Md-Yusof, C. Tan, A. Wong, Z. Ibrahim, M. Hani, M. N. Marsono","doi":"10.1109/AMS.2010.117","DOIUrl":"https://doi.org/10.1109/AMS.2010.117","url":null,"abstract":"Compressed memory system is a high performance memory system that is capable to reduce memory size and improve memory performance. This project focuses on the memory management of compressed memory and a secondlevel memory and compares different compress memory allocation algorithms. Two dynamic memory allocation algorithms, which are lazy-fit and link-fit algorithms are analyzed to determine their impacts on memory compression and memory access time. In this work, systemC design approach is used to design the compressed memory system. The compressed memory system is designed from C modeling and refined to systemC register-transfer level abstraction. The simulation results show the lazy-fit system has better allocation speed compared to the one based on link-fit algorithm.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122870403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asrul Adam, A. F. Z. Abidin, Z. Ibrahim, A. R. Husain, Z. Yusof, I. Ibrahim
Most of the operational time of a PCB robotic drill is spent moving the drill bit between holes. This time can be kept minimal by optimizing the route taken by the robot; an optimized route translates into a lower operating cost. This paper proposes a new model that applies Particle Swarm Optimization (PSO) to find an optimized routing path for the PCB robotic drill, whose main task is to drill holes in a printed circuit board (PCB). The drill bit moves along Cartesian axes from its initial position, visits the drill sites, and then returns to the initial position. The route consists of a number of locations where holes are to be drilled, and the complexity of finding the optimal route grows with the number of holes. The proposed model solves this complex problem with minimal computational time. A case study indicates that the model finds the shortest path for the robot to complete its task, suggesting that it can be applied to other drill-routing problems.
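One common way to apply continuous PSO to a routing problem like this is random-key encoding: each particle is a real vector, and sorting its components yields a visiting order. The abstract does not specify the authors' encoding, so the sketch below, including all parameter values, is an illustrative assumption rather than the paper's model.

```python
import math
import random

def route_length(order, pts):
    """Tour length: start at the origin, drill holes in the given
    order, return to the origin."""
    tour = [(0.0, 0.0)] + [pts[i] for i in order] + [(0.0, 0.0)]
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

def pso_drill_route(pts, n_particles=30, iters=200, seed=1):
    """Minimize drill-route length with random-key PSO."""
    rng = random.Random(seed)
    n = len(pts)
    decode = lambda x: sorted(range(n), key=lambda i: x[i])
    X = [[rng.random() for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    P = [x[:] for x in X]                            # personal bests
    Pf = [route_length(decode(x), pts) for x in X]
    gf = min(Pf)
    g = P[Pf.index(gf)][:]                           # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, pulls
    for _ in range(iters):
        for k in range(n_particles):
            for i in range(n):
                V[k][i] = (w * V[k][i]
                           + c1 * rng.random() * (P[k][i] - X[k][i])
                           + c2 * rng.random() * (g[i] - X[k][i]))
                X[k][i] += V[k][i]
            f = route_length(decode(X[k]), pts)
            if f < Pf[k]:
                P[k], Pf[k] = X[k][:], f
                if f < gf:
                    g, gf = X[k][:], f
    return decode(g), gf

# Three collinear holes: any optimal tour has length 6.
best_order, best_len = pso_drill_route([(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)])
```

Random keys keep the particle update purely continuous, so the standard PSO velocity equations apply unchanged; only the fitness evaluation goes through the permutation decode.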
{"title":"A Particle Swarm Optimization Approach to Robotic Drill Route Optimization","authors":"Asrul Adam, A. F. Z. Abidin, Z. Ibrahim, A. R. Husain, Z. Yusof, I. Ibrahim","doi":"10.1109/AMS.2010.25","DOIUrl":"https://doi.org/10.1109/AMS.2010.25","url":null,"abstract":"Most of the operational time of a PCB Robotic Drill is spent on moving the drill bit between the holes. This operational time can be kept at a minimal level by optimizing the route taken by the robot. An optimized route translates to a minimal cost of operating the robot. This paper proposes a new model that implements Particle Swarm Optimization (PSO) in order to find optimized routing path when using the PCB Robotic Drill. The main task of the PCB Robotic Drill is to drill holes at Printed Circuit Board (PCB). This PCB Robotic Drill will route the drill site by moving the drill bit along Cartesian axes from it’s initial position. Then, the drill bit will return back to the initial position. The drill route consists of a number of potential locations where the holes are going to be drilled. As the number of holes required increases so thus does the complexity to find the optimized route. The proposed model can be used to solve this complex problem with minimal computational time. The result of a case study indicates that the proposed model is capable to find the shortest path for the robot to complete its task. Thus concluded the proposed model can be implemented in any drill route problems.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128570392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper focuses on optimizing harmonic content in a Sinusoidal Pulse Width Modulation (SPWM) design. The SPWM is designed in VHDL and implemented on an ALTERA DE2-70 board. The SPWM output is generated by the intersection of a sine signal and a triangle signal: the sine signal is the reference waveform and the triangle waveform is the carrier. When the sine value exceeds the triangle value, the output pulse goes high; when the triangle signal exceeds the sine signal, the pulse goes low. The SPWM output is varied by changing the number of bits, the modulation index, and the frequency used in the system to produce more pulses. The more pulses produced, the lower the harmonic content of the output voltage and the higher the resolution.
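The sine-versus-triangle comparison described above can be modeled in software before committing to VHDL. The sketch below is a behavioural model under assumed parameters (a carrier at 21 times the reference frequency, modulation index 0.8), not the paper's hardware design.

```python
import numpy as np

def spwm(n=2000, carrier_ratio=21, mod_index=0.8):
    """One fundamental period of a naturally sampled SPWM waveform.

    carrier_ratio: triangle (carrier) frequency as a multiple of the
                   sine (reference) frequency
    mod_index:     amplitude of the sine reference
    The output is high wherever the sine exceeds the triangle carrier.
    """
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    sine = mod_index * np.sin(2 * np.pi * t)
    # Symmetric triangle carrier spanning [-1, 1].
    tri = 2.0 * np.abs(2.0 * ((carrier_ratio * t) % 1.0) - 1.0) - 1.0
    return np.where(sine > tri, 1, 0)

pulses = spwm()
```

One pulse is produced per carrier period, so raising the carrier ratio directly yields the "more pulses, lower harmonics" trade-off the paper exploits; the pulse widths track the instantaneous sine value.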
{"title":"Sinusoidal Pulse Width Modulation (SPWM) Design and Implementation by Focusing on Reducing Harmonic Content","authors":"H. Hussin, A. Saparon, M. Muhamad, M. D. Risin","doi":"10.1109/AMS.2010.125","DOIUrl":"https://doi.org/10.1109/AMS.2010.125","url":null,"abstract":"This paper is focusing on optimizing harmonic content in Sinusoidal Pulse Width Modulation (SPWM) design. This SPWM is designed using VHDL and implemented on ALTERA (DE2-70 board). SPWM output is generated by intersection between sine signal and triangle signal. Sine signal is the reference waveform and triangle waveform is the carrier waveform. When value sine signal is large than triangle signal, the pulse will start produce to high. And then when the triangular signals higher than sine signal, the pulse will come to low. SPWM output will changed by changing the value of number of bit, modulation index and frequency used in this system to produce more pulse width. The more pulse width produced, the output voltage will have lower harmonics contents and the resolution increase.","PeriodicalId":437153,"journal":{"name":"2010 Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation","volume":"428 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129363446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}