Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303922
A. Choudhury, B. Sikdar
Dynamic voltage and frequency scaling poses threats to the reliability of Chip Multiprocessors (CMPs). Since the cache is the component most susceptible to faults, fault tolerance techniques are necessary to ensure error-free execution even when the cache contains faults. Existing fault tolerance techniques lack completeness in fault protection and harm the effective capacity of the cache: they either remap faulty blocks to non-conflicting faulty blocks or use some auxiliary cache. This work proposes a fault remapping strategy that ensures completeness in fault protection without affecting the effective capacity of the Last Level Cache, by remapping all effective faulty cache lines to either non-conflicting faulty cache lines or low-reusable healthy lines. Reusability is predicted using dynamic reuse distance analysis, and cache lines are ranked by their protecting distance. Only the highly reusable faulty lines are considered for remapping to low-reusable non-conflicting faulty lines; failing that, low-reusable healthy lines are considered as the target, which avoids the need for any auxiliary cache. Cycle-accurate simulation in Multi2Sim 5.0 with a plethora of fault maps, on an octa-core CMP architecture, reveals up to a 38.73% increase in hit ratio over existing fault remapping techniques.
{"title":"CIFR: A complete in-place fault remapping strategy for CMP cache using dynamic reuse distance","authors":"A. Choudhury, B. Sikdar","doi":"10.1109/ISED.2017.8303922","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303922","url":null,"abstract":"Dynamic voltage and frequency scaling puts threats to reliability in Chip Multiprocessors (CMPs). Cache being the most susceptible to faults, the fault tolerance techniques are necessary to ensure error free execution even if there are faults in cache. Existing fault tolerance techniques lack completeness in fault protection as well as harm effective capacity of the cache. They either remap faulty blocks to non-conflicting faulty blocks or use some auxiliary cache. This work proposes a fault remapping strategy that ensures completeness in fault protection without affecting the effective capacity of the Last Level Cache by remapping all effective faulty cache lines to either non-conflicting faulty cache lines or low-reusable healthy lines. The reusability is predicted using dynamic reuse distance analysis and cache lines are ranked by their protecting distance. Only the highly reusable faulty lines are considered for remapping to low reusable non-conflicting faulty lines. Failing that the low-reusable healthy lines are considered as the target and this avoids the requirement of any auxiliary cache. 
Cycle accurate simulation in Multi2Sim 5.0 with plethora of fault maps, in an octacore CMP architecture, reveals up to 38.73% increase in hit ratio over the existing fault remapping techniques.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125172906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
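The target-selection policy described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the field names, the reuse-score representation, and the threshold are all assumptions made for the sketch.

```python
# Hypothetical sketch of the CIFR-style remapping decision (not the authors' code).
# Each cache line is a dict with a "faulty" flag, a "conflicts" flag, and a
# predicted "reuse" score (higher = more reusable, e.g. from protecting distance).

def pick_remap_target(line, candidates, reuse_threshold=0.5):
    """Pick a remap target for a highly reusable faulty line.

    Prefer a low-reusable, non-conflicting faulty line; fall back to a
    low-reusable healthy line, so no auxiliary cache is needed.
    """
    if not line["faulty"] or line["reuse"] < reuse_threshold:
        return None  # only highly reusable faulty lines are remapped
    low_reuse = [c for c in candidates if c["reuse"] < reuse_threshold]
    faulty_targets = [c for c in low_reuse if c["faulty"] and not c["conflicts"]]
    if faulty_targets:
        return min(faulty_targets, key=lambda c: c["reuse"])
    healthy_targets = [c for c in low_reuse if not c["faulty"]]
    if healthy_targets:
        return min(healthy_targets, key=lambda c: c["reuse"])
    return None
```

The fallback to healthy lines is what gives the scheme its claimed completeness without an auxiliary cache.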
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303928
Priyanandini Das, Pranose J. Edavoor, S. Raveendran, Sunil Rathore, A. Rahulkar
This paper analyses the performance of a wavelet-based PID controller for a servomotor system against a conventional PID controller. First, an orthogonal wavelet filter bank is obtained. The wavelet filter bank is used to decompose the error signal (the difference between the reference and actual speed of the servomotor system) into different sub-bands. These sub-bands are scaled by selecting optimized gains in order to minimize the error, and the resultant sub-bands are added together to generate the control signal for the servomotor system. The simulation results show the superiority of the wavelet-based PID controller over the existing PID controller under different conditions. The Multiresolution PID (MRPID) controller is also implemented on an FPGA, and its filter characteristics are compared with the MATLAB simulation.
{"title":"Design and implementation of PID controller based on orthogonal wavelet filter-banks in FPGA","authors":"Priyanandini Das, Pranose J. Edavoor, S. Raveendran, Sunil Rathore, A. Rahulkar","doi":"10.1109/ISED.2017.8303928","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303928","url":null,"abstract":"This paper analyses the performance of a PID controller for a servomotor system which is based on Wavelet as opposed to conventional PID controller. First, the orthogonal wavelet filter bank has been obtained. For decomposing the error signal(difference between reference and actual speed of servomotor system) at different sub-bands, wavelet filter bank is used. These sub-bands have been scaled by selecting optimized gains in order to minimize the error. After adding the resultant sub-bands together and the control signal is generated for the servomotor system. The simulation results show the superiority of the wavelet-based PID controller over existing PID controller for different conditions and also the Multiresolution (MRPID) controller is implemented on FPGA and the filter characteristics is compared with the MATLAB simulation.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126437462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
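The decompose-scale-sum idea behind such a wavelet PID controller can be sketched minimally, assuming a one-level orthogonal Haar filter bank and illustrative gains (the paper's actual filter bank and gain tuning are not reproduced here):

```python
import math

def haar_decompose(x):
    """One-level orthogonal Haar analysis: approximation and detail sub-bands."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_pid_control(error_window, k_low=1.0, k_high=0.2):
    """Scale each sub-band by its own gain and sum to form the control signal.

    k_low acts on the slow (approximation) content of the error,
    k_high on the fast (detail) content; both gains are illustrative.
    """
    approx, detail = haar_decompose(error_window)
    return k_low * sum(approx) + k_high * sum(detail)
```

Compared with a classical PID, the sub-band gains play a role analogous to separate proportional actions on the low- and high-frequency parts of the error.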
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303936
A. Gupta
The expansion of the power system has led to increased deployment of Distributed Generation (DG) to meet growing load demand, but improper allocation of DG increases losses and voltage imbalance. The rapid increase of nonlinear loads in daily life has also introduced power quality problems into the power system in the form of harmonics. In this paper, the optimal placement of multiple DG units and Distribution Static Compensators (D-STATCOMs) in a radial distribution system is carried out, resulting in the reduction of line losses as well as Total Harmonic Distortion (THD). The optimal locations of the DG and D-STATCOM are determined by a stability index, and their optimal sizes are determined using a variation technique that increases the size in steps. The load flow and harmonic flow analyses are carried out on the IEEE 34-bus system, with the harmonic sources injected as current sources.
{"title":"Effect of optimal allocation of multiple DG and D-STATCOM in radial distribution system for minimizing losses and THD","authors":"A. Gupta","doi":"10.1109/ISED.2017.8303936","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303936","url":null,"abstract":"The expansion of power system has led to the increase of Distributed Generation (DG) to fulfil the increase in the load demand. The DG opted has led to the increase in the losses and voltage imbalance due to improper allocation. Rapid increase in nonlinear loads in our daily life has introduced the power quality problems in the power system in terms of harmonics. In this paper, the optimal placement of multiple DG and Distribution Static Compensator (D-STATCOM) in a radial distribution system is carried out which results in the reduction of line losses as well as Total Harmonic Distortion (THD). The optimal location of DG and D-STATCOM is determined by stability index. The optimal size of DG and D-STATCOM is determined using variation technique by increasing the size in steps. The load flow analysis and harmonic flow analysis is carried out on IEEE 34 bus system having the harmonic sources injected as current sources.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130130986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
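The stepwise variation technique for sizing can be sketched generically as a scan over candidate sizes. Here `loss_fn` is a placeholder for a full load-flow loss evaluation, which the sketch deliberately does not implement:

```python
def optimal_size_by_variation(loss_fn, max_size, step):
    """Variation technique: grow the device size in fixed steps and keep
    the size that minimizes the evaluated system losses.

    loss_fn is a stand-in for a load-flow (or harmonic-flow) evaluation
    returning total losses for a given DG or D-STATCOM size.
    """
    best_size, best_loss = 0, loss_fn(0)
    size = step
    while size <= max_size:
        loss = loss_fn(size)
        if loss < best_loss:
            best_size, best_loss = size, loss
        size += step
    return best_size, best_loss
```

With a convex loss curve the scan finds the minimum directly; in practice the step size trades off resolution against the number of load-flow runs.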
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303944
Ankur Pokhara, Jatin Agrawal, B. Mishra
A full-custom, all-digital, low-power Time-to-Digital Converter (TDC) is proposed. The proposed architecture contains a 20-bit ripple counter, an encoder, an edge detector and a Ring Delay Line (RDL). The TDC core, implemented in 0.18 µm CMOS technology, has an active area of 0.026 mm² and achieves resolutions of 586.4 ps/LSB and 201.8 ps/LSB with power consumptions of 32.5 µW and 315.5 µW, supporting distance calculation up to 2949.4 km and 1015.7 km at 1 V and 1.8 V respectively, making it feasible for time-of-flight measurement in space applications.
{"title":"Design of an all-digital, low power time-to-digital converter in 0.18μm CMOS","authors":"Ankur Pokhara, Jatin Agrawal, B. Mishra","doi":"10.1109/ISED.2017.8303944","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303944","url":null,"abstract":"A full custom, all digital, low power Time-to-Digital Converter (TDC) is proposed. The proposed architecture contains a 20-bit ripple counter, an encoder, an edge detector and a Ring Delay Line (RDL). The TDC core, has an active area of 0.026mm2 implemented in 0.18µm CMOS technology that achieves a resolution of 586.4ps/LSB and 201.8ps/LSB, lower power consumption of 32.5µW and 315.5µW, with the distance calculation up to 2949.4km and 1015.7km at 1V and 1.8V respectively, making it feasible for time-of-flight measurement in space applications.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124032249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
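Assuming a round-trip time-of-flight model (d = c·t/2; the abstract does not state whether its distance figures are one-way or round-trip), each timing LSB maps to a range resolution of roughly 8.8 cm and 3.0 cm at the two reported resolutions:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(lsb_seconds):
    """Round-trip time-of-flight: each LSB of timing resolution corresponds
    to c * t / 2 of range (the pulse travels out and back)."""
    return C * lsb_seconds / 2.0

# 586.4 ps/LSB -> ~8.79 cm of range per count; 201.8 ps/LSB -> ~3.02 cm.
coarse = range_resolution(586.4e-12)
fine = range_resolution(201.8e-12)
```

The per-LSB figures follow directly from the abstract's resolutions; the maximum-distance figures additionally depend on the counter depth and measurement window, which the sketch does not model.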
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303935
S. Raj, K. C. Ray
Automatic analysis of long-term electrocardiogram (ECG) recordings is crucial for timely and accurate diagnosis of life-threatening cardiovascular diseases. This article presents an efficient ECG classification scheme using the variational mode decomposition (VMD) approach, which decomposes a time-domain input signal into various variational mode functions (VMFs). The VMD method adaptively decomposes an input signal into a number of modes and estimates their center frequencies, so that the band-limited modes can regenerate the input signal exactly. In this study, only mode-2 (M2) is used as the morphological feature and is represented in reduced dimensions by employing principal component analysis (PCA). Further, the dynamic features (RR-intervals) are concatenated to constitute a feature set representing each heartbeat. The PCA method balances the impact of the two feature types, which exhibit two different characteristics of a heartbeat, i.e., within the event and among the events. These extracted features of each heartbeat are then used to classify it into one of 16 heartbeat classes using an artificial bee colony (ABC) optimized directed acyclic graph support vector machine (DAG-SVM). The proposed method is evaluated on the benchmark MIT-BIH arrhythmia database, yielding improved accuracy, sensitivity and positive predictivity of 98.72%, 98.72% and 98.72% respectively over the methodologies available in the literature for state-of-the-art diagnosis.
{"title":"Application of variational mode decomposition and ABC optimized DAG-SVM in arrhythmia analysis","authors":"S. Raj, K. C. Ray","doi":"10.1109/ISED.2017.8303935","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303935","url":null,"abstract":"Automatic analysis of long-term electrocardiogram (ECG) recordings is crucial for timely and accurate diagnosis of life-threatening cardiovascular diseases. This article presents an efficient ECG classification scheme using variational mode decomposition approach. The method decomposes a time-domain input signal into various variational mode functions (VMFs). The VMD method adaptively decomposes an input signal into a number of modes to estimate their center frequencies, so that the band-limited modes can regenerate the input signal exactly. In this study, only mode-2 (M2) is used as morphological features and represented in reduced dimensions by employing principal component analysis (PCA). Further, the dynamic features (RR-intervals) are concatenated to constitute a feature set representing each heartbeat. The PCA method is employed to balance the impact of both the features exhibiting two different characteristics of an heartbeat i.e within the event and among the events. These extracted features of each heartbeat are further utilized for recognition into one of 16 heartbeat classes using artificial bee colony (ABC) optimized directed acyclic graph support vector machines (DAG-SVM). 
The proposed method is evaluated on the benchmark MIT-BIH arrhythmia database yielding an improved accuracy, sensitivity, positive predictivity and F-score of 98.72%, 98.72% and 98.72% respectively over the methodologies available in literature to the state-of-art diagnosis.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114532025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
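The DAG-SVM evaluation strategy mentioned above can be sketched generically: with K classes and one-vs-one classifiers, a decision DAG eliminates one candidate class per comparison, so a prediction needs only K-1 classifier evaluations. The `pairwise` callback below is a stand-in for the trained ABC-optimized SVMs, which the sketch does not implement:

```python
def dag_svm_predict(classes, pairwise):
    """Decision DAG over one-vs-one classifiers.

    Start with the full class list and, at each node, run the classifier for
    the first and last remaining classes; the loser is eliminated. After
    K-1 comparisons a single class remains.
    """
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pairwise(a, b)  # returns either a or b
        if winner == a:
            remaining.pop()      # b eliminated
        else:
            remaining.pop(0)     # a eliminated
    return remaining[0]
```

For the paper's 16 classes this means 15 SVM evaluations per heartbeat instead of voting over all 120 pairwise classifiers.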
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303939
Alisha Oraon, Shradha Shreya, Renuka Kumari, A. Islam
In this paper, a double-trench 4H-SiC MOSFET is presented as an enhanced model of the conventional SiC UMOSFET, with a p+ shielding to prevent dielectric breakdown of the gate oxide. The proposed double-trench structure has both gate and source trenches, which reduces the electric field at the bottom of the gate oxide. Thus, by optimizing the UMOSFET model with the double-trench structure, the breakdown voltage (BV) is further increased. The higher BV achieved compared to the conventional SiC UMOSFET increases the overall figure of merit (FoM) to an appreciable value: the BV achieved is 1450 V and the ON-state specific resistance (RON-sp) is 4.24 mΩ·cm², which gives a FoM of 0.495. Thus, the FoM is improved by 36.3% compared to the conventional SiC UMOSFET.
{"title":"A double trench 4H — SiC MOSFET as an enhanced model of SiC UMOSFET","authors":"Alisha Oraon, Shradha Shreya, Renuka Kumari, A. Islam","doi":"10.1109/ISED.2017.8303939","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303939","url":null,"abstract":"In this paper, a double trench 4H SiC MOSFET is presented as an enhanced model for the SiC conventional UMOSFET with a p+ shielding to prevent dielectric breakdown of the gate oxide. This paper proposes a double trench structure with both gate and source trenches. The double trench structure reduces the electric field at the bottom of the gate oxide. Thus, on optimizing the model of UMOSFET with Double Trench structure we further increase the breakdown voltage (BV). Hence, higher BV is achieved compared to conventional SiC UMOSFET, resulting in increase of overall figure of merit (FoM) to an appreciable value. The BV achieved is 1450 V and ON-state specific resistance (RON-sp) is 4.24 mΩ.cm2 which on calculation gives FoM to be 0.495. Thus, the FoM is improved by 36.3% compared to conventional SiC UMOSFET.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127304114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
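The reported FoM of 0.495 is consistent with the Baliga-style power figure of merit BV²/R_on,sp expressed in GW/cm² (an assumption for this check; the abstract does not name the formula or its units):

```python
def power_device_fom(bv_volts, ron_sp_ohm_cm2):
    """Baliga-style power figure of merit BV^2 / R_on,sp, in W/cm^2."""
    return bv_volts ** 2 / ron_sp_ohm_cm2

# 1450 V and 4.24 mOhm.cm^2 -> about 0.496 GW/cm^2, matching the reported 0.495.
fom_gw = power_device_fom(1450.0, 4.24e-3) / 1e9
```

The 36.3% improvement then corresponds to a conventional-device FoM of roughly 0.36 GW/cm² under the same convention.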
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303941
G. Hegde, K. S. Reddy, T. K. Ramesh
In this work, an approach for optimizing the 3-D Discrete Wavelet Transform (3-D DWT) architecture is recommended. Conventional 3-D DWT architectures include basic building blocks such as a 1-D DWT module, a 2-D DWT module, a transpose memory unit, and a temporal memory unit. The proposed 3-D DWT architecture is designed by suitably interconnecting the fundamental constituents (the 1-D and 2-D DWT modules) so that it does not demand transposition and temporal memory units. The architecture employing the recommended approach is realized in gate-level Verilog HDL; the design is functionally verified, synthesized using the Cadence RC design compiler, and implemented with a 90 nm standard cell library. Experimental results show that the proposed approach offers significant gains in both area and power.
{"title":"An approach for area and power optimization of flipping 3-D discrete wavelet transform architecture","authors":"G. Hegde, K. S. Reddy, T. K. Ramesh","doi":"10.1109/ISED.2017.8303941","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303941","url":null,"abstract":"In this work, an approach for optimizing the 3-D Discrete wavelet transform (3-D DWT) architecture is recommended. Conventional 3-D DWT architectures include basic building blocks such as 1-D DWT module, 2-D DWT module, transpose memory unit, and temporal memory unit. Proposed 3D DWT architecture is designed by suitably interconnecting the fundamental constituents (1-D DWT and 2-D DWT modules) which do not demand transposition and temporal memory units. Architecture employing the recommended approach is realized in gate level Verilog HDL. Design is functionally verified, synthesized using Cadence RC design compiler, and implemented on 90nm standard cell library. Experimental results exhibit that the proposed approach for the architecture offers significant gain in both area and power.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121862571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303927
Pranose J. Edavoor, S. Raveendran, A. Rahulkar
This paper presents a wavelet-based image acquisition and transmission algorithm using adaptive compression and a block scanning method with a three-level encryption scheme. The proposed adaptive compression supports changing the SNR and BPP values on the fly between image captures. Adaptive block scanning achieves a better compression ratio by choosing among horizontal, vertical, Hilbert and zigzag scans. A three-level encryption scheme is included to ensure the security of the data captured and stored. The proposed work was simulated and functionally verified in Vivado 2016.4 and implemented on the Zynq 7000. The algorithm was tested with the Daubechies wavelets db1, db2, db3 and db4. The proposed adaptive image compression algorithm can change image quality as and when needed by the user, which makes it suitable for real-time multimedia applications.
{"title":"Implementation of adaptive image compression algorithm using varying bit-length daubechies wavelet coefficient with three-level encryption on Zynq 7000","authors":"Pranose J. Edavoor, S. Raveendran, A. Rahulkar","doi":"10.1109/ISED.2017.8303927","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303927","url":null,"abstract":"This paper presents a wavelet based image acquisition and transmission algorithm using adaptive compression and block scanning method with a three level encryption scheme. The proposed adaptive compression supports the changing of SNR and BPP values on-the-fly between the capture of images. Adaptive block scanning is used to achieve a better compression ratio by choosing among horizontal, vertical, Hilbert and zigzag scans. An encryption scheme has been included with three level encryption to ensure security of the data captured and stored. The proposed work was simulated and functionally verified on Vivado 2016.4 and implemented on Zynq 7000. The proposed algorithm was tested with daubechies1 (db1), daubechies2 (db2), daubechies3 (db3), daubechies4 (db4) wavelets. The adaptive image compression algorithm proposed is capable of changing image quality as and when needed by user and this can find application in real time multimedia applications.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131552815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
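Of the four scan orders the scheme chooses among, the zigzag scan can be sketched as a generic JPEG-style anti-diagonal traversal (an illustration of the scan order, not the paper's exact implementation):

```python
def zigzag_scan(block):
    """Zigzag scan of a square block, as used in JPEG-style coefficient
    ordering: traverse anti-diagonals, alternating direction so low-frequency
    coefficients come first and long zero runs cluster at the end."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):
        idx = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            idx.reverse()  # even anti-diagonals are traversed bottom-left to top-right
        for i, j in idx:
            out.append(block[i][j])
    return out
```

For wavelet coefficient blocks the choice among horizontal, vertical, Hilbert and zigzag scans changes how runs of near-zero values line up, which is what drives the compression-ratio difference.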
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303924
Fritjof Bornebusch, R. Wille, R. Drechsler
Solvers for Boolean satisfiability (SAT solvers) are essential for various hardware and software verification tasks such as equivalence checking, property checking and coverage analysis. Nevertheless, despite the very powerful solvers developed in recent decades, this progress often still cannot cope with the exponentially increasing complexity of those verification tasks. As a consequence, researchers and engineers are investigating complementary verification approaches, which require changes in the core methods as well. Self-verification is one such promising approach, where e.g. SAT solvers have to be executed on the system itself. This comes with hardware restrictions such as limited memory and motivates lightweight SAT solvers. This work provides a case study towards the development of such solvers. To this end, we consider several core techniques of SAT solvers (such as clause learning and Boolean constraint propagation) and discuss as well as evaluate how they contribute to both run-time performance and memory requirements. The findings from this case study provide a basis for the development of dedicated, i.e.
{"title":"Towards lightweight satisfiability solvers for self-verification","authors":"Fritjof Bornebusch, R. Wille, R. Drechsler","doi":"10.1109/ISED.2017.8303924","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303924","url":null,"abstract":"Solvers for Boolean satisfiability (SAT solvers) are essential for various hardware and software verification tasks such as equivalence checking, property checking, coverage analysis, etc. Nevertheless, despite the fact that very powerful solvers have been developed in the recent decades, this progress often still cannot cope with the exponentially increasing complexity of those verification tasks. As a consequence, researchers and engineers are investigating complementarily different verification approaches which require changes in the core methods as well. Self-verification is such a promising approach where e.g. SAT solvers have to be executed on the system itself. This comes with hardware restrictions such as limited memory and motivates lightweight SAT solvers. This work provides a case study towards the development of such solvers. To this end, we consider several core techniques of SAT solvers (such as clause learning, Boolean constraint propagation, etc.) and discuss as well as evaluate how they contribute to both, the run-time performance but also the required memory requirements. The findings from this case study provide a basis for the development of dedicated, i.e. 
lightweight, SAT solvers to be used in self-verification solutions.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130415811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
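Boolean constraint propagation, one of the core techniques evaluated in the case study, can be sketched as repeated unit-clause propagation over a CNF formula. This is a deliberately lightweight, watch-list-free version in the spirit of the memory discussion, not the paper's code:

```python
def unit_propagate(clauses, assignment):
    """Boolean constraint propagation: repeatedly satisfy unit clauses.

    Clauses are lists of nonzero int literals; -x means "variable x is False".
    Returns the extended assignment (var -> bool), or None on a conflict
    (some clause is falsified with no literal left to satisfy it).
    """
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None  # conflict: clause falsified under the assignment
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0  # forced value
                changed = True
    return assignment
```

Production solvers replace this O(clauses) rescan with two-watched-literal schemes; the trade-off between such bookkeeping structures and their memory footprint is exactly the kind of question the case study examines.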
Pub Date : 2017-12-01, DOI: 10.1109/ISED.2017.8303947
Umang Agarwal
Designing and implementing a multirotor imposes several challenges: limited flight time and take-off mass, motor/propeller matching and unsteady dynamics. In this paper, these challenges are addressed by multi-objective optimization of a multirotor's operational parameters, such as flight velocity, flight altitude and motor/propeller rpm, and physical parameters, such as motor, battery and propeller geometry. The new contributions are establishing a functional dependence of rotor thrust and power coefficients on the design parameters, and incorporating aerodynamic effects within the optimization environment. Additionally, the rationality of the optimization is enhanced by modeling the physical parameters as discrete variables. The numerical results indicate that the Genetic Algorithm reliably finds an optimum design, and improves flight time and maximum take-off mass by 35%.
{"title":"Multirotor performance optimization using genetic algorithm","authors":"Umang Agarwal","doi":"10.1109/ISED.2017.8303947","DOIUrl":"https://doi.org/10.1109/ISED.2017.8303947","url":null,"abstract":"Designing and implementing a multirotor imposes some challenges: limited flight time and take-off mass, motor/propeller matching and unsteady dynamics. In this paper, these challenges are addressed by multi-objective optimization of a multiro tor's operational parameters like flight velocity, flight altitude and motor/propeller rpm, and physical parameters like motor, battery and propeller geometry. New contributions are establishing a functional dependence of rotor thrust and power coefficients on design parameters, and incorporating aerodynamic effects within the optimization environment. Additionally, the rationality of optimization is enhanced by modeling physical parameters as discrete variables. The numerical results indicate that Genetic Algorithm reliably finds an optimum design, and improves flight time and maximum take-off mass by 35%.","PeriodicalId":147019,"journal":{"name":"2017 7th International Symposium on Embedded Computing and System Design (ISED)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130648248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
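A minimal real-coded genetic algorithm of the kind used for such design optimization can be sketched as follows. The surrogate fitness function, the GA parameters, and the continuous (rather than discrete) encoding are all illustrative assumptions, not the paper's setup:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded genetic algorithm: tournament selection, uniform
    crossover, Gaussian mutation clipped to the box bounds.

    Maximizes `fitness` over the box given by `bounds` (list of (lo, hi)).
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            # uniform crossover: each gene from either parent
            child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
            for k, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.1:  # mutation, scaled to the box width
                    child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Illustrative use: maximize a toy "flight time" surrogate peaked at
# velocity 5 m/s and altitude 2 km (a stand-in for the real objective).
best = genetic_optimize(lambda x: -(x[0] - 5) ** 2 - (x[1] - 2) ** 2,
                        bounds=[(0, 10), (0, 5)])
```

The paper additionally encodes motor, battery and propeller choices as discrete variables, which would replace the Gaussian mutation with a draw from a catalogue for those genes.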