Model-Based Design Optimization using CDFG for Image Processing on FPGA
Surachate Chumpol, Panadda Solod, Krerkchai Thongnoo, Nattha Jindapetch
Pub Date: 2023-10-21 | DOI: 10.37936/ecti-cit.2023174.252417
As the automotive industry moves toward autonomous driving and Advanced Driver Assistance Systems (ADAS), Model-Based Design (MBD) has become a practical design methodology that enables rapid prototyping in MATLAB and Simulink. MBD nevertheless has limitations in handling complex models. This paper uses the Control Data Flow Graph (CDFG), an intermediate representation for analyzing complex algorithms, so that optimizations suited to image processing applications can be implemented on an FPGA. Experimental results show that the proposed CDFG method improves both the area and the speed of an edge detection case study compared with the MathWorks Vision HDL Toolbox.
Optimizing Incident Detection Thresholds Using the A* Algorithm: An Enhanced Approach for the California Algorithm
Korn Puangnak, Manthana Tiawongsuwan
Pub Date: 2023-10-14 | DOI: 10.37936/ecti-cit.2023174.251643
This paper presents an improved version of the California Algorithm (CA), focusing on threshold selection criteria. The CA is a widely recognized incident detection algorithm that serves as a benchmark for newly developed incident detection algorithms. This study proposes threshold selection criteria for the CA based on the A* algorithm, which finds optimal thresholds using a Performance Index (PI) as the cost function. The proposed method reduces processing time by optimizing resource utilization and establishes a standard for threshold selection in the CA for comparison and evaluation purposes. Experimental results demonstrate its effectiveness in reducing the work required to determine optimal thresholds: optimizing the CA with the A* algorithm reduces the number of nodes searched by 98.68% compared to a Complete Search Tree (CST).
Image Watermarking Framework using Histogram Equalization and Visual Saliency
Bishwabara Panda, Manas Ranjan Nayak, Pradeep Kumar Mallick, Abhishek Basu
Pub Date: 2023-10-07 | DOI: 10.37936/ecti-cit.2023174.252375
This paper proposes a digital image watermarking strategy that combines histogram equalization and visual saliency with LSB (Least Significant Bit) replacement for better imperceptibility and hiding capacity. A saliency map identifies the less-observable parts of the original image, which are progressively embedded with increasing amounts of information according to the histogram equalization statistics. The saliency output marks the most visually prominent positions in an image, so changes made outside those areas are less noticeable to viewers. The histogram method identifies the regions of the image in which the secret information can be hidden, and the LSB replacement technique adaptively inserts the confidential data into the original image. In short, the saliency map locates the non-salient, less perceptible regions to improve imperceptibility, while histogram equalization maximizes the hiding capacity within those regions, improving both imperceptibility and hiding capacity.
Optimized Selection of Motorcycle Battery Swapping Stations under Flexible Demand by Using Distance Function and GIS Technique
Athita On-Ouen, Jirayus Arbking, Nuttaporn Phakdee
Pub Date: 2023-09-30 | DOI: 10.37936/ecti-cit.2023173.253697
Our research proposes an approach to finding suitable locations for motorcycle Battery Swapping Stations (BSS) that considers multiple objectives. We developed a model based on Euclidean distance with K-NN, the AHP function, a desired number of stations, and GIS-based road infrastructure data. The model maximizes the coverage area while satisfying the required number of stations and the geographical features, and it also considers the average driving distance to each battery swapping station location. To facilitate analysis, square grids form cells representing road type, environmental characteristics, places, and population density. The proposed framework provides decision-makers with a multi-objective, visually optimized set of motorcycle BSS locations, allowing more flexible selection of the exact BSS sites shown on a map. Our demonstration can be used to resolve the uncertainty involved in siting a motorcycle battery swapping station.
Privacy-Enhancing Data Aggregation for Big Data Analytics
Surapon Riyana, Kittikorn Sasujit, Nigran Homdoung
Pub Date: 2023-09-30 | DOI: 10.37936/ecti-cit.2023173.252952
Data utility and data privacy are serious issues that must be traded off when datasets are used in big data analytics: datasets with high data utility often carry high risks of privacy violation. To balance data utility and data privacy when datasets are provided for big data analytics, several privacy preservation models have been proposed, e.g., k-Anonymity, l-Diversity, t-Closeness, Anatomy, k-Likeness, and (lp1, ..., lpn)-Privacy. Unfortunately, these models are highly complex and still have data utility issues that must be addressed. To remove these vulnerabilities, a new privacy preservation model is proposed in this work. It is based on aggregate query answers that bound the confidence with which the range and the number of values can be re-identified. Extensive experiments show that the proposed model is more efficient and effective in big data analytics.
Plaque Territory Detection in IVUS Images based on Concentration of Entropy and Gradient Magnitude via Spiral Random Walk-based Approach
Benchaporn Jantarakongkul, Pusit Kulkasem
Pub Date: 2023-09-02 | DOI: 10.37936/ecti-cit.2023173.253328
This paper presents a simple and optimal approach for automatically identifying the location and size of plaque territories in IVUS images, thus improving plaque territory classification. Unlike existing circular-based algorithms, we leverage the anatomical structure of IVUS images to enhance accuracy. The adventitia, which constitutes the largest part of the image, serves as a landmark; however, its low contrast makes edge detection challenging. To address this, we enhance the brightness of the adventitia, identify and remove intima blobs, and accurately determine the media boundary, which simplifies the calculation of the plaque territory. To locate the plaque territory, we employ a spiral random walk that follows the concentration of entropy and gradient magnitude in the target area. Our approach outperforms existing methods, contributing to automated plaque analysis for cardiovascular disease diagnosis and treatment. The results show that the proposed approach achieves an accuracy of 0.89, a precision of 0.81, a recall of 0.77, and an F1-score of 0.83.
An Efficient Electrocardiography Data Compression
Passakorn Luanloet, Watcharapan Suwansantisuk, Pinit Kumhom
Pub Date: 2023-09-02 | DOI: 10.37936/ecti-cit.2023173.253629
In healthcare, electrocardiography (ECG) sensors generate large amounts of cardiac electrical signal data that must be compressed efficiently to enable fast data transfer and reduce storage costs. Existing ECG compression methods do not fully exploit the characteristics of ECG signals, leading to suboptimal compression. This study proposes a compression technique for ECG data that exploits these known characteristics. Our approach combines Savitzky-Golay filtering, detrending, the discrete cosine transform, scalar quantization, run-length encoding, and Huffman coding for effective compression. To optimize compression performance, we generated quantization intervals tailored to the characteristics of ECG data. Experimentally, the proposed method produces a high compression ratio of 127.61 for a design parameter K = 8, a minimum percentage root mean square difference of 1.03% for K = 128, and a maximum quality score (QS) of 39.78, where K is the number of quantization intervals. Moreover, we compared the proposed method with state-of-the-art methods on a widely used ECG benchmark dataset and found that it outperforms the others in terms of QS, which measures the overall compression-decompression ability. By reducing storage and enabling faster data transfer, the proposed method can facilitate the widespread use and analysis of large volumes of ECG data, thereby contributing to advances in healthcare.
Image Steganography-based Copyright and Privacy-Protected Image Trading Systems
Wannida Sae-Tang, Adisorn Sirikham
Pub Date: 2023-08-12 | DOI: 10.37936/ecti-cit.2023173.252500
This paper proposes steganography-based copyright- and privacy-protected image trading systems using an image transformation, i.e., either the discrete cosine transform (DCT) or the Hadamard transform (HT). The systems involve a content provider (CP), a consumer, a first trusted third party (TTP), and a second TTP. To protect the copyright of the image, the first TTP embeds the consumer ID into the amplitude components of the commercial image using a digital fingerprinting technique; to protect the consumer's privacy against the first TTP and malicious third parties, image steganography is applied to the commercial image using the image transformation. A color dummy image is used instead of a gray dummy image for security purposes. After the image transformation is applied to both images, the coefficient signs of the commercial image are replaced by the coefficient signs of the dummy image pixel by pixel, so that the inversely transformed commercial image looks like the dummy image instead of the commercial image. Once the consumer receives the fingerprinted image from the first TTP and the coefficient signs of the commercial image from the second TTP, the consumer reconstructs the fingerprinted commercial image without any loss of the hidden fingerprint, owing to the compatibility of the proposed image steganography method with the amplitude-based fingerprinting method. The experimental results confirm that the stego-images generated by the proposed systems do not look suspicious and have higher quality than those generated by existing systems. Moreover, the proposed systems improve the fingerprinted image quality and the correct fingerprint extraction rate.