Maximizing Spread of a Message in the Susceptible-Infected-Recovered Process
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035658
Kundan Kandhway
In this work we borrow models from biology (epidemics) to model the spread of a message as a Susceptible-Infected-Recovered (SIR) process. We assume that the target population is large; further, homogeneous mixing of the population is considered. The campaigner enrolls people to spread the message to maximize its reach; this is in addition to the standard epidemic spread. We term this intervention by the campaigner enrollment. Enrollment may be done by reaching out to people through advertisements, for example in social media or in print or electronic media. An appropriate cost function is chosen and the given situation is posed as a mathematical optimization problem, more specifically an optimal control problem. The formulated problem is analyzed mathematically. To this end, the existence of a solution to the optimal control problem is explored. Further, we study the nature of the state trajectories at the optimum. We provide insights that are useful in optimizing viral marketing strategies, political or social awareness campaigns, and similar efforts.
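As a rough illustration of the controlled dynamics described above, the SIR equations with an enrollment term u(t) that moves susceptibles directly into the spreading class can be integrated numerically. This is a sketch only: the paper's cost functional and control constraints are not reproduced, and the rates and enrollment schedule below are hypothetical.

# Sketch: SIR message spread with an enrollment control u(t).
# beta, gamma and the enrollment schedule u(t) are illustrative values,
# not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.1               # spreading and recovery (loss-of-interest) rates

def u(t):
    return 0.05 if t < 10 else 0.0   # hypothetical enrollment effort, active early on

def sir_with_enrollment(t, y):
    s, i, r = y
    ds = -beta * s * i - u(t) * s    # enrollment moves susceptibles into the spreading class
    di = beta * s * i + u(t) * s - gamma * i
    dr = gamma * i
    return [ds, di, dr]

sol = solve_ivp(sir_with_enrollment, (0, 60), [0.99, 0.01, 0.0], dense_output=True)
final_reach = sol.y[1, -1] + sol.y[2, -1]   # fraction that ever received the message
print(f"final reach: {final_reach:.3f}")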
{"title":"Maximizing Spread of a Message in the Susceptible-Infected-Recovered Process","authors":"Kundan Kandhway","doi":"10.1109/IMCOM56909.2023.10035658","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035658","url":null,"abstract":"In this work we borrow models from biology (epi-demics) to model spread of a message as a Susceptible-Infected-Recovered (SIR) process. We assume that the target population is large. Further, homogeneous mixing of population is considered. The campaigner enrolls people to spread the message to maximize its reach, this is in addition to the standard epidemic spread. We term this intervention by the campaigner as enrollment. Enrollment may be done by reaching out to people through advertisements, for example, in social media, in print or electronic media, etc. An appropriate cost function is chosen and the given situation is posed as a mathematical optimization problem, more specifically, an optimal control problem. The formulated problem is mathematically analyzed. To this end, the existence of a solution to the optimal control problem is explored. Further, we study the nature of state trajectories at the optimum. We provide insights that are useful in optimizing viral marketing strategies, political or social awareness campaigns, etc.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"440 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133158608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Learning of Pure Non-IID data using Latent Codes
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035595
Anirudh Kasturi, A. Agrawal, C. Hota
There has been a huge increase in the amount of data being generated as a result of the proliferation of high-tech, data-generating devices made possible by recent developments in mobile technology. This has rekindled interest in creating smart applications that can exploit this data and provide insightful results. Concerns about bandwidth, privacy, and latency arise when data from many devices is aggregated in one location to create more precise predictions. This research presents a novel distributed learning approach, wherein a Variational Auto Encoder is trained locally on each client and then used to derive a sample set of points centrally. The server then develops a unified global model and sends its training parameters to all users. Pure non-i.i.d. distributions, in which each client only sees data labelled with a single value, are the primary focus of our study. According to our findings, communication between the server and the clients takes significantly less time than it does in federated and centralised learning setups. We further demonstrate that, whenever the data is distributed in a pure non-i.i.d. fashion, our methodology achieves more than 4% higher accuracy than the federated learning strategy. We also show that, in comparison to centralised and federated learning systems, our suggested method requires less network bandwidth.
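A minimal sketch of such a latent-code pipeline follows, assuming each client holds data for a single label, a small fully connected VAE, and toy feature dimensions; the architecture, data, and hyperparameters are illustrative and not the authors' configuration.

# Sketch: each client trains a VAE on its single-label data and ships the
# decoder to the server; the server decodes samples from the prior to build
# a synthetic training set for a global classifier.  All sizes are toy values.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, LATENT, N_CLASSES = 20, 4, 3

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(DIM, 16)
        self.mu, self.logvar = nn.Linear(16, LATENT), nn.Linear(16, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 16), nn.ReLU(), nn.Linear(16, DIM))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def train_client_vae(x, epochs=200):
    vae = VAE()
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, mu, logvar = vae(x)
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = F.mse_loss(recon, x, reduction="sum") + kld
        opt.zero_grad(); loss.backward(); opt.step()
    return vae

# Pure non-i.i.d.: client k only holds samples of class k (synthetic data here).
clients = [torch.randn(64, DIM) + k for k in range(N_CLASSES)]
decoders = [(k, train_client_vae(x).dec) for k, x in enumerate(clients)]

# Server side: decode latent samples from the prior into a labelled synthetic set.
xs, ys = [], []
for label, dec in decoders:
    with torch.no_grad():
        xs.append(dec(torch.randn(128, LATENT)))
    ys.append(torch.full((128,), label, dtype=torch.long))
x_syn, y_syn = torch.cat(xs), torch.cat(ys)

clf = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, N_CLASSES))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(300):
    loss = F.cross_entropy(clf(x_syn), y_syn)
    opt.zero_grad(); loss.backward(); opt.step()
print("global model trained on", len(x_syn), "synthetic points")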
{"title":"Distributed Learning of Pure Non-IID data using Latent Codes","authors":"Anirudh Kasturi, A. Agrawal, C. Hota","doi":"10.1109/IMCOM56909.2023.10035595","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035595","url":null,"abstract":"There has been a huge increase in the amount of data being generated as a result of the proliferation of high-tech, data-generating devices made possible by recent developments in mobile technology. This has rekindled interest in creating smart applications that can make use of the possibilities of this data and provide insightful results. Concerns about bandwidth, privacy, and latency arise when this data from many devices is aggregated in one location to create more precise predictions. This research presents a novel distributed learning approach, wherein a Variational Auto Encoder is trained locally on each client and then used to derive a sample set of points centrally. The server then develops a unified global model, and sends its training parameters to all users. Pure non-i.i.d. distributions, in which each client only sees data labelled with a single value, are the primary focus of our study. According to our findings, communication amongst the server and the clients takes significantly less time than it does in federated and centralised learning setups. We further demonstrate that, whenever the data is spread in a pure non-iid fashion, our methodology achieves higher accuracy than the federated learning strategy by more than 4%. We also showed that, in comparison to centralised and federated learning systems, our suggested method requires less network bandwidth.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127386219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multi-Stage Method for Time Synchronization in Acoustic Underwater Communications
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035650
V. Nguyen, Van Huy Nguyen, Quoc Khuong Nguyen, Tien-Dung Nguyen
Time synchronization techniques for underwater communications face challenging issues: a time-varying channel and strong colored noise. In addition, since underwater channels are low in bandwidth and prone to a low signal-to-noise ratio (SNR), one would prefer a synchronization technique with low overhead. Previous synchronization techniques used a preamble to synchronize, which consumes bandwidth. In this paper, we propose a time synchronization method using the guard interval (GI) in an OFDM symbol. Herein, identical GIs are inserted at the head and tail of an OFDM symbol. At the receiver side, our method checks the difference and similarity between the head and tail GIs, from which it can determine the beginning of the OFDM symbol. The proposed method uses an iterative technique to magnify the difference between the real signal and the background noise, so that even at low SNR the receiver can still perform synchronization. Simulation results show that the proposed method can synchronize effectively when the SNR is low.
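A minimal numpy sketch of the underlying idea follows: correlate two guard-interval regions a known span apart to locate the symbol start. The frame layout, GI length, and metric are illustrative, and the paper's iterative noise-suppression stages are omitted.

# Sketch: locate an OFDM symbol by correlating the head GI with the tail GI,
# which are identical copies separated by a known span.  Toy parameters only.
import numpy as np

N_FFT, N_GI = 256, 32
rng = np.random.default_rng(0)

# Build one symbol: [GI | payload | GI] with identical head/tail GIs.
gi = rng.standard_normal(N_GI) + 1j * rng.standard_normal(N_GI)
payload = rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT)
symbol = np.concatenate([gi, payload, gi])

# Received stream: random lead-in, the symbol, then trailing noise.
lead = 100
rx = np.concatenate([rng.standard_normal(lead) * 0.5,
                     symbol,
                     rng.standard_normal(200) * 0.5])
rx = rx + 0.3 * rng.standard_normal(rx.size)            # additive background noise

span = N_GI + N_FFT                                     # distance between the two GIs
metric = np.empty(rx.size - span - N_GI)
for d in range(metric.size):
    head = rx[d:d + N_GI]
    tail = rx[d + span:d + span + N_GI]
    metric[d] = np.abs(np.vdot(head, tail)) / (np.linalg.norm(head) * np.linalg.norm(tail) + 1e-12)

print("estimated symbol start:", int(np.argmax(metric)), "true start:", lead)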
{"title":"A Multi-Stage Method for Time Synchronization in Acoustic Underwater Communications","authors":"V. Nguyen, Van Huy Nguyen, Quoc Khuong Nguyen, Tien-Dung Nguyen","doi":"10.1109/IMCOM56909.2023.10035650","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035650","url":null,"abstract":"Time synchronization techniques for underwater communications face challenging issues: time-varied channel, and strong color noise. In addition, since underwater channels are low in bandwidth and prone to low signal to noise ratio, one would prefer to employ a synchronization technique with low overhead. Previous synchronization techniques used preamble to synchronize, which consumes the bandwidth. In this paper, we propose a time synchronization method using the guard interval (GI) in an OFDM symbol. Herein, identical GIs are inserted at the head and tail of an OFDM symbol. At the receiver side, our method checks the difference and similarity between the heading and tailing GIs, from which it can determine the beginning of the OFDM symbol. The proposed method used an iterative technique to magnify the difference between the real signal and the background noise, so that in case of low SNR, the receiver still can perform synchronization. Simulation results show that the proposed method can effectively synchronize when the SNR is low.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125963019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Analytical Approach to Predict the Cardio Vascular Disorder
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035581
Ritu Chauhan, Nidhi Gola, Eiad Yafi
In recent times, heart disease has been recognized as the world's leading cause of death. However, it is also regarded as one of the most easily controlled and prevented diseases. The World Health Organization (WHO) has recently claimed that heart disease's progression and the associated treatment expenses can both be significantly curbed with the help of an early and prompt diagnosis. Therefore, researchers have employed various data mining approaches to diagnose heart disease in view of the rising number of deaths caused by the disease. This study applies data mining classification techniques, specifically discriminant analysis, to a heart disease dataset to predict the likelihood of heart disease from various attributes and to assess the contribution of each attribute. Lastly, the range and accuracy of the classification are assessed. The resulting model has an accuracy of 85.3% in predicting whether an individual has heart disease; individuals with heart disease are classified correctly in 84.8% of cases, while healthy individuals are classified correctly in 85.9% of cases.
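A minimal sketch of this kind of analysis with scikit-learn's LinearDiscriminantAnalysis is given below; the file name, column names, and train/test split are hypothetical, and the paper's exact dataset and preprocessing are not reproduced.

# Sketch: discriminant analysis for heart-disease prediction.
# "heart.csv" and its columns are placeholders for an attribute table with a
# binary "target" column (1 = heart disease, 0 = healthy).
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

df = pd.read_csv("heart.csv")                      # hypothetical dataset
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)

lda = LinearDiscriminantAnalysis()
lda.fit(X_tr, y_tr)
pred = lda.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy:", accuracy_score(y_te, pred))
print("correct rate, disease class:", tp / (tp + fn))
print("correct rate, healthy class:", tn / (tn + fp))
# Per-attribute contribution can be inspected via the fitted LDA coefficients:
print(dict(zip(X.columns, lda.coef_[0].round(3))))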
{"title":"An Analytical Approach to Predict the Cardio Vascular Disorder","authors":"Ritu Chauhan, Nidhi Gola, Eiad Yafi","doi":"10.1109/IMCOM56909.2023.10035581","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035581","url":null,"abstract":"In recent times, heart disease has been recognized as the world's leading cause of death. However, it is also regarded as the disease that is most easily controlled and prevented. Recently, World Health Organization (WHO) claims that heart disease's progression and associated treatment expenses can both be significantly halted with the help of an early and prompt diagnosis. Therefore, researchers have employed various data mining approaches to diagnose heart disease in consideration of the rising number of deaths caused by the disease. This research study applied data mining classification modeling techniques, specifically discriminant analysis on the heart disease dataset for the prediction of chances of heart disease based on various attributes and assess the contribution of each attribute towards the heart disease. Lastly, the range and the accuracy of the classification are assessed. This dataset has an accuracy of 85.3% in predicting that whether individual has heart disease or not and the specificity of individual possess heart disease is 84.8% while normal individuals acquire specificity of 85.9%.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128816822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local Hashing and Fake Data for Privacy-Aware Frequency Estimation
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035583
Gatha Varma
Data collected from services and application users contains identifying attributes. The categorical attributes of user data capture information drawn from a fixed set of domain values $\boldsymbol{D}_{m}$. Statistical analysis of the collected data drives modeling, which in the case of categorical attributes is frequency estimation: it gives the approximate number of individuals who reported a specific value from the set $\boldsymbol{D}_{m}$. When user data is collected repeatedly, frequency estimation may carry potential disclosure risks. It is therefore important to privatize the user data so that the statistics remain relevant while privacy risks are minimized. This is achieved by a family of algorithms called frequency oracles. Local Differential Privacy is a widely used technique for such circumstances, and several methods, including sampling and randomization, are used to amplify its privacy guarantees. In this paper, I propose the first sample-based frequency oracle that uses Optimized Local Hashing (OLH), further enhanced by replacing some attribute values with fake data. The adaptive solution exploits the benefits OLH offers for large-domain datasets and its variance, which is independent of dimensionality. The privacy-utility trade-off of the proposed solution is found to be better than that of existing solutions under certain general and strict privacy regimes for multi-dimensional datasets.
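For reference, a sketch of a standard OLH frequency oracle (client-side hashing plus generalized randomized response, server-side unbiased estimation) is given below; the fake-data enhancement proposed in the paper is not reproduced, and epsilon and the domain are toy values.

# Sketch: standard Optimized Local Hashing (OLH) frequency oracle.
# The fake-data replacement step of the paper is omitted; parameters are toy values.
import hashlib
import math
import random
from collections import Counter

EPS = 1.0
DOMAIN = list(range(16))                       # D_m, a small toy domain
g = int(round(math.exp(EPS))) + 1              # hash range, g ~= e^eps + 1
p = math.exp(EPS) / (math.exp(EPS) + g - 1)    # prob. of reporting the true hash value

def H(seed, v):
    """Deterministic hash of value v under a per-user seed, into [0, g)."""
    return int(hashlib.sha1(f"{seed}:{v}".encode()).hexdigest(), 16) % g

def client_report(v):
    seed = random.getrandbits(32)
    x = H(seed, v)
    if random.random() < p:
        y = x
    else:
        y = random.choice([j for j in range(g) if j != x])
    return seed, y

def estimate(reports, v):
    n = len(reports)
    support = sum(1 for seed, y in reports if H(seed, v) == y)
    return (support / n - 1.0 / g) / (p - 1.0 / g)     # unbiased frequency estimate

# Simulate: 10000 users; value 3 is twice as common as the rest.
truth = random.choices(DOMAIN, weights=[2 if v == 3 else 1 for v in DOMAIN], k=10000)
reports = [client_report(v) for v in truth]
for v in (3, 7):
    print(v, "true:", Counter(truth)[v] / len(truth), "est:", round(estimate(reports, v), 3))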
{"title":"Local Hashing and Fake Data for Privacy-Aware Frequency Estimation","authors":"Gatha Varma","doi":"10.1109/IMCOM56909.2023.10035583","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035583","url":null,"abstract":"Data collected from services and application users contain identifying attributes. The categorical attributes of user data capture information contained in a fixed set of domain values $boldsymbol{D}_{boldsymbol{m}}$. The statistical analysis of the collected data drives modeling, which in the case of categorical attributes is frequency estimation. It gives the approximate number of individuals who reported a specific value from set $boldsymbol{D}_{boldsymbol{m}}$. Under the conditions where the user data is collected repeatedly, frequency estimation may exhibit disclosure potential risks. Therefore it is important to privatize the user data such that the statistics are relevant yet minimize privacy risks. This is achieved by a set of algorithms called Frequency Oracles. Local Differential Privacy is a widely-used technique for the concerning circumstances. Additionally, several methods are used to amplify its privacy guarantees including sampling and randomization. In this paper, I propose the first sample-based frequency oracle which used Optimized Local Hashing (OLH) and was further enhanced by the replacement of some attribute values with fake data. The adaptive solution utilized the benefits offered by OLH for large-dimensioned dataset and a variance independent of dimensionality. The privacy-utility trade-off given by the proposed solution was found to be better than existing solutions for certain general and strict privacy regimes for multi-dimensional datasets.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125394841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VNLES: A Reasoning-enable Legal Expert System using Ontology Modeling-based Method: A Case Study of Vietnam Criminal Code
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035590
Quoc Tuan Dao, T. K. Dang, Thi Phuong Hoa Nguyen, Thi Minh Chau Le
The main purpose of this research is to develop a method to model the criminal code by its essence in order to serve a legal reasoning-enabled expert system. An ontology combines a hierarchical structure with logical reasoning, which can mitigate semantic equivocation and produce inferred semantic information. The ontology is based on the description-logic profile of the Web Ontology Language (OWL-DL) and is extracted from the Vietnamese Penal Code. Logical relationships are defined as rules in the Semantic Web Rule Language (SWRL). Because the legal domain is very complicated, the construction of solid legal-domain ontologies is acknowledged as a difficult and complex process. This study adopts the middle-out strategy, which combines two interrelated strategies: top-down and bottom-up. Moreover, the model will be used as a component in a legal reasoning-enabled expert system: a smart system that can provide critical analysis and evaluation for checking whether an act is legitimate. The system also supports legal reasoning and law-making in the Fourth Industrial Revolution, which has brought rapid growth in both the quantity and sophistication of high-tech crime and new criminal minds. Everything is analyzed, built, and evaluated based on the characteristics of Vietnamese law, but the approach is also expected to be applicable in other countries.
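As a small illustration of how OWL classes combined with SWRL rules can encode such logic, the sketch below uses the owlready2 Python library; the IRI, class names, and the rule itself are invented for illustration and are not taken from the VNLES ontology.

# Sketch: a tiny OWL ontology with one SWRL rule, using owlready2.
# The IRI, classes, individuals, and rule are hypothetical, not the paper's model.
from owlready2 import get_ontology, Thing, ObjectProperty, Imp, sync_reasoner_pellet

onto = get_ontology("http://example.org/penal-demo.owl")

with onto:
    class Act(Thing): pass
    class ProhibitedAct(Act): pass
    class Person(Thing): pass
    class CriminallyLiablePerson(Person): pass
    class commits(ObjectProperty):
        domain = [Person]
        range = [Act]

    # SWRL: a person who commits a prohibited act is criminally liable.
    rule = Imp()
    rule.set_as_rule("Person(?p), commits(?p, ?a), ProhibitedAct(?a) "
                     "-> CriminallyLiablePerson(?p)")

    act = ProhibitedAct("theft_instance")
    alice = Person("alice", commits=[act])

sync_reasoner_pellet(infer_property_values=True)   # runs the Pellet reasoner (needs Java)
print(CriminallyLiablePerson.instances())          # alice is expected to be inferred here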
{"title":"VNLES: A Reasoning-enable Legal Expert System using Ontology Modeling-based Method: A Case Study of Vietnam Criminal Code","authors":"Quoc Tuan Dao, T. K. Dang, Thi Phuong Hoa Nguyen, Thi Minh Chau Le","doi":"10.1109/IMCOM56909.2023.10035590","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035590","url":null,"abstract":"The main purpose of this research is to develop a method to model the criminal code by its essence to serve a legal reasoning-enable expert system. Ontology combines a hierarchi-cal structure and logical reasoning, that can mitigate semantic equivocation and produce the figured semantic information. The ontology is based on Description Logics Semantic Web Ontology Language (OWL-DL) extracted from the Vietnamese Penal Code. Logical relationships will be defined as rules in the Semantic Web Rule Language (SWRL) language. The fact that legal domain is very complicated, so the construction of solid legal domain ontologies is acknowledged as a difficult and complex process. This study approaches the strategy named middle-out, which is composed of two interrelated strategies: top-down and bottom-up. Moreover, the model will be used as a component in a legal reasoning-enable expert system. The reasoning-enable system is a smart system that can provide critical analytical, and evaluation for checking and evaluating an act and whether is legitimate. The system also supported the purposes of legal reasoning and law-making in the Fourth Industrial Revolution which caused the rapid development of quantity and quality high-tech crime, and new criminal minds. All are being analyzed, built, and evaluated based on Vietnam Law characteristics, but also expected to be able to apply in other countries.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121602606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title and Copyright
Pub Date: 2023-01-03 | DOI: 10.1109/icsssm.2009.5174842
R. Carr
The effects of acid identity on CH3OH dehydration are examined here using density functional theory (DFT) estimates of acid strength (as deprotonation energies, DPE) and reaction energies, combined with rate data on Keggin polyoxometalate (POM) clusters and zeolite H-BEA. Measured first-order (kmono) and zero-order (kdimer) CH3OH dehydration rate constants depend exponentially on DPE for POM clusters; the value of kmono depends more strongly on DPE than kdimer does. The chemical significance of these rate parameters and the basis for their dependences on acid strength were established by using DFT to estimate the energies of intermediates and transition states involved in elementary steps that are consistent with measured rate equations. We conclude from this treatment that CH3OH dehydration proceeds via direct reactions of co-adsorbed CH3OH molecules for relevant solid acids and reaction conditions. Methyl cations formed at ion-pair transition states in these direct routes are solvated by H2O and CH3OH more effectively than those in alternate sequential routes involving methoxide formation and subsequent reaction with CH3OH. The stability of ion-pairs, prevalent as intermediates and transition states on solid acids, depends sensitively on DPE because of concomitant correlations between the stability of the conjugate anionic cluster and DPE. The chemical interpretation of kmono and kdimer from mechanism-based rate equations, together with thermochemical cycles of their respective transition state formations, shows that similar charge distributions in the intermediate and transition state involved in kdimer cause its weaker dependence on DPE. Values of kmono involve uncharged reactants and the same ion-pair transition state as kdimer; these species sense acid strength differently and cause the larger effects of DPE on kmono. Confinement effects in H-BEA affect the value of kmono because the different sizes and number of molecules in reactants and transition states selectively stabilize the latter; however, they do not influence kdimer, for which reactants and transition states of similar size sense spatial constraints to the same extent. This combination of theory and experiment for solid acids of known structure sheds considerable light on the relative contributions from solvation, electrostatic, and van der Waals interactions in stabilizing cationic transition states and provides predictive insights into the relative contributions of parallel routes based on the size and charge distributions of their relevant intermediates and transition states. These findings also demonstrate how the consequences of acid strength on measured turnover rates depend on reaction conditions and their concomitant changes in the chemical significance of the rate parameters measured. Moreover, the complementary use of experiment and theory in resolving mechanistic controversies has given predictive guidance about how rate and equilibrium constants, often inextricably comb
{"title":"Title and Copyright","authors":"R. Carr","doi":"10.1109/icsssm.2009.5174842","DOIUrl":"https://doi.org/10.1109/icsssm.2009.5174842","url":null,"abstract":"The effects of acid identity on CH3OH dehydration are examined here using density functional theory (DFT) estimates of acid strength (as deprotonation energies, DPE) and reaction energies, combined with rate data on Keggin polyoxometalate (POM) clusters and zeolite H-BEA. Measured first-order (kmono) and zero-order (kdimer) CH3OH dehydration rate constants depend exponentially on DPE for POM clusters; the value of kmono depends more strongly on DPE than kdimer does. The chemical significance of these rate parameters and the basis for their dependences on acid strength were established by using DFT to estimate the energies of intermediates and transition states involved in elementary steps that are consistent with measured rate equations. We conclude from this treatment that CH3OH dehydration proceeds via direct reactions of co-adsorbed CH3OH molecules for relevant solid acids and reaction conditions. Methyl cations formed at ionpair transition states in these direct routes are solvated by H2O and CH3OH more effectively than those in alternate sequential routes involving methoxide formation and subsequent reaction with CH3OH. The stability of ion-pairs, prevalent as intermediates and transition states on solid acids, depend sensitively on DPE because of concomitant correlations between the stability of the conjugate anionic cluster and DPE. The chemical interpretation of kmono and kdimer from mechanism-based rate equations, together with thermochemical cycles of their respective transition state formations, show that similar charge distributions in the intermediate and transition state involved in kdimer cause its weaker dependence on DPE. Values of kmono involve uncharged reactants and the same ion-pair transition state as kdimer; these species sense acid strength differently and cause the larger effects of DPE on kmono. Confinement effects in H-BEA affect the value of kmono because the different sizes and number of molecules in reactants and transition states selectively stabilize the latter; however, they do not influence kdimer, for which reactants and transition states of similar size sense spatial constraints to the same extent. This combination of theory and experiment for solid acids of known structure sheds considerable light on the relative contributions from solvation, electrostatic, and van der Waals interactions in stabilizing cationic transition states and provides predictive insights into the relative contributions of parallel routes based on the size and charge distributions of their relevant intermediates and transition states. These findings also demonstrate how the consequences of acid strength on measured turnover rates depend on reaction conditions and their concomitant changes in the chemical significance of the rate parameters measured. 
Moreover, the complementary use of experiment and theory in resolving mechanistic controversies has given predictive guidance about how rate and equilibrium constants, often inextricably comb","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125131874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of VLF Capacitive-Resistive Dipole Electromagnetic Fields for Underground Utility Pipe Detection Using Artificial Bee Colony, Circle-inspired, and Genetic Metaheuristics
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035621
Mike Louie C. Enriquez, R. Relano, Kate G. Francisco, Ronnie S. Concepcion, Jonah Jahara G. Baun, Adrian Genevie G. Janairo, J. A. D. Leon, A. Bandala, R. R. Vicerra, E. Dadios
Underground utility detection technology contributes significantly to the planning and repair of various infrastructures, as it saves a significant amount of money on utility damage, human life risk, and operation time. Accordingly, this study optimized a towed equatorial dipole-dipole antenna system, 3D-modeled in Altair CAD FEKO, to produce stronger electric and magnetic fields in a transmitter-receiver configuration. The algorithm is based on EM-driven antenna correlation via the effect of dipole geometrical configurations on structure parameterization. Among the new and efficient metaheuristic optimization methods evaluated, the Circle Inspired Optimization Algorithm (CIOA), constrained Artificial Bee Colony (cABC), and Genetic Algorithm (GA), cABC produced the highest electromagnetic fields along the top of the pipe, with a voltage difference of 2.30e-4 V, compared to 1.81e-4 V and -1.11e-4 V for CIOA and GA, respectively. This implies an effective method for more accessible and precise calculation of the electromagnetic field of the very low-frequency antenna without the need for extensive mathematical computation. Furthermore, the best configuration is a distance between dipoles of 0.4 m, a wire diameter of 0.01 m, and a Tx power of 5.0 W. This study thus optimizes an antenna system to produce stronger electric and magnetic fields in a transmitter-receiver configuration, identifies the best spacing distance in the equatorial dipole-dipole antenna, and characterizes the correlation of parameters such as the distance, wire antenna diameter, and transmitted power.
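As a rough illustration of this kind of parameter search, the sketch below optimizes the three parameters with scipy's differential evolution standing in for the cABC/CIOA/GA metaheuristics used in the paper; the objective is a made-up surrogate for field strength, not the FEKO simulation.

# Sketch: metaheuristic search over dipole spacing, wire diameter, and Tx power.
# surrogate_field() is a hypothetical stand-in for the FEKO electromagnetic
# simulation; bounds roughly follow the values discussed in the abstract.
import numpy as np
from scipy.optimize import differential_evolution

def surrogate_field(params):
    spacing, wire_d, tx_power = params
    # Invented smooth surrogate peaking near spacing ~0.4 m and diameter ~0.01 m.
    gain = np.exp(-((spacing - 0.4) / 0.15) ** 2) * np.exp(-((wire_d - 0.01) / 0.01) ** 2)
    return -(gain * np.sqrt(tx_power))     # negate: differential_evolution minimizes

bounds = [(0.1, 1.0),      # dipole spacing [m]
          (0.002, 0.03),   # wire diameter [m]
          (0.5, 5.0)]      # Tx power [W]

result = differential_evolution(surrogate_field, bounds, seed=1, tol=1e-8)
spacing, wire_d, tx_power = result.x
print(f"best spacing={spacing:.3f} m, diameter={wire_d:.4f} m, power={tx_power:.2f} W")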
{"title":"Optimization of VLF Capacitive-Resistive Dipole Electromagnetic Fields for Underground Utility Pipe Detection Using Artificial Bee Colony, Circle-inspired, and Genetic Metaheuristics","authors":"Mike Louie C. Enriquez, R. Relano, Kate G. Francisco, Ronnie S. Concepcion, Jonah Jahara G. Baun, Adrian Genevie G. Janairo, J. A. D. Leon, A. Bandala, R. R. Vicerra, E. Dadios","doi":"10.1109/IMCOM56909.2023.10035621","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035621","url":null,"abstract":"Underground utility detection technology contributes significantly to the planning and repair of various infrastructures for it saves a significant amount of money on utility damages, human life risk, and operation time. With that, this study has optimized a towed equatorial dipole-dipole antenna system which was 3D-modeled in Altair CAD FEKO to produce stronger electric and magnetic fields in a transmitter-receiver configuration. The algorithm is based on EM-driven antenna correlation via the effect of dipole geometrical configurations on structure parameterization. New and efficient metaheuristic optimization methods such as the Circle Inspired Optimization Algorithm (CIOA), constrained Artificial Bee Colony (cABC), and Genetic Algorithm (GA), the highest electromagnetic fields along the top of the pipe with voltage difference of 2.30e-4 V, which is higher compared to CIOA and GA with a voltage difference of 1.81e-4 V and -1.11e-4 V, respectively. This implies the development of an effective method for more accessible and precise calculation of the electromagnetic field for the very low-frequency antenna without the need for extensive mathematical computation. Furthermore, the best distance configuration between dipoles is 0.4 m, wire diameter of 0.01 m, and Tx power of 5.0 W. This study aims to optimize an antenna system to produce more vital electric and magnetic fields in a transmitter-receiver configuration, identify the best spacing distance in the equatorial dipole-dipole antenna and characterize the correlation of the parameters such as the distance, wire antenna diameter, and transmitted power.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115140760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-Supervised Augmentation of Quality Data Based on Classification-Reinforced GAN
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035575
Seunghwan Kim, Sukhan Lee
In deep learning, the quality of ground-truth training data is crucial for the resulting performance. However, depending on the application, collecting a sufficient amount of quality data from a realistic setting is problematic. In this case, data augmentation can play an important role, as long as the augmentation ensures data quality and diversity for training, preferably in an unsupervised way. Recently, a number of GAN variants have emerged for improved quality in data augmentation. Although successful, further improvement is necessary to enhance diversity in addition to quality. In this paper, we propose a GAN-based approach to self-supervised augmentation of quality data based on a Classification-Reinforced GAN, referred to here as CLS-R GAN, to extend diversity as well as quality in data augmentation. In CLS-R GAN, a discriminator-independent classifier additionally self-trains the generator by classifying the fake data, as well as augmenting the real data in an unsupervised way. Extensive experiments were conducted, including an application to augmenting liver ultrasound image data, to verify the effectiveness of CLS-R GAN on standard evaluation metrics. The results indicate the effectiveness of CLS-R GAN in improving quality and diversity in augmented data.
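A minimal sketch of the general idea follows: an MLP GAN whose generator receives an extra loss from an independent classifier evaluated on the generated samples. The exact CLS-R GAN architecture, losses, and training schedule may differ, and all data and sizes here are toy values.

# Sketch: a toy GAN in which an independent classifier also scores the fake
# samples and feeds an extra loss to the generator (the general idea of a
# classification-reinforced GAN; not the paper's exact CLS-R GAN recipe).
import torch
import torch.nn as nn
import torch.nn.functional as F

Z, X_DIM, N_CLASSES, BATCH = 8, 2, 2, 64

G = nn.Sequential(nn.Linear(Z + N_CLASSES, 32), nn.ReLU(), nn.Linear(32, X_DIM))
D = nn.Sequential(nn.Linear(X_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
C = nn.Sequential(nn.Linear(X_DIM, 32), nn.ReLU(), nn.Linear(32, N_CLASSES))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(C.parameters(), lr=1e-3)

def real_batch(n=BATCH):
    # Toy labelled data: class 0 centered near (-2, 0), class 1 near (+2, 0).
    y = torch.randint(0, N_CLASSES, (n,))
    x = torch.randn(n, X_DIM) * 0.3
    x[:, 0] += y.float() * 4.0 - 2.0
    return x, y

def fake_batch(n=BATCH):
    y = torch.randint(0, N_CLASSES, (n,))
    z = torch.cat([torch.randn(n, Z), F.one_hot(y, N_CLASSES).float()], dim=1)
    return G(z), y

ones, zeros = torch.ones(BATCH, 1), torch.zeros(BATCH, 1)
for step in range(2000):
    x_real, y_real = real_batch()
    x_fake, _ = fake_batch()
    # Discriminator: real vs. fake.
    d_loss = F.binary_cross_entropy_with_logits(D(x_real), ones) + \
             F.binary_cross_entropy_with_logits(D(x_fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Independent classifier: trained on the real labelled data only.
    c_loss = F.cross_entropy(C(x_real), y_real)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    # Generator: fool D and make fakes that the classifier assigns to the intended class.
    x_fake, y_fake = fake_batch()
    g_loss = F.binary_cross_entropy_with_logits(D(x_fake), ones) + \
             F.cross_entropy(C(x_fake), y_fake)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

xs, ys = fake_batch(512)
for k in range(N_CLASSES):
    print("class", k, "generated mean:", xs[ys == k].mean(0).detach().tolist())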
{"title":"Self-Supervised Augmentation of Quality Data Based on Classification-Reinforced GAN","authors":"Seunghwan Kim, Sukhan Lee","doi":"10.1109/IMCOM56909.2023.10035575","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035575","url":null,"abstract":"In deep learning, the quality of ground truth training data is crucial for the resulting performance. However, depending on applications, collecting a sufficient amount of quality data from a realistic setting is problematic. In this case, data augmentation can play an important role as long as augmentation ensures data quality and diversity for training, preferably in an unsupervised way. Recently, a number of GAN variants have been emerged for improved quality in data augmentation. Although successful, further improvement is necessary for enhancing diversity in addition to quality in data augmentation. In this paper, we propose a GAN-based approach to self-supervised augmentation of quality data based on Classification-Reinforced GAN referred to here as CLS-R GAN, to extending diversity as well as quality in data augmentation. In CLS-R GAN, a discriminator-independent classifier additionally self-trains the generator by classifying the fake data, as well as augmenting the real data in an unsupervised way. Extensive experiments were conducted, including an application to augmenting liver ultrasonic image data, to verify the effectiveness of CLS-R GAN based on standard evaluation metrics. The results indicate the effectiveness of CLS-R GAN for improved quality and diversity in augmented data.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133600852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Development of an Early Detection System of Pre-frailty in Senior Citizens Living Inside
Pub Date: 2023-01-03 | DOI: 10.1109/IMCOM56909.2023.10035612
T. Utsumi, Masashi Hashimoto
To ensure that elderly members of society are able to maintain a quality of life that allows them to live independently, it is important that impending frailty, which occurs between the state of being healthy and needing nursing care, be detectable at an early stage. The purpose of this study was to establish a system that continuously measures elderly people as they go about their daily lives at home and detects early signs of pre-frailty. The study focused on the decrease in walking speed of elderly people in pre-frailty, designing a means of measuring walking speed using a non-wearable passive infrared (PIR) sensor that achieves accuracy equivalent to conventional manual measurement. The walking speed measured by this system differs by an average of 1.7% at five meters from the walking speed measured by the gating method. Further testing confirmed that walking speed could be measured with an average error of 1.5% even at one meter. The system is feasible, cost-effective, and can be easily installed in homes for continuous measurement of the walking speed of elderly people.
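A minimal sketch of the timing arithmetic behind such a measurement follows, assuming two PIR trigger points a known distance apart; the paper's actual sensor arrangement and signal processing may differ, and the timestamps are invented.

# Sketch: walking speed from the time difference between two PIR trigger
# events a known distance apart.  Distances and timestamps are invented.
from dataclasses import dataclass

@dataclass
class PirEvent:
    sensor_id: str
    t: float              # seconds since start of day

SENSOR_DISTANCE_M = 5.0   # spacing between the two detection points

def walking_speed(events):
    """Pair consecutive A->B triggers and return speeds in m/s."""
    speeds = []
    last_a = None
    for e in sorted(events, key=lambda e: e.t):
        if e.sensor_id == "A":
            last_a = e.t
        elif e.sensor_id == "B" and last_a is not None:
            dt = e.t - last_a
            if 0.5 < dt < 20:                  # discard implausible crossings
                speeds.append(SENSOR_DISTANCE_M / dt)
            last_a = None
    return speeds

events = [PirEvent("A", 36000.0), PirEvent("B", 36004.2),   # ~1.19 m/s
          PirEvent("A", 52000.0), PirEvent("B", 52006.1)]   # ~0.82 m/s
print([round(v, 2) for v in walking_speed(events)])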
{"title":"A Development of an Early Detection System of Pre-frailty in Senior Citizens Living Inside","authors":"T. Utsumi, Masashi Hashimoto","doi":"10.1109/IMCOM56909.2023.10035612","DOIUrl":"https://doi.org/10.1109/IMCOM56909.2023.10035612","url":null,"abstract":"To ensure that elderly members of society are able to maintain a quality of life that allows them to live independently, it is important that impending frailty, which occurs between the state of begin healthy and needing nursing care, be detectable at an early stage. The purpose of this study was to establish a system to continuously measure of elderly people as they go about their daily lives at home and to detect early signs of pre-frailty. This study focused on the decrease in walking speed of elderly people in pre-frailty, designing a means of measuring walking speed using a non-wearable passive infrared (PIR) sensor that performs equivalent accuracy with conventional manual measurement. The walking speed measured by this system with an average error of 1.7% at five meters from the walking speed measured by the gating method. Further testing confirmed that walking speed could be measured with an average error of 1.5%, even at one meter. The system is feasible, cost-effective, and can be easily installed in homes for continuous measurement of the walking speed of elderly people.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114738864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}