Distributed Learning in Trusted Execution Environment: A Case Study of Federated Learning in SGX
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660433
Tianxing Xu, Konglin Zhu, A. Andrzejak, Lin Zhang
Federated Learning (FL) is a distributed machine learning paradigm for solving the isolated data island problem under privacy constraints. Recent works reveal that FL still has security problems: attackers can infer private data from gradients. In this paper, we propose a distributed FL framework in a Trusted Execution Environment (TEE) to protect gradients at the hardware level. We use Intel Software Guard eXtensions (SGX) as an instance to implement FL and propose an SGX-FL framework. First, to overcome the limited physical memory of SGX while preserving privacy, we use a gradient filtering mechanism to select the “important” gradients that carry the most private information and place them into SGX. Second, to enhance the global cohesion of gradients so that the important gradients can be aggregated to the greatest extent, a grouping method assigns the most appropriate number of members to each group. Finally, to maintain the accuracy of the FL model, the secondary gradients of group members and the aggregated important gradients are uploaded to the server simultaneously, and the computation procedure is validated by SGX's integrity mechanism. The evaluation results show that the proposed SGX-FL reduces the computation cost by 19 times compared with existing approaches.
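The abstract does not spell out how the “important” gradients are chosen; magnitude-based top-k filtering is a common choice for this kind of gradient selection. The sketch below assumes that rule — the keep_ratio parameter and the split_gradients helper are illustrative, not taken from the paper.

```python
import numpy as np

def split_gradients(gradients, keep_ratio=0.1):
    """Split a flat gradient vector into 'important' (largest-magnitude)
    components destined for the SGX enclave and the remaining
    'secondary' components kept outside.

    keep_ratio is an assumed hyperparameter; the paper does not
    publish its exact filtering rule."""
    flat = np.asarray(gradients, dtype=np.float64).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Indices of the k largest-magnitude entries.
    important_idx = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[important_idx] = True
    important = flat[mask]    # processed inside the enclave
    secondary = flat[~mask]   # handled outside and uploaded as-is
    return important_idx, important, secondary

# Example: keep the top 10% of a toy gradient vector.
idx, imp, sec = split_gradients(np.random.randn(1000))
print(len(imp), len(sec))  # 100 900
```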
{"title":"Distributed Learning in Trusted Execution Environment: A Case Study of Federated Learning in SGX","authors":"Tianxing Xu, Konglin Zhu, A. Andrzejak, Lin Zhang","doi":"10.1109/IC-NIDC54101.2021.9660433","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660433","url":null,"abstract":"Federated Learning (FL) is a distributed machine learning paradigm to solve isolated data island problems under privacy constraints. Recent works reveal that FL still exists security problems in which attackers can infer private data from gradients. In this paper, we propose a distributed FL framework in Trusted Execution Environment (TEE) to protect gradients in the perspective of hardware. We use trusted Software Guard eXtensions (SGX) as an instance to implement the FL, and proposed an SGX-FL framework. Firstly, to break through the limitation of physical memory space in SGX and meanwhile preserve the privacy, we leverage a gradient filtering mechanism to obtain the “important” gradients which preserve the utmost data privacy and put them into SGX. Secondly, to enhance the global adhesion of gradients so that the important gradients can be aggregated at maximum, a grouping method is carried out to put the most appropriate number of members into one group. Finally, to keep the accuracy of the FL model, the secondary gradients of group members and aggregated important gradients are simultaneously uploaded to the server and the computation procedure is validated by the integrity method of SGX. The evaluation results show that the proposed SGX-FL reduces the computation cost by 19 times compared with the existing approaches.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115266790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Construction of Error Correcting Output Codes for Robust Deep Neural Networks Based on Label Grouping Scheme
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660486
Hwiyoung Youn, Soonhee Kwon, Hyunhee Lee, Jiho Kim, Songnam Hong, Dong-joon Shin
Error-Correcting Output Codes (ECOCs) have been proposed to construct multi-class classifiers from simple binary classifiers. Recently, the principle of ECOCs has been employed to improve the robustness of deep classifiers. In this paper, a novel ECOC framework is developed by presenting a new label grouping and code-construction method. The proposed label grouping is based on linear discriminant analysis (LDA) similarity. Simulations demonstrate that deep classifiers trained with the proposed ECOC yield better classification performance on clean data and better adversarial robustness than state-of-the-art deep neural classifiers using ECOCs.
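As a rough illustration of label grouping plus code construction (not the paper's exact LDA-similarity procedure), the sketch below clusters classes by the distance between their class-mean features and assigns Hadamard-matrix rows as codewords; both the clustering criterion and the Hadamard construction are stand-in assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def group_labels(class_means, n_groups):
    """Group class labels by similarity of their class-mean features,
    assumed here to already live in an LDA-style discriminant space."""
    tree = linkage(pdist(class_means, metric="euclidean"), method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")  # group id per class

def hadamard_codes(n_classes):
    """Assign each class a row of a Hadamard matrix (dropping the trivial
    all-ones row/column) as its +/-1 ECOC codeword -- a standard
    construction, not necessarily the one used in the paper."""
    n = 1
    while n < n_classes + 1:
        n *= 2
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H[1:n_classes + 1, 1:]

means = np.random.randn(10, 9)           # toy class means for 10 classes
print(group_labels(means, 3))            # e.g. [2 1 3 ...]
print(hadamard_codes(10).shape)          # (10, 15)
```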
{"title":"Construction of Error Correcting Output Codes for Robust Deep Neural Networks Based on Label Grouping Scheme","authors":"Hwiyoung Youn, Soonhee Kwon, Hyunhee Lee, Jiho Kim, Songnam Hong, Dong-joon Shin","doi":"10.1109/IC-NIDC54101.2021.9660486","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660486","url":null,"abstract":"Error-Correcting Output Codes (ECOCs) have been proposed to construct multi-class classifiers using simple binary classifiers. Recently, the principle of ECOCs has been employed for improving the robustness of deep classifiers. In this paper, a novel ECOC framework is developed by presenting a novel label grouping and code-construction method. The proposed label grouping is based on linear discriminant analysis (LDA) similarity. Via simulations, it is demonstrated that deep classifiers trained with the proposed ECOC yield better classification performance on pure data and better adversarial robustness than the state-of-the-art deep neural classifiers using ECOCs.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122552993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Spectrum Sensing Method Based on CNN-LSTM Deep Neural Network
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660470
Shujian Zhang, Zhan Xu, Lu Tian, Xiaolong Yang
Spectrum sensing can effectively improve the utilization of spectrum resources and is one of the crucial components of cognitive radio networks. This paper proposes a spectrum sensing model that cascades a Convolutional Neural Network (CNN) with a Long Short-Term Memory (LSTM) network. The model uses the CNN to analyze the short-time Fourier transform (STFT) spectrogram of the blind signal. The generated feature vectors or feature maps are then passed to the LSTM according to their timestamps. Finally, the model detects the signal in a specific spectrum band and classifies the signal type, so that multiple signals can be identified accurately. By simultaneously capturing the spatial and temporal characteristics of the blind signal, the neural network model improves the detection probability. Experimental results show that the proposed method can detect a variety of signals with a high detection probability over a wide range of SNRs, especially at low SNR.
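A minimal sketch of a CNN-LSTM cascade operating on STFT spectrograms is shown below; the layer sizes, pooling choices and five-class output are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class CNNLSTMSensor(nn.Module):
    """Toy CNN-LSTM cascade for spectrum sensing on STFT spectrograms."""
    def __init__(self, n_freq_bins=128, n_classes=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame frequency features
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.lstm = nn.LSTM(32 * 8, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # signal type (or noise-only)

    def forward(self, spec):                      # spec: (batch, time, freq)
        b, t, f = spec.shape
        x = spec.reshape(b * t, 1, f)             # run the CNN on every STFT frame
        x = self.cnn(x).reshape(b, t, -1)         # (batch, time, 32*8)
        out, _ = self.lstm(x)                     # temporal modelling across frames
        return self.head(out[:, -1])              # classify from the last timestep

model = CNNLSTMSensor()
logits = model(torch.randn(4, 100, 128))          # 4 spectrograms, 100 frames, 128 bins
print(logits.shape)                               # torch.Size([4, 5])
```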
{"title":"A Spectrum Sensing Method Based on CNN-LSTM Deep Neural Network","authors":"Shujian Zhang, Zhan Xu, Lu Tian, Xiaolong Yang","doi":"10.1109/IC-NIDC54101.2021.9660470","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660470","url":null,"abstract":"Spectrum sensing can effectively improve the low utilization of spectrum resources and is one of the crucial components of cognitive radio networks. This paper proposes a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network cascaded spectrum sensing model. The model uses CNN to analyze the Short-Time Fourier transform spectrogram of the blind signal. Then the generated feature vector or feature map is passed to the LSTM according to the timestamp. Finally, it detects a signal in a specific spectrum and classifies the signal type to identify multiple signals accurately. The neural network model improves the detection probability by simultaneously acquiring the spatial and temporal characteristics of the blind signal. The experimental results show that the method in this paper can detect a variety of signals with higher detection probability within a wide range of SNR, especially under the condition of low SNR.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122587725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Forward Hybrid Routing Algorithm by Small Node Group in Underwater Acoustic Communication
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660594
Zengjun Niu, K. Niu, Zhiqiang He
Underwater acoustic networks play an important role in various operations in the marine environment. It is therefore necessary to design a routing protocol that can adapt to an underwater environment with strong time variation and Doppler frequency shift. In this paper, we present a dynamic forward hybrid routing algorithm based on small node groups (SG-DFHR) to achieve a higher packet delivery ratio and lower energy consumption. The operation of SG-DFHR can be divided into two stages. First, the randomly deployed underwater nodes are divided into groups of three nodes, which serve as the master, secondary and ordinary node, respectively. Then a hybrid routing strategy is applied for multi-hop transmission, in which the master node uses multicast for data packet transmission while the other nodes use unicast. Furthermore, to adapt to the dynamic changes of the underwater network, we design a node-group inspection and update strategy. Simulation and theoretical analysis show that our algorithm outperforms ALRP, DCK-S-BEAR and SUN. Compared with these previous algorithms, and without a significant increase in delay, the energy consumption and packet delivery performance of SG-DFHR are significantly improved. SG-DFHR thus achieves an effective tradeoff among multiple performance metrics.
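The following sketch illustrates only the first stage, forming three-node groups; proximity-based grouping and energy-based role assignment are assumptions, since the abstract does not state the exact criteria.

```python
import numpy as np

def form_node_groups(positions, energy):
    """Greedy sketch of splitting randomly deployed underwater nodes into
    groups of three (master / secondary / ordinary). Nodes are grouped by
    proximity and roles are ordered by residual energy -- both assumptions,
    as the paper does not publish its exact grouping rule."""
    remaining = list(range(len(positions)))
    groups = []
    while len(remaining) >= 3:
        seed = remaining[0]
        # Two nearest neighbours of the seed among the remaining nodes.
        dists = [(np.linalg.norm(positions[seed] - positions[j]), j)
                 for j in remaining[1:]]
        members = [seed] + [j for _, j in sorted(dists)[:2]]
        # Highest residual energy -> master, then secondary, then ordinary.
        members.sort(key=lambda j: energy[j], reverse=True)
        groups.append({"master": members[0],
                       "secondary": members[1],
                       "ordinary": members[2]})
        for j in members:
            remaining.remove(j)
    return groups, remaining   # leftover nodes (<3) stay ungrouped

pos = np.random.rand(10, 3) * 1000      # 10 nodes in a 1 km^3 region
eng = np.random.rand(10)
groups, leftover = form_node_groups(pos, eng)
print(len(groups), leftover)            # 3 groups, 1 leftover node
```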
{"title":"Dynamic Forward Hybrid Routing Algorithm by Small Node Group in Underwater Acoustic Communication","authors":"Zengjun Niu, K. Niu, Zhiqiang He","doi":"10.1109/IC-NIDC54101.2021.9660594","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660594","url":null,"abstract":"Underwater acoustic network plays an important role in various operations in the marine environment. It is very necessary to design a routing protocol that can adapt to the underwater environment with high time-varying and Doppler frequency shift. In this paper, we present a dynamic forward hybrid routing algorithm based on the small node group (SG-DFHR), to achieve a higher packet delivery ratio and lower energy consumption. The working process of SG-DFHR can be mainly divided into two stages. First, the randomly deployed underwater nodes are divided into several groups comprised of three nodes, which respectively serve as the master, secondary and ordinary node. Then a hybrid routing strategy is implemented for multi-hop transmission. In which, the master node uses the multicast for data packet transmission, while the rest nodes use the unicast method. Furthermore, in order to adapt to the dynamic changes of underwater network, we design a node group inspect and update strategy. The simulation and theoretical analysis show that our algorithm has superior performance over the ALRP, DCK-S-BEAR and SUN. Compared with the previous algorithms, under the premise of no significant increase in delay, the energy consumption and packet delivery performances of SG-DFHR are significantly improved. SG-DFHR achieves effective tradeoff among multiple performance metrics.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114922688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video Summarization Based on Fusing Features and Shot Segmentation
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660579
Xuming Feng, Yaping Zhu, Cheng Yang
Video summarization is a technique that creates short summaries of original videos while retaining the main representative information. Traditional deep-learning-based video summarization models mostly use frames as the basic processing unit and therefore cannot handle long videos due to hardware limitations. In this paper, we compress frame-level features into shot-level features using a feature extractor based on a Convolutional Neural Network (CNN), which improves training accuracy and reduces computation. At the same time, we propose a feature fusion algorithm based on the capsule network, which combines the RGB features and Light Flow features of the video into deep features with adaptive weights to enhance the original video features. Experimental results on two benchmark datasets (TVSum and SumMe) validate the effectiveness of our method.
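A minimal sketch of adaptive-weight fusion of the two feature streams is given below; it replaces the paper's capsule-network fusion with a simple learned gate, so it illustrates the weighting idea rather than the actual method, and the feature dimension is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse shot-level RGB and flow features with learned adaptive weights.
    Simplified stand-in: a softmax gate replaces the paper's capsule network."""
    def __init__(self, dim=1024):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))

    def forward(self, rgb_feat, flow_feat):        # (n_shots, dim) each
        w = self.gate(torch.cat([rgb_feat, flow_feat], dim=-1))  # (n_shots, 2)
        # Weighted sum of the two streams, one weight pair per shot.
        return w[:, :1] * rgb_feat + w[:, 1:] * flow_feat

fusion = AdaptiveFusion()
fused = fusion(torch.randn(20, 1024), torch.randn(20, 1024))
print(fused.shape)   # torch.Size([20, 1024])
```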
{"title":"Video Summarization Based on Fusing Features and Shot Segmentation","authors":"Xuming Feng, Yaping Zhu, Cheng Yang","doi":"10.1109/IC-NIDC54101.2021.9660579","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660579","url":null,"abstract":"Video summarization is a technique that creates short summaries from original videos while retaining the main representative information. Traditional video summarization models based on deep learning mostly use frames as the basic processing unit, which cannot handle long videos due to hardware limitations. In this paper, we compress the frame-level features into shot-level features using a feature extractor based on Convolutional Neural Network (CNN), which can improve the training accuracy and reduce computation. At the same time, we propose a feature fusion algorithm based on the capsule network, which combines the RGB features and Light Flow features of the video into the deep features with adaptive weights to enhance the original video features. Experiment results on two benchmark datasets (TVsum and SumMe) validate the effectiveness of our method.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131042036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Belt and Road (B&R) initiative is proposed to promote common development among countries along the B&R. In recent years, although the B&R has contributed to the regions along the route, it is always a controversial topic in the international community. A number of scholars have done a set of research works to analyze the effects of the B&R projects based on traditional economic methods. However, the drawbacks of subjectivity and delay reduce the conviction of the analysis results. In this paper, we leverage the objectivity and real-time features of remote sensing (RS) images to analyze the effects of the B&R project. Our research takes Voi town along the Mongolia-Nairobi Railway as the representative city. In addition, in order to prove the causal relationship between the B&R and economic development, we select the Taveta town as the comparison city. The semantic segmentation based on deep learning is applied to the multi-temporal RS images, to retrieve the economic development by automatically recognizing houses. On this basis, the construction and development of both the studied region and the comparison are quantitatively analyzed by meshing analysis and standard deviation elliptic methods. For overcoming the shortages of the conventional algorithms, a novel segmentation network based on the attention mechanism is proposed. The evaluation proves the semantic segmentation results can fully support the follow-up data analysis. In addition, the analysis results show that our work is a convincing initiative to reveal the values of the B&R projects for economic developments in the B&R-related regions.
{"title":"Economic Development Analysis of the Belt and Road Regions Based on Automatic Interpretation of Remote Sensing Images","authors":"Xinzhu Qiu, Yunzhe Wang, Jingyi Cao, Guannan Xu, Yanan You, Junlong Ren","doi":"10.1109/IC-NIDC54101.2021.9660561","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660561","url":null,"abstract":"The Belt and Road (B&R) initiative is proposed to promote common development among countries along the B&R. In recent years, although the B&R has contributed to the regions along the route, it is always a controversial topic in the international community. A number of scholars have done a set of research works to analyze the effects of the B&R projects based on traditional economic methods. However, the drawbacks of subjectivity and delay reduce the conviction of the analysis results. In this paper, we leverage the objectivity and real-time features of remote sensing (RS) images to analyze the effects of the B&R project. Our research takes Voi town along the Mongolia-Nairobi Railway as the representative city. In addition, in order to prove the causal relationship between the B&R and economic development, we select the Taveta town as the comparison city. The semantic segmentation based on deep learning is applied to the multi-temporal RS images, to retrieve the economic development by automatically recognizing houses. On this basis, the construction and development of both the studied region and the comparison are quantitatively analyzed by meshing analysis and standard deviation elliptic methods. For overcoming the shortages of the conventional algorithms, a novel segmentation network based on the attention mechanism is proposed. The evaluation proves the semantic segmentation results can fully support the follow-up data analysis. In addition, the analysis results show that our work is a convincing initiative to reveal the values of the B&R projects for economic developments in the B&R-related regions.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132725693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech Separation Based on DPTNet with Sparse Attention
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660488
Beom Jun Woo, H. Kim, Jeunghun Kim, N. Kim
This paper presents a sparse attention-based speech separation algorithm that separates and generates clean speech from mixed audio containing speech from multiple speakers. Recent developments in deep learning have enabled several speech separation models to generate clean speech audio. In particular, transformer-based speech separation models show high performance due to their ability to learn long-term dependencies compared with other neural network structures. However, since a transformer with full self-attention falls short of capturing short-term dependencies, we adopt a sparse attention structure in the original transformer-based speech separation model. We show that the model with sparse attention outperforms the original full-attention method.
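One common way to sparsify self-attention is to restrict each query to a local band of keys, which emphasizes short-term dependencies; the sketch below shows that banded variant, while the exact sparsity pattern used in the paper's DPTNet model is not reproduced.

```python
import torch
import torch.nn.functional as F

def banded_attention(q, k, v, window=8):
    """Scaled dot-product attention restricted to a local band of width
    `window` around each position -- one simple form of sparse attention."""
    t = q.size(-2)
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5       # (..., t, t)
    idx = torch.arange(t)
    band = (idx[None, :] - idx[:, None]).abs() <= window       # (t, t) local mask
    scores = scores.masked_fill(~band, float("-inf"))          # drop distant keys
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 4, 100, 32)     # (batch, heads, time, dim)
out = banded_attention(q, k, v)
print(out.shape)                           # torch.Size([2, 4, 100, 32])
```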
{"title":"Speech Separation Based on DPTNet with Sparse Attention","authors":"Beom Jun Woo, H. Kim, Jeunghun Kim, N. Kim","doi":"10.1109/IC-NIDC54101.2021.9660488","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660488","url":null,"abstract":"This paper presents a sparse attention-based speech separation algorithm separating and generating clean speech from mixed audio containing speech from multiple speakers. Recent development of deep learning has enabled several speech separation models to generate clean speech audios. Especially speech separation models based on transformer show high performance due to their ability to learn long term dependencies compared with other neural network structures. However, as a transformer with self-attention falls short of catching short-term dependencies, we adopt sparse attention structure to the original transformer-based speech separation model. We show that the model with sparse attention outperforms the original full attention method.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122368474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auxiliary Glasses Designed for Visually Impaired People with Color Blindness and Regional Visual Impairments
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660527
Zexuan Liu, Chuang Zhang, Ming Wu, Shizun Wang
Visually impaired people are numerous in society and are in a vulnerable position, so social concern for these disadvantaged groups should increase. This paper introduces a pair of AR glasses, with a corresponding app, designed and developed for color-blind people and people with regional visual impairments. The content captured by the camera is recolored by color rotation and displayed on the glasses, so that color-blind people can better distinguish colors. We also put forward the idea of transferring the visual field, which helps people with regional visual impairments perceive the scene ahead. Results from 24 volunteers show that the glasses can greatly assist the daily life of visually impaired people.
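Hue rotation in HSV space is one straightforward way to implement the color rotation described above; the sketch below uses OpenCV, and the 120-degree angle is an illustrative value, not the setting used in the glasses.

```python
import cv2
import numpy as np

def rotate_hue(frame_bgr, degrees=120):
    """Recolour a camera frame by rotating its hue channel, mapping colour
    pairs that are hard to separate onto more distinguishable ones.
    The rotation angle is an illustrative assumption."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # OpenCV stores hue in [0, 180), so a rotation of `degrees` is degrees/2 steps.
    h = ((h.astype(np.int32) + int(degrees / 2)) % 180).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)

# Example on a synthetic frame (replace with a camera capture in practice).
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(rotate_hue(frame).shape)   # (480, 640, 3)
```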
{"title":"Auxiliary Glasses Designed for Visually Impaired People with Color Blindness and Regional Visual Impairments","authors":"Zexuan Liu, Chuang Zhang, Ming Wu, Shizun Wang","doi":"10.1109/IC-NIDC54101.2021.9660527","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660527","url":null,"abstract":"Visually impaired people are numerous in society and in a vulnerable position, and social concerns to these disadvantaged groups should be increasing. This paper introduces a pair of AR glasses with corresponding app designed and developed for color-blind people and people with regional visual impairments. The content captured by the camera is recolored by color rotation and displayed on the glasses, so that the color-blind people can better distinguish colors. We also creatively put forward the idea of transfer of the visual field, which assists the visually impaired in the area to perceive the scene ahead. Result from 24 volunteers proves that the pair of glasses can greatly assist the life of visually impaired people.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115835916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Implementation of Human Intention Estimation System Based on Eye Movement Tracking
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660411
Jiaming Yang, Shouzhi Yu, Mingwei Ding
The eyes are the window to the mind, and eye movements contain a great deal of useful information. With breakthroughs in artificial intelligence and human-computer interaction, eye-tracking technology has gradually moved to the foreground in recent years. Taking medical health as a typical scenario, this paper carries out innovative research on the potential applications of eye-tracking technology in patient communication and human-computer interaction, and proposes and implements an intention estimation system based on eye-movement tracking. Drawing on the physiology, medicine and ergonomics of eye movement, the system extracts a set of eye-movement parameters with high recognition accuracy and high expression efficiency, defines eye-movement instructions suitable for silent environments, and finally forms a series of prototype applications based on these interactive instructions.
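Dwell-time fixation detection is one simple way to turn gaze samples into instructions of the kind described above; in the sketch below the sampling rate, dwell threshold and region layout are purely illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def detect_dwell_command(gaze_samples, regions, dwell_s=1.0, rate_hz=60, radius=40):
    """If the gaze stays inside one screen region for `dwell_s` seconds,
    emit that region's instruction (e.g. a patient request)."""
    need = int(dwell_s * rate_hz)          # consecutive samples required
    run_region, run_len = None, 0
    for x, y in gaze_samples:
        hit = None
        for name, (cx, cy) in regions.items():
            if np.hypot(x - cx, y - cy) <= radius:
                hit = name
                break
        run_len = run_len + 1 if (hit == run_region and hit is not None) else 1
        run_region = hit
        if hit is not None and run_len >= need:
            return hit                      # e.g. "yes", "no", "water", "nurse"
    return None

regions = {"yes": (200, 300), "no": (600, 300)}
samples = [(601 + np.random.randn(), 299 + np.random.randn()) for _ in range(80)]
print(detect_dwell_command(samples, regions))   # "no"
```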
{"title":"Design and Implementation of Human Intention Estimation System Based on Eye Movement Tracking","authors":"Jiaming Yang, Shouzhi Yu, Mingwei Ding","doi":"10.1109/IC-NIDC54101.2021.9660411","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660411","url":null,"abstract":"Eyes are the window of the mind, and eye movement contains a lot of effective information. With the breakthrough of artificial intelligence and human-computer interaction, eye movement tracking technology has gradually moved to the foreground in recent years. This paper takes medical health as a typical scenario, carries out innovative research around the potential application of eye movement tracking technology in patient communication and human-computer interaction, proposes and implements an intention estimation system based on eye movement tracking. Based on the physiology, medicine and ergonomics of eye movement, the system makes original innovations, extracts a set of eye movement parameters with high recognition accuracy and high expression efficiency, summarizes eye movement instructions that can meet the application of silent environment, and finally forms a series of embryonic applications related to interactive instructions.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117080446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Figure of Merit Magnetic Field Sensor Based on Photonic Crystal Slab Supporting Quasi Bound States in The Continuum
Pub Date: 2021-11-17 | DOI: 10.1109/IC-NIDC54101.2021.9660425
Zhe Han, Chao Wang, Zixing Gou, Huiping Tian
In this paper, a magnetic field sensor (MFS) with a high figure of merit (FOM) is theoretically proposed, based on a photonic crystal slab (PhCS) covered by a magnetic fluid film (MFF). The PhCS consists of a two-dimensionally periodic nanohole array introduced into a silicon slab. The large-sized nanoholes are used to increase the area of light-matter interaction. By slightly breaking the symmetry of the nanoholes, a quasi bound state in the continuum (quasi-BIC) with a Fano line shape is excited in the PhCS, which is sensitive to the external magnetic field and has a high Q-factor. The effect of the MFF thickness on the magnetic field sensitivity is investigated. Furthermore, a high resonance amplitude of 0.97 and a low limit of detection (LOD) of 6.1×10−5 T are achieved. Compared with recently published work, the sensor exhibits a high Q-factor and high sensitivity. Therefore, we believe the proposed sensor will contribute to the design of lab-on-chip magnetic field detection systems.
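For reference, the quantities named in the abstract (Fano line shape, Q-factor, sensitivity, FOM and LOD) are commonly defined as follows; these are the standard textbook expressions for such resonant sensors, and the paper's exact definitions may differ slightly.

```latex
% Standard Fano line shape of a quasi-BIC resonance and the usual sensor metrics.
\begin{align}
  T(\omega) &= T_0\,
    \frac{\bigl(q + 2(\omega-\omega_0)/\gamma\bigr)^{2}}
         {1 + \bigl(2(\omega-\omega_0)/\gamma\bigr)^{2}},
  \qquad Q = \frac{\omega_0}{\gamma},\\[4pt]
  S &= \frac{\Delta\lambda_{\mathrm{res}}}{\Delta B}\ \text{(nm/T)},
  \qquad
  \mathrm{FOM} = \frac{S}{\mathrm{FWHM}},
  \qquad
  \mathrm{LOD} = \frac{\delta\lambda_{\min}}{S},
\end{align}
```

where $\omega_0$ is the resonance frequency, $\gamma$ the linewidth, $q$ the Fano asymmetry parameter, $\Delta\lambda_{\mathrm{res}}$ the resonance wavelength shift under a field change $\Delta B$, and $\delta\lambda_{\min}$ the smallest resolvable wavelength shift of the readout system.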
{"title":"High Figure of Merit Magnetic Field Sensor Based on Photonic Crystal Slab Supporting Quasi Bound States in The Continuum","authors":"Zhe Han, Chao Wang, Zixing Gou, Huiping Tian","doi":"10.1109/IC-NIDC54101.2021.9660425","DOIUrl":"https://doi.org/10.1109/IC-NIDC54101.2021.9660425","url":null,"abstract":"In this paper, a magnetic field sensor (MFS) with high figure of merit (FOM) is theoretically proposed, which is based on photonic crystal slab (PhCS) covered by magnetic fluid film (MFF). The PhCS consists of a two dimensionally periodic nanohole array introduced into a silicon slab. The large-sized nanohole is used to increase the area of light-matter interaction. By slightly breaking the symmetry of nanoholes, quasi bound states in the continuum (BIC) with Fano line shape is excited in the PhCS, which is sensitive to external magnetic field and has a high Q-factor. The effect of MFF thickness on the magnetic field sensitivity is investigated. Furthermore, high resonance amplitude of 0.97 and low limit of detection (LOD) of 6.1×10−5 T are achieved. Compared with the researches lately published, the sensor exhibits high Q-factor and high sensitivity. Therefore, we believe the proposed sensor will contribute to the lab-on-chip magnetic field detection system design.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115599468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}