
Journal of Information and Intelligence — Latest Publications

Voice Fence Wall: User-optional voice privacy transmission
Pub Date : 2024-03-01 DOI: 10.1016/j.jiixd.2023.12.002
Li Luo, Yining Liu

Sensors are widely used to collect voice data. Because many attributes of voice data are sensitive, such as a user's emotions and identity, collecting raw voice may pose a serious privacy threat. Traditionally, feature extraction obtains and encrypts voice features that are then transmitted to upstream servers. To avoid disclosing sensitive attributes, the sensitive attributes of voice data must be separated from the non-sensitive ones. Motivated by this, a user-optional privacy transmission framework for voice data, called Voice Fence Wall, is proposed. Firstly, the framework is user-optional: users can choose which attributes (the sensitive attributes) they want protected. Secondly, Voice Fence Wall minimizes mutual information (MI) to reduce the correlation between sensitive and non-sensitive attributes, thereby separating them. Finally, only the separated non-sensitive attributes are transmitted to the upstream server, so the quality of voice services is maintained without leaking sensitive attributes. To verify reliability and practicality, the model is evaluated on three voice datasets; the experiments demonstrate that Voice Fence Wall not only effectively separates attributes to resist attribute inference attacks, but also outperforms related work in classification performance. Specifically, the framework achieves 89.84% accuracy in sentiment recognition and a 6.01% equal error rate in voice authentication.
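
The abstract does not spell out how the MI-based separation is implemented, so the following is only a minimal sketch under stated assumptions (PyTorch, a MINE-style Donsker-Varadhan mutual-information estimator, toy encoder sizes), not the authors' implementation: an estimated MI penalty between the transmitted non-sensitive embedding and a locally kept sensitive embedding is minimized with respect to the encoders.

```python
# Illustrative sketch only: decorrelating a transmitted "non-sensitive" voice
# embedding from a "sensitive" one with a MINE-style mutual-information penalty.
# All module names, sizes and the 0.1 penalty weight are assumptions.
import math
import torch
import torch.nn as nn

class MineCritic(nn.Module):
    """Scores joint pairs against shuffled pairs; the gap between the two
    expectations lower-bounds the mutual information between the embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)

def mi_lower_bound(critic, z_ns, z_s):
    # Donsker-Varadhan estimate of I(z_ns; z_s): critic mean on joint pairs
    # minus log-mean-exp of the critic on shuffled (near-independent) pairs.
    joint = critic(z_ns, z_s).mean()
    shuffled = z_s[torch.randperm(z_s.size(0))]
    marginal = torch.logsumexp(critic(z_ns, shuffled), dim=0) - math.log(z_s.size(0))
    return joint - marginal

torch.manual_seed(0)
features = torch.randn(32, 40)   # stand-in batch of extracted voice features
enc_ns = nn.Linear(40, 16)       # non-sensitive branch (what gets transmitted)
enc_s = nn.Linear(40, 16)        # sensitive branch (kept local to the device)
critic = MineCritic(16)
mi_est = mi_lower_bound(critic, enc_ns(features), enc_s(features))
# Task losses for both branches are omitted; only the decorrelation term is shown.
(0.1 * mi_est).backward()
print(float(mi_est))
```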

{"title":"Voice Fence Wall: User-optional voice privacy transmission","authors":"Li Luo,&nbsp;Yining Liu","doi":"10.1016/j.jiixd.2023.12.002","DOIUrl":"10.1016/j.jiixd.2023.12.002","url":null,"abstract":"<div><p>Sensors are widely applied in the collection of voice data. Since many attributes of voice data are sensitive such as user emotions, identity, raw voice collection may lead serious privacy threat. In the past, traditional feature extraction obtains and encrypts voice features that are then transmitted to upstream servers. In order to avoid sensitive attribute disclosure, it is necessary to separate the sensitive attributes from non-sensitive attributes of voice data. Motivated by this, user-optional privacy transmission framework for voice data (called: Voice Fence Wall) is proposed. Firstly, we provide user-optional, which means users can choose the attributes (sensitive attributes) they want to be protected. Secondly, Voice Fence Wall utilizes minimum mutual information (MI) to reduce the correlation between sensitive and non-sensitive attributes, thereby separating these attributes. Finally, only the separated non-sensitive attributes are transmitted to the upstream server, the quality of voice services is satisfied without leaking sensitive attributes. To verify the reliability and practicability, three voice datasets are used to evaluate the model, the experiments demonstrate that Voice Fence Wall not only effectively separates attributes to resist attribute inference attacks, but also outperforms related work in terms of classification performance. Specifically, our framework achieves 89.84 ​% accuracy in sentiment recognition and 6.01 ​% equal error rate in voice authentication.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 2","pages":"Pages 116-129"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294971592300080X/pdfft?md5=7d514122810a42466002016ad09b7381&pid=1-s2.0-S294971592300080X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139393204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A hyperspectral unmixing approach for ink mismatch detection in unbalanced clusters
Pub Date : 2024-03-01 DOI: 10.1016/j.jiixd.2024.01.004
Faryal Aurooj Nasir, Salman Liaquat, Khurram Khurshid, Nor Muzlifah Mahyuddin

Detecting ink mismatch is a significant challenge in verifying the authenticity of documents, especially when dealing with uneven ink distribution. Conventional imaging methods frequently fail to distinguish visually similar inks. Our study presents a novel hyperspectral unmixing approach to detect ink mismatches in unbalanced clusters. The proposed method identifies the unique spectral characteristics of different inks, employing k-means clustering and Gaussian mixture models (GMMs) to perform color segmentation across ink types, and uses elbow estimation and the silhouette coefficient to estimate the number of inks precisely. For a more accurate estimate of quantity, which clustering methods generally do not provide, we employ entropy calculations in the red, green, and blue depth channels for precise abundance estimation of ink. This combination of basic techniques is more effective at ink unmixing and provides a real-world document forensic solution, compared with current methods that rely on assumptions such as prior knowledge of the inks used in a document and with deep-learning-based methods that depend heavily on abundant training data. We evaluate our approach on the iVision handwritten hyperspectral images dataset (iVision HHID), a comprehensive and rich dataset that surpasses the commonly used UWA writing inks hyperspectral images (WIHSI) database in size and diversity. This study accomplishes the unmixing task under three main challenges: unmixing diverse ink spectral signatures (149 spectral bands instead of the 33 bands in the previous dataset), working without prior knowledge or assumptions about the number of inks used in the questioned document, and requiring no large training data for unmixing. Furthermore, compared with previous works that rely on known inks and known spectra, the proposed document authentication methodology is more secure against the likelihood of forgeries or manipulations in questioned documents. Our methodology uses randomization techniques and anomaly detection mechanisms, which make it harder for adversaries to predict and manipulate specific aspects of the input data in questioned documents, thereby enhancing its robustness. The code for conducting this research can be accessed at the GitHub repository.
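
As a rough, self-contained illustration of the clustering and model-selection steps mentioned above, the sketch below estimates the number of inks with silhouette scores over k-means fits and then segments pixels with a Gaussian mixture model; the random 149-band stand-in spectra and the scikit-learn components are assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: pick the ink count by silhouette score, then segment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pixels = rng.normal(size=(500, 149))        # stand-in for 149-band ink spectra

scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    scores[k] = silhouette_score(pixels, labels)
best_k = max(scores, key=scores.get)        # silhouette-based estimate of the ink count

gmm = GaussianMixture(n_components=best_k, random_state=0).fit(pixels)
segmentation = gmm.predict(pixels)          # per-pixel ink assignment
print(best_k, np.bincount(segmentation))
```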

{"title":"A hyperspectral unmixing approach for ink mismatch detection in unbalanced clusters","authors":"Faryal Aurooj Nasir ,&nbsp;Salman Liaquat ,&nbsp;Khurram Khurshid ,&nbsp;Nor Muzlifah Mahyuddin","doi":"10.1016/j.jiixd.2024.01.004","DOIUrl":"10.1016/j.jiixd.2024.01.004","url":null,"abstract":"<div><p>Detecting ink mismatch is a significant challenge in verifying the authenticity of documents, especially when dealing with uneven ink distribution. Conventional imaging methods frequently fail to distinguish visually similar inks. Our study presents a novel hyperspectral unmixing approach to detect ink mismatches in unbalanced clusters. The proposed method identifies unique spectral characteristics of different inks employing k-means clustering and Gaussian mixture models (GMMs) to perform color segmentation on different ink types and utilizes elbow estimation and silhouette coefficient to evaluate the number of inks estimation precisely. For a more accurate estimation of quantity, which is generally not an attribute of clustering methods, we employed entropy calculations in the red, green, and blue depth channels for precise abundance estimation of ink. This unique combination of basic techniques in conjunction exhibits better efficacy in performing ink unmixing and provides a real-world document forensic solution compared to current methods that rely on assumptions like prior knowledge of the inks used in a document and deep learning-based methods that rely heavily on abundant training datasets. We evaluate our approach on the iVision handwritten hyperspectral images dataset (iVision HHID), which is a comprehensive and rich dataset that surpasses the commonly-used UWA writing inks hyperspectral images (WIHSI) database in size and diversity. This study has accomplished the unmixing task with three main challenges: unmixing of diverse ink spectral signatures (149 spectral bands instead of 33 bands in the previous dataset), without using prior knowledge and assumptions about the number of inks used in the questioned document, and not requiring large training data for performing unmixing. Furthermore, the security of the proposed document authentication methodology to address the likelihood of forgeries or manipulations in questioned documents is enhanced as compared to previous works relying on known inks and known spectrum. Randomization techniques and anomaly detection mechanisms are used in our methodology which increases the difficulty for adversaries to predict and manipulate specific aspects of the input data in questioned documents, thereby enhancing the robustness of our method. The code for conducting this research can be accessed at <span>GitHub repository</span><svg><path></path></svg>.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 2","pages":"Pages 177-190"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949715924000040/pdfft?md5=3d98b093a0be134b496feff3d3fa509c&pid=1-s2.0-S2949715924000040-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139634593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data security and privacy computing in artificial intelligence
Pub Date : 2024-03-01 DOI: 10.1016/j.jiixd.2024.02.007
Dengguo Feng, Hui Li, Rongxing Lu, Zheli Liu, Jianbing Ni, Hui Zhu
{"title":"Data security and privacy computing in artificial intelligence","authors":"Dengguo Feng,&nbsp;Hui Li,&nbsp;Rongxing Lu,&nbsp;Zheli Liu,&nbsp;Jianbing Ni,&nbsp;Hui Zhu","doi":"10.1016/j.jiixd.2024.02.007","DOIUrl":"https://doi.org/10.1016/j.jiixd.2024.02.007","url":null,"abstract":"","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 2","pages":"Pages 99-101"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294971592400012X/pdfft?md5=b365b0de34c8f2cd89fb4535c7790036&pid=1-s2.0-S294971592400012X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140555268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated data processing and feature engineering for deep learning and big data applications: A survey
Pub Date : 2024-01-08 DOI: 10.1016/j.jiixd.2024.01.002
Alhassan Mumuni, Fuseini Mumuni
The modern approach to artificial intelligence (AI) aims to design algorithms that learn directly from data. This approach has achieved impressive results and has contributed significantly to the progress of AI, particularly in the sphere of supervised deep learning. It has also simplified the design of machine learning systems, as the learning process is highly automated. However, not all data processing tasks in conventional deep learning pipelines have been automated. In most cases, data must be manually collected, preprocessed and further extended through data augmentation before it can be effective for training. Recently, special techniques for automating these tasks have emerged. The automation of data processing tasks is driven by the need to utilize large volumes of complex, heterogeneous data for machine learning and big data applications. Today, end-to-end automated data processing systems based on automated machine learning (AutoML) techniques are capable of taking raw data and transforming it into useful features for big data tasks by automating all intermediate processing stages. In this work, we present a thorough review of approaches for automating data processing tasks in deep learning pipelines, including automated data preprocessing (e.g., data cleaning, labeling, missing data imputation, and categorical data encoding), data augmentation (including synthetic data generation using generative AI methods) and feature engineering, specifically automated feature extraction, feature construction and feature selection. In addition to automating specific data processing tasks, we discuss the use of AutoML methods and tools to simultaneously optimize all stages of the machine learning pipeline.
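
To make the scope of such automation concrete, here is a small scikit-learn sketch of a preprocessing-and-feature-selection pipeline of the kind the survey covers; the toy dataframe, column names and chosen components are illustrative assumptions rather than anything prescribed by the paper.

```python
# Illustrative sketch only: imputation, encoding, scaling and feature selection
# chained behind a model, so the whole preprocessing stage is searchable.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                   "city": ["a", "b", np.nan, "a"],
                   "y": [0, 1, 1, 0]})
numeric = Pipeline([("impute", SimpleImputer(strategy="median")),
                    ("scale", StandardScaler())])
categorical = Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                        ("encode", OneHotEncoder(handle_unknown="ignore"))])
preprocess = ColumnTransformer([("num", numeric, ["age"]),
                                ("cat", categorical, ["city"])])
model = Pipeline([("prep", preprocess),
                  ("select", SelectKBest(f_classif, k=2)),
                  ("clf", LogisticRegression())])
model.fit(df[["age", "city"]], df["y"])
```

Because every stage is an estimator with hyperparameters, an AutoML system can search over imputers, encoders, selectors and the final model jointly.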
{"title":"Automated data processing and feature engineering for deep learning and big data applications: A survey","authors":"Alhassan Mumuni ,&nbsp;Fuseini Mumuni","doi":"10.1016/j.jiixd.2024.01.002","DOIUrl":"10.1016/j.jiixd.2024.01.002","url":null,"abstract":"<div><div>Modern approach to artificial intelligence (AI) aims to design algorithms that learn directly from data. This approach has achieved impressive results and has contributed significantly to the progress of AI, particularly in the sphere of supervised deep learning. It has also simplified the design of machine learning systems as the learning process is highly automated. However, not all data processing tasks in conventional deep learning pipelines have been automated. In most cases data has to be manually collected, preprocessed and further extended through data augmentation before they can be effective for training. Recently, special techniques for automating these tasks have emerged. The automation of data processing tasks is driven by the need to utilize large volumes of complex, heterogeneous data for machine learning and big data applications. Today, end-to-end automated data processing systems based on automated machine learning (AutoML) techniques are capable of taking raw data and transforming them into useful features for big data tasks by automating all intermediate processing stages. In this work, we present a thorough review of approaches for automating data processing tasks in deep learning pipelines, including automated data preprocessing – e.g., data cleaning, labeling, missing data imputation, and categorical data encoding – as well as data augmentation (including synthetic data generation using generative AI methods) and feature engineering – specifically, automated feature extraction, feature construction and feature selection. In addition to automating specific data processing tasks, we discuss the use of AutoML methods and tools to simultaneously optimize all stages of the machine learning pipeline.</div></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"3 2","pages":"Pages 113-153"},"PeriodicalIF":0.0,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139454323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AutoML: A systematic review on automated machine learning with neural architecture search
Pub Date : 2024-01-01 DOI: 10.1016/j.jiixd.2023.10.002
Imrus Salehin, Md. Shamiul Islam, Pritom Saha, S.M. Noman, Azra Tuni, Md. Mehedi Hasan, Md. Abu Baten

AutoML (Automated Machine Learning) is an emerging field that aims to automate the process of building machine learning models. AutoML emerged to increase productivity and efficiency by automating, as much as possible, the inefficient work that is repeated each time machine learning is applied. In particular, technologies that can effectively develop high-quality models by minimizing the intervention of model developers, from data preprocessing to algorithm selection and tuning, have long been a subject of research. In this systematic review, we summarize the data processing requirements of AutoML approaches and provide a detailed explanation. We place greater emphasis on neural architecture search (NAS), as it is currently a highly popular sub-topic within the field of AutoML. NAS methods use machine learning algorithms to search through a large space of possible architectures and find the one that performs best on a given task. We provide a summary of the performance achieved by representative NAS algorithms on CIFAR-10, CIFAR-100, ImageNet and other well-known benchmark datasets. Additionally, we delve into several noteworthy research directions in NAS methods, including one/two-stage NAS, one-shot NAS and joint hyperparameter and architecture optimization. We discuss how the search space size and complexity in NAS can vary depending on the specific problem being addressed. To conclude, we examine several open problems (SOTA problems) within current AutoML methods that warrant further investigation in future research.
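
As a toy illustration of the search loop underlying many NAS methods, the sketch below samples architectures from a small discrete space and keeps the best-scoring one; the search space, the random proxy score and the budget are made-up assumptions, and real NAS replaces the proxy with trained-and-validated candidates, weight sharing or performance predictors.

```python
# Illustrative sketch only: random-search NAS over a tiny discrete space.
import random

SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [32, 64, 128],
    "op": ["conv3x3", "conv5x5", "sep_conv", "skip"],
}

def sample_architecture(rng):
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def proxy_score(arch, rng):
    # Placeholder for training the candidate and measuring validation accuracy.
    return rng.random()

rng = random.Random(0)
best_arch, best_score = None, float("-inf")
for _ in range(20):                      # search budget
    arch = sample_architecture(rng)
    score = proxy_score(arch, rng)
    if score > best_score:
        best_arch, best_score = arch, score
print(best_arch, round(best_score, 3))
```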

{"title":"AutoML: A systematic review on automated machine learning with neural architecture search","authors":"Imrus Salehin ,&nbsp;Md. Shamiul Islam ,&nbsp;Pritom Saha ,&nbsp;S.M. Noman ,&nbsp;Azra Tuni ,&nbsp;Md. Mehedi Hasan ,&nbsp;Md. Abu Baten","doi":"10.1016/j.jiixd.2023.10.002","DOIUrl":"10.1016/j.jiixd.2023.10.002","url":null,"abstract":"<div><p>AutoML (Automated Machine Learning) is an emerging field that aims to automate the process of building machine learning models. AutoML emerged to increase productivity and efficiency by automating as much as possible the inefficient work that occurs while repeating this process whenever machine learning is applied. In particular, research has been conducted for a long time on technologies that can effectively develop high-quality models by minimizing the intervention of model developers in the process from data preprocessing to algorithm selection and tuning. In this semantic review research, we summarize the data processing requirements for AutoML approaches and provide a detailed explanation. We place greater emphasis on neural architecture search (NAS) as it currently represents a highly popular sub-topic within the field of AutoML. NAS methods use machine learning algorithms to search through a large space of possible architectures and find the one that performs best on a given task. We provide a summary of the performance achieved by representative NAS algorithms on the CIFAR-10, CIFAR-100, ImageNet and well-known benchmark datasets. Additionally, we delve into several noteworthy research directions in NAS methods including one/two-stage NAS, one-shot NAS and joint hyperparameter with architecture optimization. We discussed how the search space size and complexity in NAS can vary depending on the specific problem being addressed. To conclude, we examine several open problems (SOTA problems) within current AutoML methods that assure further investigation in future research.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 1","pages":"Pages 52-81"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949715923000604/pdfft?md5=a79f7fb3cdab55edd3b7838063f99f50&pid=1-s2.0-S2949715923000604-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135849912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Radio frequency based distributed system for noncooperative UAV classification and positioning
Pub Date : 2024-01-01 DOI: 10.1016/j.jiixd.2023.07.002
Chaozheng Xue, Tao Li, Yongzhao Li

With the increasing popularity of civilian unmanned aerial vehicles (UAVs), safety issues arising from unsafe operations and terrorist activities have received growing attention. To address this problem, an accurate classification and positioning system is needed. Considering that UAVs usually use radio frequency (RF) signals for video transmission, in this paper we design a passive distributed monitoring system that can classify and locate UAVs according to their RF signals. Specifically, three passive receivers are placed at different locations to receive RF signals. Because the UAV does not cooperate with the receivers, it is necessary to detect whether a UAV signal is present in the received signals. Hence, a convolutional neural network (CNN) is proposed that not only detects the presence of a UAV but also classifies its type. After the UAV signal is detected, the time difference of arrival (TDOA) of the UAV signal at the receivers is estimated by the cross-correlation method to obtain the corresponding distance differences. Finally, the Chan algorithm is used to calculate the location of the UAV. We deploy a distributed system built from three software defined radio (SDR) receivers on a campus playground and conduct extensive experiments in a real wireless environment. The experimental results successfully validate the proposed system.
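
A minimal sketch of the TDOA step described above: the delay between two receivers is estimated by cross-correlating their recordings and converted to a range difference, which would then feed the Chan solver. The sample rate, the delay and the noise level are made-up values, not parameters from the paper.

```python
# Illustrative sketch only: TDOA estimation by cross-correlation on synthetic data.
import numpy as np

fs = 20_000_000                             # 20 MHz sample rate (assumed)
true_delay = 37                             # samples
rng = np.random.default_rng(0)
signal = rng.normal(size=4096)              # stand-in for the UAV RF burst
rx1 = signal + 0.1 * rng.normal(size=signal.size)
rx2 = np.roll(signal, true_delay) + 0.1 * rng.normal(size=signal.size)

corr = np.correlate(rx2, rx1, mode="full")
lag = int(np.argmax(corr)) - (signal.size - 1)   # peak position gives the delay
tdoa = lag / fs
range_difference = 3e8 * tdoa                    # metres; input to the Chan solver
print(lag, range_difference)
```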

{"title":"Radio frequency based distributed system for noncooperative UAV classification and positioning","authors":"Chaozheng Xue ,&nbsp;Tao Li ,&nbsp;Yongzhao Li","doi":"10.1016/j.jiixd.2023.07.002","DOIUrl":"10.1016/j.jiixd.2023.07.002","url":null,"abstract":"<div><p>With the increasing popularity of civilian unmanned aerial vehicles (UAVs), safety issues arising from unsafe operations and terrorist activities have received growing attention. To address this problem, an accurate classification and positioning system is needed. Considering that UAVs usually use radio frequency (RF) signals for video transmission, in this paper, we design a passive distributed monitoring system that can classify and locate UAVs according to their RF signals. Specifically, three passive receivers are arranged in different locations to receive RF signals. Due to the noncooperation between a UAV and receivers, it is necessary to detect whether there is a UAV signal from the received signals. Hence, convolutional neural network (CNN) is proposed to not only detect the presence of the UAV, but also classify its type. After the UAV signal is detected, the time difference of arrival (TDOA) of the UAV signal arriving at the receiver is estimated by the cross-correlation method to obtain the corresponding distance difference. Finally, the Chan algorithm is used to calculate the location of the UAV. We deploy a distributed system constructed by three software defined radio (SDR) receivers on the campus playground, and conduct extensive experiments in a real wireless environment. The experimental results have successfully validated the proposed system.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 1","pages":"Pages 42-51"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949715923000446/pdfft?md5=462b514a709497f9d3e6393f3ad2f8f7&pid=1-s2.0-S2949715923000446-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84541549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FTG: Score-based black-box watermarking by fragile trigger generation for deep model integrity verification
Pub Date : 2024-01-01 DOI: 10.1016/j.jiixd.2023.10.006
Heng Yin, Zhaoxia Yin, Zhenzhe Gao, Hang Su, Xinpeng Zhang, Bin Luo

Deep neural networks (DNNs) are widely used in real-world applications thanks to their exceptional performance in image recognition. However, their vulnerability to attacks, such as Trojan and data-poisoning attacks, can compromise the integrity and stability of DNN applications. Therefore, it is crucial to verify the integrity of DNN models to ensure their security. Previous research on model watermarking for integrity detection has encountered the issue of overexposing model parameters during embedding and extraction of the watermark. To address this problem, we propose a novel score-based black-box DNN fragile watermarking framework called fragile trigger generation (FTG). The FTG framework only requires the prediction probability distribution of the classifier's final output during the watermarking process. It generates different fragile samples as triggers, based on the classification prediction probability of the target classifier and a specified prediction probability mask, and uses them to watermark the model. Different prediction probability masks promote the generation of fragile samples with the corresponding distribution types. The whole watermarking process does not affect the performance of the target classifier. When verifying the watermarking information, the FTG only needs to compare the model's predictions on the samples with the previously recorded labels. As a result, the required model parameter information is reduced, and the FTG needs only a few samples to detect slight modifications to the model. Experimental results demonstrate the effectiveness of our proposed method and show its superiority over related work. The FTG framework provides a robust solution for verifying the integrity of DNN models, and its effectiveness in detecting slight modifications makes it a valuable tool for ensuring the security and stability of DNN applications.
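
The black-box verification step described above can be sketched as follows; a toy stand-in classifier and random trigger tensors are assumed, and the actual fragile-trigger generation driven by the prediction-probability mask is omitted.

```python
# Illustrative sketch only: integrity check by re-querying stored fragile triggers.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
triggers = torch.randn(8, 1, 28, 28)                          # stored fragile samples
with torch.no_grad():
    recorded = model(triggers).argmax(dim=1)                  # labels saved at embedding time

# Later, to verify a possibly modified copy of the model:
suspect = model
with torch.no_grad():
    current = suspect(triggers).argmax(dim=1)
intact = torch.equal(current, recorded)                       # any mismatch flags tampering
print("model intact:", intact)
```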

{"title":"FTG: Score-based black-box watermarking by fragile trigger generation for deep model integrity verification","authors":"Heng Yin ,&nbsp;Zhaoxia Yin ,&nbsp;Zhenzhe Gao ,&nbsp;Hang Su ,&nbsp;Xinpeng Zhang ,&nbsp;Bin Luo","doi":"10.1016/j.jiixd.2023.10.006","DOIUrl":"10.1016/j.jiixd.2023.10.006","url":null,"abstract":"<div><p>Deep neural networks (DNNs) are widely used in real-world applications, thanks to their exceptional performance in image recognition. However, their vulnerability to attacks, such as Trojan and data poison, can compromise the integrity and stability of DNN applications. Therefore, it is crucial to verify the integrity of DNN models to ensure their security. Previous research on model watermarking for integrity detection has encountered the issue of overexposure of model parameters during embedding and extraction of the watermark. To address this problem, we propose a novel score-based black-box DNN fragile watermarking framework called fragile trigger generation (FTG). The FTG framework only requires the prediction probability distribution of the final output of the classifier during the watermarking process. It generates different fragile samples as the trigger, based on the classification prediction probability of the target classifier and a specified prediction probability mask to watermark it. Different prediction probability masks can promote the generation of fragile samples in corresponding distribution types. The whole watermarking process does not affect the performance of the target classifier. When verifying the watermarking information, the FTG only needs to compare the prediction results of the model on the samples with the previous label. As a result, the required model parameter information is reduced, and the FTG only needs a few samples to detect slight modifications in the model. Experimental results demonstrate the effectiveness of our proposed method and show its superiority over related work. The FTG framework provides a robust solution for verifying the integrity of DNN models, and its effectiveness in detecting slight modifications makes it a valuable tool for ensuring the security and stability of DNN applications.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 1","pages":"Pages 28-41"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949715923000641/pdfft?md5=60f402130fb47c84b855a467ea72516c&pid=1-s2.0-S2949715923000641-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135412511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Frontiers of collaborative intelligence systems
Pub Date : 2024-01-01 DOI: 10.1016/j.jiixd.2023.10.005
Maoguo Gong, Yajing He, Hao Li, Yue Wu, Mingyang Zhang, Shanfeng Wang, Tianshi Luo

The development of information technology has propelled technological reform in artificial intelligence (AI). To address the needs of diversified and complex applications, AI has been increasingly trending towards intelligent, collaborative, and systematized development across different levels and tasks. Research on intelligent, collaborative and systematized AI can be divided into three levels: micro, meso, and macro. Firstly, micro-level collaboration is illustrated through swarm intelligence collaborative methods related to collaboration among individuals and among decision variables. Secondly, meso-level collaboration is discussed in terms of multi-task collaboration and multi-party collaboration. Thirdly, macro-level collaboration is discussed primarily in the context of intelligent collaborative systems, such as terrestrial-satellite collaboration, space-air-ground collaboration, space-air-ground-air collaboration, vehicle-road-cloud collaboration and end-edge-cloud collaboration. Finally, this paper provides prospects on the future development of relevant fields from the micro, meso, and macro perspectives.

{"title":"Frontiers of collaborative intelligence systems","authors":"Maoguo Gong ,&nbsp;Yajing He ,&nbsp;Hao Li ,&nbsp;Yue Wu ,&nbsp;Mingyang Zhang ,&nbsp;Shanfeng Wang ,&nbsp;Tianshi Luo","doi":"10.1016/j.jiixd.2023.10.005","DOIUrl":"10.1016/j.jiixd.2023.10.005","url":null,"abstract":"<div><p>The development of information technology has propelled technological reform in artificial intelligence (AI). To address the needs of diversified and complex applications, AI has been increasingly trending towards intelligent, collaborative, and systematized development across different levels and tasks. Research on intelligent, collaborative and systematized AI can be divided into three levels: micro, meso, and macro. Firstly, the micro-level collaboration is illustrated through the introduction of swarm intelligence collaborative methods related to individuals collaboration and decision variables collaboration. Secondly, the meso-level collaboration is discussed in terms of multi-task collaboration and multi-party collaboration. Thirdly, the macro-level collaboration is primarily in the context of intelligent collaborative systems, such as terrestrial-satellite collaboration, space-air-ground collaboration, space-air-ground-air collaboration, vehicle-road-cloud collaboration and end-edge-cloud collaboration. Finally, this paper provides prospects on the future development of relevant fields from the perspectives of the micro, meso, and macro levels.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 1","pages":"Pages 14-27"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294971592300063X/pdfft?md5=666b324f5aba714a9622c1ecb7cabb7c&pid=1-s2.0-S294971592300063X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136009781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Understanding turbo codes: A signal processing study
Pub Date : 2024-01-01 DOI: 10.1016/j.jiixd.2023.10.003
Xiang-Gen Xia

In this paper, we study turbo codes from the digital signal processing point of view by defining turbo codes over the complex field. It is known that iterative decoding and interleaving between concatenated parallel codes are two key elements that make turbo codes perform significantly better than conventional error control codes. This is illustrated analytically in this paper. We show that the decoded noise mean power in iterative decoding decreases as the number of iterations increases, as long as the interleaving decorrelates the noise after each iterative decoding step. An analytic decreasing rate and the limit of the decoded noise mean power are given. For a turbo code with two parallel codes whose rates are less than 1/2, the limit of the decoded noise mean power under iterative decoding is one third of the noise power before decoding, which cannot be achieved by any non-turbo code of the same rate. This study also clearly shows the role of designing a good interleaver.
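
The one-third limit quoted above can be made plausible with a back-of-envelope argument; this is only a heuristic reading, not the paper's analytic derivation. If ideal interleaving makes the systematic observation and the extrinsic estimates of the two constituent decoders behave like independent noisy observations of the same symbol, each with noise power sigma squared in the limit, then averaging the three gives

```latex
% Heuristic sketch only, not the paper's derivation. Assumption: after ideal
% interleaving, n_0, n^{(1)}, n^{(2)} are independent, zero-mean, variance sigma^2.
\begin{align*}
  \hat{x} &= \tfrac{1}{3}\bigl(y_s + e^{(1)} + e^{(2)}\bigr),
  \qquad y_s = x + n_0, \quad e^{(i)} = x + n^{(i)}, \\
  \operatorname{Var}(\hat{x} - x) &= \tfrac{1}{9}\bigl(\sigma^2 + \sigma^2 + \sigma^2\bigr)
  = \frac{\sigma^2}{3},
\end{align*}
```

i.e., one third of the noise power before decoding, matching the limit stated for two parallel codes with rates below 1/2.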

{"title":"Understanding turbo codes: A signal processing study","authors":"Xiang-Gen Xia","doi":"10.1016/j.jiixd.2023.10.003","DOIUrl":"10.1016/j.jiixd.2023.10.003","url":null,"abstract":"<div><p>In this paper, we study turbo codes from the digital signal processing point of view by defining turbo codes over the complex field. It is known that iterative decoding and interleaving between concatenated parallel codes are two key elements that make turbo codes perform significantly better than the conventional error control codes. This is analytically illustrated in this paper. We show that the decoded noise mean power in the iterative decoding decreases when the number of iterations increases, as long as the interleaving decorrelates the noise after each iterative decoding step. An analytic decreasing rate and the limit of the decoded noise mean power are given. The limit of the decoded noise mean power of the iterative decoding of a turbo code with two parallel codes with their rates less than 1/2 is one third of the noise power before the decoding, which can not be achieved by any non-turbo codes with the same rate. From this study, the role of designing a good interleaver can also be clearly seen.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 1","pages":"Pages 1-13"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949715923000616/pdfft?md5=f118ebffb9d9e7932e08138648929b52&pid=1-s2.0-S2949715923000616-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136009520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inherent-attribute-aware dual-graph autoencoder for rating prediction
Pub Date : 2024-01-01 DOI: 10.1016/j.jiixd.2023.10.004
Yangtao Zhou, Qingshan Li, Hua Chu, Jianan Li, Lejia Yang, Biaobiao Wei, Luqiao Wang, Wanqiang Yang

Autoencoder-based rating prediction methods with external attributes have received wide attention due to their ability to accurately capture users' preferences. However, existing methods still have two significant limitations: i) external attributes are often unavailable in the real world due to privacy issues, leading to low-quality representations; and ii) existing methods fail to consider the complex associations in users' rating behaviors during the encoding process. To meet these challenges, this paper proposes a novel inherent-attribute-aware dual-graph autoencoder, named IADGAE, for rating prediction. To address the low quality of representations caused by the unavailability of external attributes, we propose an inherent attribute perception module that mines inductive user active patterns and item popularity patterns from users' rating behaviors to strengthen user and item representations. To exploit the complex associations hidden in users' rating behaviors, we design an encoder on the item-item co-occurrence graph to capture the co-occurrence frequency features among items. Moreover, we propose a dual-graph feature encoder framework to simultaneously encode and fuse the high-order representations learned from the user-item rating graph and the item-item co-occurrence graph. Extensive experiments on three real datasets demonstrate that IADGAE is effective and outperforms existing rating prediction methods, achieving a significant improvement of 4.51%–41.63% in the RMSE metric.
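
As a small illustration of the item-item co-occurrence graph mentioned above, the sketch below counts how often two items are rated by the same user; the toy rating matrix and the simple rated/unrated binarization are assumptions for exposition only.

```python
# Illustrative sketch only: item-item co-occurrence counts from a rating matrix.
import numpy as np

ratings = np.array([[5, 0, 3, 0],      # rows: users, columns: items, 0 = unrated
                    [4, 2, 0, 1],
                    [0, 5, 4, 0]])
interacted = (ratings > 0).astype(int)             # 1 where a user rated an item
co_occurrence = interacted.T @ interacted          # [i, j] = #users who rated both i and j
np.fill_diagonal(co_occurrence, 0)                 # drop self-loops before building the graph
print(co_occurrence)
```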

{"title":"Inherent-attribute-aware dual-graph autoencoder for rating prediction","authors":"Yangtao Zhou ,&nbsp;Qingshan Li ,&nbsp;Hua Chu ,&nbsp;Jianan Li ,&nbsp;Lejia Yang ,&nbsp;Biaobiao Wei ,&nbsp;Luqiao Wang ,&nbsp;Wanqiang Yang","doi":"10.1016/j.jiixd.2023.10.004","DOIUrl":"10.1016/j.jiixd.2023.10.004","url":null,"abstract":"<div><p>Autoencoder-based rating prediction methods with external attributes have received wide attention due to their ability to accurately capture users' preferences. However, existing methods still have two significant limitations: i) External attributes are often unavailable in the real world due to privacy issues, leading to low quality of representations; and ii) existing methods lack considering complex associations in users' rating behaviors during the encoding process. To meet these challenges, this paper innovatively proposes an inherent-attribute-aware dual-graph autoencoder, named IADGAE, for rating prediction. To address the low quality of representations due to the unavailability of external attributes, we propose an inherent attribute perception module that mines inductive user active patterns and item popularity patterns from users' rating behaviors to strengthen user and item representations. To exploit the complex associations hidden in users’ rating behaviors, we design an encoder on the item-item co-occurrence graph to capture the co-occurrence frequency features among items. Moreover, we propose a dual-graph feature encoder framework to simultaneously encode and fuse the high-order representations learned from the user-item rating graph and item-item co-occurrence graph. Extensive experiments on three real datasets demonstrate that IADGAE is effective and outperforms existing rating prediction methods, which achieves a significant improvement of 4.51%∼41.63 ​% in the RMSE metric.</p></div>","PeriodicalId":100790,"journal":{"name":"Journal of Information and Intelligence","volume":"2 1","pages":"Pages 82-97"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949715923000628/pdfft?md5=e0de0d732524d082d68a4ba7d99dc225&pid=1-s2.0-S2949715923000628-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136054613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0