RobustDA: Lightweight Robust Domain Adaptation for Evolving Data at Edge
Pub Date: 2024-10-11 | DOI: 10.1109/JETCAS.2024.3478359 | IEEE JETCAS 14(4): 688-704
Xinyu Guo;Xiaojiang Zuo;Rui Han;Junyan Ouyang;Jing Xie;Chi Harold Liu;Qinglong Zhang;Ying Guo;Jing Chen;Lydia Y. Chen
AI applications powered by deep learning models are increasingly run natively at the edge. A deployed model not only encounters continuously evolving input distributions (domains) but also faces adversarial attacks from third parties. This necessitates adapting the model to shifting domains to maintain high natural accuracy without degrading its robust accuracy. However, existing domain adaptation (DA) and adversarial attack prevention techniques often have conflicting optimization objectives, and they rely on time-consuming training processes. This paper presents RobustDA, a lightweight on-device approach that co-optimizes natural and robust accuracies in model retraining. It uses a set of low-rank adapters to retain the knowledge of all learned domains with small overhead. In each model retraining, RobustDA constructs an adapter to separate domain-related and robustness-related model parameters, avoiding conflicts between their updates. Based on the retained knowledge, it quickly generates adversarial examples with high-quality pseudo-labels and uses them to accelerate the retraining process. We demonstrate that, compared against 14 state-of-the-art DA techniques under 7 prevalent adversarial attacks on edge devices, the proposed co-optimization approach improves natural and robust accuracies by 6.34% and 11.41% simultaneously. At the same accuracy, RobustDA also speeds up the retraining process by 4.09x.
{"title":"RobustDA: Lightweight Robust Domain Adaptation for Evolving Data at Edge","authors":"Xinyu Guo;Xiaojiang Zuo;Rui Han;Junyan Ouyang;Jing Xie;Chi Harold Liu;Qinglong Zhang;Ying Guo;Jing Chen;Lydia Y. Chen","doi":"10.1109/JETCAS.2024.3478359","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3478359","url":null,"abstract":"AI applications powered by deep learning models are increasingly run natively at edge. A deployed model not only encounters continuously evolving input distributions (domains) but also faces adversarial attacks from third-party. This necessitates adapting the model to shifting domains to maintain high natural accuracy, while avoiding degrading the model’s robust accuracy. However, existing domain adaptation and adversarial attack preventation techniques often have conflicting optimization objectives and they rely on time-consuming training process. This paper presents RobustDA, an on-device lightweight approach that co-optimizes natural and robust accuracies in model retraining. It uses a set of low-rank adapters to retain all learned domains’ knowledge with small overheads. In each model retraining, RobustDA constructs an adapter to separate domain-related and robust-related model parameters to avoid their conflicts in updating. Based on the retained knowledge, it quickly generates adversarial examples with high-quality pseudo-labels and uses them to accelerate the retraining process. We demonstrate that, comparing against 14 state-of-the-art DA techniques under 7 prevalent adversarial attacks on edge devices, the proposed co-optimization approach improves natural and robust accuracies by 6.34% and 11.41% simultaneously. Under the same accuracy, RobustDA also speeds up the retraining process by 4.09x.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"688-704"},"PeriodicalIF":3.7,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auditing and Generating Synthetic Data With Controllable Trust Trade-Offs
Pub Date: 2024-10-10 | DOI: 10.1109/JETCAS.2024.3477976 | IEEE JETCAS 14(4): 773-788
Brian Belgodere;Pierre Dognin;Adam Ivankay;Igor Melnyk;Youssef Mroueh;Aleksandra Mojsilović;Jiri Navratil;Apoorva Nitsure;Inkit Padhi;Mattia Rigotti;Jerret Ross;Yair Schiff;Radhika Vedpathak;Richard A. Young
Real-world data often exhibit bias, imbalance, and privacy risks. Synthetic datasets have emerged to address these issues by enabling a paradigm that relies on generative AI models to produce unbiased, privacy-preserving data while maintaining fidelity to the original data. However, assessing the trustworthiness of synthetic datasets and models is a critical challenge. We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models. It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation. We demonstrate our framework’s effectiveness by auditing various generative models across diverse use cases such as education, healthcare, banking, and human resources, spanning data modalities including tabular, time-series, vision, and natural language. This holistic assessment is essential for compliance with regulatory safeguards. We introduce a trustworthiness index to rank synthetic datasets based on their safeguard trade-offs. Furthermore, we present a trustworthiness-driven model selection and cross-validation process during training, exemplified with “TrustFormers” across various data types. This approach allows for controllable trustworthiness trade-offs in synthetic data creation. Our auditing framework fosters collaboration among stakeholders, including data scientists, governance experts, internal reviewers, external certifiers, and regulators. Such transparent reporting should become standard practice to prevent bias, discrimination, and privacy violations, ensuring compliance with policies and providing accountability, safety, and performance guarantees.
{"title":"Auditing and Generating Synthetic Data With Controllable Trust Trade-Offs","authors":"Brian Belgodere;Pierre Dognin;Adam Ivankay;Igor Melnyk;Youssef Mroueh;Aleksandra Mojsilović;Jiri Navratil;Apoorva Nitsure;Inkit Padhi;Mattia Rigotti;Jerret Ross;Yair Schiff;Radhika Vedpathak;Richard A. Young","doi":"10.1109/JETCAS.2024.3477976","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3477976","url":null,"abstract":"Real-world data often exhibits bias, imbalance, and privacy risks. Synthetic datasets have emerged to address these issues by enabling a paradigm that relies on generative AI models to generate unbiased, privacy-preserving data while maintaining fidelity to the original data. However, assessing the trustworthiness of synthetic datasets and models is a critical challenge. We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models. It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation. We demonstrate our framework’s effectiveness by auditing various generative models across diverse use cases like education, healthcare, banking, and human resources, spanning different data modalities such as tabular, time-series, vision, and natural language. This holistic assessment is essential for compliance with regulatory safeguards. We introduce a trustworthiness index to rank synthetic datasets based on their safeguards trade-offs. Furthermore, we present a trustworthiness-driven model selection and cross-validation process during training, exemplified with “TrustFormers” across various data types. This approach allows for controllable trustworthiness trade-offs in synthetic data creation. Our auditing framework fosters collaboration among stakeholders, including data scientists, governance experts, internal reviewers, external certifiers, and regulators. This transparent reporting should become a standard practice to prevent bias, discrimination, and privacy violations, ensuring compliance with policies and providing accountability, safety, and performance guarantees.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"773-788"},"PeriodicalIF":3.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713321","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Overview of Trustworthy AI: Advances in IP Protection, Privacy-Preserving Federated Learning, Security Verification, and GAI Safety Alignment
Pub Date: 2024-10-09 | DOI: 10.1109/JETCAS.2024.3477348 | IEEE JETCAS 14(4): 582-607
Yue Zheng;Chip-Hong Chang;Shih-Hsu Huang;Pin-Yu Chen;Stjepan Picek
AI has undergone a remarkable evolution marked by groundbreaking milestones. Like any powerful tool, it can be turned into a weapon for devastation in the wrong hands. Recognizing that no model is perfect, trustworthy AI aims to mitigate the harm AI can inflict on people and society by prioritizing socially responsible ideation, design, development, and deployment that effect positive change. The scope of trustworthy AI is broad, covering qualities such as safety, security, privacy, transparency, explainability, fairness, impartiality, robustness, reliability, and accountability. This overview paper focuses on recent advances in four research hotspots of trustworthy AI with compelling and challenging security, privacy, and safety issues: the intellectual property protection of deep learning and generative models, the trustworthiness of federated learning, verification and testing tools for AI systems, and the safety alignment of generative AI systems. Through this comprehensive review, we aim to give readers an overview of the most up-to-date research problems and solutions. By presenting the rapidly evolving factors and constraints that motivate emerging attack and defense strategies throughout the AI life-cycle, we hope to inspire further research into guiding AI technologies toward beneficial purposes with greater robustness against malicious intent.
{"title":"An Overview of Trustworthy AI: Advances in IP Protection, Privacy-Preserving Federated Learning, Security Verification, and GAI Safety Alignment","authors":"Yue Zheng;Chip-Hong Chang;Shih-Hsu Huang;Pin-Yu Chen;Stjepan Picek","doi":"10.1109/JETCAS.2024.3477348","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3477348","url":null,"abstract":"AI has undergone a remarkable evolution journey marked by groundbreaking milestones. Like any powerful tool, it can be turned into a weapon for devastation in the wrong hands. Understanding that no model is perfect, trustworthy AI is initiated with an intuitive aim to mitigate the harm it can inflict on people and society by prioritizing socially responsible AI ideation, design, development, and deployment towards effecting positive changes. The scope of trustworthy AI is encompassing, covering qualities such as safety, security, privacy, transparency, explainability, fairness, impartiality, robustness, reliability, and accountability. This overview paper anchors on recent advances in four research hotspots of trustworthy AI with compelling and challenging security, privacy, and safety issues. The topics discussed include the intellectual property protection of deep learning and generative models, the trustworthiness of federated learning, verification and testing tools of AI systems, and the safety alignment of generative AI systems. Through this comprehensive review, we aim to provide readers with an overview of the most up-to-date research problems and solutions. By presenting the rapidly evolving factors and constraints that motivate the emerging attack and defense strategies throughout the AI life-cycle, we hope to inspire more research effort into guiding AI technologies towards beneficial purposes with greater robustness against malicious use intent.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"582-607"},"PeriodicalIF":3.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10711270","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diffense: Defense Against Backdoor Attacks on Deep Neural Networks With Latent Diffusion
Pub Date: 2024-09-27 | DOI: 10.1109/JETCAS.2024.3469377 | IEEE JETCAS 14(4): 729-742
Bowen Hu;Chip-Hong Chang
As deep neural network (DNN) models are used in a wide variety of applications, their security has attracted considerable attention. Among the known security vulnerabilities, backdoor attacks have become the most notorious threat to users of pre-trained DNNs and machine learning services. Such attacks manipulate the training data or training process so that the trained model produces a false output for an input carrying a specific trigger, but behaves normally otherwise. In this work, we propose Diffense, a method for detecting such malicious inputs based on the distribution of the latent feature maps of clean input samples of the possibly infected target DNN. By learning the feature map distribution with a diffusion model and sampling from that model under the guidance of the data to be inspected, backdoor attack data can be detected by its distance from the sampled result. Diffense requires no knowledge of the structure, weights, or training data of the target DNN model, nor does it need to be aware of the backdoor attack method. Diffense is non-intrusive: the accuracy of the target model on clean inputs is unaffected, and the inference service can run uninterrupted alongside Diffense. Extensive experiments on DNNs trained for MNIST, CIFAR-10, GTSRB, ImageNet-10, LSUN Object, and LSUN Scene show that the attack success rates of diverse backdoor attacks, including BadNets, IDBA, WaNet, ISSBA, and HTBA, are significantly suppressed by Diffense. The results generally exceed the performance of existing backdoor mitigation methods, including those that require model modifications or prior knowledge of model weights or attack samples.
{"title":"Diffense: Defense Against Backdoor Attacks on Deep Neural Networks With Latent Diffusion","authors":"Bowen Hu;Chip-Hong Chang","doi":"10.1109/JETCAS.2024.3469377","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3469377","url":null,"abstract":"As deep neural network (DNN) models are used in a wide variety of applications, their security has attracted considerable attention. Among the known security vulnerabilities, backdoor attacks have become the most notorious threat to users of pre-trained DNNs and machine learning services. Such attacks manipulate the training data or training process in such a way that the trained model produces a false output to an input that carries a specific trigger, but behaves normally otherwise. In this work, we propose Diffense, a method for detecting such malicious inputs based on the distribution of the latent feature maps to clean input samples of the possibly infected target DNN. By learning the feature map distribution using the diffusion model and sampling from the model under the guidance of the data to be inspected, backdoor attack data can be detected by its distance from the sampled result. Diffense does not require knowledge about the structure, weights, and training data of the target DNN model, nor does it need to be aware of the backdoor attack method. Diffense is non-intrusive. The accuracy of the target model to clean inputs will not be affected by Diffense and the inference service can be run uninterruptedly with Diffense. Extensive experiments were conducted on DNNs trained for MNIST, CIFRA-10, GSTRB, ImageNet-10, LSUN Object and LSUN Scene applications to show that the attack success rates of diverse backdoor attacks, including BadNets, IDBA, WaNet, ISSBA and HTBA, can be significantly suppressed by Diffense. The results generally exceed the performances of existing backdoor mitigation methods, including those that require model modifications or prerequisite knowledge of model weights or attack samples.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"729-742"},"PeriodicalIF":3.7,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Artificial Intelligence With Novel Matrix Transformations and Homomorphic Encryption
Pub Date: 2024-09-24 | DOI: 10.1109/JETCAS.2024.3466849 | IEEE JETCAS 14(4): 717-728
Quoc Bao Phan;Tuy Tan Nguyen
This paper addresses the challenges of data privacy and computational efficiency in artificial intelligence (AI) models by proposing a novel hybrid model that combines homomorphic encryption (HE) with AI to enhance security while maintaining learning accuracy. The novelty of our model lies in a new matrix transformation technique that makes AI model weight matrices compatible with HE algorithms, significantly improving computational efficiency. Furthermore, we present a first-of-its-kind mathematical proof of convergence for integrating HE into AI models trained with the adaptive moment estimation (Adam) optimization algorithm. The effectiveness and practicality of our approach for training on encrypted data are showcased through comprehensive evaluations on well-known datasets for air pollution forecasting and forest fire detection. The results demonstrate high model performance, with an R-squared of nearly 1 for air pollution forecasting and 99% accuracy for forest fire detection. Additionally, our approach reduces data storage by up to 90% and achieves a tenfold speedup compared to models that do not use the matrix transformation method. Our primary contribution lies in enhancing the security, efficiency, and dependability of AI models, particularly when dealing with sensitive data.
{"title":"Efficient Artificial Intelligence With Novel Matrix Transformations and Homomorphic Encryption","authors":"Quoc Bao Phan;Tuy Tan Nguyen","doi":"10.1109/JETCAS.2024.3466849","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3466849","url":null,"abstract":"This paper addresses the challenges of data privacy and computational efficiency in artificial intelligence (AI) models by proposing a novel hybrid model that combines homomorphic encryption (HE) with AI to enhance security while maintaining learning accuracy. The novelty of our model lies in the introduction of a new matrix transformation technique that ensures compatibility with both HE algorithms and AI model weight matrices, significantly improving computational efficiency. Furthermore, we present a first-of-its-kind mathematical proof of convergence for integrating HE into AI models using the adaptive moment estimation optimization algorithm. The effectiveness and practicality of our approach for training on encrypted data are showcased through comprehensive evaluations of well-known datasets for air pollution forecasting and forest fire detection. These successful results demonstrate high model performance, with nearly 1 R-squared for air pollution forecasting and 99% accuracy for forest fire detection. Additionally, our approach achieves a reduction of up to 90% in data storage and a tenfold increase in speed compared to models that do not use the matrix transformation method. Our primary contribution lies in enhancing the security, efficiency, and dependability of AI models, particularly when dealing with sensitive data.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"717-728"},"PeriodicalIF":3.7,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Re_useVFL: Reuse of Parameters-Based Verifiable Federated Learning With Privacy Preservation Using Gradient Sparsification
Pub Date: 2024-09-19 | DOI: 10.1109/JETCAS.2024.3463738 | IEEE JETCAS 14(4): 647-660
Ningxin He;Tiegang Gao;Chuan Zhou
Federated learning (FL) exhibits promising potential in the Industrial Internet of Things (IIoT), as it allows multiple institutions to collaboratively train a global model without sharing local data. However, many privacy and security concerns remain in FL systems. The cloud server responsible for aggregating model parameters may be malicious and may distribute manipulated aggregation results to launch nefarious attacks. Additionally, industrial agents may provide incomplete parameters, negatively impacting the global model’s performance. To address these issues, we introduce Re_useVFL, an efficient privacy-preserving full-process FL verification scheme. It integrates BLS-based signature verification, adaptive gradient sparsification (AdaGS), and multi-key CKKS encryption (MK-CKKS). Our scheme ensures the integrity of agent-uploaded parameters, the correctness of the cloud server’s aggregation results, and the consistency of distributed results, thereby providing comprehensive verification across the entire FL process. It also maintains validation accuracy even when some agents drop out during computation. The AdaGS algorithm notably reduces verification overhead by optimizing parameter sparsification and reuse, and MK-CKKS is employed to protect agents’ privacy and prevent collusion between agents and the server. Our experiments on three datasets confirm that Re_useVFL achieves lower verification resource overhead than existing methods, demonstrating its practical effectiveness.
{"title":"Re_useVFL: Reuse of Parameters-Based Verifiable Federated Learning With Privacy Preservation Using Gradient Sparsification","authors":"Ningxin He;Tiegang Gao;Chuan Zhou","doi":"10.1109/JETCAS.2024.3463738","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3463738","url":null,"abstract":"Federated learning (FL) exhibits promising potential in the Industrial Internet of Things (IIoT) as it allows multiple institutions to collaboratively train a global model without sharing local data. However, there are still many privacy and security concerns in FL systems. The cloud server responsible for aggregating model parameters may be malicious, and it may distribute manipulated aggregation results that could launch nefarious attacks. Additionally, industrial agents may provide incomplete parameters, negatively impacting the global model’s performance. To address these issues, we introduce Re_useVFL, an efficient privacy-preserving full-process FL verification scheme. It integrates BLS-based signature verification, adaptive gradient sparsification (AdaGS), and Multi-Key CKKS encryption (MK-CKKS). Our scheme ensures the integrity of agents-uploaded parameters, the correctness of the cloud server’s aggregation results, and the consistency verification of distributed results, thereby providing comprehensive verification across the entire FL process. It also maintains validation accuracy even with some agents dropout during computation. The AdaGS algorithm notably reduces validation overhead by optimizing parameter sparsification and reuse. Additionally, employing MK-CKKS to protect agents privacy and prevent agent and server collusion. Our experiments on three datasets confirm that Re_useVFL achieves lower validation resource overhead compared to existing methods, demonstrating its practical effectiveness.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"647-660"},"PeriodicalIF":3.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE Circuits and Systems Society Information
Pub Date: 2024-09-16 | DOI: 10.1109/JETCAS.2024.3450049 | IEEE JETCAS 14(3): C3-C3
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Publication Information
Pub Date: 2024-09-16 | DOI: 10.1109/JETCAS.2024.3450055 | IEEE JETCAS 14(3): C2-C2
Guest Editorial Chip and Package-Scale Communication-Aware Architectures for General-Purpose, Domain-Specific, and Quantum Computing Systems
Pub Date: 2024-09-16 | DOI: 10.1109/JETCAS.2024.3445208 | IEEE JETCAS 14(3): 349-353
Abhijit Das;Maurizio Palesi;John Kim;Partha Pratim Pande
This Special Issue of the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS) is devoted to advancing the field of chip and package-scale communications across diverse computing domains, bridging academic research and industrial innovation. As we enter a new golden age of computer architecture, marked by both challenges and opportunities, the anticipated end of Moore’s law necessitates reimagining the future of computing systems as we approach the physical limits of transistors. Three leading approaches to address these challenges include the chiplet paradigm, domain-specific customization, and quantum computing. However, these architectural and technological innovations have shifted the primary bottleneck from computation to communication. Consequently, on-chip and on-package communication now play a critical role in determining the performance, efficiency, and scalability of general-purpose, domain-specific, and quantum computing systems. Their ever-growing importance has garnered significant attention from both academia and industry.
{"title":"Guest Editorial Chip and Package-Scale Communication-Aware Architectures for General-Purpose, Domain-Specific, and Quantum Computing Systems","authors":"Abhijit Das;Maurizio Palesi;John Kim;Partha Pratim Pande","doi":"10.1109/JETCAS.2024.3445208","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3445208","url":null,"abstract":"This Special Issue of IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS) is devoted to advancing the field of chip and package-scale communications across diverse computing domains, bridging academic research and industrial innovation. As we enter a new golden age of computer architecture, marked by both challenges and opportunities, the anticipated end of Moore’s law necessitates reimagining the future of computing systems as we approach the physical limits of transistors. Three leading approaches to address these challenges include the chiplet paradigm, domain-specific customization, and quantum computing. However, these architectural and technological innovations have shifted the primary bottleneck from computation to communication. Consequently, on-chip and on-package communication now play a critical role in determining the performance, efficiency, and scalability of general-purpose, domain-specific, and quantum computing systems. Their ever-growing importance has garnered significant attention from both academia and industry.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 3","pages":"349-353"},"PeriodicalIF":3.7,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10680692","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Information for Authors
Pub Date: 2024-09-16 | DOI: 10.1109/JETCAS.2024.3450053 | IEEE JETCAS 14(3): 575-575