Early studies on discourse rhetorical structure parsing mainly adopted bottom-up approaches, which limit the parsing process to local information. Although recent top-down parsers can better capture global information and have achieved considerable success, the relative importance of local and global information differs across the levels of discourse parsing. This paper argues that combining local and global information is a more sensible strategy for discourse parsing. To demonstrate this, we introduce a top-down discourse parser with bidirectional representation learning capabilities. Existing Rhetorical Structure Theory (RST) corpora are quite limited in size, which makes discourse parsing very challenging. To alleviate this problem, we leverage boundary features and a data augmentation strategy to tap the potential of our parser. We adopt two evaluation methods, and experiments on the RST-DT corpus show that our parser improves performance mainly through the effective combination of local and global information, with the boundary features and the data augmentation strategy contributing as well. Based on gold-standard elementary discourse units (EDUs), our parser significantly outperforms the baseline systems in nuclearity detection and is competitive on the other three metrics (span, relation, and full). Based on automatically segmented EDUs, our parser still outperforms previous state-of-the-art work.
Title: Top-down Text-Level Discourse Rhetorical Structure Parsing with Bidirectional Representation Learning
Authors: Long-Yin Zhang, Xin Tan, Fang Kong, Pei-Feng Li, Guo-Dong Zhou
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-022-1167-0
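As an illustration of the top-down parsing regime described above, the following minimal Python sketch recursively splits a sequence of EDUs at the highest-scoring split point. The scoring function is a hand-written stand-in for the paper's learned bidirectional representations; all names and features here are illustrative, not the authors' model.

```python
# A minimal sketch (not the authors' model) of how a top-down discourse
# parser turns split-point decisions into a binary rhetorical tree.
# `score_split` is a stand-in for a learned scorer that would combine
# local (boundary) and global (whole-span) representations.

def score_split(edus, lo, hi, k):
    """Toy scorer: prefer splits at EDUs ending with strong punctuation.
    A real parser would score k using encoder states for span [lo, hi]."""
    local = 1.0 if edus[k].rstrip().endswith(('.', ';')) else 0.0
    global_balance = 1.0 - abs((k - lo + 1) - (hi - k)) / (hi - lo + 1)
    return local + global_balance

def parse(edus, lo=0, hi=None):
    """Recursively split the EDU span [lo, hi] into a binary tree."""
    if hi is None:
        hi = len(edus) - 1
    if lo == hi:                      # a single EDU is a leaf
        return ('EDU', lo)
    k = max(range(lo, hi), key=lambda k: score_split(edus, lo, hi, k))
    return ('SPAN', parse(edus, lo, k), parse(edus, k + 1, hi))

edus = ["The company reported losses.", "although sales grew,", "margins fell."]
print(parse(edus))   # ('SPAN', ('EDU', 0), ('SPAN', ('EDU', 1), ('EDU', 2)))
```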
Path-Based Multicast Routing for Network-on-Chip of the Neuromorphic Processor
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-022-1232-8
Zi-Yang Kang, Shi-Ming Li, Shi-Ying Wang, Lian-Hua Qu, Rui Gong, Wei Shi, Wei-Xia Xu, Lei Wang
Network-on-Chip (NoC) is widely adopted in neuromorphic processors to support communication between neurons in spiking neural networks (SNNs). However, SNNs generate an enormous number of spike packets due to their one-to-many traffic pattern, which puts heavy communication pressure on the NoC. We propose a path-based multicast routing method to alleviate this pressure. First, all destination nodes of each source node on the NoC are divided into several clusters. Second, multicast paths within the clusters are created based on the Hamiltonian path algorithm. The proposed routing reduces path length and balances the communication load across routers. Finally, we design a lightweight NoC microarchitecture that involves a customized multicast packet format and a routing function. We use six datasets to verify the proposed multicast routing. Compared with unicast routing, path-based multicast routing achieves a 5.1x speedup in running time and reduces the number of hops and the maximum transmission latency by 68.9% and 77.4%, respectively. The maximum path length is reduced by 68.3% and 67.2% compared with dual-path (DP) and multi-path (MP) multicast routing, respectively. The proposed multicast routing therefore improves average latency and throughput over both DP and MP multicast routing.
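To make the two-step routing idea above concrete, here is a simplified Python sketch, assuming a W x H 2D-mesh NoC: destinations are first grouped into clusters (here, crude vertical bands), and each cluster is then ordered along a snake-style Hamiltonian path, a standard labeling that visits every mesh node exactly once. The clustering rule and mesh size are assumptions for illustration, not the paper's method.

```python
# Simplified sketch of path-based multicast on a W x H 2D mesh.
W, H = 4, 4

def hamiltonian_label(x, y):
    """Snake order: even rows left-to-right, odd rows right-to-left."""
    return y * W + (x if y % 2 == 0 else W - 1 - x)

def cluster(dests, n_clusters=2):
    """Toy clustering: split destinations into vertical mesh bands."""
    band = W / n_clusters
    groups = [[] for _ in range(n_clusters)]
    for (x, y) in dests:
        groups[min(int(x // band), n_clusters - 1)].append((x, y))
    return groups

def multicast_paths(src, dests):
    """One ordered delivery path per cluster, following snake labels."""
    return [sorted(g, key=lambda d: hamiltonian_label(*d))
            for g in cluster(dests) if g]

print(multicast_paths((0, 0), [(3, 1), (1, 2), (0, 3), (2, 0)]))
```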
Parallel Bounded Search for the Maximum Clique Problem
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-022-1803-8
Hua Jiang, Ke Bai, Hai-Jiao Liu, Chu-Min Li, Felip Manyà, Zhang-Hua Fu
Given an undirected graph, the Maximum Clique Problem (MCP) is to find a largest complete subgraph of the graph. MCP is NP-hard and has many practical applications. In this paper, we propose a parallel Branch-and-Bound (BnB) algorithm that tackles this NP-hard problem by carrying out multiple bounded searches in parallel. Each search has its own upper bound and shares a lower bound with the rest of the searches. The potential benefit of this approach is that an active search terminates as soon as the best lower bound found so far reaches or exceeds its upper bound. We describe the implementation of our highly scalable and efficient parallel MCP algorithm, called PBS, which is based on a state-of-the-art sequential MCP algorithm. PBS is evaluated on hard DIMACS and BHOSLIB instances. The results show that PBS achieves a near-linear speedup on most DIMACS instances and a super-linear speedup on most BHOSLIB instances. Finally, we give a detailed analysis that explains the good speedups achieved on the tested instances.
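The shared-bound coordination can be illustrated with a compact sketch: several bounded searches run in parallel, each with its own upper bound (here, degree + 1 for its start vertex), all reading and updating one shared lower bound. This is only a toy illustration of the mechanism, not the PBS algorithm.

```python
# Toy parallel BnB for maximum clique: each search has its own upper
# bound; all searches share the best clique size found so far (the lower
# bound) and stop early once that bound meets or exceeds their own ub.
import threading

graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}  # toy graph
best = {'size': 0}
lock = threading.Lock()

def bnb(clique, cand, ub):
    with lock:
        lb = best['size']
    if lb >= ub or len(clique) + len(cand) <= lb:
        return                                   # bounded search prunes
    if not cand:
        with lock:
            best['size'] = max(best['size'], len(clique))
        return
    v = next(iter(cand))
    bnb(clique | {v}, cand & graph[v], ub)       # branch: include v
    bnb(clique, cand - {v}, ub)                  # branch: exclude v

# one bounded search per start vertex; ub = deg(v) + 1 bounds any clique
# containing v, so each search gets its own upper bound
threads = [threading.Thread(target=bnb, args=({v}, graph[v], len(graph[v]) + 1))
           for v in graph]
for t in threads: t.start()
for t in threads: t.join()
print('max clique size:', best['size'])          # 3
```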
FedIERF: Federated Incremental Extremely Random Forest for Wearable Health Monitoring
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-023-3009-0
Chun-Yu Hu, Li-Sha Hu, Lin Yuan, Dian-Jie Lu, Lei Lyu, Yi-Qiang Chen
Wearable health monitoring is a crucial technical tool that offers early warning for chronic diseases thanks to its superior portability and low power consumption. However, most wearable health data is distributed across different organizations, such as hospitals, research institutes, and companies, and can only be accessed by the data owners in compliance with data privacy regulations. The first challenge addressed in this paper is how different organizations can communicate in a privacy-preserving manner. The second technical challenge is how to handle dynamic expansion of the federation without retraining the model. To address the first challenge, we propose a horizontal federated learning method called Federated Extremely Random Forest (FedERF); its contribution-based splitting score computing mechanism significantly mitigates the impact of privacy protection constraints on model performance. Building on FedERF, we present a federated incremental learning method called Federated Incremental Extremely Random Forest (FedIERF) to address the second challenge. FedIERF introduces a hardness-driven weighting mechanism and an importance-based updating scheme to update the existing federated model incrementally. Experiments show that FedERF achieves comparable performance with non-federated methods and that FedIERF effectively handles dynamic expansion of the federation. This opens up opportunities for cooperation between different organizations in wearable health monitoring.
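As a loose sketch of one federated ingredient, the snippet below scores a candidate random split without pooling raw data: each client reports only class histograms on either side of the split, and the server combines them into a global impurity decrease. The function names and the Gini-based score are assumptions for illustration; the paper's contribution-based splitting score may differ.

```python
# Hedged sketch: federated scoring of one candidate split from client
# class histograms. Only counts leave each client, never raw samples.
from collections import Counter

def gini(counts):
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values()) if n else 0.0

def local_counts(X, y, feature, threshold):
    """Client-side: class histograms left/right of a candidate split."""
    left = Counter(lbl for x, lbl in zip(X, y) if x[feature] <= threshold)
    right = Counter(lbl for x, lbl in zip(X, y) if x[feature] > threshold)
    return left, right

def federated_split_score(client_reports):
    """Server-side: weighted impurity decrease from aggregated counts."""
    L, R = Counter(), Counter()
    for left, right in client_reports:
        L.update(left); R.update(right)
    n = sum(L.values()) + sum(R.values())
    parent = gini(L + R)
    child = (sum(L.values()) * gini(L) + sum(R.values()) * gini(R)) / n
    return parent - child

# two clients evaluate the same candidate split (feature 0, threshold 0.5)
c1 = local_counts([[0.2], [0.9]], ['a', 'b'], 0, 0.5)
c2 = local_counts([[0.1], [0.8]], ['a', 'b'], 0, 0.5)
print(federated_split_score([c1, c2]))   # 0.5: the split separates perfectly
```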
Side-Channel Analysis for the Re-Keying Protocol of Bluetooth Low Energy
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-022-1229-3
Pei Cao, Chi Zhang, Xiang-Jun Lu, Hai-Ning Lu, Da-Wu Gu
In the era of the Internet of Things, Bluetooth Low Energy (BLE) plays an important role as a widely used wireless communication technology. While the security and privacy of BLE have been analyzed and fixed several times, the threat that side-channel attacks pose to BLE devices is still not well understood. In this work, we highlight a side-channel threat to the re-keying protocol of BLE. This protocol uses a fixed long-term key for generating session keys, so leakage of the long-term key could render the encryption of all following (and previous) connections useless. Our attack exploits the side-channel leakage of the re-keying protocol when it is implemented on embedded devices. In particular, we present successful correlation electromagnetic analysis and deep-learning-based profiled analysis that recover the long-term keys of BLE devices. We evaluate our attack on an ARM Cortex-M4 processor (Nordic Semiconductor nRF52840) running NimBLE, a popular open-source BLE stack. Our results demonstrate that the long-term key can be recovered from only a small number of electromagnetic traces. Further, we summarize the features and limitations of our attack and suggest a range of countermeasures to prevent it.
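The correlation analysis mentioned above follows a standard recipe, sketched below on synthetic data: guess a key byte, predict the Hamming weight of the first-round S-box output, and correlate the prediction with the measured traces. A random permutation stands in for the AES S-box to keep the sketch short; attacking a real BLE stack additionally requires aligned EM measurements of the key-derivation routine. numpy is assumed.

```python
# Generic first-round CPA sketch with a Hamming-weight leakage model,
# run on synthetic traces. Illustrates the statistical idea only.
import numpy as np

rng = np.random.RandomState(0)
SBOX = rng.permutation(256)        # stand-in for the AES S-box (kept short)
HW = np.array([bin(v).count('1') for v in range(256)])

# synthetic experiment: the "device" leaks HW(SBOX[pt ^ key_byte]) + noise
key_byte, n_traces = 0x3A, 2000
pts = rng.randint(0, 256, n_traces)
leak = HW[SBOX[pts ^ key_byte]]
traces = leak + rng.normal(0, 1.0, n_traces)   # one noisy sample per trace

def cpa(traces, pts):
    """Correlate measured traces against every key-byte hypothesis."""
    scores = np.empty(256)
    for guess in range(256):
        model = HW[SBOX[pts ^ guess]]
        scores[guess] = abs(np.corrcoef(model, traces)[0, 1])
    return scores.argmax()

print(hex(cpa(traces, pts)))       # recovers 0x3a
```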
FDNet: A Deep Learning Approach with Two Parallel Cross Encoding Pathways for Precipitation Nowcasting
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-021-1103-8
Bi-Ying Yan, Chao Yang, Feng Chen, Kohei Takeda, Changjun Wang
Precipitation nowcasting, which aims to predict future rainfall intensity in a local region over a relatively short period of time, has been a long-standing scientific challenge with great social and economic impact. Radar echo extrapolation approaches to precipitation nowcasting take radar echo images as input and generate future radar echo images by learning from historical ones. To effectively handle the complex and highly non-stationary evolution of radar echoes, we propose to decompose their movement into optical-flow-field motion and morphologic deformation. Following this idea, we introduce the Flow-Deformation Network (FDNet), a neural network that models flow and deformation in two parallel cross pathways. The flow encoder captures the optical flow field motion between consecutive images, while the deformation encoder distinguishes changes of shape from the translational motion of radar echoes. We evaluate the proposed architecture on two real-world radar echo datasets, where our model achieves state-of-the-art prediction results compared with recent approaches. To the best of our knowledge, this is the first network architecture to model the evolution of radar echoes with separate flow and deformation pathways for precipitation nowcasting. We believe the general idea of this work can not only inspire more effective approaches but also be applied to other spatio-temporal prediction tasks.
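A schematic sketch of the two-pathway idea, assuming PyTorch: one encoder sees the inter-frame difference as a motion proxy while the other sees the latest frame's shape, and their features are fused to predict the next echo image. Layer sizes and the difference-as-flow shortcut are illustrative simplifications, not FDNet's actual architecture.

```python
# Two parallel encoding pathways (motion vs. shape) fused by a decoder.
import torch
import torch.nn as nn

class TwoPathwayNet(nn.Module):
    def __init__(self):
        super().__init__()
        enc = lambda: nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.flow_enc = enc()        # fed the inter-frame difference
        self.deform_enc = enc()      # fed the latest frame
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, prev_frame, cur_frame):
        motion = self.flow_enc(cur_frame - prev_frame)
        shape = self.deform_enc(cur_frame)
        return self.decoder(torch.cat([motion, shape], dim=1))

net = TwoPathwayNet()
prev, cur = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(net(prev, cur).shape)          # torch.Size([2, 1, 64, 64])
```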
Accurate Robotic Grasp Detection with Angular Label Smoothing
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-022-1458-5
Min Shi, Hao Lu, Zhao-Xin Li, Deng-Ming Zhu, Zhao-Qi Wang
Grasp detection is a visual recognition task in which a robot uses its sensors to detect graspable objects in its environment. Despite steady progress in robotic grasping, it is still difficult to achieve grasp detection that is both real-time and highly accurate. In this paper, we propose a real-time robotic grasp detection method that accurately predicts potential grasps for parallel-plate robotic grippers from RGB images. Our work employs an end-to-end convolutional neural network consisting of a feature descriptor and a grasp detector, and, for the first time, adds an attention mechanism to the grasp detection task, enabling the network to focus on grasp regions rather than the background. Specifically, we present an angular label smoothing strategy that enhances the fault tolerance of the network. We evaluate our method quantitatively and qualitatively from different aspects on the public Cornell and Jacquard datasets. Extensive experiments demonstrate that our method outperforms state-of-the-art methods; in particular, it ranked first on both the Cornell and Jacquard datasets, with accuracies of 98.9% and 95.6%, respectively, at real-time speed.
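A hedged sketch of what an angular label smoothing target could look like: the grasp angle is discretized into bins, and probability mass is spread to neighboring bins with circular wrap-around (for a parallel-plate gripper, 0 and 180 degrees coincide). The Gaussian kernel and bin count are assumptions, not the paper's exact formulation.

```python
# Soft target over angle bins with circular (mod 180 degrees) distance.
import numpy as np

def angular_smooth_label(angle_deg, n_bins=18, sigma=1.0):
    """Smoothed label: a Gaussian over bins, wrapping at the angle period."""
    bin_width = 180.0 / n_bins
    center = (angle_deg % 180.0) / bin_width
    bins = np.arange(n_bins)
    # circular distance between each bin and the true-angle bin
    d = np.minimum(np.abs(bins - center), n_bins - np.abs(bins - center))
    target = np.exp(-0.5 * (d / sigma) ** 2)
    return target / target.sum()

t = angular_smooth_label(175.0)
print(t.round(3))   # mass peaks near the last bin and wraps to the first
```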
Reinvent Cloud Software Stacks for Resource Disaggregation
Pub Date: 2023-09-30 | DOI: 10.1007/s11390-023-3272-0
Chen-Xi Wang, Yi-Zhou Shan, Peng-Fei Zuo, Hui-Min Cui
Due to the unprecedented development of low-latency interconnect technology, building large-scale disaggregated architectures is drawing more and more attention from both industry and academia. Resource disaggregation is a new way to organize the hardware resources of datacenters, with the potential to overcome limitations of conventional datacenters such as low resource utilization and low reliability. However, the emerging disaggregated architecture brings severe performance and latency problems to existing cloud systems. In this paper, we take memory disaggregation as an example to demonstrate the unique challenges that the disaggregated datacenter poses to existing cloud software stacks, e.g., the programming interface, language runtime, and operating system, and further discuss possible ways to reinvent cloud systems.
Towards Low-light Image Restoration via Color Correction Matrix Learning
Pub Date: 2023-09-08 | DOI: 10.57237/j.cst.2023.03.003
Muhammad Tahir Rasheed, Daming Shi