Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00020
Yu-ping Li, She-Feng Yuan, Yi-lin Zhang
Traditional video automatic capture and recognition systems adopt ZigBee coding and feature extraction methods in dynamic environments. Attenuation distortion arises in the process, which leads to packet loss during video image acquisition and transmission and to poor automatic capture performance. This paper presents an automatic capture system for the interest area of dynamic video in a dynamic environment, based on Huffman coding, MUX101 switch control, and virtual reality. High-speed video data is transmitted to an AD8021 chip through the VXI system bus for feedback resistance control. To dynamically extract and capture video features, a VCA810 is selected to provide sensor signals to the HP sensor through the local bus, so as to adjust the video interest area and magnify the feature region of interest. The Huffman encoding and interest-area feature extraction algorithms are designed as the embedded core parts of the software, and VXI bus technology is adopted in the hardware. The simulation results show that the system achieves high accuracy, a low packet loss rate, and excellent performance for dynamic video acquisition and interest area analysis.
Title: Design of Automatic Capture System for Interest Area of Dynamic Video based on Huffman Coding
Published in: 2021 7th International Symposium on System and Software Reliability (ISSSR)
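The Huffman coding step named above can be illustrated with a minimal, generic sketch (a plain Python code-table builder, not the authors' embedded implementation):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} from an iterable."""
    freq = Counter(symbols)
    # Heap entries: (frequency, unique tie-breaker, partial code table)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, i, t2 = heapq.heappop(heap)
        # Prefix the codes of the two merged subtrees with 0 and 1
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# More frequent symbols get codes no longer than rarer ones:
# len(codes["a"]) <= len(codes["c"])
```

The resulting code is prefix-free, which is what lets a decoder recover symbol boundaries from the raw bitstream without delimiters.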
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00036
Wenge Le, Yong Wang, Fei Yang, Xue Wang, Shouhang Wang
With the increasing complexity of software configuration, configuration faults have become one of the main causes of system failure. Research on software configuration mainly focuses on detection, diagnosis, and repair, so that the system can run as expected. To understand the research progress on software configuration faults systematically, we summarize recent work on this topic. First, we present the research framework and two important research techniques (program analysis and machine learning). We then describe the data sources and research objects of this study, and summarize the existing work from the perspectives of the two techniques mentioned above. Finally, we discuss future directions in system configuration to guide follow-up research and conclude the article briefly.
Title: A Survey on Tackling Software Configuration Faults
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00026
Qinyun Tan, Kun Xiao, Wen He, Pinyuan Lei, Lirong Chen
As Internet of Things (IoT) devices become intelligent, more powerful computing capability is required. Multi-core processors are widely used in IoT devices because they provide more computing power while ensuring low power consumption. Therefore, the operating system on IoT devices must support and optimize scheduling for multi-core processors. Nowadays, microkernel-based operating systems, such as QNX Neutrino RTOS and HUAWEI Harmony OS, are widely used in IoT devices because of their real-time and security features. However, research on multi-core scheduling for microkernel operating systems is relatively limited, especially for load balancing mechanisms; related research still focuses mainly on traditional monolithic operating systems such as Linux. Therefore, this paper proposes a low-latency, high-performance, real-time centralized global dynamic multi-core load balancing method for microkernel operating systems. It has been implemented and tested on our own microkernel operating system, named Mginkgo. The test results show that when the system load is imbalanced, load balancing is performed automatically so that all processors in the system can approach maximum throughput and resource utilization. The latency introduced by load balancing is very low: about 4882 cycles (about 6.164 us) when triggered by new task creation and about 6596 cycles (about 8.328 us) when triggered by a timer. In addition, we tested the improvement in system throughput and CPU utilization. The results show that load balancing improves CPU utilization by 20% in the preset case, while the CPU utilization consumed by load balancing itself is negligible, about 0.0082%.
Title: A Global Dynamic Load Balancing Mechanism with Low Latency for Micokernel Operating System
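The centralized global balancing idea (detect an imbalance, then migrate work from the most- to the least-loaded core) can be sketched as follows; the threshold and the task model are illustrative assumptions, not Mginkgo's actual policy:

```python
def balance(run_queues, threshold=2):
    """run_queues: list of per-core task lists. Repeatedly migrate one
    task from the busiest core to the idlest core while the load gap
    is at least `threshold`. Returns the migrations performed."""
    migrations = []
    while True:
        loads = [len(q) for q in run_queues]
        busiest = loads.index(max(loads))
        idlest = loads.index(min(loads))
        if loads[busiest] - loads[idlest] < threshold:
            break                          # system is balanced enough
        task = run_queues[busiest].pop()   # take one task off the busy core
        run_queues[idlest].append(task)    # hand it to the idle core
        migrations.append((task, busiest, idlest))
    return migrations

queues = [["t1", "t2", "t3", "t4"], [], ["t5"]]
balance(queues)
# Loads end within the threshold of each other, e.g. [2, 2, 1]
```

A real kernel would trigger this from task creation and timer events (as the abstract describes) and would weigh migration cost, cache affinity, and priorities rather than raw queue length.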
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00010
Qiyuan Hu, Yang Gao
Clustering by fast search and find of density peaks (DPC) is a highly innovative clustering method published in Science in June 2014. The DPC algorithm assumes that cluster centers are far away from each other and have higher local density than their neighbors. However, the method has a drawback: it needs to find the cluster centers of the data set, and finding cluster centers in a data set with a complex structure is error-prone. Therefore, this paper proposes an algorithm (DPC-SC) that combines DPC with spectral clustering (SC). The algorithm first uses DPC to pre-cluster the data set and extract its core points, and then applies spectral clustering to the pre-clustered data to perform the final clustering. This approach avoids the shortcomings of DPC in selecting cluster centers for complex data, and also significantly improves the speed of spectral clustering. Experimental evaluations show that DPC-SC is very competitive compared with several classic clustering algorithms.
Title: A Novel Clustering Scheme based on Density Peaks and Spectral Analysis
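The DPC pre-clustering step scores each point by its local density rho and its distance delta to the nearest higher-density point; candidate centers (core points) score high on both. A minimal NumPy sketch of that scoring, with a Gaussian kernel density as one common choice (the spectral-clustering stage is omitted):

```python
import numpy as np

def dpc_scores(X, dc=1.0):
    """Return (rho, delta) per point: rho is a Gaussian kernel density
    with cutoff scale dc; delta is the distance to the nearest point of
    strictly higher density (or the max distance for the densest point).
    Candidate cluster centers have large rho * delta."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0  # exclude self term
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i].max() if len(higher) == 0 else d[i, higher].min()
    return rho, delta

X = np.array([[0, 0], [0.1, 0], [0, 0.1],   # dense cluster A
              [5, 5], [5.1, 5], [5, 5.1],   # dense cluster B
              [2.5, 2.5]])                  # isolated point
rho, delta = dpc_scores(X)
# The two cluster cores have the largest rho * delta products
```

Non-center points in a cluster get a small delta (a denser neighbor is nearby), and outliers get a small rho, so ranking by rho * delta separates the cores that DPC-SC then hands to spectral clustering.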
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00011
Canh Minh Do, Yati Phyo, A. Riesco, K. Ogata
The L+1-layer divide & conquer approach to leads-to model checking (L+1-DCA2L2MC) is a new technique to mitigate the state space explosion in model checking. As the name indicates, L+1-DCA2L2MC is dedicated to leads-to properties. This paper describes a parallel version of L+1-DCA2L2MC and a tool that supports it. In UNITY, a temporal logic designed by Chandy and Misra, the leads-to temporal connective plays an important role, and the many case studies conducted in UNITY demonstrate that many system requirements can be expressed as leads-to properties; hence, it is worth dedicating techniques to such properties. The paper also reports on experiments demonstrating that the tool improves the running performance of model checking.
Title: A Parallel Stratified Model Checking Technique/Tool for Leads-to Properties
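A leads-to property p ↝ q says that whenever p holds, q must hold at that state or some later one. On a finite execution trace this can be checked directly; a toy Python sketch of the semantics (not related to the Maude-based tool itself):

```python
def leads_to(trace, p, q):
    """Finite-trace check of 'p leads-to q': every state satisfying p is
    followed, at that state or later, by a state satisfying q."""
    pending = False
    for state in trace:
        if q(state):
            pending = False   # discharges every outstanding p-obligation
        elif p(state):
            pending = True    # a p-state now awaits a future q-state
    return not pending

# Example: "request leads-to grant" over abstract protocol states
trace = ["idle", "request", "wait", "grant", "idle"]
assert leads_to(trace, lambda s: s == "request", lambda s: s == "grant")
assert not leads_to(trace + ["request"],
                    lambda s: s == "request", lambda s: s == "grant")
```

A model checker must establish this over all (possibly cyclic) executions of the state space rather than a single trace, which is where the state space explosion and the layered divide & conquer strategy come in.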
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00040
C. Li, Sa Meng, Liang Luo, Yuan Gao
With the continuous integration of the Internet and cloud computing and the rapid popularization of mobile terminals, mobile cloud computing has developed quickly. However, applications such as the Internet of Vehicles, AR/VR, and face recognition are delay-sensitive; offloading tasks to a cloud data center incurs long delays and cannot meet their requirements. Sinking computing and storage resources to the edge network and offloading application tasks over wireless links to edge access points can meet this demand, but the computing and storage resources at edge access points are limited. This paper proposes a joint optimization method for application task offloading and resource scheduling under resource-constrained edge conditions. Simulation results show that the method achieves lower delay and lower energy consumption, improving the user experience of the mobile terminal.
Title: Joint Optimization of Resource Constrained Mobile Terminal Task Unloading and Edge Computing Resource Scheduling
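The decision structure (place each task locally, on a capacity-limited edge point, or in the cloud, trading off delay against energy) can be illustrated with a simple greedy assignment. This is only a sketch of the problem shape under an assumed weighted-cost model, not the paper's joint optimization method:

```python
def assign_tasks(tasks, edge_capacity, w_delay=0.5, w_energy=0.5):
    """tasks: list of dicts mapping target ('local'/'edge'/'cloud') to a
    (delay, energy) pair. Greedily pick the cheapest feasible target per
    task, honoring the limited number of edge slots."""
    slots = edge_capacity
    plan = []
    for task in tasks:
        costs = {t: w_delay * task[t][0] + w_energy * task[t][1]
                 for t in ("local", "edge", "cloud")}
        if slots == 0:
            costs.pop("edge")            # edge access point is full
        target = min(costs, key=costs.get)
        if target == "edge":
            slots -= 1
        plan.append(target)
    return plan

# (delay, energy) per target; edge is fast and cheap but capacity-limited
tasks = [{"local": (9, 2), "edge": (2, 1), "cloud": (6, 1)}] * 3
assign_tasks(tasks, edge_capacity=2)
# → ['edge', 'edge', 'cloud']
```

A true joint optimization would decide all placements and the edge resource shares together (e.g. as an integer program), since greedy per-task choices can be globally suboptimal.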
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00029
Yong Gan, R. Yang, Chenfang Zhang, Dongwei Jia
Among the many named entity recognition models for natural language, most text preprocessing attends only to the vector representation of single words and characters and seldom to the semantic relationships in the text. Language text contains many pronouns and polysemous words, so the problem of polysemy arises in the preprocessing stage. To address this problem, this paper adopts a Chinese named entity recognition method based on the BERT-Transformer-BiLSTM-CRF model. First, a BERT model pre-trained on a large-scale corpus dynamically generates a sequence of word vectors according to the input context; then a Transformer encoder models the long-distance contextual semantic features of the text, and a BiLSTM extracts sentence-level context features; finally, the feature vector sequence is fed into a CRF (Conditional Random Field) to obtain the final prediction. The model was tested on the public MSRA Chinese corpus.
Experimental results on the corpus show that the model achieves higher accuracy, recall, and F1 score than most models.

Title: Chinese Named Entity Recognition based on BERT-Transformer-BiLSTM-CRF Model
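The final CRF layer in such a pipeline selects the best tag sequence by Viterbi decoding over per-token emission scores (from the BiLSTM) and a learned tag-transition matrix. A minimal NumPy sketch of that decoding step, with toy scores rather than a trained model:

```python
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (seq_len, n_tags) scores per token; transitions[i, j]:
    score of moving from tag i to tag j. Returns the best tag sequence."""
    n, k = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag
    back = np.zeros((n, k), dtype=int)   # backpointers
    for t in range(1, n):
        # score[i] + transitions[i, j] + emissions[t, j], maximized over i
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow backpointers from the best final tag
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Tags: 0 = O, 1 = B-PER, 2 = I-PER; forbid O -> I-PER with a big penalty
trans = np.array([[0., 0., -100.], [0., 0., 0.], [0., 0., 0.]])
emis = np.array([[0.2, 1.0, 0.9],    # token 1: looks like B-PER
                 [0.1, 0.2, 1.0],    # token 2: looks like I-PER
                 [1.0, 0.1, 0.2]])   # token 3: looks like O
viterbi(emis, trans)  # → [1, 2, 0]
```

This is what the CRF adds over a per-token softmax: the transition scores let it rule out invalid tag sequences (such as an I-PER that does not follow B-PER or I-PER) globally.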
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00017
Pan Liu, Yihao Li, Hao Chen, Xuankui Zheng, Si Huang
Generating tests from a graphical model of the system using traversal algorithms is a widely recognized practice in industry and academia. However, because graph traversal algorithms are not designed for test generation, they often produce inexecutable test paths when the system has complex software behaviors. This problem not only causes software testing to fail but also greatly increases its cost. This paper discusses the problem of inexecutable test paths in model-based testing and presents an improved algorithm that generates a test tree from the graphical model such that the test paths derived from the tree satisfy the transition constraints in the model. We then conduct an experiment on four systems to analyze the problem of inexecutable test paths.
Experimental results show that 1) our algorithm is more efficient than two traditional algorithms for constructing the test tree of the system, and 2) some challenges remain to be overcome in order to obtain more reliable test cases when generating tests from graphs.

Title: An Improved Test Tree Generation Algorithm from a Graphical Model
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00027
Yueyou Qiu, Junchu Fang, Zhilong Zhu
An anti-lock braking system (ABS) prevents the front and rear wheels from locking. However, ABS alone is not enough, for example when distributing braking force in the early stage of braking, or when the car accelerates from a standstill on a wet road. In such situations, ABS must work together with the car's other control systems. In this paper, an electronic braking force distribution (EBD) device is integrated with ABS to form an ABS/EBD braking system based on the CAN bus. It consists of speed sensors, the CAN bus, a control unit, and a braking force distribution device, and it improves the directional stability of braking before the ABS activates. By measuring the slip rate of the four wheels, the degree of locking is judged and the slip rates are kept close to one another, so that all wheels lock at the same time as far as possible. Compared with other braking force distribution systems, this system is characterized by using the CAN bus as the carrier, which effectively improves its response and computing capability.
It is also convenient to combine with other electronic systems in the car to form an intelligent in-vehicle network.

Title: ABS/EBD Automobile Auxiliary Brake System based on CAN Bus
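The wheel slip rate used above is conventionally defined during braking as (vehicle speed − wheel circumferential speed) / vehicle speed, ranging from 0 (free rolling) to 1 (fully locked). A small sketch of that computation and the comparison logic; the numbers are illustrative, not from the paper:

```python
def slip_rate(v_vehicle, wheel_speed):
    """Braking slip rate: 0 = free rolling, 1 = fully locked wheel."""
    return (v_vehicle - wheel_speed) / v_vehicle

def most_locked_wheel(v_vehicle, wheel_speeds):
    """Return (index of the wheel closest to locking, all slip rates),
    the comparison EBD needs to equalize slip across the four wheels."""
    slips = [slip_rate(v_vehicle, w) for w in wheel_speeds]
    return max(range(len(slips)), key=lambda i: slips[i]), slips

# Vehicle at 20 m/s; the wheel at index 2 has slowed the most
idx, slips = most_locked_wheel(20.0, [18.0, 17.5, 12.0, 17.0])
# idx == 2; its slip rate is (20 - 12) / 20 = 0.4
```

In the described system these per-wheel measurements travel over the CAN bus to the control unit, which then redistributes braking force to pull the outlier's slip rate back toward the others.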
Pub Date: 2021-09-01 | DOI: 10.1109/ISSSR53171.2021.00021
J. Zhai, Chunhua Zhu, Tiantian Miao
The impurity content of a batch of grain is an important index for grain storage and grain quality evaluation. To improve measurement reliability and real-time capability, a new impurity separation and counting system is presented, which integrates image enhancement, image segmentation, and morphological image processing algorithms for impurity separation in doped grain. First, histogram equalization and the Gauss-Laplacian operator are used to enhance the gray-level difference between grains and impurities; then dilation and impurity-area parameters are introduced to remove false points, and each impurity edge is extracted with the Roberts operator; finally, all impurities are labeled and counted. Experimental analysis shows the effectiveness of the proposed algorithm.
Title: Detection of Impurity within Grain Samples by Image Analysis
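The Roberts operator used for impurity edge extraction computes a gradient magnitude from two 2x2 diagonal-difference kernels. A minimal NumPy sketch of just that step (generic, not the paper's full pipeline):

```python
import numpy as np

def roberts_edges(img):
    """Roberts cross gradient magnitude for a 2-D grayscale array.
    Output is (H-1, W-1) since the 2x2 kernels consume one row/column."""
    img = img.astype(float)
    gx = img[:-1, :-1] - img[1:, 1:]     # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]     # kernel [[0, 1], [-1, 0]]
    return np.hypot(gx, gy)

# A bright square on a dark background: edges fire on the border only
img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0
edges = roberts_edges(img)
# Interior and far background stay 0; border pixels get nonzero magnitude
```

The diagonal kernels make the Roberts operator cheap and sensitive to fine edges, at the cost of noise sensitivity, which is why the pipeline above first enhances contrast and removes false points before extracting edges.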