Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263668
L. Pang, Yurong Fan, Ye Deng, Xin Wang, Tianbo Wang
This paper presents a method to objectively evaluate mental workload by analyzing changes in eye-movement characteristics across visual search tasks of different difficulty. Eye-movement data were collected with the Eye Tracking Core+ eye-tracking device while subjects performed four different visual search tasks, generated by NASA's Multi-Attribute Task Battery (MATB), on a computer screen. By varying the difficulty of the visual search tasks, eye movements were measured to examine whether they could be used to classify mental workload. Five indices (saccade amplitude, saccade velocity, fixation duration, blink duration, and pupil diameter) showed significant differences between low- and high-workload visual search tasks. Moreover, as task workload increased, saccade amplitude, saccade velocity, and blink duration decreased significantly, while fixation duration and pupil diameter increased gradually.
Title: Mental Workload Classification By Eye Movements In Visual Search Tasks
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
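A minimal sketch of how the reported trends could drive a rule-based workload classifier. This is not the authors' method; the feature names, baseline values, and the three-vote rule are illustrative assumptions — only the direction each index moves under higher workload comes from the abstract.

```python
# Illustrative only: classify low vs. high mental workload from the five
# eye-movement indices, using the direction of change the paper reports.
# Direction each index moves as workload increases (from the abstract):
TREND = {
    "saccade_amplitude": -1,   # decreases under high workload
    "saccade_velocity": -1,    # decreases
    "blink_duration": -1,      # decreases
    "fixation_duration": +1,   # increases
    "pupil_diameter": +1,      # increases
}

def classify_workload(sample, baseline):
    """Majority vote: each index votes 'high' when it has moved from the
    baseline in the direction associated with higher workload."""
    votes = 0
    for name, direction in TREND.items():
        delta = sample[name] - baseline[name]
        if delta * direction > 0:
            votes += 1
    return "high" if votes >= 3 else "low"

# Hypothetical baseline and high-load measurements (units omitted).
baseline = {"saccade_amplitude": 5.0, "saccade_velocity": 300.0,
            "blink_duration": 0.25, "fixation_duration": 0.20,
            "pupil_diameter": 3.5}
high_load = {"saccade_amplitude": 3.8, "saccade_velocity": 240.0,
             "blink_duration": 0.18, "fixation_duration": 0.31,
             "pupil_diameter": 4.1}
print(classify_workload(high_load, baseline))  # high
```

A real system would replace the hand-set baseline and vote threshold with a trained classifier over the same five indices.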
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263634
Dimitris Kastaniotis, Dimitrios Tsourounis, S. Fotopoulos
Automated Lip Reading (LR) is the task of predicting a spoken word using only the visual information in a sequence of frames. This sequence modeling task has typically been approached with Convolutional Neural Networks (CNNs) combined with Long Short-Term Memory networks (LSTMs). In this work, a novel scheme for modeling LR sequences is presented, in which a Temporal Convolutional Network (TCN) is driven by the feature vectors produced by a CNN. The contribution of this work is two-fold. First, a novel approach that uses the TCN topology as an alternative way to handle the sequential data of the LR task is presented. Second, this approach is evaluated on a new, challenging real-world dataset designed specifically for LR of Greek words related to biomedical and clinical conditions. The Greek words in the dataset were selected to be words that a patient would want to communicate, via the frontal camera of a mobile phone, while receiving medical treatment. Experimental results indicate that the proposed CNN-TCN architecture can surpass recurrent approaches based on CNN-LSTM, while also being easier to deploy on mobile hardware architectures and more stable during training.
Title: Lip Reading modeling with Temporal Convolutional Networks for medical support applications
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
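The building block that lets a TCN replace recurrence is the causal dilated 1-D convolution: stacking layers with growing dilation covers a long frame sequence while each output still depends only on past frames. A self-contained sketch of that primitive (not the authors' implementation):

```python
# Causal, dilated 1-D convolution: the core of a TCN layer.
def causal_dilated_conv1d(x, weights, dilation=1):
    """y[t] = sum_k weights[k] * x[t - k*dilation], zero-padded on the left,
    so y[t] never depends on future frames (causality)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:          # implicit zero padding before the sequence
                acc += w * x[idx]
        out.append(acc)
    return out

# Toy per-frame features; a real LR pipeline would feed CNN feature vectors.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = causal_dilated_conv1d(x, weights=[0.5, 0.5], dilation=2)
# For t >= 2, y[t] averages frame t with frame t-2; output length is preserved.
print(y)
```

Doubling the dilation at each layer (1, 2, 4, ...) grows the receptive field exponentially with depth, which is why a shallow TCN can model whole-word frame sequences without an LSTM.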
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263673
Xinyu Guo, S. Ou, Meng Gao, Ying Gao
To address the problem of residual background noise in supervised single-channel speech separation in non-stationary noise environments, a piecewise time-frequency masking target based on the Wiener filtering principle is proposed and used as the training target of the neural network; it can both track SNR changes and reduce damage to speech quality. Four features, namely relative spectral transform and perceptual linear prediction (RASTA-PLP), amplitude modulation spectrograms (AMS), Mel-frequency cepstral coefficients (MFCC), and Gammatone frequency cepstral coefficients (GFCC), are combined so that the extracted multi-level voice information serves as the training features of the network, and a deep neural network (DNN) based speech separation system is then constructed to separate the noisy speech. Experimental results show that, compared with traditional time-frequency masking methods, the segmented time-frequency masking algorithm improves speech quality and clarity, suppresses noise, and achieves better speech separation performance at low SNR.
Title: Segmented Time-Frequency Masking Algorithm for Speech Separation Based on Deep Neural Networks
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
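A hedged sketch of the idea behind a segmented (piecewise) masking target: in very noisy time-frequency units fall back to a hard binary decision, otherwise use the soft Wiener-style ratio mask. The SNR threshold and the exact piecewise rule below are illustrative assumptions, not the paper's definition.

```python
import math

def wiener_ratio_mask(s_power, n_power):
    """Classic Wiener-style ideal ratio mask: S^2 / (S^2 + N^2)."""
    return s_power / (s_power + n_power)

def segmented_mask(s_power, n_power, snr_threshold_db=-5.0):
    """Piecewise target: binary below the SNR threshold, soft ratio above it."""
    snr_db = 10.0 * math.log10(s_power / n_power)
    if snr_db < snr_threshold_db:
        return 0.0                     # suppress hopelessly noisy T-F units
    return wiener_ratio_mask(s_power, n_power)

# One time-frequency unit with equal speech and noise power -> mask 0.5.
print(segmented_mask(1.0, 1.0))  # 0.5
# A unit at -20 dB SNR falls below the threshold -> hard zero.
print(segmented_mask(0.01, 1.0))  # 0.0
```

In the paper's system a DNN is trained to predict this target from the RASTA-PLP + AMS + MFCC + GFCC features; the mask is then applied to the noisy spectrogram before resynthesis.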
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263580
Zhengmin Li, Haoran Hong
The discrimination of the coefficient matrix plays an important role in discriminative analysis dictionary learning (ADL) models. However, the local geometric structure of the profiles (i.e., the row vectors of the coefficient matrix) is seldom exploited to design discriminative terms in discriminative ADL algorithms. In this paper, we propose a discriminative ADL algorithm with an adaptive graph constraint (DADL-AGC), which can adaptively preserve the local geometric structure information of the profiles. First, we construct an adaptive graph constraint model by maximizing the information entropy of the similarity matrix of the profiles. In this way, the coefficient matrix preserves and inherits the local geometric information of the analysis atoms and training samples, with the analysis dictionary initialized by the K-means method. Moreover, a robust linear classifier is learned simultaneously to improve the classification performance of the DADL-AGC algorithm.
Title: Discriminative Analysis Dictionary Learning With Adaptive Graph Constraint for Image Classification
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
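To make the entropy criterion concrete, here is an illustrative sketch (not the paper's formulation): build a row-normalized similarity distribution over profiles with a Gaussian kernel, then measure its information entropy. The kernel choice and `sigma` are assumptions for the demo; the entropy is maximal when similarities are uniform, which is the uncertainty the adaptive graph constraint trades off against locality.

```python
import math

def similarity_row(profile, profiles, sigma=1.0):
    """Gaussian similarities of one profile to all profiles, normalized to a
    probability distribution (one row of the similarity matrix)."""
    sims = [math.exp(-sum((a - b) ** 2 for a, b in zip(profile, q)) / sigma)
            for q in profiles]
    total = sum(sims)
    return [s / total for s in sims]

def row_entropy(p):
    """Shannon entropy of one similarity row (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Three toy profiles: the first two are close, the third is far away.
profiles = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
row = similarity_row(profiles[0], profiles)
print(row_entropy(row))  # lower than the uniform-distribution maximum ln(3)
```

The concentrated row (mass on the two nearby profiles) has lower entropy than a uniform row, so maximizing entropy pushes the graph away from degenerate, overly peaked similarity assignments.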
Few-shot representation learning is one of the most challenging tasks in machine learning research. Related applications, including gun image retrieval, usually achieve limited performance due to the lack of training samples. In this paper, we propose a flexible and conceptually straightforward framework for few-shot gun image retrieval. We use ResNet as the backbone network and design a hierarchical loss system based on auxiliary attributes extracted from different layers. Enhanced by this series of auxiliary attributes, discriminative features are learned efficiently. Experiments on a gun image dataset demonstrate the effectiveness of the proposed approach. It is also worth noting that our framework can easily be extended to other few-shot learning tasks.
Title: Auxiliary Attribute Aided Few-shot Representation Learning for Gun Image Retrieval
Authors: Zhifei Zhou, Shaoyu Zhang, Jinlong Wu, Yiyi Li, Xiaolin Wang, Silong Peng
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263507
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
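A hierarchical loss of the kind described above can be read as a weighted sum: the main retrieval loss from the final embedding plus auxiliary attribute losses taken from intermediate layers. The sketch below is a hypothetical shape of that combination; the loss values and weights are placeholders, and the real system computes them from ResNet feature maps.

```python
def hierarchical_loss(main_loss, aux_losses, aux_weights):
    """Weighted sum: main task loss plus per-layer auxiliary attribute losses."""
    assert len(aux_losses) == len(aux_weights)
    return main_loss + sum(w * l for w, l in zip(aux_weights, aux_losses))

total = hierarchical_loss(
    main_loss=1.20,
    aux_losses=[0.40, 0.30, 0.10],   # e.g. attribute losses from 3 layers
    aux_weights=[0.50, 0.30, 0.20],  # hypothetical per-layer weights
)
print(total)  # 1.20 + 0.20 + 0.09 + 0.02 = 1.51
```

Because every layer receives its own supervised signal, gradients reach early layers directly, which is what makes the features discriminative despite the small sample count.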
For an HVDC converter station, the commonly used power loss determination methods struggle to reflect changes in converter power loss accurately and in real time, because the operating parameters of the station change dynamically during normal operation. This paper therefore proposes a method for predicting the power loss of HVDC converters based on support vector regression. First, the power loss data of a converter are analyzed. Appropriate features are then selected from the power loss data to build a dataset of power loss samples. By applying the support vector regression algorithm to this dataset, the power loss of a converter can be predicted for various operating parameters of the HVDC converter station. Finally, cross-validation was used to verify the stability of the prediction method. The validation results show that the proposed method can accurately and stably predict the power loss of a converter in an HVDC converter station in real time.
Title: A Method for Predicting Power Loss of HVDC Converters Based on Support Vector Regression
Authors: Bingyuan Tan, Jia Liu, Wenmin Luo, Huibin Zhou, Jin-quan Zhao
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263642
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
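The validation step above rests on k-fold cross-validation. A minimal, library-free sketch of the splitting logic (indices stand in for (operating-parameters, loss) records; the SVR fit itself is omitted):

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs; every sample is tested exactly once."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

# 10 power-loss samples, 5 folds: train on 8, test on 2, five times.
folds = list(k_fold_indices(10, 5))
for train, test in folds:
    print(len(train), len(test))  # 8 2 on every fold
```

Averaging the regression error over the folds is what lets the paper claim stability rather than a single lucky train/test split.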
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263560
Xiaochun Wang, Sheng Zhou, Jianjun Ji, Jun Yang
Objective: To develop a portable very-high-frequency ultrasound biomicroscope. Methods: The system primarily consists of an ultrasonic transducer, ultrasonic transmission and receiving modules, imaging software on a host computer, and peripheral equipment. A PVDF transducer with a frequency between 20 and 50 MHz was used. In the transmission and receiving modules, the radio-frequency echo signals were digitized by a high-speed A/D converter. The digital signals were then transmitted, summed, filtered, demodulated, log-amplified, double-sampled, and transferred to the host computer over a USB interface for real-time display. Results: The system was tested with a resolution test and an imaging experiment on a normal human eye, and improved experimental results and real-time images were obtained. Conclusions: The system enables real-time imaging with a portable VHF ultrasound biomicroscope. The scheme is concise and the overall design simple, and the overall performance and portability of the system are improved.
Title: Design of a Portable Very-High-Frequency Ultrasound Biomicroscope
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
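Two of the receive-chain steps named above, demodulation (envelope detection) and log amplification, can be sketched in software on a synthetic RF echo. This is an illustrative stand-in for the hardware signal path: a rectify-and-smooth envelope replaces the demodulator, and the 40 dB dynamic range is an assumed display setting.

```python
import math

def envelope(rf, window=8):
    """Full-wave rectify, then smooth with a moving average (toy demodulator)."""
    rect = [abs(v) for v in rf]
    half = window // 2
    return [sum(rect[max(0, i - half):i + half + 1]) /
            len(rect[max(0, i - half):i + half + 1]) for i in range(len(rect))]

def log_compress(env, dynamic_range_db=40.0):
    """Map the envelope to [0, 1] brightness over a fixed dynamic range in dB."""
    peak = max(env) or 1.0
    out = []
    for v in env:
        db = 20.0 * math.log10(max(v, 1e-12) / peak)
        out.append(max(0.0, 1.0 + db / dynamic_range_db))
    return out

# Synthetic echo burst: a sinusoid under a Gaussian window (8 samples/cycle).
rf = [math.sin(2 * math.pi * i / 8) * math.exp(-((i - 64) / 20) ** 2)
      for i in range(128)]
img_line = log_compress(envelope(rf))
print(max(img_line))  # brightest pixel of the scan line
```

Log compression is what makes the huge dynamic range of ultrasound echoes displayable: weak tissue echoes stay visible next to strong specular reflections.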
A cloth defect detection algorithm that makes effective use of image processing and a convolutional neural network is presented in this paper. Samples collected from the surface of the cloth are preprocessed by wavelet transform and the Otsu method, then identified and classified by AlexNet. Extraneous information on the sample surface is removed by filtering, and the defect features are strengthened by thresholding. The images are adjusted to meet the input requirements of the neural network. The training data are learned by the feature detection layers so that the test data can be detected. The method detects flaws in cloth quickly and correctly, raising product quality and improving production efficiency. The network was trained on 400 collected samples and tested on a further 40. The success rate of the trained neural network is 99.2%, and the actual test accuracy was 93.33%, higher than the 81.8% of the Gabor method, the 87.2% of the MRF method, and the 90.4% of the SE algorithm. It is thus a suitable approach for flaw detection with good application prospects.
Title: Defect Detection System Of Cloth Based On Convolutional Neural Network
Authors: Qiyan Zhang, Mingjing Li, Denghao Yan, Longbiao Yang, Miao Yu
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263521
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
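The Otsu step used in the preprocessing above is a standard algorithm and can be shown self-contained: pick the threshold of an 8-bit histogram that maximizes between-class variance. The toy histogram (dark cloth background near gray level 40, bright defect pixels near 200) is invented for the demo.

```python
def otsu_threshold(hist):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(len(hist)):
        w0 += hist[t]                  # pixels at or below t (class 0)
        if w0 == 0:
            continue
        w1 = total - w0                # pixels above t (class 1)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: background mode near 40, defect mode near 200.
hist = [0] * 256
for v, c in [(38, 400), (40, 900), (42, 380), (198, 60), (200, 120), (202, 50)]:
    hist[v] = c
t = otsu_threshold(hist)
print(t)  # lands between the two modes
```

Thresholding at `t` separates defect pixels from the cloth background, giving AlexNet cleaner inputs than the raw images.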
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263528
A. Levin, A. Ragazzi, S. Szot, T. Ning
This paper presents a machine learning approach to heart murmur detection and classification. We extracted heart sound and murmur features of diagnostic importance and developed 16 additional features that are not perceivable by the human ear but are valuable for improving murmur classification accuracy. We examined and compared the classification performance of supervised learning with the k-nearest neighbor (KNN) and support vector machine (SVM) algorithms. We assembled a test repertoire of more than 450 heart sound and murmur episodes and evaluated murmur classification using cross-validation with 80-20 and 90-10 splits. As our evaluation clearly demonstrates, the specific set of features chosen in this study yields classification accuracy consistently exceeding 90% for both classifiers.
Title: A Machine Learning Approach to Heart Murmur Detection and Classification
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
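The KNN side of the comparison above is simple enough to sketch whole. The toy 2-D feature vectors and labels below are synthetic stand-ins for the paper's extracted murmur features; the algorithm (majority vote over the k nearest training points) is the standard one.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Majority vote over the k training points nearest to the query."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Synthetic features: 'normal' episodes cluster low, 'murmur' episodes high.
train = [((0.10, 0.20), "normal"), ((0.20, 0.10), "normal"),
         ((0.15, 0.25), "normal"), ((0.90, 0.80), "murmur"),
         ((0.80, 0.90), "murmur"), ((0.85, 0.75), "murmur")]
print(knn_predict(train, (0.88, 0.82)))  # murmur
```

An 80-20 split evaluation would simply hold out a fifth of the labeled episodes, call `knn_predict` on each, and count agreement with the held-out labels.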
Pub Date: 2020-10-17 | DOI: 10.1109/CISP-BMEI51763.2020.9263696
Qixuan Wang, Jingjuan Guo
At present, cost management in assembly (prefabricated) construction suffers from problems such as fragmented information and limited functionality. To solve these problems, the basic structure of cost information is established, based on BIM and context, by analyzing the functional requirements of cost information. This paper then uses an ontology, which has clear advantages in semantic expression, formalization, and inference, to construct a context cost information model of assembly building based on BIM. The aim is to restructure the BIM-based cost management mode of assembly building and to make cost management integrated, dynamic, and multi-functional.
Title: Research on Context Cost Information Model of Assembly Building Based on BIM
Published in: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)