
Proceedings of the 4th International Conference on Biomedical Signal and Image Processing: Latest Publications

Minimal Path based Particle Tracking in Low SNR Fluorescence Microscopy Images
Sheng Lu, Tong Chen, Fan Yang, Chenglei Peng, S. Du, Yang Li
Single Particle Tracking (SPT) in fluorescence microscopy images is of great importance in computational biology. Automatic or lightly interactive tracking algorithms are essential for the motion analysis of micro-particles. Even with prior knowledge, conventional methods may fail when the signal-to-noise ratio (SNR) is too low, because they depend heavily on image quality and on the detection results. To track particles reliably in low-SNR images, we propose a novel method based on minimal path theory that extracts complete trajectories between two points. The method was evaluated on several simulated image sequences and demonstrated accuracy and robustness in the particle tracking task.
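The abstract does not give the authors' exact formulation, but the core minimal-path idea can be sketched as a lowest-cost trajectory between two user-supplied points, found with Dijkstra's algorithm over a cost image (assuming, hypothetically, that cost is something like inverted particle intensity):

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    """Lowest-cost 4-connected path between two pixels of a cost image.

    cost: 2-D array of non-negative travel costs (e.g. inverted intensity);
    start, end: (row, col) tuples. Returns the path as a list of pixels.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk back from end to start
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

On a uniform cost image with one expensive pixel, the extracted path routes around it, which is the behaviour a trajectory extractor between two particle positions needs.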
DOI: 10.1145/3354031.3354035 (published 2019-08-13)
Citations: 5
Application of Granger Causality in Decoding Covert Selective Attention with Human EEG
Weikun Niu, Yuying Jiang, Yujin Zhang, Xin Zhang, Shan Yu
Electroencephalography (EEG)-based BCIs have experienced significant growth in recent years, especially passive Brain Computer Interfaces (BCIs), which are widely applied to the detection of cognitive and emotional states. It is still unclear, however, whether more subtle states, e.g., covert selective attention, can be decoded from EEG signals. Here we used a behavioral paradigm to induce shifts of selective attention between the visual and auditory domains. From the EEG signals, we extracted features based on Granger Causality (GC) and successfully decoded the attentional shift with a support vector machine (SVM) classifier. The decoding accuracy was significantly above chance level for all 8 subjects tested. The GC-based features were further analyzed with tree-based feature importance analysis and recursive feature elimination (RFE) to search for the optimal features for classification. Our work demonstrates that specific patterns of brain activity reflected by GC can be used to decode subtle state changes of the brain related to cross-modal selective attention, which opens new possibilities for using passive BCIs in sophisticated perceptual and cognitive tasks.
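The paper's exact GC estimator is not reproduced in the abstract; a minimal sketch of pairwise Granger causality compares a restricted autoregressive fit of one channel with a full fit that also uses the other channel's lags (synthetic data, not EEG):

```python
import numpy as np

def granger_causality(x, y, lag=2):
    """Granger causality x -> y: log(restricted residual variance /
    full residual variance). Larger values mean the past of x helps
    predict y beyond y's own past."""
    n = len(y)
    target = y[lag:]
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])

    def resid_var(design):
        design = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return np.mean(resid ** 2)

    return np.log(resid_var(own) / resid_var(np.hstack([own, cross])))
```

For classification, one GC value per channel pair would form the feature vector handed to an SVM, as the abstract describes.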
DOI: 10.1145/3354031.3354032 (published 2019-08-13)
Citations: 1
Different Goal-driven CNNs Affect Performance of Visual Encoding Models based on Deep Learning
Ziya Yu, Chi Zhang, Linyuan Wang, Li Tong, Bin Yan
A convolutional neural network with outstanding performance in computer vision can be used to construct an encoding model that simulates human visual information processing. However, the training goal of the network may affect the performance of the encoding model. Most neural networks previously used to establish encoding models were trained on a single task, image classification, whereas human visual perception performs multiple tasks simultaneously. Thus, existing encoding models do not capture the diversity and complexity of the human visual mechanism well. In this paper, we first established feature extraction models based on a Fully Convolutional Network (FCN) and the Visual Geometry Group network (VGG), which have similar structures but different training goals, and employed Regularized Orthogonal Matching Pursuit (ROMP) to build the response model, which predicts stimulus-evoked responses measured by functional magnetic resonance imaging (fMRI). The results revealed that convolutional neural networks trained on different visual tasks, despite almost identical network structures, differed significantly in visual encoding performance. The VGG-based encoding model achieved higher performance in most voxels of the ROIs. We conclude that the classification task in computer vision fits the human visual mechanism better than the segmentation task.
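ROMP's particular regularization step is not described in the abstract; as an illustrative stand-in, plain orthogonal matching pursuit shows how a sparse linear response model maps CNN features to a voxel response (random data, hypothetical dimensions):

```python
import numpy as np

def omp(X, y, n_nonzero):
    """Plain orthogonal matching pursuit: greedily pick the feature most
    correlated with the residual, then refit on the selected support."""
    residual = y.astype(float)
    support = []
    coef = np.zeros(X.shape[1])
    for _ in range(n_nonzero):
        corr = np.abs(X.T @ residual)
        corr[support] = 0.0  # never re-pick a selected feature
        support.append(int(np.argmax(corr)))
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta
    coef[support] = beta
    return coef
```

In the encoding-model setting, X would hold CNN feature activations per stimulus and y one voxel's fMRI responses; the sparse coefficient vector is the fitted response model.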
DOI: 10.1145/3354031.3354045 (published 2019-08-13)
Citations: 0
Relationships of Cohen's Kappa, Sensitivity, and Specificity for Unbiased Annotations
Juan Wang, Bin Xia
For binary classification tasks in supervised learning, data labels must be available for classifier development. Cohen's kappa is usually employed as a quality measure for data annotation, which is inconsistent with its true role of assessing inter-annotator consistency. Moreover, the relationship functions of Cohen's kappa, sensitivity, and specificity derived in the literature are complicated, and thus cannot be used to interpret classification performance from kappa values. In this study, based on an annotation generation model, we derive simple relationships among kappa, sensitivity, and specificity when there is no bias in the annotations. A relationship between kappa and Youden's J statistic, a performance metric for binary classification, is further obtained. The derived relationships are evaluated on a synthetic dataset using linear regression analysis. The results demonstrate the accuracy of the derived relationships and suggest the potential of estimating classification performance from kappa values when bias is absent in the annotations.
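The paper's derived relationship functions are not given in the abstract, but the four quantities it relates are standard and easy to compute from confusion counts. One well-known special case: when the true classes are balanced, chance agreement is exactly 0.5 and kappa reduces to Youden's J = sensitivity + specificity - 1 (whether this coincides with the paper's unbiased-annotation result is not claimed here):

```python
def annotation_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, Cohen's kappa, and Youden's J from one
    annotator's confusion counts against the true labels."""
    n = tp + fn + fp + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    p_obs = (tp + tn) / n  # observed agreement
    # chance agreement from the truth and annotator marginals
    p_exp = ((tp + fn) / n) * ((tp + fp) / n) + ((tn + fp) / n) * ((tn + fn) / n)
    kappa = (p_obs - p_exp) / (1 - p_exp)
    youden = sens + spec - 1
    return sens, spec, kappa, youden
```

For example, tp=40, fn=10, fp=5, tn=45 (balanced truth, prevalence 0.5) gives sensitivity 0.8, specificity 0.9, and kappa = J = 0.7.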
DOI: 10.1145/3354031.3354040 (published 2019-08-13)
Citations: 7
Fall Guard: Fall Monitoring Application for the Elderly based on Android Platform
Jenny Ni, Wenliang Zhu, Jinfu Huang, Longfei Niu, Lirong Wang
To address the problem that elderly people often cannot get timely assistance when they fall, this paper designs and implements an Android-based fall monitoring application for the elderly, Fall Guard. Working with existing fall detection devices and a cloud server, Fall Guard uses a Model-View-Controller (MVC) structure and the OkHttp network request framework. OkHttp sends network requests to the cloud server, and the fall detection device is bound to the mobile client to obtain user and device information. After the data are successfully retrieved, the information is displayed on the user interface through JSON parsing, including device positioning, an electronic fence, fall alarm information, and motion tracks. One login account can be bound to multiple devices, and the login account is set as the elderly user's emergency contact number by default. When the user falls, Fall Guard provides many types of fall alarm prompts, including an alarm information list, notification bar reminders, SMS notifications, and device status bar information. Test results show that Fall Guard has good monitoring accuracy and terminal compatibility. On the one hand, it adapts to different models and brands of Android mobile terminals to achieve accurate positioning and alarm functions. On the other hand, its one-to-many management mode suits deployment in different scenarios such as homes, communities, and nursing homes.
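The client's receive-parse-notify flow can be sketched in a platform-neutral way; the JSON field names below are hypothetical illustrations, not the app's actual API:

```python
import json

# Hypothetical alarm payload; field names are illustrative only.
payload = """{"device_id": "FG-001", "event": "fall_alarm",
              "location": {"lat": 31.3, "lon": 120.6},
              "contacts": ["13800000000"]}"""

def handle_alarm(raw):
    """Parse a fall-alarm message and fan it out to every bound contact,
    mirroring the list / notification-bar / SMS prompts described above."""
    msg = json.loads(raw)
    if msg.get("event") != "fall_alarm":
        return []
    text = "Fall detected on {} at ({}, {})".format(
        msg["device_id"], msg["location"]["lat"], msg["location"]["lon"])
    return [(contact, text) for contact in msg["contacts"]]
```

In the real app this parsing would run on the response of an OkHttp request, and each (contact, text) pair would feed one of the prompt channels.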
DOI: 10.1145/3354031.3354055 (published 2019-08-13)
Citations: 1
RGB-D-based Hand Gesture Recognition for Letters Expression
Jin Li, J. Yan, Guangxu Li, Liyuan Wang, Fan Yang
Hand Gesture Recognition (HGR) translates hand gestures into written language, a natural way of communication between deaf-mute and non-disabled people. However, hand gesture recognition is difficult because of the variability of relative finger positions, hand sizes, and environmental illumination. In this paper, an HGR algorithm based on a Fully Connected Neural Network (FCNN) and an RGB-D sensor is proposed. We first build datasets of the 3-D coordinates of the finger joints and hand centers. We then normalize the samples to eliminate natural differences between hands. Finally, the data are classified using a 3-layer FCNN. In total, 13,000 samples of 26 hand gestures were collected; we randomly selected 80% for training and 20% for testing. In the experiments, the average recognition accuracy is 94.73%.
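The normalization step can be sketched simply: translate the joints to the hand center and scale by the hand's extent, so hands of different sizes and positions become comparable (the authors' exact scaling rule is not given, so the max-radius choice below is an assumption):

```python
import numpy as np

def normalize_hand(joints, center):
    """Translate finger joints to the palm center and scale by the hand
    'radius' (largest joint distance from the center).

    joints: (N, 3) array of joint coordinates; center: (3,) palm point.
    """
    shifted = joints - center
    scale = np.linalg.norm(shifted, axis=1).max()
    return shifted / scale
```

The result is invariant to where the hand is in the sensor frame and to its overall size, which is exactly what a gesture classifier wants as input.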
DOI: 10.1145/3354031.3354044 (published 2019-08-13)
Citations: 2
Design of Cleaning Module based on CAN
Feifei Sun, Wenliang Zhu, Gang Ma, Kongpeng Xing, Lirong Wang
To promote the automation of medical devices, we designed a test and analysis instrument that we call the automatic liquid chip system. On top of this fully automatic system we designed several modules, including a sample needle module, reagent bin module, reaction plate module, cleaning module, waste bin module, and system liquid module; the design covers both the mechanical structure and the electronic control system. The control chip of each module is an STM32F103RET6, and the main control parts include a stepping motor, an optocoupler sensor, and an AD converter. All communication is carried out over the CAN (Controller Area Network) protocol: serial instructions are sent from the host computer and converted to CAN instructions at a transfer station; each module receives the CAN instructions, buffers them in a FIFO, and then performs the corresponding operations. Considering the stability of each module, the generality of debugging, and the stability of the whole system, this work designs the parts common to all modules, including the stepping motor driver, software-generated PWM, and the CAN communication protocol configuration. We then take the cleaning module as an example and design its circuit and workflow.
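The serial-to-CAN conversion with FIFO buffering can be sketched in a language-neutral way (the firmware itself runs on the STM32); the frame layout below, one base CAN ID per module and a (opcode, 16-bit argument) payload, is a hypothetical framing, since the instrument's real instruction set is not specified:

```python
import struct
from collections import deque

# FIFO of pending CAN frames, mirroring each module's receive buffer.
can_fifo = deque()

def serial_to_can(module_id, opcode, arg):
    """Pack one serial instruction into a CAN data frame and queue it,
    mirroring the transfer-station flow described above."""
    can_id = 0x100 | module_id              # hypothetical: base ID + module
    data = struct.pack(">BH", opcode, arg)  # 3 payload bytes, big-endian
    can_fifo.append((can_id, data))
    return can_id, data

# e.g. a hypothetical "cleaning module (0x03): move 500 steps" instruction
serial_to_can(0x03, 0x01, 500)
```

A module task would then pop frames from the FIFO in order and execute each operation, which keeps command reception decoupled from motor timing.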
DOI: 10.1145/3354031.3354053 (published 2019-08-13)
Citations: 0
Kinematic Characteristics of Backhand Block in Table Tennis
Yi Ren, Zhipei Huang, Yiming Guo, Jiankang Wu, Yingfei Sun
Kinematic characteristics play a crucial role in assessing the quality of movements and improving training plans. In this paper we design five characteristic parameters for table tennis technical movements: normalized path, joint angle, phase duration, root mean square, and velocity entropy. Based on motion data obtained from an immersive motion capture system, the validity of these parameters was verified by analyzing the backhand block movement. Twenty subjects at two different skill levels performed backhand blocks against the ball. Statistical analysis revealed significant differences between the professional and novice groups in normalized path, velocity entropy, root mean square, and joint angle; phase duration and joint angle also showed practical biomechanical significance. These characteristic parameters can serve as indicators for movement quality assessment and can be extended to other table tennis technical movements as well as further biomechanics research.
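Three of the five parameters can be sketched directly from a marker trajectory; the definitions below (path length over end-to-end distance, RMS speed, Shannon entropy of a speed histogram) are plausible readings of the abstract's terms, not the authors' exact formulas:

```python
import numpy as np

def movement_features(traj, dt=0.01, bins=10):
    """Normalized path, RMS speed, and velocity entropy of one
    (T, 3) trajectory sampled every dt seconds."""
    vel = np.diff(traj, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    span = np.linalg.norm(traj[-1] - traj[0])
    norm_path = speed.sum() * dt / span          # path length / displacement
    rms = np.sqrt(np.mean(speed ** 2))
    counts, _ = np.histogram(speed, bins=bins)   # speed distribution
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return norm_path, rms, entropy
```

A perfectly straight, constant-speed stroke gives normalized path 1 and velocity entropy 0; jerkier, more variable movement raises both, which is the kind of contrast the professional/novice comparison exploits.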
DOI: 10.1145/3354031.3354034 (published 2019-08-13)
Citations: 6
Computer Aided Annotation of Early Esophageal Cancer in Gastroscopic Images based on Deeplabv3+ Network
Dingyun Liu, Hongxiu Jiang, N. Rao, Cheng-Si Luo, Wenju Du, Zheng-wen Li, Tao Gan
The diagnosis of Early Esophageal Cancer (EEC) from gastroscopic images is a challenging clinical task that relies heavily on subjective manual detection and annotation. Computer aided diagnosis (CAD) methods that support clinicians are therefore highly attractive. In this paper, we propose a CAD method that automatically detects and annotates EEC lesions in gastroscopic images. The method first uses the advanced deep learning network Deeplabv3+ to obtain a preliminary prediction of EEC regions; a post-processing step designed around clinical requirements is then applied to produce the final annotation results. In total, 3190 gastroscopic images of 732 patients were used in this work. The EEC detection rate of our method was 97.07% and the mean Dice Similarity Coefficient (DSC) was 74.01%, higher than those of other state-of-the-art DL-based methods, and our method produces fewer false positives. The proposed method therefore offers good potential to aid the clinical diagnosis of EEC.
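The clinically motivated post-processing rules are not spelled out in the abstract; one common step of this kind, removing tiny predicted regions from the segmentation mask, can be sketched as follows (an illustrative stand-in, not the paper's procedure):

```python
import numpy as np
from collections import deque

def remove_small_regions(mask, min_size):
    """Keep only 4-connected foreground components of at least min_size
    pixels in a binary prediction mask."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sr in range(h):
        for sc in range(w):
            if mask[sr, sc] and not seen[sr, sc]:
                # flood-fill one connected component
                comp, queue = [(sr, sc)], deque([(sr, sc)])
                seen[sr, sc] = True
                while queue:
                    r, c = queue.popleft()
                    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            comp.append((nr, nc))
                            queue.append((nr, nc))
                if len(comp) >= min_size:
                    for r, c in comp:
                        out[r, c] = True
    return out
```

Dropping speckle-sized predictions is one straightforward way such a step reduces false-positive output.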
DOI: 10.1145/3354031.3354046 (published 2019-08-13)
Citations: 6
Detection of Abnormal Regions on Temporal Subtraction Images based on CNN
Mitsuaki Nagao, Huimin Lu, Hyoungseop Kim, T. Aoki, S. Kido
Recently, visual screening based on CT images has become a useful tool in medical diagnosis. However, due to increasing data volumes and the computational complexity of the algorithms, image processing techniques for high-quality visual screening are still required. To this end, several computer-aided diagnosis (CAD) algorithms have been proposed. Meanwhile, cancer is a leading cause of death worldwide, and detecting cancer regions in CT images is the most important task for early detection and early treatment. We design and develop a framework combining convolutional neural networks (CNN) with a non-rigid image registration algorithm based on temporal subtraction techniques. However, a conventional CNN has the issue that, as the layers go deeper, global information close to the input image is lost. Therefore, we add a skip connection to the conventional CNN. With this new skip connection, the proposed network maintains global information without losing important features of the input image. In summary, our proposed method consists of three main steps: i) pre-processing for image segmentation, ii) image matching for registration, and iii) classification of abnormal regions based on machine learning algorithms. We applied the proposed technique to 25 thoracic MDCT sets and obtained an AUC score of 0.951.
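The role of the skip connection described above can be illustrated with a toy stack of layers (a numpy stand-in for the authors' CNN; the layer shapes, function names, and weights are illustrative only): with the skip path, the input signal survives even when the learned transformation contributes nothing.

```python
import numpy as np

def layer(x, w):
    """Stand-in for one convolutional layer: linear map followed by ReLU."""
    return np.maximum(w @ x, 0.0)

def forward_plain(x, weights):
    """Conventional stack: each layer sees only the previous layer's output."""
    for w in weights:
        x = layer(x, w)
    return x

def forward_skip(x, weights):
    """Same stack, with a skip connection re-injecting the input at each layer."""
    for w in weights:
        x = layer(x, w) + x  # the skip path carries the signal forward
    return x

x = np.ones(4)
dead = [np.zeros((4, 4))] * 3  # layers whose learned transform passes nothing
print(forward_plain(x, dead))  # all zeros: the input information is lost
print(forward_skip(x, dead))   # [1. 1. 1. 1.]: the skip path preserves it
```

This is the same intuition behind residual networks: the identity path keeps global information from the input available to deeper layers.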
{"title":"Detection of Abnormal Regions on Temporal Subtraction Images based on CNN","authors":"Mitsuaki Nagao, Huimin Lu, Hyoungseop Kim, T. Aoki, S. Kido","doi":"10.1145/3354031.3354049","DOIUrl":"https://doi.org/10.1145/3354031.3354049","abstract":"Recently, visual screening based on CT images become the useful tool in the medical diagnosis. However, due to the increasing data volumes and the computational complexity of the algorithms, image processing technique for the high quality visual screening is still required. To this end, some computer aided diagnosis (CAD) algorithms are proposed. Meanwhile, cancer is a leading cause of death in the world. Detection of cancer region in CT images is the most important task to early detection and early treatment. We design and develop a framework combining convolutional neural networks (CNN) with temporal subtraction techniques-based non-rigid image registration algorithm. However, conventional CNN has the issue that as the layers deeper, global information close to input images is lost. Therefore, we add a skip connection to conventional CNN. By adding a new skip connection, the proposed CNN network maintains the global information without loss of important features of input image. All in all, our proposed method can be built into three main steps; i) pre-processing for image segmentation, ii) image matching for registration, and iii) classification of abnormal regions based on machine learning algorithms. We perform our proposed technique to 25 thoracic MDCT sets and obtain the AUC score of 0.951.","PeriodicalId":286321,"journal":{"name":"Proceedings of the 4th International Conference on Biomedical Signal and Image Processing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117009519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
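The AUC score reported in this abstract can be read as the probability that a randomly chosen abnormal case is ranked above a randomly chosen normal one (the Mann–Whitney formulation). A minimal pairwise sketch; the labels and scores below are toy values, not data from the paper:

```python
def auc_score(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: two abnormal (1) and two normal (0) cases
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

The O(n²) pair loop is fine for small evaluation sets like the 25 MDCT sets here; rank-based implementations are preferable at scale.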
Citations: 0
Journal: Proceedings of the 4th International Conference on Biomedical Signal and Image Processing