As Computer-Assisted Surgery (CAS) becomes increasingly popular, more and more research has been conducted to assist surgeons during operations. We focus on semantic segmentation in the endoscopic surgery scenario because semantic segmentation is the first step toward a computer understanding what appears in the endoscope's field of view. However, modern deep learning algorithms require vast amounts of training data. Since data from endoscopic surgery scenes is relatively scarce, the performance of existing algorithms is rather limited. In this work, we therefore address the problem of training a semantic segmentation network with little data. We propose a proof-of-concept system that enlarges the dataset and improves performance. The system synthesizes a pair of training data in a single pass and provides a sufficient amount of data to train a network. We evaluated our method on the dataset provided by the MICCAI 2018 Robotic Scene Segmentation Sub-Challenge. Our method yielded an 11.79% mIoU improvement in recognizing anatomical objects and a 2.2% mIoU improvement in recognizing surgical instruments. Recognizing anatomical objects accurately would clearly benefit CAS. Preliminary results suggest that our method helps the classifier become more robust and accurate even without a large amount of training data.
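The abstract does not spell out how an image/label pair is synthesized in one pass; one common way to enlarge a segmentation dataset is to apply the same random geometric transform to an image and its mask so the labels stay aligned. The NumPy sketch below illustrates only that general idea under assumed transforms (rotations, flips, brightness jitter); it is not the authors' synthesis pipeline.

```python
import numpy as np

def synthesize_pair(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Produce one new (image, mask) training pair by applying the same
    random flip/rotation to both, so pixel labels stay aligned."""
    k = rng.integers(0, 4)                       # number of 90-degree rotations
    image_aug = np.rot90(image, k, axes=(0, 1)).copy()
    mask_aug = np.rot90(mask, k, axes=(0, 1)).copy()
    if rng.random() < 0.5:                       # random horizontal flip
        image_aug = image_aug[:, ::-1].copy()
        mask_aug = mask_aug[:, ::-1].copy()
    # photometric jitter is applied to the image only; labels are unchanged
    image_aug = np.clip(image_aug * rng.uniform(0.8, 1.2), 0, 255).astype(image.dtype)
    return image_aug, mask_aug

# toy example: enlarge a small dataset by a factor of 5
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
mask = rng.integers(0, 8, size=(256, 256), dtype=np.uint8)   # arbitrary number of classes
augmented = [synthesize_pair(image, mask, rng) for _ in range(5)]
```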
{"title":"Using Synthesized Data to Train Deep Neural Net with Few Data","authors":"Cheng-Shao Chiang, C.-S. Shih","doi":"10.1145/3400286.3418244","DOIUrl":"https://doi.org/10.1145/3400286.3418244","url":null,"abstract":"As Computer-Assisted Surgery (CAS) getting popular, more and more research has been conducted to help surgeons operate. We aim at the semantic segmentation in the endoscopy surgery scenario because semantic segmentation is the first step for a computer to grasp what shows up in the vision of an endoscope. However, modern Deep Learning algorithms need myriads of training data. Since data of the endoscopy surgery scene is relatively scarce, the performance of existing algorithms is thus rather limited. Therefore, we tried to solve the problem of training a semantic segmentation network with few data in this work. We propose a proof-of-concept system offering the ability to enlarge the dataset and improve the performance. The system aims to synthesize a pair of training data in a single pass and provides a sufficient amount of data to train a network. We evaluated our method using the dataset provided by MICCAI 2018 Robotic Scene Segmentation Sub-Challenge. Our method yielded 11.79% mIoU improvement in recognizing anatomical objects and 2.2% mIoU in recognizing surgical instruments. Recognizing anatomical objects accurately would definitely benefit CAS. Preliminary results suggest our method helps the classifier become more robust and accurate even if not having large amount of data.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130903970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of cloud technology, distributed and collaborative container platform technology has emerged to overcome the limitations of existing stand-alone container platforms, which constrain the mobility and resource scalability of cloud services. Distributed and collaborative container platform technology enables flexible expansion of resources and maximizes service mobility between locally distributed container platforms. In this paper, we propose a two-stage scheduler based on multi-resource metrics. The proposed scheduler determines the proper federated cluster in which a requested deployment can be placed in a distributed and collaborative cluster environment. To select a proper federated cluster, the scheduler performs filtering, which selects candidate clusters able to host the requested deployment, and scoring, which evaluates the preference of each filtered cluster.
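As an illustration of the two-stage structure (filtering, then scoring), the sketch below picks a federated cluster from a list of candidates. The resource fields, weights, and scoring formula are assumptions made for the example, not the multi-resource metrics defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_cpu: float      # cores available
    free_mem: float      # GiB available
    latency_ms: float    # network latency to the requesting site

def filter_clusters(clusters, req_cpu, req_mem):
    """Stage 1: keep only clusters that can actually host the deployment."""
    return [c for c in clusters if c.free_cpu >= req_cpu and c.free_mem >= req_mem]

def score_cluster(c, req_cpu, req_mem, w_res=0.7, w_lat=0.3):
    """Stage 2: prefer clusters with more headroom and lower latency (illustrative weights)."""
    headroom = ((c.free_cpu - req_cpu) / c.free_cpu + (c.free_mem - req_mem) / c.free_mem) / 2
    return w_res * headroom - w_lat * (c.latency_ms / 100.0)

def schedule(clusters, req_cpu, req_mem):
    candidates = filter_clusters(clusters, req_cpu, req_mem)
    if not candidates:
        return None                      # no federated cluster can host the request
    return max(candidates, key=lambda c: score_cluster(c, req_cpu, req_mem))

clusters = [Cluster("edge-a", 4, 8, 5), Cluster("edge-b", 16, 64, 20), Cluster("core", 64, 256, 150)]
print(schedule(clusters, req_cpu=8, req_mem=16).name)   # -> "edge-b"
```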
{"title":"Scheduler for Distributed and Collaborative Container Clusters based on Multi-Resource Metric","authors":"Y. Lee, J. An, Younghwan Kim","doi":"10.1145/3400286.3418281","DOIUrl":"https://doi.org/10.1145/3400286.3418281","url":null,"abstract":"With the development of cloud technology, distributed and collaborative container platform technology has emerged to overcome the limitations of the existing stand-alone container platform, which has limitations in the mobility and resource scalability of cloud services. Distributed and collaborative container platform technology enables flexible expansion of resources and maximization of service mobility between container platforms distributed locally. In this paper, we propose a two-stage scheduler based on multi-resource metrics. The proposed scheduler determines the proper federated cluster where the request deployment can be deployed in a distributed and collaborative cluster environment. In order to select an proper federated cluster, filtering to select candidate clusters to which the scheduling request deployment can be deployed and scoring to evaluate the preference of each filtered cluster are performed.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129272952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The PCIe (PCI Express) bus has long played a key role in interconnecting devices inside a system. In addition, advances in PCIe technology have made it possible to connect servers to one another over the PCIe bus. In this study, we tried to improve the performance of our PCIe adapter cards, which expand the PCIe bus and connect servers. In particular, we looked for ways to make the most of the DMA capabilities offered by the PCIe switch chips mounted on our adapter cards. Our experimental results show that the dual-port method, which uses multiple DMA engines in each adapter card simultaneously, improves performance by up to 1.7 times compared to using a single port.
{"title":"Performance improvement of PCI Express adapter cards by adjusting the location of DMA functions","authors":"Kwangho Cha, Kyungmo Koo, Hyun Mi Jung","doi":"10.1145/3400286.3418229","DOIUrl":"https://doi.org/10.1145/3400286.3418229","url":null,"abstract":"The PCIe (PCI Express) bus has long played a key role in interconnecting devices inside a system. In addition, advances in PCIe technology have made it possible to connect between servers using the PCIe bus. In this study, we've tried to improve the performance of our PCIe adapter cards for expanding the PCIe bus and connecting servers. Especially, we've looked for ways to make the most of the DMA capabilities offered by the PCIe switch chips mounted on our adapter cards. Our experimental results show that the dual ports method using multiple DMAs in each adapter card simultaneously, improves the performance up to 1.7 times compared to using a single port.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129240839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speaker Change Detection (SCD) is the process of detecting speaker changes during a conversation. The conversation can be divided into homogeneous segments using a typical SCD system or a speaker diarization system, in which the segments are partitioned according to speaker identity. When d-vectors are used to identify or verify speakers with a deep neural network model, they are often considered insufficient for training a model to detect speaker changes from acoustic information alone. There are few dedicated datasets for training such systems, so progress in SCD research is slow and performance is poor. Therefore, we present a data augmentation method based on the TIMIT dataset tailored to the task, propose several methods for representing d-vectors in SCD systems, and report their preliminary results. In the proposed data augmentation method, the boundary information of speakers is transformed into a probability according to the offset within a given frame and collected over the segment. To model speaker boundaries, we concatenate two random speech sentences originally intended for speech recognition systems. The preliminary experimental results, specifically the recall percentage, show the potential of the proposed approaches. In the future, we will add linguistic information to the proposed classification system, or improve it using a hybrid of d-vectors and frame vectors, or convolutional networks.
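The augmentation idea, concatenating two read sentences and converting the known join point into a soft, offset-dependent boundary label, can be sketched as follows. The frame length, hop size, and Gaussian shape of the soft label are illustrative assumptions; the paper's exact offset-to-probability mapping may differ.

```python
import numpy as np

def make_scd_example(wave_a, wave_b, frame_len=400, hop=160, sigma_frames=5.0):
    """Concatenate two single-speaker utterances and build a frame-level
    soft label whose value peaks at the speaker-change frame."""
    audio = np.concatenate([wave_a, wave_b])
    n_frames = 1 + max(0, (len(audio) - frame_len) // hop)
    boundary_frame = (len(wave_a) - frame_len / 2) / hop   # frame whose center is nearest the join
    frames = np.arange(n_frames)
    # probability of "speaker change" decays with the frame's offset from the join
    label = np.exp(-0.5 * ((frames - boundary_frame) / sigma_frames) ** 2)
    return audio, label

# toy example: two random signals stand in for two read TIMIT sentences
rng = np.random.default_rng(1)
a = rng.standard_normal(2 * 16000)   # 2 s at 16 kHz
b = rng.standard_normal(3 * 16000)   # 3 s at 16 kHz
audio, label = make_scd_example(a, b)
print(audio.shape, label.shape, int(label.argmax()))
```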
{"title":"Data Augmentation and D-vector Representation Methods for Speaker Change Detection","authors":"Jisu Park, Shin Cha, Seongbae Eun, J. Park, Young-Sun Yun","doi":"10.1145/3400286.3418270","DOIUrl":"https://doi.org/10.1145/3400286.3418270","url":null,"abstract":"Speaker Change Detection (SCD) is the process that detects speaker changes during a conversation. The conversation can be divided into homogeneous segments using a typical SCD system or speaker diarization system in which the segments are partitioned according to a speaker identity. When the d-vectors are used to identify or verify the speakers with deep neural network model, they are often considered insufficient to train model for detecting the speaker changes by using only acoustic information. There are few dedicated datasets for system training, so the progress of the SCD study is slow and the performance is poor. Therefore, we presented data augmentation method based on TIMIT dataset to suit for the system, and we also proposed several methods to represent d-vectors for SCD systems and their preliminary results. In the proposed data augmentation method, the boundary information of speakers is transformed into probability according to the offset in a given frame and collected in the segment. To model the boundaries of the speakers, we concatenate two random speech sentences dedicated to speech recognition system. The preliminary experimental results, specifically recall percentage, shows the possibility of the proposed approaches. In the future, we will add linguistic information to the proposed classification system, or improve the system to use hybrid system of d-vector and frame vectors, or convolutional networks.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133299049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motors are widely used in fields such as automotive, medical, and industrial production. Commonly used motors can generally be divided into DC motors and AC motors. The Permanent Magnet Synchronous Motor (PMSM) is a type of AC motor with strong starting ability, high peak efficiency, and high reliability, and it has significant application value. This paper studies PMSM control technology and designs a motor controller and its protection system based on an FPGA. First, on an FPGA development and simulation platform, the key algorithms of the motor controller and protection system are simulated and analyzed. The system is then built in a real environment. Actual testing shows that the motor speed can be controlled accurately and that the various fault protection and instruction functions of the motor controller are realized. Built on an FPGA platform, the system offers fast execution, high flexibility, a short development cycle, high resource utilization, and strong portability.
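The abstract does not describe the control algorithm itself; as background only, the snippet below shows a generic discrete PI speed loop of the kind commonly used in PMSM drives (gains, sample time, and current limit are arbitrary). It is not the FPGA controller designed in the paper.

```python
class SpeedPI:
    """Discrete PI speed controller: converts speed error (rad/s) into a
    torque-producing current command (A), with output clamping and anti-windup."""
    def __init__(self, kp=0.05, ki=2.0, dt=1e-4, iq_max=10.0):
        self.kp, self.ki, self.dt, self.iq_max = kp, ki, dt, iq_max
        self.integral = 0.0

    def step(self, speed_ref, speed_meas):
        error = speed_ref - speed_meas
        self.integral += error * self.dt
        iq_cmd = self.kp * error + self.ki * self.integral
        if abs(iq_cmd) > self.iq_max:        # clamp and stop integrating (anti-windup)
            iq_cmd = max(-self.iq_max, min(self.iq_max, iq_cmd))
            self.integral -= error * self.dt
        return iq_cmd

# one control step: command 100 rad/s while the shaft is at 80 rad/s
pi = SpeedPI()
print(pi.step(speed_ref=100.0, speed_meas=80.0))
```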
{"title":"Design and Research of Permanent Magnet Synchronous Motor Controller and Protection System Based on FPGA","authors":"G. Peng, Yufeng Chen, Zhengtao Xiang, Kai Che, Jinliang Zhang, Lianbing Xu","doi":"10.1145/3400286.3418247","DOIUrl":"https://doi.org/10.1145/3400286.3418247","url":null,"abstract":"Motors have a wide range of applications in various aspects such as automotive, medical, industrial production, etc. Commonly used motors can generally be divided into DC motors and AC motors. Permanent Magnet Synchronous Motor (PMSM) is a type of AC motors with strong starting ability, high peak efficiency and high reliability, and with greater application value. This paper mainly studies the control technology of PMSM, and designs a set of motor controller and its protection system based on FPGA. First, based on the development and simulation platform of FPGA, the important algorithms of the motor controller and protection system are simulated and analysed. Then the system is built in the actual environment. Through actual testing, the speed control of the motor can be accurately achieved and various fault protection and instructions of the motor controller can be realized. The system is based on the development platform of FPGA, which with fast running speed, high flexibility, short development cycle, high resource utilization rate, and strong portability.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122610249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to the rapid evolution of next-generation sequencing (NGS) technology, the sequence of an individual's genome can be determined from billions of short reads at a decreasing cost, which has advanced medical research and precision medicine with the ability to correlate mutations between genomes. Analysis of genome sequences, especially variant calling, is exceedingly computationally intensive, as it demands large storage capacity, computing power, and high-speed networking to reduce processing time. In the case of DeepVariant, an open-source software package that employs a deep neural network (DNN) to call genetic variants, the analysis took four hours on a workstation with a high-performance GPU device to accelerate the DNN. Therefore, we profiled the performance of DeepVariant and refactored the code to reduce the time and cost of the NGS pipeline through a series of code optimizations. As a result, our distributed version of DeepVariant can finish the same job within 8 minutes on 8 dual-CPU nodes with 8 GPUs, outperforming commercial versions on the market.
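The general pattern behind such a speed-up, splitting the genome into independent regions and fanning the shards out over workers, can be sketched as below. The region list, worker count, and the call_variants_for_region placeholder are assumptions for illustration; the actual DeepVariant stages and command lines are not shown.

```python
from concurrent.futures import ProcessPoolExecutor

# Regions to process independently; real pipelines usually shard much more finely.
REGIONS = [f"chr{i}" for i in range(1, 23)] + ["chrX", "chrY"]

def call_variants_for_region(region: str) -> str:
    """Placeholder for running one shard of the variant-calling pipeline
    (e.g., invoking the caller on a single genomic region) and returning
    the path of the per-region output."""
    # subprocess.run([...])  # the actual command line is omitted here
    return f"/tmp/output_{region}.vcf"

def run_distributed(regions, max_workers=8):
    """Fan the per-region shards out over worker processes and collect results."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_variants_for_region, regions))

if __name__ == "__main__":
    shards = run_distributed(REGIONS)
    print(f"{len(shards)} per-region VCF shards ready to merge")
```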
{"title":"Accelerating Variant Calling with Parallelized DeepVariant","authors":"Chih-Han Yang, Jhih-Wun Zeng, C. Liu, Shih-Hao Hung","doi":"10.1145/3400286.3418243","DOIUrl":"https://doi.org/10.1145/3400286.3418243","url":null,"abstract":"Due to the rapid evolution of the next-generation sequencing (NGS) technology, the sequence of an individual's genome can be determined from billions of short reads at a decreasing cost, which has advanced the fields of medical research and precision medicine with the ability to correlate mutations between genomes. Analysis of genome sequences, especially variant calling, is exceedingly computationally intensive, as it demands large storage capacity, computing power, and high-speed network to reduce the processing time. In the case of DeepVariant, an open-source software package which employs a deep neural network (DNN) to calls genetic variants, it took four hours to complete the analysis on a workstation with a high-performance GPU device to accelerate the DNN. Therefore, we profiled the performance of DeepVariant and refactored the code to reduce the time and cost of the NGS pipeline with a series of code optimization works. As a result, our distributed version of DeepVariant can finish the same job within 8 minutes on 8 dual-CPU nodes and 8 GPUs, which outperforms commercial versions in the market.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123425916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate gender classification from fingerprint images benefits various forensic, security, and authentication analyses. These benefits help narrow the search space and speed up matching in applications such as automatic fingerprint identification systems (AFIS). However, achieving high prediction accuracy without human intervention (such as preprocessing and hand-crafted feature extraction) remains a challenge. Therefore, this paper presents a deep learning method to estimate gender from fingerprint images automatically and conveniently. In particular, the VGG-19, ResNet-50, and EfficientNet-B3 models were trained from scratch. The raw fingerprint images were fed into the networks for end-to-end learning. The networks were trained on 8,000 images, validated on 1,520 images, and tested on 360 images. Our experimental results showed that, comparing these state-of-the-art models (VGG-19, ResNet-50, and EfficientNet-B3), the EfficientNet-B3 model achieved the best accuracy of 97.89%, 69.86%, and 63.05% for training, validation, and testing, respectively.
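As a rough sketch of how one of the three networks could be trained from scratch on raw fingerprint images, the Keras snippet below builds EfficientNet-B3 with random weights and a two-class head. The directory layout, input size, optimizer, and epoch count are assumptions, not the paper's training configuration.

```python
import tensorflow as tf

# EfficientNet-B3 with randomly initialized weights (trained from scratch)
# and a two-way softmax head for male/female classification.
model = tf.keras.applications.EfficientNetB3(
    weights=None, input_shape=(300, 300, 3), classes=2)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Assumed directory layout: fingerprints/{train,val}/{male,female}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fingerprints/train", image_size=(300, 300), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fingerprints/val", image_size=(300, 300), batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=30)
```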
{"title":"Gender Classification from Fingerprint-images using Deep Learning Approach","authors":"Beanbonyka Rim, Junseob Kim, Min Hong","doi":"10.1145/3400286.3418237","DOIUrl":"https://doi.org/10.1145/3400286.3418237","url":null,"abstract":"Accurate gender classification from fingerprint-images brings benefits to various forensic, security and authentication analysis. Those benefits help to narrow down the space for searching and speed up the process for matching for applications such as automatic fingerprint identification systems (AFIS). However, achieving high prediction accuracy without human intervention (such as preprocessing and hand-crafted feature extraction) is currently and potentially a challenge. Therefore, this paper presents a deep learning method to automatically and conveniently estimate gender from fingerprint-images. In particular, the VGG-19, ResNet-50 and EfficientNet-B3 model were exploited to train from scratch. The raw images of fingerprints were fed into the networks for end-to-end learning. The networks trained on 8,000 images, validated on 1,520 images and tested on 360 images. Our experimental results showed that by comparing between those state-of-the-art models (VGG-19, ResNet-50 and EfficientNet-B3), EfficientNet-B3 model achieved the best accuracy of 97.89%, 69.86% and 63.05% for training, validating, and testing, respectively.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131822569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Apart from accuracy, the size of a Convolutional Neural Network (CNN) model is another principal factor in deploying models on memory-, power-, and budget-constrained devices. Conventional compression techniques require a human expert to set parameters to explore the design space, and iterative pruning requires heavy retraining, which is sub-optimal and time-consuming. Given a CNN model, we propose a deep reinforcement learning [8] DQN-based automated compression method that turns off kernels in each layer according to their significance. By observing accuracy, compression ratio, and convergence rate, the proposed DQN model can automatically reactivate the healthiest kernels and retrain them to regain accuracy, which greatly improves the quality of model compression. In experiments on the MNIST [3] dataset, our method compresses the convolution layers of a VGG-like [10] model by up to 60% with a 0.5% increase in test accuracy using less than half the initial amount of training (a speed-up of up to 2.5×), and achieves state-of-the-art results by dropping 80% of kernels (86% of parameters compressed) with a 0.14% increase in accuracy, and further dropping 84% of kernels (94% of parameters compressed) with a 0.4% loss in accuracy. The first proposed model, Auto-AEC (Accuracy-Ensured Compression), compresses the network while preserving or improving the original accuracy, whereas the second, Auto-CECA (Compression-Ensured Considering the Accuracy), compresses as much as possible while preserving the original accuracy or incurring only a minimal drop. We further analyze the effectiveness of kernels in different layers based on how our model explores and exploits during various stages of training.
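The core mechanism, turning convolution kernels off and later reactivating significant ones, can be illustrated without the RL agent. The PyTorch sketch below masks the lowest-L1-norm filters of a single layer; the DQN that actually decides which kernels to toggle, and the retraining loop, are omitted, so this is only a simplified stand-in for the proposed method.

```python
import torch
import torch.nn as nn

def kernel_mask_by_significance(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Return a 0/1 mask over output kernels, keeping the highest-L1-norm ones."""
    significance = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # one score per kernel
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    keep = significance.topk(n_keep).indices
    mask = torch.zeros(conv.out_channels)
    mask[keep] = 1.0
    return mask

def apply_mask(conv: nn.Conv2d, mask: torch.Tensor) -> None:
    """Turn kernels off in place by zeroing their weights (and biases)."""
    with torch.no_grad():
        conv.weight *= mask.view(-1, 1, 1, 1)
        if conv.bias is not None:
            conv.bias *= mask

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)
mask = kernel_mask_by_significance(conv, keep_ratio=0.4)   # drop 60% of the kernels
apply_mask(conv, mask)
# "Reactivation" amounts to widening the mask (e.g., keep_ratio 0.4 -> 0.6)
# and fine-tuning, so previously disabled kernels can recover accuracy.
```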
{"title":"Kernel-controlled DQN based CNN Pruning for Model Compression and Acceleration","authors":"Romancha Khatri, Kwanghee Won","doi":"10.1145/3400286.3418258","DOIUrl":"https://doi.org/10.1145/3400286.3418258","url":null,"abstract":"Apart from the accuracy, the size of Convolutional Neural Networks (CNN) model is another principal factor for facilitating the deployment of models on memory, power and budget constrained devices. Conventional compression techniques require human expert to setup parameters to explore the design space and iterative based pruning requires heavy training which is sub-optimal and time consuming. Given a CNN model, we propose deep reinforcement learning [8] DQN based automated compression which effectively turned off kernels on each layer by observing its significance. Observing accuracy, compression ratio and convergence rate, proposed DQN model can automatically re- activate the healthiest kernels back to train it again to regain accuracy which greatly ameliorate the model compression quality. Based on experiments on MNIST [3] dataset, our method can compress convolution layers for VGG-like [10] model up to 60% with 0.5% increase in test accuracy within less than a half the number of initial amount of training (speed-up up to 2.5×), state- of-the-art results of dropping 80% of kernels (compressed 86% parameters) with increase in accuracy by 0.14%. Further dropping 84% of kernels (compressed 94% parameters) with the loss of 0.4% accuracy. The first proposed Auto-AEC (Accuracy-Ensured Compression) model can compress the network by preserving original accuracy or increase in accuracy of the model, whereas, the second proposed Auto-CECA (Compression-Ensured Considering the Accuracy) model can compress to the maximum by preserving original accuracy or minimal drop of accuracy. We further analyze effectiveness of kernels on different layers based on how our model explores and exploits in various stages of training.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132584112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring the road surface is a key factor in road maintenance and management. With advances in optical methods, road monitoring systems have been equipped with high-accuracy, high-resolution sensor packages. However, most existing sensor packages rely on expensive equipment such as optical and complex sensors to cope with the dynamics of mobile vehicles and dynamic outdoor environments. In this paper, we propose a CNN-based line-laser refinement method. The proposed system is designed around this improvement of CNN-based line lasers and is more cost-effective than existing expensive systems.
{"title":"Road Surface Profiling based on Artificial-Neural Networks","authors":"Seungho Choi, Seoyeon Kim, Heelim Hong, Y. B. Kim","doi":"10.1145/3400286.3418282","DOIUrl":"https://doi.org/10.1145/3400286.3418282","url":null,"abstract":"Recently, monitoring of road surface is a key factor for road maintenance and management. With the advances in the optical methods, the road monitoring systems have been equipped with high accuracy and resolution sensor package. However, most of the existing sensor packages are equipped with expensive equipment such as optical and complex sensors in considering the dynamics of mobile vehicles and dynamic outdoor environments. In this paper, we propose a CNN-based line laser refinement. The proposed system is designed based on the improvement of CNN-based line lasers, and it is more cost-effective than the existing expensive system.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114901500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes the application of three key methods to multimodal neuroimaging data fusion. The first step is to classify neurodegenerative brain diseases in the scans obtained from the available neuroimaging techniques. We propose to classify scans by selecting relevant disease-detection features using a game-theoretic approach and evidence combination; specifically, we apply a filtering feature selection based on a coalitional game. The second step is to aggregate the classifiers' outcomes by leveraging an improvement of the Dempster-Shafer combination rule, obtained by applying evolutionary game theory, to reach a final decision from the various classifiers' results while also considering the subjective opinion of the doctor. Last, the overall solution can be deployed in a distributed manner; the robustness of the interactions is achieved by modeling them as a signaling game to decide when to reject messages suspected of being malicious.
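For concreteness, the snippet below implements the classical (unmodified) Dempster-Shafer combination rule over mass functions represented as dictionaries keyed by frozensets of hypotheses; the evolutionary-game improvement and the signaling-game layer described in the paper are not shown, and the disease labels are made up.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Classical Dempster's rule: combine two mass functions defined on
    frozensets of hypotheses, renormalizing by the conflict mass K."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb           # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# two classifiers' outputs over hypothetical diseases {AD, PD}, with some ignorance mass
m_mri = {frozenset({"AD"}): 0.6, frozenset({"PD"}): 0.1, frozenset({"AD", "PD"}): 0.3}
m_pet = {frozenset({"AD"}): 0.5, frozenset({"PD"}): 0.3, frozenset({"AD", "PD"}): 0.2}
print(dempster_combine(m_mri, m_pet))
```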
{"title":"Multimodal Neuroimaging Game Theoretic Data Fusion in Adversarial Conditions","authors":"C. Esposito, Oscar Tamburis, Chang Choi","doi":"10.1145/3400286.3418269","DOIUrl":"https://doi.org/10.1145/3400286.3418269","url":null,"abstract":"This paper proposes the application of three key methods to multimodal neuroimaging data fusion. The first step is to classify neurodegenerative brain diseases in the considered scans from the available neuroimaging techniques. We propose to classify scans by selecting relevant disease detection features utilizing a gametheoretic approach and evidence combination. We applied a filtering feature selection based on a coalitional game. The second step is to aggregate the classifiers' outcomes by leveraging an improvement of the Dempster-Shafer combination rule obtained by applying evolutionary game theory to determine a final decision from the various classifiers' results, also considering the subjective doctor opinion. Last, the overall solution can be deployed in a distributed manner. The robustness of the interactions is achievable by modeling them as a signaling game to determine when rejecting those messages suspected of being malicious.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125374530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}