Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123494
A. H. Poorjam, M. H. Bahari, H. Van hamme
In this paper, a novel approach for automatic speaker weight estimation from spontaneous telephone speech signals is proposed. In this method, each utterance is modeled using the i-vector framework, which is based on factor analysis of Gaussian Mixture Model (GMM) mean supervectors, and the Non-negative Factor Analysis (NFA) framework, which is based on a constrained factor analysis of GMM weights. The information available in both the Gaussian means and the Gaussian weights is then exploited through a feature-level fusion of the i-vectors and the NFA vectors. Finally, least-squares support vector regression (LS-SVR) is employed to estimate speaker weight from a given utterance. The proposed approach is evaluated on telephone speech signals from the National Institute of Standards and Technology (NIST) 2008 and 2010 Speaker Recognition Evaluation (SRE) corpora. Experimental results over 2339 utterances show that the correlation coefficients between the actual and estimated weights of male and female speakers are 0.56 and 0.49, respectively, indicating the effectiveness of the proposed method in speaker weight estimation.
Published as: "Speaker weight estimation from speech signals using a fusion of the i-vector and NFA frameworks," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
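The final regression stage described above can be sketched as follows. This is a minimal illustration of closed-form LS-SVR (solving the LS-SVM KKT linear system with an RBF kernel) applied to concatenated "i-vector"/"NFA" features; the toy data, dimensions, kernel width, and regularization constant are invented for the example and are not taken from the paper.

```python
import numpy as np

def lssvr_fit(X, y, gamma=1e4, sigma=1.0):
    """Least-squares SVR with RBF kernel: solve the bordered KKT system."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]               # bias b, dual weights alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

rng = np.random.default_rng(0)
ivec = rng.normal(size=(40, 5))          # toy stand-in for i-vectors
nfa = rng.normal(size=(40, 3))           # toy stand-in for NFA vectors
fused = np.hstack([ivec, nfa])           # feature-level fusion = concatenation
weight = fused.sum(axis=1)               # synthetic target, not real weights
b, alpha = lssvr_fit(fused, weight)
pred = lssvr_predict(fused, b, alpha, fused)
```

With a near-interpolating regularization (`gamma=1e4`), the fitted model reproduces the training targets almost exactly, which is a quick sanity check of the solver.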
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123520
S. Mohammadi
A novel video watermarking algorithm based on the wavelet transform and chaotic maps is introduced. We apply the two-dimensional wavelet transform to I-frames and then insert the chaotic watermark into a subset of the sub-band coefficients. Since chaotic maps are sensitive to initial values, the initial values of the chaotic maps and their chaotic parameters serve as secret keys in our algorithm. Results are presented that demonstrate the usefulness of the algorithm, and comparisons are made with recent video watermarking schemes.
Published as: "A novel video watermarking algorithm based on chaotic maps in the transform domain," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
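The general idea — a key-seeded chaotic bit sequence embedded in wavelet sub-band coefficients — can be sketched as below. This is an illustrative scheme, not the paper's exact algorithm: it uses a hand-rolled one-level 2D Haar transform, a logistic map as the chaotic source, and simple additive embedding in the HH sub-band, with the initial value `x0` acting as the secret key.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (image sides must be even)."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2
    hi = (img[:, 0::2] - img[:, 1::2]) / 2
    LL = (lo[0::2] + lo[1::2]) / 2; LH = (lo[0::2] - lo[1::2]) / 2
    HL = (hi[0::2] + hi[1::2]) / 2; HH = (hi[0::2] - hi[1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    h, w = LL.shape
    img = np.zeros((2 * h, 2 * w))
    lo0, lo1 = LL + LH, LL - LH          # lo rows at even/odd positions
    hi0, hi1 = HL + HH, HL - HH
    img[0::2, 0::2] = lo0 + hi0; img[0::2, 1::2] = lo0 - hi0
    img[1::2, 0::2] = lo1 + hi1; img[1::2, 1::2] = lo1 - hi1
    return img

def logistic_bits(x0, r, n):
    """Chaotic watermark bits from the logistic map x <- r*x*(1-x)."""
    bits = np.empty(n, dtype=int)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

rng = np.random.default_rng(7)
frame = rng.random((16, 16))                       # toy "I-frame"
LL, LH, HL, HH = haar2d(frame)
bits = logistic_bits(0.3141, 3.99, HH.size).reshape(HH.shape)
HHw = HH + 0.5 * (2 * bits - 1)                    # additive embedding
watermarked = ihaar2d(LL, LH, HL, HHw)
_, _, _, HH2 = haar2d(watermarked)
recovered = (HH2 - HH > 0).astype(int)             # non-blind extraction
```

Extraction here is non-blind (it needs the original HH coefficients); a sensitivity test on `x0` would show that a slightly wrong key yields an uncorrelated bit sequence, which is the point of using chaos for key material.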
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123522
A. Rashno, S. Sadri, Hossein SadeghianNejad
A novel content-based image retrieval (CBIR) scheme using wavelet and color features followed by ant colony optimization (ACO) feature selection is proposed in this paper. A new feature extraction scheme, comprising texture features from the wavelet transform and color features in the RGB and HSV domains, is proposed as the representative feature vector for images in the database, and an appropriate similarity measure for each feature is presented. Retrieval results are highly sensitive to the image features used in content-based image retrieval; we address this problem by selecting the most relevant features from the complete feature set with ACO-based feature selection. To evaluate the performance of the proposed CBIR scheme, it has been compared with previously proposed systems; the results show that its precision and recall are higher than those of earlier systems for the majority of image categories.
Published as: "An efficient content-based image retrieval with ant colony optimization feature selection schema based on wavelet and color features," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
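The feature extraction stage can be sketched as follows: per-channel RGB and HSV histograms plus Haar sub-band energies as texture descriptors. The bin count, the one-level Haar texture measure, and the grayscale conversion are choices made for this sketch, not details from the paper.

```python
import colorsys
import numpy as np

def color_texture_features(img, bins=4):
    """img: H x W x 3 floats in [0, 1). Returns one flat feature vector."""
    feats = []
    # Normalized RGB histograms, one per channel
    for c in range(3):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 1))
        feats.append(h / img[..., c].size)
    # Normalized HSV histograms (per-pixel conversion via stdlib colorsys)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in img.reshape(-1, 3)])
    for c in range(3):
        h, _ = np.histogram(hsv[:, c], bins=bins, range=(0, 1))
        feats.append(h / len(hsv))
    # Texture: energies of one-level Haar detail sub-bands on grayscale
    g = img.mean(-1)
    lo = (g[:, 0::2] + g[:, 1::2]) / 2; hi = (g[:, 0::2] - g[:, 1::2]) / 2
    LH = lo[0::2] - lo[1::2]; HL = hi[0::2] + hi[1::2]; HH = hi[0::2] - hi[1::2]
    feats.append(np.array([(LH**2).mean(), (HL**2).mean(), (HH**2).mean()]))
    return np.concatenate(feats)

rng = np.random.default_rng(3)
img = rng.random((8, 8, 3))
fv = color_texture_features(img)        # 6 histograms * 4 bins + 3 energies = 27
```

A per-feature similarity measure (as the abstract suggests) would then compare histogram blocks with, e.g., histogram intersection and the energy triple with Euclidean distance, rather than treating the whole vector uniformly.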
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123517
A. Yassin, A. Hussain, Keyan Abdul-Aziz Mutlaq
The security of cloud computing is one of the most important concerns that may delay its widespread adoption. Authentication is a central part of cloud security, aiming to ensure that only valid users access data stored in the cloud. Several authentication schemes are based on username/password pairs, but these are considered weak methods of cloud authentication. At the same time, digitized images are highly vulnerable to malicious attacks in cloud computing. Our proposed scheme focuses on two-factor authentication that uses partial image encryption to overcome the aforementioned issues and the drawbacks of existing authentication schemes. Specifically, a fast partial image encryption scheme using Canny edge detection with symmetric encryption serves as the second factor. In this scheme, the edge pixels of an image, which hold most of the image's information, are encrypted using a stream cipher, and the result is used to authenticate valid users. The security analysis and experimental results show that our work achieves a good balance between security and performance for image encryption in a cloud computing environment.
Published as: "Cloud authentication based on encryption of digital image using edge detection," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
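The partial-encryption idea — detect edge pixels, then encrypt only those with a stream cipher — can be sketched as below. Two substitutions are made for the sake of a self-contained example: a plain gradient-magnitude mask stands in for Canny edge detection, and a SHA-256 counter-mode keystream stands in for whatever stream cipher the paper uses.

```python
import hashlib
import numpy as np

def keystream(key: bytes, n: int) -> np.ndarray:
    """Counter-mode keystream from SHA-256 (illustrative, not the paper's cipher)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return np.frombuffer(out[:n], dtype=np.uint8)

def edge_mask(img):
    """Gradient-magnitude edge mask, a simple stand-in for Canny."""
    g = img.astype(float)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[1:-1, 1:-1] = g[1:-1, 2:] - g[1:-1, :-2]
    gy[1:-1, 1:-1] = g[2:, 1:-1] - g[:-2, 1:-1]
    mag = np.hypot(gx, gy)
    return mag > mag.mean() + mag.std()

def encrypt_edges(img, key):
    """XOR only the edge pixels with the keystream; return ciphertext and mask."""
    mask = edge_mask(img)
    enc = img.copy()
    enc[mask] ^= keystream(key, int(mask.sum()))
    return enc, mask

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
enc, mask = encrypt_edges(img, b"secret")
dec = enc.copy()
dec[mask] ^= keystream(b"secret", int(mask.sum()))   # XOR is its own inverse
```

Note the mask must travel with (or be derivable by) the receiver, since edge detection on the ciphertext would not reproduce it; that is a design point any real deployment of this scheme has to settle.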
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123503
Nima Najafzadeh, Mehran Fotouhi, S. Kasaei
This paper describes a solution for tracking multiple soccer players simultaneously on the field. It adapts the Kalman filter to multi-player tracking, dividing the adaptation into four main tasks. The first task is defining the state vector for multiple object tracking. The second is specifying a motion model for estimating each player's position in the next frame. The third is defining an observation method for detecting players in each frame. The fourth is tuning the measurement noise covariance and the process noise covariance. For the third task, a novel observation method for detecting soccer players is proposed: it divides the player's body into three parts and calculates the histogram of each part separately. An algorithm for updating the reference object patch is also introduced in the observation method. Each task is discussed in detail, and the promising performance of the proposed method is shown on the Azadi dataset.
Published as: "Multiple soccer players tracking," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
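The four-task structure above maps directly onto a standard Kalman filter. The sketch below shows one player's filter with a constant-velocity state vector (task 1), a linear motion model (task 2), position-only observations standing in for the histogram-based detector (task 3), and hand-tuned noise covariances (task 4); all numbers are illustrative, and a multi-player tracker would run one such filter per player plus data association.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)    # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)    # detector observes position only
Q = np.eye(4) * 1e-4                   # process noise covariance (tuned)
R = np.eye(2) * 1e-2                   # measurement noise covariance (tuned)

def kalman_step(x, P, z):
    # predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the detection z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(1, 30):                 # player moving with velocity (2, 1)
    z = np.array([2.0 * t, 1.0 * t])
    x, P = kalman_step(x, P, z)
```

After a few dozen frames of consistent detections the state estimate locks onto both the position and the (unobserved) velocity, which is what makes the predict step useful for gating the next frame's search region.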
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123493
Ali Nemati, S. Feizi, A. Ahmadi, Saeed Haghiri, M. Ahmadi, S. Alirezaee
Radio-frequency identification (RFID) tags are becoming part of everyday life, with a wide range of applications such as product labeling and supply chain management. These smart, tiny devices have extremely constrained resources in terms of area, computational ability, memory, and power. At the same time, security and privacy remain important problems; with the large-scale deployment of low-resource devices, the need to provide security and privacy among them has grown. Resource-efficient cryptographic primitives are essential for realizing both security and efficiency in constrained environments and embedded systems such as RFID tags and sensor nodes. Among these primitives, lightweight block ciphers play a significant role as building blocks for security systems. In 2014, Manoj Kumar et al. proposed a new lightweight block cipher named FeW, which is suitable for extremely constrained environments and embedded systems. In this paper, we simulate and synthesize the FeW block cipher and present implementation results on an FPGA. The design targets efficiency in area and cost.
Published as: "An efficient hardware implementation of FeW lightweight block cipher," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
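Lightweight ciphers in this family are typically Feistel networks, which are hardware-friendly because encryption and decryption share the same round datapath. The sketch below shows a generic balanced Feistel structure on 16-bit halves with a toy round function; it is emphatically *not* FeW's actual round function or key schedule, only an illustration of why the structure is invertible regardless of what `f` computes.

```python
def feistel_encrypt(block, round_keys, f):
    """Generic balanced Feistel network on a (left, right) pair of 16-bit halves."""
    left, right = block
    for k in round_keys:
        left, right = right, left ^ (f(right, k) & 0xFFFF)
    return left, right

def feistel_decrypt(block, round_keys, f):
    """Inverse: same structure, round keys in reverse order."""
    left, right = block
    for k in reversed(round_keys):
        left, right = right ^ (f(left, k) & 0xFFFF), left
    return left, right

def f(x, k):
    """Toy round function (add key, rotate, mix) -- NOT FeW's round function."""
    x = (x + k) & 0xFFFF
    return (((x << 3) | (x >> 13)) & 0xFFFF) ^ x

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]      # hypothetical round keys
pt = (0x1234, 0xABCD)
ct = feistel_encrypt(pt, keys, f)
```

Decryption never needs `f` to be invertible, which is exactly the property that keeps the round function's hardware footprint small.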
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123523
Mehdi Khademloo, M. Rezghi
This paper presents a new and efficient approach to single-image super-resolution based on sparse signal recovery. The approach uses a co-occurrence dictionary of image patches trained from a set of observed low- and high-resolution images. A linear combination of dictionary patches can recover any patch, so each patch of the low-resolution image can be recovered from the dictionary. Since a recovered patch is a linear combination of several patches, the noise of each contributing patch aggregates in the result; we therefore prefer a sparser linear combination over denser ones, because the sparse representation of patches filters noise from the solution. This approach has recently been applied to the single-image super-resolution problem, but existing methods compute the sparse representation of each patch separately and place it into the recovered high-resolution image. The complexity of such methods is therefore very high, and the algorithm's parameters must be estimated to obtain a suitable solution, so the process (recovering every patch with an iterative algorithm and estimating parameters at each iteration) is very time consuming. This paper presents an integrated method that recovers the whole high-resolution image from the low-resolution input in a single step, using a sparse representation of its patches.
Published as: "Integrated single image super resolution based on sparse representation," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
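The core per-patch operation in sparse-coding super-resolution — finding a few dictionary atoms whose combination explains a signal — is usually solved with a greedy pursuit. Below is a minimal Orthogonal Matching Pursuit sketch on a random dictionary; the dictionary, signal, and sparsity level are synthetic, and a real pipeline would learn the dictionary from low-/high-resolution patch pairs as the abstract describes.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: find x with <= k nonzeros so y ~= D @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on full support
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x_true = np.zeros(32)
x_true[[3, 17]] = [1.5, -2.0]            # a 2-sparse ground truth
y = D @ x_true                           # noiseless "patch" to recover
x_hat = omp(D, y, k=3)
```

The time cost the abstract complains about comes from running a pursuit like this per patch; the paper's integrated formulation amortizes it over the whole image in one solve.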
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123533
M. Kooshesh, G. Akbarizadeh
The goal of the present research is to despeckle SAR images, which is critical for segmentation and target recognition in satellite SAR imagery. When a despeckling algorithm is applied to a SAR image, important information such as edges, corners, textures, and object parts can degrade. The curvelet transform is a recently proposed form of multi-scale analysis that achieves better performance than wavelet and Gabor transforms in edge and curve detection; it is a geometric transform well suited to SAR image processing. In unsupervised texture segmentation, distinct textures must be separated, and texture detail at noisy boundaries can be lost. The curvelet transform has produced good results in detecting curved edges, with higher accuracy in finding their orientation than wavelet transforms. The present study uses the fast discrete curvelet transform (FDCT) via wrapping, together with unsupervised adaptive threshold learning, to develop a new despeckling algorithm for SAR images. In the proposed algorithm, an adaptive threshold is learned for each segment of the SAR image. Simulation results demonstrate that the proposed algorithm performs better than similar methods.
Published as: "Despeckling algorithm for remote sensing synthetic aperture radar images using multi-scale curvelet transform," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
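The despeckling pattern — log-transform the multiplicative speckle into additive noise, threshold coefficients in a multi-scale domain, then exponentiate back — can be illustrated without a curvelet library. The sketch below substitutes a one-level 1D Haar transform for the FDCT and a fixed soft threshold for the learned adaptive threshold, purely to show the transform-domain shrinkage idea; every parameter here is an assumption of the example.

```python
import numpy as np

def haar1d(x):
    """One-level orthonormal Haar analysis: approximation and detail bands."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar1d(a, d):
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.concatenate([np.full(32, 1.0), np.full(32, 3.0)])   # toy SAR profile
speckled = clean * rng.normal(1.0, 0.15, size=64)              # multiplicative speckle
logs = np.log(speckled)                  # log turns speckle into additive noise
a, d = haar1d(logs)
denoised = np.exp(ihaar1d(a, soft(d, 0.15)))                   # fixed threshold 0.15
```

A curvelet version does the same shrinkage but in an edge-aware, orientation-selective domain, which is why it preserves the curved boundaries that plain wavelet thresholding blurs.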
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123479
B. Safarinejadian, M. Asad
In the present paper, the identification and estimation problem for a fractional order state space system is addressed. The paper presents a fractional order information filter and a hierarchical identification algorithm to identify and estimate the parameters and states of a fractional order system. Merging this algorithm with the fractional order information filter, a novel identification method based on hierarchical identification theory is introduced to reduce computational complexity. Finally, the applicability and performance of the approach are examined on an example system.
Published as: "Fractional order state space canonical model identification using fractional order information filter," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
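Fractional order state equations are commonly discretized with the Grünwald-Letnikov (GL) definition, whose "long memory" term over all past states is what makes filtering and identification for these systems computationally heavy. The sketch below simulates a scalar system D^alpha x = a*x with an explicit GL scheme; it illustrates the memory structure only and is not the paper's information filter. A useful sanity check: for alpha = 1 the GL coefficients collapse to (1, -1, 0, ...), so the scheme reduces exactly to the forward Euler method.

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grünwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j), recursively."""
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def simulate(alpha, a, x0, h, steps):
    """Explicit GL discretization of D^alpha x = a * x with step size h."""
    c = gl_coeffs(alpha, steps + 1)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(1, steps + 1):
        # the full-history sum is the fractional 'memory' term
        history = sum(c[j] * x[k - j] for j in range(1, k + 1))
        x[k] = a * h**alpha * x[k - 1] - history
    return x

x_int = simulate(1.0, -1.0, 1.0, 0.1, 10)    # alpha=1: must match Euler
x_frac = simulate(0.5, -1.0, 1.0, 0.1, 60)   # a genuinely fractional response
```

The O(k) history sum per step is precisely the cost that hierarchical formulations and short-memory truncations try to tame.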
Pub Date: 2015-03-03 | DOI: 10.1109/AISP.2015.7123492
Alieh Hajizadeh-S, M. Akbarzadeh-T., A. Rowhanimanesh
Nanomedicine is an interdisciplinary research area that aims at the prevention, diagnosis, and treatment of complex diseases using nanoscale operators, to reduce side effects and increase cure rates. The simplicity and limited functionality of these particles, along with decentralized computation and the uncertain dynamics of the human body, are among the major challenges in this area. In this paper, we propose that equipping nano-agents with learning ability provides high robustness against the uncertainties and changing dynamics of the human body. In particular, we propose a swarm of learning nano-agents for the treatment of atherosclerosis. The swarm learns to approximate the desirable drug release function, which changes over time according to the environmental conditions at the disease site. For this purpose, we use radial basis function neuron structures that can adapt to the human body. Experimental results show the effectiveness of the proposed method in terms of disease control time and drug release rate, as well as robustness against possible disturbances.
Published as: "Dynamic swarm learning for nanoparticles to control drug release function using RBF networks in atherosclerosis," 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP).
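The building block — an RBF network approximating a time-varying release function — can be sketched with a linear-in-the-weights fit. The target profile, center placement, and kernel width below are hypothetical stand-ins for whatever the disease environment dictates; each nano-agent in the swarm would maintain and re-fit such a model as conditions change.

```python
import numpy as np

def rbf_design(t, centers, width):
    """Design matrix of Gaussian RBF activations phi_j(t)."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

t = np.linspace(0, 1, 50)
# Hypothetical desired release profile (not from the paper)
target = 0.5 + 0.4 * np.sin(2 * np.pi * t)
centers = np.linspace(0, 1, 10)          # fixed RBF centers over the horizon
Phi = rbf_design(t, centers, width=0.1)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # output-layer weights
approx = Phi @ w
```

Because the model is linear in `w`, re-fitting against a drifted target is a single least-squares solve (or a cheap recursive update), which is what makes RBF structures attractive for agents with minimal computation.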